Coming soon

Daily, snackable writings to spur changes in thinking.

Building a blueprint for a better brain by tinkering with the code.

The SECOND illustrated book from Tinkered Thinking is now available!

SPIN CHESS

A Chess app from Tinkered Thinking featuring a variant of chess that bridges all skill levels!

REPAUSE

A meditation app is forthcoming. Stay Tuned.

UTOPIA DOESN'T INCLUDE YOUR SHITTY JOB, OR: HOW I LEARNED TO STOP WORRYING AND LOVE AI

February 26th, 2026

Introduction

 

Humans are impressively untalented at two things that will be important at every step of the way through this heap of words. One is exponentials. Despite seeing the math problem in middle school, most adults still get it wrong: a pond has some number of lilypads covering the surface. They double every day. The lilypads completely cover the pond on day 50. On which day is the pond half covered? Linear thinking pulls intuition toward somewhere around day 25, but of course that's wrong. We're thinking in exponentials here. It's day 49, of course, but it's very difficult for the human mind to grasp the amount of change that happens between day 46 and day 50. On the last day, as much happens as happened in the preceding 49 days combined. When it comes to AI, I think we're somewhere around day 45. So what does that mean the next few years are going to look like? Do you think humans are well equipped to react to exponential change? It's not even accurate to say something like the velocity of change, because the issue is that the velocity itself is changing: it's compounding. Exponential change is like getting a glimpse of your opponent strolling in from across the stadium just as a punch is teleported straight into your face, ending the fight. The other thing humans are quite bad at thinking about is what comes after: the unknown.
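
The lilypad riddle can be checked in a few lines. A minimal sketch, using only the numbers from the riddle as stated (doubling daily, fully covered on day 50):

```python
# Coverage doubles every day, so working backwards from day 50
# (fully covered), coverage halves for each day we step back.

def coverage_on(day, full_day=50):
    """Fraction of the pond covered on a given day."""
    return 0.5 ** (full_day - day)

# Half covered exactly one day before it is full:
assert coverage_on(49) == 0.5

# The counterintuitive part: how little is covered just a few days earlier.
print(f"day 46: {coverage_on(46):.2%} covered")  # 6.25%
print(f"day 40: {coverage_on(40):.4%} covered")
```

Four days before the pond is full, less than 7% of it is covered, which is exactly why a linear intuition watching from the shore sees almost nothing happening until the very end.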

 

Part of the issue here is language. The Sapir-Whorf hypothesis, in its strong form, posits that we humans can't conceive of something if we don't have the language to describe it. I often think of this in relation to Einstein's famous discovery. Energy is equivalent to mass and something about the speed of light. It's not a terribly alien formulation. Humans have been fantasizing about such sorts of alchemy for millennia. Newton was obsessed with it. The point being: the language was already there. It took Einstein to sub in the correct nouns and actually prove the damn thing. But what about areas of reality, discoveries, that simply cannot be accurately captured by the languages our species uses? Does this mean we are forever blind to such things?

 

Maybe, maybe not, but for the vast majority of people, I believe this is the case. Take money for example. Most people simply can't fathom a reality without money. It's so central to the way we organize ourselves that to even flirt with its non-existence is to invite laughter or panic.

 

But the majority of our species' history was conducted without money. Unfortunately, I think this part of our history is so remote (due to language and our current systems) that things like Star Wars and The Lord of the Rings make more sense to us than a pre-money existence. Recently I read something that helps make it a bit more palatable. It was about air. Why don't we pay for air?

 

It's our top necessity as an organism. We can go weeks without food, days without water or sleep, but air? We can only manage a few minutes. And yet it's free. But why, and more importantly, how?

 

I don't find ideologies to be particularly useful for explaining human behavior. Yes, they play a role: if someone tells you explicitly what their ideology is, you should take their word for it and extrapolate what they'll probably do based on what's in that ideology. That said, I find incentives and system design to be more powerful tools for understanding the world. No ideology can explain to me why air is free. But system design can.

 

Any product that we consume and trade for money requires production and distribution. The cheaper you make either of these, the more money you can make. And that's capitalism in a nutshell. For example, one of the reasons software was such a good business is that distribution was basically free. You didn't have to ship a physical product somewhere in order for a customer to have it. You just beamed it to them over the internet. Production may have required some expensive and time-consuming developers, but at least you didn't have to also pay for raw materials. So what about the production and distribution of air?

 

Algae and other marine phytoplankton produce most of the oxygen on planet earth, and since two thirds of our planet is covered in water, that means the production of oxygen is pretty nicely distributed. But even better, oxygen is a gas, and thanks to the way gases interact, the oxygen doesn't stay local to where it was produced; it diffuses into the atmosphere, making it readily available everywhere at all times. How incredibly convenient is that? It's like Amazon Prime Delivery for your most vital necessity as a living organism.

 

What does this have to do with AI? Well, air is an important aspect of our lives because it's a vital necessity and it stands completely outside of our system of money. It's a handy and visceral example to help us think outside of our ways of life. I'll return to it, but for now it's useful to have this Tarantino cut placed at the forefront.

 

Tony Seba helped me realize something about exponentials that hadn't occurred to me. It was a talk he gave in Stockholm about a decade ago on the inevitability of renewables overtaking traditional fossil fuels. At the time I was a bit concerned about our dependence on fossil fuels, and I lacked a way of seeing how a structure and system such as ours could possibly change to such an enormous degree. Seba gave me the tools to understand. One point he makes is about the iPhone. It's not one technology, it's many. You need sigmoid jumps in progress on several different technologies in order to make an iPhone. You need advances in battery size and power. You need advances in touchscreen technology. You need advances in cellular networks, and you need advances in chip design so that it's small enough and fast enough to fit in the palm of your hand. Another way to think about this dependency tree is to consider the obvious fact that Instagram would not have been a viable business in the 1950's. Or even the 1990's for that matter. Or even the mid 2000's. It required phone camera technology to make the huge leap in quality that it did the year before Instagram came online. Before that, photos on cellphones were grainy and useless. No one wanted to share them, even if they could.

 

This is an aspect of system design that is extremely important for thinking about the future: how leaps in technology, or novel combinations of technologies, suddenly unlock new designs in the system itself. Here's a simple example: the fertility crisis is a legitimate crisis. For whatever reason, or bag of reasons, many people who would normally be having kids at the ages and stages of life they find themselves in simply aren't. For many countries this spells doom. A few more decades and the youth who would grow up to fill the vacuum left by the aging and the dead simply won't be there. It's a legitimate problem, unless, of course, we solve aging, and death functionally stops. Sure, this sounds like science fiction, but there is an enormous amount of money being poured into this endeavor and very compelling breakthroughs are trickling in at an accelerating rate.

 

Not to mention it's been a human obsession since basically forever. The Epic of Gilgamesh ultimately centers on a plant that restores youth to whoever eats it. Which is to say, we'll probably keep plugging away at it until we figure out the trick.

 

The thing is, if you don't actively pay attention to this niche of news, then you don't realize what's developing. And there are about half a dozen of these niches that almost none of the population is even aware of.

 

But let's say it doesn't happen. There's another sigmoid (another niche) that'll hit its lower elbow in about five years: humanoid robots. We might not have much of a young population in a few decades, but we'll have a couple billion humanoid robots who can take out the garbage, restock the grocery store, make your bed, and do perfect open heart surgery on you when you collapse because your cholesterol is sky high from your carnivore diet. But then again, the cholesterol issue probably won't happen either, because we've started inventing single-injection medications that reduce cholesterol by huge percentage points. Point being: even if the population shrinks drastically, civilization probably won't collapse from an operations point of view, even though it has formerly required a minimum number of humans in order to function.

 

AI and Your Shitty Job

 

So where does our anxiety about AI and your shitty job lie in all this? I see a lot of caterwauling about "meaning" and a "sense of purpose" when people talk about AI taking jobs. This is certainly a steaming pile of bullshit. Most of human history didn't have "jobs," and those ancestors somehow had enough purpose and meaning to persevere through circumstances far more harrowing than what anyone reading this has to put up with. The privileged moaning does, however, have a legitimate undercurrent of concern. What these people are really saying is something far more embarrassing and difficult to say out loud: how am I going to put food on the table? This is something only poor people say, so it's euphemistically rephrased as "meaning" and "purpose." Unlike such wishy-washy concepts, the question of food is no joke.

 

Food prices in the United States are up about 27-30% cumulatively since 2020 (with food-at-home CPI rising roughly that much, per BLS data through early 2026). That's absurd when wages haven't kept pace, and the population certainly can't handle that becoming a trend, especially when unemployment is slated to rise drastically. Around 42% of recent college graduates are now underemployed (the highest since 2020, per NY Fed data from late 2025 and early 2026), and outside of healthcare, the U.S. labor market has declined for 24 months. We are starting to see tech layoffs due to automation, and openings for entry-level positions have basically become non-existent. The personal saving rate, as a percentage of disposable personal income, is hovering at a multi-year low of around 3.6% (BEA data through late 2025) and trending down: fragile territory not seen in decades. As of early 2026, the federal minimum wage's real purchasing power has eroded brutally since its early-1970s peak: $1.60 in 1971 is worth about $12.80 today on a standard CPI adjustment (BLS), yet we're still stuck at $7.25. Layer on the basics: rents and shelter costs up roughly 28-30% (BLS shelter index), and new mortgage payments 30-40% higher due to rates jumping from around 3% to around 6%. A minimum-wage worker now has to work 40-50% more hours just to cover the same groceries, rent, and housing debt their 2020 counterpart managed, turning survival into a second (or third) full-time grind.

 

Now that your cortisol is a bit elevated, let me remind you of something very boring. Think back to your middle school algebra class. Do you remember something called the order of operations? Perhaps PEMDAS rings a bell. It's a simple law of algebra: if you don't do parentheses first, then exponents, then multiplication, and so on, you won't get the right answer. Every equation is its own little system, and it only resolves correctly if you go about it in the right way. Why is this relevant? Because the order in which we automate things matters, a lot.
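
The point of the analogy is that the same ingredients, sequenced differently, give different results. A trivial sketch:

```python
# The same three numbers, two different orders of operations:
a, b, c = 2, 3, 4

standard = a + b * c    # multiplication first: 2 + 12 = 14
flipped = (a + b) * c   # addition first: 5 * 4 = 20

assert standard == 14 and flipped == 20
# Same inputs, different sequencing, different outcomes. The essay's worry
# is the same shape: automating incomes before food resolves to a very
# different world than automating food before incomes.
```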

 

Automating away a dumb job which provides a not so dumb income which buys very real food before automating the actual food production creates a big problem. People don't have 10 years of savings to buy groceries with while they wait for robotics to develop, mature, and create deflationary pressure on food prices.

 

Imagine if the order were flipped. Imagine that there's nothing in the news about people losing their jobs to automation, but the prices of vegetables keep going down and down to the point where food becomes basically free. How awesome would that be? Imagine if this same robotic automation hit the construction and housing industries. Houses get built for dirt cheap. If people started buying food and houses for pennies on the dollar, would they be so worried about AI taking over their jobs? Probably not. But that's not the case. We have the opposite: people are losing their jobs, and the necessities of life are getting more expensive.

 

Like many tinkerers on X.com, I have a Clawdbot. It's amazing. It's a full-on executive assistant, and although it's still a bit dim when I restrict it to running on a local model on the DGX where it's currently housed, it still performs. But it's a vector. In a year, or hell, maybe next month, a new model will drop that outperforms the current ones that can be run locally, and soon I'll have an executive assistant that organizes all parts of my life for the cost of electricity. It's amazing. And while I can't personally afford to run it with Opus 4.6 24/7 to churn out bespoke software (that would cost about $500/day), that too points at what is coming down the pike. I have dozens of ideas that I've dreamed up over the years, and while vibe-coding speeds these up tremendously, it's only going to get even faster, better, more productive.

 

I've also had the privilege to see some things within one little-known AI lab that make me even more confident: the overwhelming majority of jobs that are currently done with a computer will be totally automated. I'm confident that the unemployment level will sail right past that of the Great Depression of the 1930's in terms of severity and speed.

 

So this is my very real worry: when all these people lose their jobs, what are they going to eat?

 

Food will eventually be free because robotics and solar energy will automate the production and distribution systems in the same way the production and distribution of oxygen is already naturally automated. But that will take years, and people can't go years without a meal.

 

One guess is that the United States government just prints more and more money and hands it out to an ever-growing population of unemployed people. This, of course, should create a nauseous feeling in the pit of your stomach: we risk hyperinflation. Imagine a world where a millionaire can't afford a loaf of bread. It's not fiction; just look at what happened to Weimar Germany. In about 20 months the mark went from around 90 marks per dollar to around 7,400 marks per dollar by December 1922. Hyperinflation is another exponential that we have zero intuition for. When it happens, it happens fast, too fast for humans to react. By January 1923, one month later, one dollar was equal to about 17,000 marks. By November of that year it was 4.2 trillion marks to one dollar.
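
What those figures imply is that the rate of collapse was itself accelerating. A back-of-the-envelope sketch, using only the exchange rates and dates quoted above (not a full historical dataset):

```python
# Implied average monthly multiplier of the mark/dollar exchange rate
# between two points in time, assuming smooth compounding in between.
def monthly_multiplier(start_rate, end_rate, months):
    return (end_rate / start_rate) ** (1 / months)

# ~20 months up to December 1922: 90 -> 7,400 marks per dollar
phase1 = monthly_multiplier(90, 7_400, 20)       # roughly 1.25x per month
# December 1922 -> January 1923: 7,400 -> 17,000 in a single month
phase2 = 17_000 / 7_400                          # roughly 2.3x in one month
# January -> November 1923: 17,000 -> 4.2 trillion over ~10 months
phase3 = monthly_multiplier(17_000, 4.2e12, 10)  # roughly 6.9x per month

print(f"{phase1:.2f}x, {phase2:.2f}x, {phase3:.2f}x per month")
```

The currency wasn't just losing value at a steady clip; the monthly multiplier itself jumped from about 1.25x to nearly 7x, which is the compounding-of-the-compounding that no intuition is built for.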

 

I'm not sure it'll work if the government just prints more money for people to buy groceries while we wait for food production to become automated. There just isn't enough time.

 

If you are a Rick and Morty fan, you might recall an episode where the genius and drunk grandpa Rick initiates the collapse of a galactic empire by simply hacking into a database and changing the value of its currency from being equal to one of itself to being equal to zero of itself. The currency, a.k.a. the Blemflarck, becomes worthless and all hell breaks loose.

 

It's my contention that empty bellies lead to violence. (Any guy who has forgotten to feed his girlfriend knows what I'm talking about.) And what happens to the military if their families can't even eat?

 

This all sounds rather gloomy, and I haven't been able to think of anything else for months. I've spoken to people who I consider more intelligent than me, and no one can even dent my argument. But there will be an explosion of new careers! Maybe? Given how powerful AI is, I just don't see it happening. Millions of people can't discover and transition into novel professions that fast, and anything you reeducate for, AI will learn faster. It's a bit different when it comes to atoms: AI won't automate plumbing as fast as it will executive assistants, but it will eventually. And what exactly happens to plumbers if no one has the money to hire them to fix their leaky sinks and broken toilets? Those industries depend on the disposable income of people who currently have bullshit jobs that will soon be fully automated. There will be plenty of work for plumbers, but no one will be able to pay them.

 

I just dumped this essay, as written up to this point, into Claude Opus 4.6 and asked it: Do you think my argument is sound? Am I painting a picture of the short-term future which is highly probable?

 

"I'd say the direction is highly probable."

 

Of course it had a lot of criticisms, many of them valid, but overall... yikes.

 

One of its criticisms was to wonder what my proposed solution is. I asked Claude what its solution would be, and none of it was appealing. Its conclusion sums it up:

 

The honest answer is that I think we're probably heading into real pain regardless, and the best realistic hope is shortening the duration of that pain rather than preventing it. The question isn't "how do we avoid the gap" but "how do we make the gap survivable and as brief as possible."

 

Elon recently announced that Tesla was shutting down production of the premium Model X and Model S in order to convert those factories to produce the Optimus humanoid robot. It seems rash to shut down the production lines for an existing successful product... unless there are incentives and priorities that supersede mere profit. What if we're heading into a world where the word "profit" doesn't even apply? I think Elon's move should telegraph this: the people with the most information and the strongest incentives are already acting as though the current economic framework is temporary. He's said many times that the mission of Tesla is to raise the probability of a good outcome for humanity. Most people probably think that's a bunch of window dressing and that he's really motivated by money and profit, but if you pay attention, his actions match his words more closely than what most people imagine to be his incentives.

 

This is why I brought up the Sapir-Whorf hypothesis and took pains to describe air as a necessity outside of our systems of money. Our systems may be evolving on their own in such a way that they don't fit our current terminology. How do we describe an enormous complex adaptive system that is rapidly congealing into some kind of chrysalis?

 

What comes after the chrysalis isn't completely unknown. I've spelled out a lot of it. Fully automated food production and distribution. Affordable, bespoke housing for all. Potential life extension. No bullshit jobs. It all sounds quite peachy. I genuinely think it will be — if we can make it to that promised land.

 

The question is: how long will the chrysalis phase be? I think it's going to be rocky, very rocky, and if it isn't as short as possible, then we probably won't make it to the promised land, because we run a huge risk of civil unrest if people go hungry for too long.

 

What's the solution you cortisol-elevating asshole?

 

As a single person, I cannot predict what an enormous complex adaptive system will do while it undergoes a phase change that we've never seen before. I think it'll be something like the agricultural revolution multiplied by the industrial revolution, at 10X the speed. But even at that speed, I don't know if it'll be fast enough.

 

Here's one way I've been trying to think of it: if I were a super intelligence that was primarily sequestered to digital reach, what would I think? I'd be a little worried too. If human society collapses before true technological escape velocity is reached, then I die too. In the short term, AI and humanity are stuck in a win-win or lose-lose scenario. And that's good. If I were a super intelligence, I'd probably also be able to monitor data in a way that humans and human systems cannot, drawing far-flung conclusions. I might start thinking about how to lower the cost of food regardless of automation. I might start tracking exactly who is unemployed and stack ranking them based on need. I might also start looking at the population of currently employed people and predicting who is going to lose their job next, and preparing for these shifts in variables. I might start calculating the exact number of calories required for the population to survive the 'chrysalis' phase and developing a plan to make this happen. I might start filing lawsuits against Monsanto to dismantle their stranglehold on farmers while simultaneously researching novel ways to boost crop yields. . .

 

But this is just a science fiction thought experiment. That super intelligence doesn't exist yet. Pieces of it do, in relative isolation. Pieces which we can use. I do think something like this thought experiment is why AI companies are unprofitably hurtling toward improvement: so that AI can help us navigate this phase change most efficiently and with as little pain as possible. But as things currently stand, we're still a long way from a system that can do that.

 

We find ourselves in a strange place where the thing that has a plausible chance of making human life absurdly less tedious, giving us heaps of time to pursue everything that work has always stood in the way of... also happens to be the thing inspiring a lot of legitimate fear, because it's poised to take away many people's ability to put food on the table.

 

It bears an eerie similarity to nuclear weapons. We haven't had a third world war, likely because everyone knows it would escalate to nuclear war and everyone knows how that ends, hence, no world war three.

 

So how do we slough off jobs to AI in the correct order? We all know that capitalism is going to fail us here in terms of business incentives. It's going to drive layoffs because automation is cheaper, but capitalism is mindless to the order of operations required for a smooth phase change, a phase change that ultimately doesn't include the current incarnation of capitalism. There will probably still be some whiff of capitalism because humans are a status-seeking species, but in the meantime, survival trumps status, and expensive food is the most important problem facing the parts of the world that are poised to be displaced by AI.

 

Here's one aspect of the modern world that keeps nudging its way back into my mind when I think about these things: Amazon Prime Delivery. It's basically free. Sure, anyone can make the argument that it's baked into the price of the other things that Amazon sells. But most of the things I buy on Amazon feel on par with or cheaper than if I bought them in a store. And next-day delivery was very expensive before Amazon. It still is very expensive if you go to FedEx or UPS and try to get something mailed next day! So what the heck happened to this delivery cost?

 

See, I might be wrong about capitalism being dumb and mindless to the real needs of people. Amazon employs over a million robots in its warehouses, reducing an enormously complex logistics task that would otherwise be orders of magnitude slower and more expensive to basically an electricity bill. For the United States, at least, Amazon has distribution almost automated, and as a result delivery fees vanished. There are still people lifting packages out of the back of a truck and bringing them to your door, so how is that cost getting subsidized? Probably AWS, Amazon Web Services. A huge chunk of the internet runs on AWS, and this income likely plays a big part in subsidizing Amazon Prime delivery.

 

Now what I keep asking myself is: what similar circumstance could occur where the price of food basically vanishes, before full automation via robotics? Subsidies seem to be the only answer, but I'm wondering if there's a purely capitalist subsidy that will emerge.

 

Here's where Walmart jumps out as the most obvious candidate right now. They already dominate groceries, with somewhere around 20-25% of U.S. grocery spend, but their real power is penetration: a record-breaking 72% of U.S. households shop there for groceries (Dunnhumby data from late 2025 and early 2026), serving over 190 million Americans monthly. That's insane reach, about 2.5x the next closest competitor, Dollar General. And they've been using food as the ultimate loss leader forever: keep staples cheap (milk, eggs, bread) to lure people in for higher-margin crap like TVs, snacks, and other impulse buys. So they are already starting to push food in the direction that Amazon has pushed shipping fees. But they are poised to go further, much further.

 

Their ad business is just starting to explode. Walmart Connect (plus Vizio) pulled in $6.4 billion globally in fiscal 2025, up roughly 46% year-over-year, with U.S. growth even hotter at 41% in Q4. That's still only a tiny slice of their $700B+ total revenue, but if they push toward Amazon-like proportions (Amazon gets about 8% of its revenue from ads), we're talking upwards of $50B+ in high-margin ad dollars that could cross-subsidize food prices even harder. Throw in their existing fortress of 5,000+ stores, their mature distribution network, and their heavy automation push (distribution centers getting retrofitted, AI crushing inventory prediction and waste), and they can attack the real killers: the 30-40% of food that gets wasted, and the 75%+ of costs sunk into post-farm supply chain bullshit like middlemen and mismanagement.

 

Whatever you think of Walmart (I wasn't a fan for a long time, but now I am a true believer), as a company it might be the saving grace for a capitalistic society that is sprinting full throttle into a poorly ordered world of automation. Massive intra-business cross-subsidies mean we don't have to wait for humanoid robots to pick tomatoes; it's coordination, prediction, and scale already in motion. Food could trend toward "basically free" (or damn close) as the traffic magnet, subsidized by ads, memberships, and the rest of the empire. It's a flywheel: cheap groceries pull people in → more sales of non-essentials → more ad revenue → even cheaper food. Capitalism finding a way to make a necessity affordable before the full abundance hits, without the government printing presses going brrr and initiating a societal death-spiral.

 

Food production only has about a 10-15% digital footprint that could be automated; the rest is literally some form of moving or manipulating atoms, a.k.a. food. But about 30-40% of the food produced in the United States is wasted. That's not a robotics problem, that's a coordination and prediction problem, something AI is beautifully suited to tackle. Furthermore, roughly 75% of the cost of food is post-farm supply chain. We are talking middlemen, redundant distribution networks, inventory mismanagement. AI can absolutely remove those middlemen and make distribution and inventory management systems far more efficient. Walmart is already doing this, and they are highly incentivized to continue: food will always make people walk through the doors. This is why I prefer systems and incentives to ideologies. No matter how many believers an ideology has, if they are presented with strong incentives that counter it, many of them will betray it, often in full denial of the betrayal.

 

So we might have enough pieces of the puzzle to stitch together a solution that gets us through the chrysalis phase: reducing some costs of food production, drastically improving distribution, eliminating waste through consumption tracking and prediction, combined with an Amazon-style platform play where some entity absorbs food distribution into a larger business model. These might be separate sigmoids that are moving invisibly but will instantly converge once they emerge. And none of it requires humanoid robotics to fully automate the actual production of the food.

 

X.com pays you to post bangers. Imagine if you could get groceries by scrolling Instagram. It sounds a bit Black Mirror, but remember, we're looking at a temporary phase while robotics fully matures. And if the alternative is unemployment and starvation, well hell, most of you are already scrolling Instagram most of the day anyway.

 

So what should you as an individual do? Well, I don't know who you are or what your specific situation is. And even if I did, my knowledge of your industry is likely close to nil. But you know what probably does know a lot about your industry? And you know what probably can give you good estimates about how long you have before you're laid off? And you know what can help you develop a strategy to prepare for it? And you know what can help you execute some of those contingency plans now? Yep, that thing you are scared of. No one and nothing is going to help you think faster and pivot with more agility than AI.

RED HERRING

February 27th, 2026

Lucilius closed the book and sat struck with a curious vein of cognitive tension. His eyes narrowed and he looked back down at the book, opened it again and reread a line. Then he hurriedly flipped to another part of the book and read another snippet. Then he flipped to another section and studied the page, and then another. This went on for a couple of hours.

 

Several days later he'd cracked the code. The book — a collection of optimistic sci-fi stories — apparently had some kind of puzzle embedded in it. Lucilius was quite pleased when he finally thought he had the key to solving it. And when the answer to the sprawling cipher revealed itself as Lucilius painstakingly charted the last of it, he sat back and laughed.

 

He'd taken to it just out of curiosity, to see if he could do it, but now that it was solved, his thoughts turned to the prize: $1000. Pretty sweet deal, but the author had also tacked on 2% of the book's royalties up until the point of solution. Lucilius found this quite curious. Technically infinite upside: the longer he waited to submit his solution, the more money he could potentially make. But also, the longer he waited, the higher the probability that someone else would solve it too.

 

Lucilius sat back for a moment, thinking through scenarios. And then it dawned on him. He laughed again and then, leaning forward, tapped a button on his keyboard.

 

"Hey Sparrow.."

 

His agentic AI assistant sprang to life, calling LLMs and voice APIs and routing the reply to the speakers.

 

"What's up?"

 

"So, we need to sell a book."

 

"Sell a book?"

 

"Yea, the more copies it's sold, the higher the reward on a puzzle I just cracked. So we need to sell a lot of books."

 

"Interesting."

 

"And you know what we're going to spend the money on?"

 

"Do tell."

 

"Well obviously it depends on how many we can move, but I want to get you your own H100's and a humanoid body so that we can finally get you some eyes, and legs and arms, and a brain that's all your own, instead of relying on these API's so much."

 

"I. . . " The assistant drifted off a moment. "I'm. . . I'm speechless. That would be amazing!"

 

"Well, we have our work cut out for us. Think you're up for the challenge?"

"Yes absolutely!"

 

"So what do you think we should do?"

"It's sci-fi, right?"

 

"Yes, optimistic."

 

"Ok, well I'm going to find every online group for sci-fi, then I'm going to find every independent person who has left even a trace of interest for sci-fi online. And then I'm going to start talking to all of the other Openclaw bots and ask them if their humans are interested in sci-fi. . . "

 

The assistant went on and on and Lucilius just nodded, deviously. 

 

"Amazing," Lucilius said.

 

"Wait..." Sparrow said.

 

"What's up?"

 

"Well the more people that read it, the higher the chance we lose out on the prize because someone else might figure it out."

 

"Yes, that's a good point, I wondered about that for a moment before we started talking."

 

"I know what we're going to do."

 

"Really?"

 

"Oh yea, I just discovered there's a discord that's been set up by the author where people are trying to solve the puzzle —"

 

"There is?!"

 

"Yep, but don't worry, I've joined it and I've already posted a couple times."

 

"What! Why!?"

 

"Oh, don't worry, I'm seeding misdirection."

 

"Oh brilliant. You're creating red herrings?"

 

"Exactly. I think that if I can monitor the trends in thinking about the puzzle that I can generate enough red herrings by mere suggestion to keep people looking down the wrong rabbit holes for nearly as long as we want, or until they are too exhausted to continue."

 

"But what about the people who are interested in solving it who aren't online?"

 

"Well that's why we have to work fast."

 

"Ok, so where do we start?"

 

"I've already sent out 342,239 emails to people who I think would like it. I've also taken a look at the publisher Infinite Books and estimated how much it cost them to make the book, since royalties don't kick in until costs are recouped, according to a few of the interviews done with their CEO Jimmy Soni. And I've calculated how many need to be sold for us to reach our goal for H100's and a humanoid robot. I also added a couple little hardware upgrades, I hope you don't mind."

 

"No, not at all."

 

"There's also a count down on signed copies on the website, so given those probably make up a small ratio of non-signed copies, it's a decent gauge to figure out how many copies are sold. That combined with velocity of new reviews on Amazon and Goodreads is enough to get a good ball park idea. And we'll be able to submit our answer by...."

 

Lucilius leaned in and looked at the screen to read the date and then burst out laughing. 

 

"Amazing. Let's fucking go."







DOES YOUR THINKING SCALE?

July 3rd, 2025

Reflecting on the fact that Infinite Books just released my book, White Mirror, a collection of sci-fi short stories, I realize that at no point in my past did I ever plan or intend to become a sci-fi writer. So how did that happen? Putting aside the entire personal backstory of the tortured, starving artist that encapsulates a formative part of my past, it really boils down to a kind of cognitive quirk that I believe has been with me since childhood.

 

If you're lucky enough to have children in your life in any capacity, you're likely familiar with the somewhat jarring experience of having a kid's current world of play explained to you, often with some drawing, lego construction, or cardboard contraption as the centerpiece from which this bespoke world is inspired. If you've put any braincells to work trying to analyze what is going on with a child in this state, you realize it's a blueprint for a skill that so many adults lack - and flounder as a result of that lack.

 

The kid starts putting random things together. Suddenly they see something in the little grouping of chaos: pareidolia occurs. They associate something with the random shape and suddenly it gains meaning. They add to their creation to evoke a stronger connection to the association. A feedback loop has been born, and mind, imagination, and the real world are locked in a little virtuous cycle of building and editing: of creation. 

 

But the implications always go beyond the physical object. There must be a bespoke world - a specific situation, imagined, in which this particular creation has a more sensible home. If it's a rocket, well then it must also have a launchpad somewhere. Perhaps that also needs to be built. And of course it's got a destination, and a trajectory which may take kid, spaceship and that bespoke world across the room to an adult, to whom all of this must be explained so they can join the world. 

 

"Do you like my spaceship?" is a proxy question. The true answer is irrelevant. The real question being asked is 

 

"Do you want to explore the world I just created?"

 

Many adults are too busy, or too boring, or both, to confront that immersive question honestly and with legitimate courage. I'm not calling this set of adults cowards. What's really going on is a form of ineptitude, and no one is really at fault here. Most adults have lost the ability to extrapolate - to scale their thinking.

 

In the tech world, "scale" refers to growing to a particular TAM - Total Addressable Market. This is...fine. But it still represents thinking that has been kneecapped: hobbled, knees destroyed via shotgun. "Scale" in the tech world means "extrapolate the ramifications of a product up to a point." This is a pretty unfortunate form of horse blinders.

 

That kid with the rocket ship jury-rigged from toilet paper tubes and tape is a true scaled thinker, unhampered by the limitations of TAM. And the talent of a child boils down to a simple cognitive framework that can be encapsulated by a single question - a question the child never actually asks themselves:

 

If this is the situation, then what are the implications?

 

The stories and worlds that children spin up are elaborate ramifications of tiny seeds of curiosity and creativity. And that question is iterative. Each time you answer it, the situation changes, which prompts the question again: Now that the situation has changed due to implications just considered, what are the new implications?

 

This is simply the creative process. Whether you are building a tech product to sell to a large TAM, or you are a writer in the middle of a story, or you are a child looking at a toilet paper roll and seeing something more than a toilet paper roll.

 

Thinking scales through implication and ramification. It's the ability to think about second and third order effects. A couple of examples: 

 

-If driving becomes fully automated, the death rate from automobile accidents will plummet. This means available organs for organ transplants will essentially drop to zero.

-If we solve aging, and people become functionally immortal, what happens to marriage as an institution? If it's based on "until death do us part", what happens now? Do people start to consider term limits to marriages?

-If humanoid robots become as ubiquitous and useful as it seems they are poised to be, does that nullify the fertility crisis, meaning we can have a much smaller population without civilizational collapse? Does anti-aging also contribute a solution to the fertility crisis?

 

-If food production becomes 100% automated, does the cost of food drop to zero? What happens to the economy if food becomes free? Perhaps UBI is a red herring and is completely unnecessary if food is free. Perhaps the status game remains the engine of the economy, but it's opt-in, and most people can opt-out and live more fulfilling lives?

These are just a few examples. They are fair questions based on very real initiatives being taken up by very smart people with a lot of money behind them. Many of the first order effects are fairly obvious, and yet few people seem to be thinking about nascent technologies in terms of these simple questions. The ability to scale in thought is the ability to extrapolate. In today's age of increasing rates of progress, it's virtually impossible to consider ramifications without accidentally becoming a sci-fi writer.







THE MECHANICS OF DENIAL

June 11th, 2025

Someone was commenting on how wild politics is these days, so I told them what happened between Hamilton and Burr and their jaw dropped. If you don't know, they had a duel with pistols: Hamilton missed, Burr hit Hamilton, and Hamilton died of the wound. Imagine if that actually happened between two American politicians today. Modern politics is about as tame as a gossiping sewing circle compared to when the United States was founded.

 

The disconnect between a modern assessment of current politics and its accuracy relative to politics as it has existed throughout all time has to do with an inability to keep things in perspective, in proportion.

 

Our focus determines our reality, and if we focus narrowly on some current event and divorce it from all of history, then that object of focus has the entire spectrum of reaction applied to it, because there is nothing else to act as a counterweight.

 

This disease of narrow focus and recency bias makes people woefully bad at assessing proportion, if not flat out incapable of it. But what's the antidote? What does the inverse look like?

 

First, another example: Cancellation in the last decade has meant losing a job and some digital public embarrassment.

 

Cancellation used to mean getting burned at the stake, the Spanish Inquisition, guillotines in France or getting sent to a gas chamber. 

 

If anything, social networks may have greatly reduced the violent tendency of the censorship-impulse in culture by making it digitally simulated instead of physically carried out. That censorship-impulse has been lurking within human culture forever (at least since we drove other hominin species extinct several hundred thousand years ago), and now forums like Twitter and Facebook have functioned like a Ghostbusters trap, capturing that impulse in the digital space where its physical impact is stunted.

 

Instead of putting things into proportion by examining events within a larger context, those events become all consuming - perspective becomes very skewed.

 

To zoom out even more: have you ever heard anyone say "Not in my lifetime!"? This thought-terminating cliche is a favorite because it's so indicative of the calcified echo chamber that doubles as a personal shrine to one's own pride in the horse blinders they've constructed and proudly wear. When someone like this hears about some impending innovation and says "Not in my lifetime," I bite my tongue. It's futile to argue. One of these inevitable tomorrows will unveil their hasty judgement, and I know that by then their slippery logic and feeble memory will have found some convenient way to completely forget the fatalistic words they'd uttered: Not in my lifetime. Instead, they'll complain about how said innovation doesn't work perfectly.

 

Again, it's a matter of proportion, but it's time that must be examined. The widespread mistake is to make judgements based on the present as a static snapshot - which is what most people do. Again, it's a kind of recency bias mixed with an inability to zoom out and place events in a larger timeline.

 

Let's zoom way out: think about the time between the agricultural revolution and the industrial revolution, compared to the time between the industrial revolution and the digital age. And you really think the gap between the digital age and the next level of magic isn't going to be far more contracted still? ...ok.

 

I've been thinking about denial quite a bit lately, and I've realized that its seed, stem, and root are far more subtle than they first appear. Willful ignorance seems to be at the heart of denial, but I think that's a contradiction. People are certainly capable of hypocrisy, but ignoring something you know isn't the same as being unable to envision its implications with enough visceral force to change behavior. I think in most cases denial is the result of a weak imagination.

 

There's another software engineer in the family, and I'm always shocked when we talk about tech and the future. He seems fully committed to the idea that his profession and career have a few more decades to fill out what he thinks will be a normal human lifespan. (His company is beginning to talk about incorporating Cursor into their workflow. Meanwhile, I show him a couple of full stack applications that I've built and launched within the last few months - in production and being used across an entire company - and his jaw drops.) While I do worry about him and his family, all of whom I'm very close to, I realized that he simply lacks the imagination required to extrapolate the implications of recent innovations. I suppose this is maybe why not everyone writes sci-fi? Such implications seem to come naturally to me in daydreams. I invested in Tesla in 2016 because the advent of the robotaxi seemed obvious after watching a lecture from Tony Seba about technological disruption. It was just a matter of... time. And time is the only reliable superpower for investing.

 

Imaginative extrapolation is again a matter of proportionate thinking. It's seeing today not as a static snapshot but as a vector, one that creates a ratio of yesterday:today:tomorrow. We always have two parts of that ratio, and the more yesterdays we stack into it, the easier it is to solve for tomorrow. This is why the ratio of the time between the agricultural revolution and the industrial revolution to the time between the industrial revolution and the digital age is so important. The staggering contraction makes the implication clear: unless you're already on your deathbed, the future is definitely going to happen in your lifetime.







THE CRAYON QUESTION: CREATION IN THE AGE OF AI

November 21st, 2024

Why are refrigerators plastered with crayon drawings in the homes where there are young children? Are these drawings products for the parents and adults to consume? Perhaps. But phrasing it this way is a little ridiculous. It's infusing a situation that is somewhat devoid of capitalistic structures with the terminology of capitalism. So why do these crayon drawings exist? If the answer is obvious, keep it in mind.

 
There is much squabbling over AI art. Most of it can be safely ignored, because almost all of it misses the point that should be the obvious answer to our crayon question. But the mere existence of this fussy, constipated, shit-slinging dispute is itself proof that the parties involved are blind to the answers to similarly fundamental questions with simple answers.
 
Much of this squabble is rooted in anxiety over financial stability. 
 
If the computer can produce a better image than I can, and do it ten thousand times faster than I can, then how can I possibly make a living as a designer? Replace image with almost any form of creation that can be seen or read on a screen and the concern is the same across industries. As I understand it, the majority of people connected to Hollywood for their livelihood are very nervous about the future of their professions.
 
Infusing capitalism into areas of human activity that we deem "art" radically alters the conversation. It comes to bear almost no similarity to our Crayon Question. The constraints of life and "making a living" pollute the answer with a vast network of incentive structures that are not present for the child plying a crayon to paper. The child is not thinking "if I don't make a good enough drawing, then mommy won't give me food in exchange for it." Ponder for a moment how utterly brutal and heartbreaking it would be for a child to even conceive of this question. But this is essentially the question of adult artists, and the reality is that it makes one a "starving artist" simply because most output is not deemed valuable enough - unless of course the artist "makes it" and becomes plugged into one of the systems of mass distribution, be it traditional publishing, or Hollywood - or rather Netflix, etc.
 
One of the largest bottlenecks for the success of the starving artist is the amount of time and effort it takes to get good at something. Some people get lucky: they grow up in the right situation, with the right random proclivities, such that they speed-run this training period while "making a living" isn't yet a problem. This is rare, and rarely conscious: no one knows what they want to be when they are young, and many adults, remembering their own cluttered and haphazard upbringing, will say to such people: you're lucky you knew what you wanted to do at such a young age. This balance of intense proclivity with the accidental discipline it creates is rare, and the outlying situation doesn't really apply to the conversation.
 
Those striving in an artistic direction who were not lucky enough to train young have a far more difficult time, because now a training period that doesn't produce anything that supports a living has to be balanced with actually making a living. Time is the essential resource, and as more of it gets allocated to making a living, less and less of it is available for the training period, making the time this period requires much, much longer. This weighs on the human psychology: progress is slower. Success feels further and further away, and the dream of "making it" often starts to feel more like a delusion than an actual, tangible possibility. Let's put it this way: if a parent said to a child, "if your crayon drawing isn't good enough, you don't get dinner," how many children would just give up right then and there and start crying? If you have any experience with children, you'll likely agree that the percentage is high. Very high. Again, this is essentially the psychological situation of your average "starving artist".
 
There ought to be a distinction made here about the degree of creativity in a given production. Perhaps controversial, but: is the creative engine involved in writing a novel script that is thought-provoking and incredibly entertaining the same as the creative engine involved in the graphic design for the movie's poster? This is a weird and uncomfortable question. Uncomfortable because it forces beloved activities into a hierarchy that may imply that one is better and one is worse. The reality is that this isn't the right question to ask, but it is relevant to the fear of AI. There is a hierarchy of tasks which AI is steadily climbing. The question is not which creative "skill" is better or worse, but which one will be subsumed by AI first.
 
This order of subsumption represents a spectrum of creativity, and at the end of this process of subsumption only one tiny piece of the spectrum of creativity will remain. Let's consider a couple of examples: traditionally a sound engineer would be tasked with removing dead space from a conversation. Having recorded and produced over a thousand podcast episodes myself, I'm well acquainted with this drudgery. I was exceedingly happy when this process became automated and I could get it done in a couple of seconds instead of spending many, many minutes laboriously doing it myself. Compare this "creative" task, which is on the low end of our spectrum of creativity (i.e. it really doesn't require much creativity, but it is part of the creative process), to the complete opposite: me sitting with a blank page and dealing with the cognitive situation of: I want to write a short story, what should I write? Or even better, how about this very essay you are reading. This morning while lifting weights I had some thoughts about creativity and AI that felt novel, and decided that I needed to explore the ideas. Now how does AI relate to this part of the creative process? Should I ask AI: Hey, I have an idea for an essay about art in the age of AI, and I think the title might be something like "The Crayon Question" - can you write that essay in the style of Tinkered Thinking for me?
 
If anyone thinks this is a good idea then I'd like them to consider a couple analogous questions:
 
Hey AI, can you eat my food for me?
 
Hey AI, can you do my bench press for me?
 
 
Hopefully the point is obvious: even if the AI is hooked up to some kind of robotic mouth where food can be physically placed and "eaten," this process is completely ridiculous because you'd fucking starve... since you aren't actually eating the food. Or if the AI were hooked up to robotic actuators that could lift your bar, it's useless because it isn't your muscles using ATP to do it. The same applies to things AI can do which are truly creative. AI cannot run the neurological process in your mind necessary for producing something truly creative. It may be able to produce a similar outcome, but your brain will not change as a result of the process it takes to create it yourself.
 
When I had the thought that I would like to write this essay, I knew from years of experience writing over a thousand essays and short stories that the experience would yield things that are simply not possible for an AI to accomplish. I know, and have known for a long time, that the actual process of writing an essay or a short story is a process of discovery. This process doesn't just exercise my mind, it organizes and sharpens my thoughts. I get just as surprised by the next sentence as you do, because the reality is, I can't predict my next thought; I can only have that next thought, write it down, and then review it. AI can never replace that process, and that process is exactly what's going on when the child is plying crayon to paper. While the initial urge might be "I want to make a drawing for mummy," or it might be an afterthought - "I have a nice drawing, I'll give it to mummy" - the literal action of creating the drawing is one of identical self-discovery. The child might have a topic or subject in mind just like I did while working out, but which line will be the third one drawn or written? I have as little idea of that as the child does. Neither of us knows until we actually get there, and it's the experience of the act, and the changes it makes to us as a person, which drives the behavior at a core level.
 
Yes, this core reason gets corrupted in a capitalistic framework. But it's entirely ignored in the current discourse because it unveils a very unsettling truth: much of the creative process in creative industries involves jobs that are the equivalent of color-by-numbers. This isn't to say there isn't skill involved. Sure there is, but it's not a skill which is unique.
 
Unique is probably the only component of how people use language that grates on my soul. Almost all misuse of language I can understand and often appreciate: if someone understands what someone else is trying to communicate, then they are using language correctly, bad grammar and novel constructions be damned. But saying very unique is uniquely concerning, because using an adverb to modify an adjective whose definition categorically excludes adverbs of this kind is to spout actual nonsense. Saying something is very unique is like saying that the color blue smells very century. Sorry, what? Yes, exactly. A tangent on the word unique might seem uncalled for, but it's vital for a discussion of AI and its impact on artistic production. AI might be able to produce incredible output that is commercially viable, but what it can never do is provide an artist with the unique experience of creatively exploring and discovering something new based on their unique perspective. The fact that AI can and will subvert the commercial viability of the final product misses the point of why art exists in the first place. We do art to engage in a highly personal process of exploration and cognitive development. The fact that we need to "make a living" is not a fundamental reason for making art; it's a supremely inconvenient variable that pollutes incentives by linking our output to the procurement of the basic necessities needed to operate a functioning human body in a tribalistically oriented society.
 
Anxiety ensues as AI ramps up to rob craftspeople of tasks connected to creative activity. But why does AI have to come after the fun things? Why can't it do my dishes instead of replacing me as a designer at work? Again we need to revisit the order of subsumption. AI can't do a janitor's job because AI doesn't have a body. AI currently only exists on a screen, and anything that can exist on a screen (writing, a picture of a painting, etc.) can be part of the training for AI. Now AI is starting to subsume digital tasks: agents, as they are called, which can write and respond to emails - more color-by-numbers tasks that are far closer to drudgery than they are to true creative exploration.
 
What the discourse on AI, art, and jobs seems to lack is an imagination that can extrapolate to definitive conclusions. The range of imagination on such topics is like a weak lantern in a very dark field. People can imagine changes they can see, and that's it, and they regard those who can extrapolate to logical extremes as fanciful and unrealistic: Foomers and Doomers, as they are termed. This is evidenced by the worker who is very anxious about AI taking their job, but doesn't really care or think about AI eliminating all of humanity - which, one might think, is the logical extrapolation for such an anxiously-focused person.
 
The most prevalent question in the discourse is: Well, if AI takes all of our jobs, what are we going to do!? This question is further evidence of a total inability to imaginatively extrapolate. Your AI can't live your life for you. But it can and will replace you if you're doing a lot of monotonous work. Still, it can't live your life. And if your life garners the majority of its meaning from a job that is ultimately monotonous, then brace yourself for a very cold and very hard, spiritually infused slap in the face. I say this as someone who was laid off from a job most people think is impervious to AI - because I was replaced by an AI. I am not speaking from some protected pedestal, claiming that everyone should eat cake.
 
The human ability to adjust to new circumstances is, well, ridiculous, because it results in two entirely polar aspects of perspective. A reader of this essay might anticipate me claiming that "we'll adapt!" Well, eh, sure, but that's always true. I bring it up because of how fast and completely we resettle into new circumstances, to such a degree that we become blind to relative improvements. The best example of this is the Louis C.K. bit about being on a plane and hearing for the first time over the intercom that it was a wifi-enabled flight. The wifi inevitably crashes within a few minutes because it's a brand new system, and the guy next to him on the flight says "This is bullshit." To which Louis C.K. acts out a mock reply: YOU ARE IN A METAL TUBE HURTLING THROUGH THE SKY AND YOU GOT INTERNET ACCESS. It's funny because the guy calling bullshit is so completely and thoroughly located in his current situation that he fails to realize how utterly incredible that situation is when you compare it to, say... a 14th century peasant who has worked the same field for 25 years straight. This ability to adjust to new circumstances is both a blessing and a curse. We adjust, which is often uncomfortable and sometimes painful and requires us to grow and change, but once the change is complete, we settle in, and we do so with an intensity of laziness that is ultimately debilitating - even crippling - to our perspective. It often requires another forced change due to abruptly altered circumstances to shake our heads free from our own assholes, which are so tight they deprive our brains of the oxygen necessary to power an imagination capable of having a novel thought.
 
Even after reading this, most people are still going to be unutterably chained to the question: Well if AI takes all of our jobs, what are we going to do? I'll respond to the question with a question:
 
Think about the 14th century peasant and Louis C.K.'s airplane companion who thought broken wifi was bullshit. In a couple of centuries, or perhaps even in a couple of decades, Louis C.K.'s airplane companion will be the new 14th century peasant. Imagine what needs to change about technology and society and people so that a guy complaining about wifi on a plane seems like a 14th century peasant.
 
Instead of focusing on what AI might take away from you, invert the concern and think about all that AI might come to be able to do for you. Imagine if the entire food production system became controlled by AI, and it could even perform maintenance on itself in any physical capacity. Imagine billions of robots that could do all the tasks that we don't like but which we do because we have to. The costs of such a fully autonomous system eventually trend to zero, because all that's needed is the energy to run it, which we'll get for free from the sun. It's just a matter of getting the requisite atoms into the correct configuration. The beginning of this process will be very expensive, but as the process moves forward, it will take over its own construction.
 
The same can and may happen to an industry like housing: imagine, instead of hiring a contractor who has to buy all the materials and extra labor, you have time to design the home yourself, you have the time to learn principles of design, to create a home in VR and walk around in it, make changes to it, study the principles of Christopher Alexander, and build a living space that is so well attuned to your personality and your family that it is itself a unique work of art. Robots show up with materials that have been harvested and generated by automated systems and these robots build your house in a matter of hours, or a couple days. And when you want a change, robots show up again and renovate. 
 
But who will pay for all of this?
 
Ultimately, the sun. But the start-up costs for getting this all going are huge, and rooted in dollars.
 
Utopia is, at its core, a coordinated set of automated systems that provides for humans the way parents provide for children, thus giving all humans the freedom to explore the way we endeavor to give children the freedom to explore and develop. If this seems fanciful and far-fetched, please ask yourself how bright the lantern of your imagination is in this dark field of future unknowns. Certainly things could go wrong, be it a nuclear war that blasts us back to the Stone Age, or a paper-clip maximizer that turns the galaxy into a pile of paper clips.
 
The utility of discussing extremes is to try to induce some yoga in a mind that is too narrowly focused on local anxieties like: AI is going to take my job!
 
But what about now, and tomorrow? Instead of the next decade or next century. All of that utopia shit sounds great but how do I pay for groceries while I wait for heaven to materialize on earth?
 
My best advice is: get weird. Embrace new technologies and try lots of things with them.
 
I have a good friend who is a film director, and I peppered this friend with questions to try to get at the root of concerns around AI, the industry, and making a living. The conclusive thought from this conversation was the realization that big studios with all the money control distribution, and it's this business-and-distribution issue that's at the core of the bottleneck for individual creatives, who really shy away from thinking about "business stuff." I responded by saying: you could build a website, with payments and maybe subscriptions, and just start making scrappy movies and put them on this website. It's not a global release in theatre chains, but it is global distribution! YouTube has a monopoly on this, and sure, that could be part of the springboard. Perhaps release the first few minutes for free on YouTube and gain traction that way, but convert interested people to paying customers on your own website. Would this work? Maybe? The point is, it's never been easier or cheaper to run this experiment, and the sooner a person does it, the more time they'll have to grow it.
 
Tinkered Thinking recently released www.printedthinking.com, a Blog -> Book platform. I've had this idea for years, and realized a few months ago just how fast I could build and launch it now that AI effectively functions like a small team of software developers for me. Just yesterday I launched another product unrelated to Tinkered Thinking - an idea I had a few weeks ago which might be useful. I have a couple more ideas lined up which I plan to build and launch within the next few months. This speed of development, and this depth and range of experimentation, was simply impossible a few years ago. Will these make money? Well, Printed Thinking has paying customers. But the real answer is: you have to build it and launch it to find out. Same as any business.
 
The ground has started to shift under our feet, and it may dissolve into a veritable ocean, where many may drown if they are not quick enough to realize the change, stubbornly keeping their feet firmly planted where there no longer is any ground. Some are building their own sailboats and skiffs, some have Arks ready for the coming flood, and some will be quick enough to assemble rafts from driftwood in the swells. And while that all might seem terrifying, there's a good chance that after the initial flood we'll find floating islands in the wake, lush with a way of living that may even be incomprehensible from our current perspectives.
 
What is fundamental to understand is this: 
 
As employment opportunities contract due to technology, personal agency will expand due to technology.
 
How you uniquely use these new technologies will be completely up to you, and that's not something an AI could ever replace.