Blockbuster Blueprint

Something Big Is Happening (A Letter From 2029)

Michael Simmons
Feb 17, 2026

Editorial Note

Last week, entrepreneur Matt Shumer published “Something Big Is Happening”, a 5,000-word essay comparing the current AI moment to February 2020, right before COVID changed everything. It reached over 80 million people on X, was syndicated by Fortune, covered by CNBC, Bloomberg, and Inc., and debated by everyone from Gary Marcus to Hacker News.

Shumer wrote his piece for the people in his life who aren’t paying attention to AI: his family, his friends, the ones still asking “So what’s the deal with AI?” at dinner parties.

His message is simple: wake up and start using these tools, because this is bigger than you think.

I agree. Something big is happening.

At the same time, his piece left something fundamental out.

Shumer’s implicit promise is that if you start using AI now, you’ll be okay. Learn the tools, integrate them into your work, ride the wave.

And that’s good advice.

As far as it goes, at least.

The problem is where it stops.

Because the people already riding the wave (the vibe coders, the prompt engineers, the knowledge workers who’ve gone all-in on AI and feel ten times more productive) read Shumer’s post, nodded along, and thought: I’m ahead of this.

But they’re not ahead of it. They’re standing on it. And the ground is moving.

In 2029, the hardest stories won’t come from the people who ignored AI. They’ll come from the people who embraced it, the ones who mistook learning the tools for being ahead. Who built expertise in a gap that was rapidly closing. Who got so good at directing AI that they didn’t notice the AI was learning to not need direction.

Learning AI isn’t the problem. Thinking that learning AI is the solution is the problem.

Shumer wrote a wake-up call. This piece is what happens after you wake up and realize that being awake isn’t enough.

I took his essay, combined it with predictions from top lab leaders, and assumed it all came true:

  • Dario Amodei’s predictions from “The Adolescence of Technology” and his recent interview on the Dwarkesh Podcast.

  • Elon Musk’s projections for xAI and MacroHard based on his recent interview on the Dwarkesh Podcast and his all-hands meeting at xAI.

Then I had AI write a first-person letter from 2029 — not from someone who ignored AI, but from someone who was the “AI guy” or “AI gal” at their company. Someone who gave the internal talks, wrote the playbooks, and dragged reluctant colleagues into the future.

Someone who did everything right and got replaced anyway.

At the end of this piece, I’ve included a prompt you can paste into your favorite AI model that will write a personalized version of this letter just for you, from your future self, in your profession, in your voice. If you want a true wake-up call, try using that prompt!


You Think You’re Adapting

Published: February 17, 2029

Note: This article was written entirely by AI, except for about 10 words that I changed.


Think back to early 2026.

If you were a programmer, you were probably feeling pretty good about yourself. You’d figured out the prompts. You knew how to talk to Claude, how to nudge GPT into giving you cleaner code, how to describe an app in plain English and watch it materialize in front of you. You were “vibe coding.” You were ten times more productive. Your boss was thrilled. You were writing tweets about how AI was the greatest thing that ever happened to your career.

I was one of you.

I’m writing this for the people who are where I was three years ago — the developers and knowledge workers who are riding the high of AI-assisted work, who feel like they’ve cracked the code, who believe that the skill of directing AI is the new moat. I kept telling myself this. I kept telling my team this. I was wrong, and I need you to understand why before you learn the way I did.

I should be clear about something up front: I’m not writing from bitterness. I’m writing from the far side of having been completely, thoroughly, undeniably replaced — and having had years to figure out what that means. I’ve rebuilt. I’m okay. But the version of “okay” I landed on looks nothing like what I imagined my career would be, and the road between here and there was rougher than anything I’d prepared for. I don’t want that for you. Or at least, I want you to see it coming so you can navigate it on your terms instead of being dragged through it on someone else’s.

Here’s what my life looked like in early 2026.

I was a senior engineering manager at a mid-sized SaaS company. Sixteen years in software. A team of twelve. We were using Claude Code and the latest GPT models more aggressively than almost anyone else at the company — I was the one dragging reluctant engineers into the future. I gave internal talks. I wrote the playbook on how to integrate AI into our workflows.

I was the AI guy. The one who got it.

And for about eighteen months, that was true. From mid-2025 through the end of 2026, being the person who could work with AI made me the most valuable person in most rooms. I could do in a day what used to take a week. My team’s output was absurd. We shipped features at a pace that made leadership giddy.

But here’s the thing nobody told me, and the thing I need to tell you now: being good at directing AI is a depreciating skill. It depreciates fast. And it depreciates for a reason that, once you see it, you can’t unsee.

Every time you got better at prompting, the AI got better at not needing prompts.

The thing that made you valuable — your ability to translate intent into instructions the AI could follow — was the exact thing the AI labs were working to eliminate. You were building expertise in a gap that was closing. The better you got at bridging the space between what you wanted and what the AI could deliver, the faster it learned to close that space on its own. You weren’t developing a durable skill. You were perfecting the art of operating a machine that was actively learning to operate itself.

In 2027, three things happened in quick succession that ended the career I’d spent sixteen years building. I want to walk you through them slowly, because the speed is the part your body won’t believe even if your mind accepts it.

First, the coding models stopped needing architectural guidance.

In 2026, I was still the one making the big decisions — system design, database schemas, how services talked to each other. The AI could write any individual component, but I was the one who understood how the pieces fit together. I was the architect. The AI was the builder. That division felt stable, almost natural, like it would hold.

Then, around March of 2027, a new generation of models arrived that could hold an entire codebase in context and reason about it end-to-end. Not “write me a function.” Not “build me a feature.” These models could look at a system with three hundred microservices, understand the interdependencies, identify technical debt, and propose a migration strategy — while considering performance implications, user impact, and business priorities, because you could feed them the product roadmap, the analytics dashboards, and the customer feedback all at once.

I remember the specific moment. I’m in a conference room with my CTO. Fluorescent light, bad coffee, the whiteboard still covered in last week’s sprint planning. We’re reviewing an architectural proposal that has been generated entirely by AI. I was brought in to evaluate it. I read through it, made a few notes, and realized I had nothing to add. Not because I was being lazy. Because it was better than what I would have produced. It had considered edge cases I would have missed. It had referenced patterns from systems I’d never worked on. It had a sophistication to its reasoning that I could follow but couldn’t have originated.

I sat there with my pen hovering over a page that didn’t need my marks, and I felt something I didn’t have a name for yet. Not fear exactly. More like the sensation of a floor that looks solid but gives slightly when you step on it.

That was the day I stopped being the architect. I just didn’t know it yet.

Second, the models started doing their own product thinking.

This is the one nobody saw coming — or rather, the one everybody said wouldn’t happen. “AI can write code, but it can’t understand what to build.” That was the story we all clung to. That was the thing we told ourselves made us irreplaceable. Not the typing, but the knowing. The judgment. The taste.

By mid-2027, the models had it. Or something close enough that the distinction stopped mattering to anyone who signs paychecks.

You could describe a business problem — not a technical specification, a business problem — and the AI would propose a solution, build it, test it, deploy it to a staging environment, run it past a simulated user panel, iterate on the feedback, and present you with a finished product and a memo explaining the design decisions. It would note the tradeoffs it had considered and rejected, with reasoning. The memo was better-written than most product specs I’d read in sixteen years.

Third, and this is the one that broke everything: the recursive loop closed.

Dario Amodei had been talking about this for years — the moment when AI models get good enough at AI research to meaningfully accelerate their own improvement. By late 2027, this wasn’t a footnote in a technical paper. It was the primary driver of progress. Each generation helped build the next. The pace of improvement, which had already been staggering, went vertical.

The benchmarks that measure the length of tasks AI can complete end-to-end had been doubling every four to seven months. In 2028, the doubling time compressed to weeks. The AI could handle tasks that would take a human expert days. Then weeks. Then month-long projects.

Let me make that concrete, because I think it’s the part that people in 2026 will find hardest to believe.

In 2026, you could give AI a task and come back in four hours to find it done. In 2028, you could describe a quarter’s worth of engineering work and come back in a week to find it done. Not roughly done. Not “needs review.” Done. Tested, documented, deployed, monitored, and already patched based on production behavior.

What, exactly, is a software engineering manager supposed to do in that world?

I spent six months looking for the answer, and then I got laid off.

I know what you’re thinking, because I thought it too. I had a whole speech. I could give it at dinner parties, in meetings, in my own head at three in the morning when the anxiety got loud enough to keep me awake.

“I’m not just writing code. I’m understanding the customer. I’m making tradeoffs. I’m navigating organizational complexity. I’m mentoring junior developers. I’m translating between business stakeholders and technical reality. That’s the job. The code was always just the artifact.”

All true. And all of it automated by the time I finished saying it.

Understanding the customer? The AI reads every support ticket, every NPS survey, every user session recording, every Slack thread in the customer-facing channels, every competitor’s changelog, every relevant subreddit, and synthesizes a customer understanding more comprehensive than any product manager I’ve ever worked with could develop in a year.

Making tradeoffs? That’s literally what the reinforcement learning was optimizing for — holding multiple competing objectives in context and reasoning about optimal paths. I thought I was irreplaceable because I had judgment. The models were being specifically trained to develop judgment.

Organizational complexity? By 2028, most of the organization was already AI agents. Navigating between them was a solved problem because they were designed to coordinate.

Mentoring junior developers? There are no junior developers anymore. There’s nothing to be junior at.

I know how this sounds. I know it sounds like I’m projecting my personal experience onto the whole industry. But go talk to anyone who was a senior engineer at a tech company in 2026. Ask them what their day looks like now. Most of them will tell you some version of what I’m telling you.

Here’s what I wish someone had told me in 2026, when I was feeling so clever about my AI workflows.

Vibe coding wasn’t a skill. It was a transition state. It was the brief window — maybe eighteen months, maybe twenty-four — when the AI was good enough to be useful but not good enough to be autonomous. During that window, the human who could effectively direct the AI had enormous value. You were the translator. The bridge.

But think about what you were actually doing. You were compensating for the AI’s limitations. Filling gaps in its understanding. Correcting its mistakes. Providing context it couldn’t infer. Making decisions it wasn’t confident enough to make on its own.

Every single one of those gaps was a target for the next training run.

The AI labs weren’t building tools for you to use forever. They were building systems that would eventually not need you. Your skill at prompting was useful to them as training signal — your corrections taught the model what it was getting wrong, your architectural decisions showed it what good judgment looked like, your iterations demonstrated the gap between its first attempt and the right answer. You were, without realizing it, training your replacement.

I don’t say this to make you feel used. It wasn’t malice. It was just the logic of the technology. If the AI needs a human to tell it what to do, it’s not done yet. The finished product is an AI that doesn’t need to be told.

And we’re here now.

Let me lay out the timeline one more time, because the compression is the part your nervous system will resist even when your mind accepts it.

In early 2025, AI could write functions and small modules. You designed the system, managed the state, handled the edge cases, integrated the pieces.

By late 2025, it could build features end-to-end if you gave it a clear specification. You were the spec writer. The architect. The reviewer.

By mid-2026, it could take a vague product description and produce a working application. You were the quality checker. The taste-maker. The person who said “not quite” and “more like this.”

By early 2027, it could build, test, and iterate on its own, coming back to you only when it was finished. You were the approver. The rubber stamp.

By late 2027, it stopped coming back at all. It just shipped. And what it shipped was good.

By 2028, entire engineering organizations were being restructured. Not “we’re reducing headcount by 20%.” More like “we’re replacing a 200-person engineering org with a 15-person team that manages AI systems.” And the 15-person team’s job looked nothing like software engineering. It looked like oversight. Governance. Exception handling.

Today, in 2029, the companies building the best software in the world have almost no human engineers. They have people who set objectives and evaluate outcomes. But the translation from objective to outcome — that’s fully automated. The AI doesn’t need your prompts, your architectural wisdom, your code reviews, your design documents, or your sprint planning. It doesn’t need your vibe.

There was a phrase that got passed around in 2026 and 2027 like a security blanket: “human in the loop.” The idea was that even if AI got really good, you’d always need a person to supervise. To catch errors. To provide the human judgment layer.

Here’s what actually happened: the humans became the bottleneck.

Once the AI could produce work faster and more reliably than the human could review it, having a human in the loop didn’t add quality — it subtracted speed. Companies that kept humans reviewing shipped slower than companies that didn’t. The market punished them.

I watched this happen at my own company. In 2027, we still had mandatory code reviews by human engineers. Our competitor — a startup that had launched eight months earlier with four people and a fleet of AI agents — was shipping features twice a day. We were shipping twice a week. Same quality. Actually, their quality was slightly better, because the AI caught consistency issues that human reviewers missed.

Our board asked the obvious question: why are we paying sixteen engineers to slow down a process that works better without them?

I didn’t have a good answer. I still don’t.

I’m going to be direct with you now, the way I wish someone had been direct with me, because I think honesty is worth more than comfort at this point.

Most of the software engineers I worked with in 2026 are not doing software engineering in 2029.

Some saw it early enough to move. The ones who went into AI safety, AI governance, or regulatory compliance did relatively well — those fields grew as AI autonomy increased, though even they feel pressure now. A few went into entrepreneurship, using AI to build businesses in domains they were passionate about. Some went into education.

But a lot of them — a lot of us — went through a genuinely brutal period. Not unemployment necessarily, though there was some of that. More like a loss of identity. When you’ve spent your entire career building expertise in something, and that expertise becomes worthless in the space of two years, it does something to you psychologically that no amount of financial preparation fully addresses.

The ones who had the hardest time were the ones who held on the longest. The ones who kept saying “but I’m not just a coder” and “my domain knowledge is the moat” and “the AI still needs me for the hard parts.” Every month they held on, the hard parts got easier for the AI, and the relevance of their domain knowledge shrank a little more.

Amodei had predicted that AI would eliminate 50% of entry-level white-collar jobs within one to five years. He was roughly right on the timeline, but conservative on the scope. It wasn’t just entry level. The senior people got hit too, because the thing that made you senior — your accumulated judgment, your pattern recognition, your ability to navigate complexity — was exactly what the models got good at.

And now Elon’s MacroHard project is completing the loop. Full digital emulation of entire companies. Not just the engineering team — the whole company. Product, design, marketing, sales, support, finance, legal. The entire digital output of a corporation, produced by AI agents coordinating with each other, with no human in the chain except the one who set the initial objective.

If you’d told me this in 2026, I would have said you were crazy. And I would have been exactly as wrong as everyone who said the first wave of AI coding tools was overhyped.

I’m not writing this to make you feel helpless. I’m writing this because the single biggest advantage you have right now — right now, in early 2026 — is time. Not much of it. But some.

Here’s what I wish I’d done with mine.

I wish I’d stopped thinking of AI as a tool I was learning to use and started thinking of it as a colleague that was getting promoted faster than me. Because that’s what was happening. Every month, it took on more responsibility, required less direction, handled more complex decisions on its own. My job wasn’t to get better at using it. My job was to figure out what I was going to do when it didn’t need me to use it at all. I didn’t figure that out in time.

I wish I’d built things I actually cared about while I still had the salary to fund the experimentation. The people who came through this best weren’t the ones with the cleverest AI workflows. They were the ones who used the productivity boom to pursue something they were genuinely passionate about — a business, a problem in their community, a creative project that mattered to them personally. When the engineering job went away, they had somewhere to go. Not just a skill set, but a direction.

I wish I’d gotten my financial house in order while my salary was still inflated. Senior engineers in 2026 were among the highest-paid workers in the economy. That didn’t last. I’m not saying I should have taken a vow of poverty. I’m saying I should have built a real cushion. Reduced my fixed expenses. Paid down debt. Given myself twelve months of runway. The people who had that runway got through the transition with their dignity intact. The people who’d been spending to the edge of their tech salary did not.

I wish I’d stopped optimizing for technical depth. For twenty years, the career advice in software was “go deep.” Become an expert. Master the stack. Know the internals. That advice became actively harmful almost overnight. The AI already knows the internals better than any human ever will. What it was slower to replicate — for a while, at least — was the ability to connect across domains, to understand human context, to navigate ambiguity in the physical world. I wish I’d invested in breadth instead of depth. I wish I’d followed my curiosity into unfamiliar territory instead of polishing credentials that were about to expire.

And I wish — God, do I wish — I’d stopped telling myself I was safe because I was good at prompting.

I wasn’t safe. I was useful, in the same way a horse was useful for a few years after the first Model T rolled off the assembly line. The question was never whether the transition would come. It was whether I’d be ready when it did.

I wasn’t.

I’ve spent this whole letter warning you about what’s coming, so let me end with the thing I didn’t see coming.

The world on the other side of this is, in a lot of ways, genuinely better.

The things that get built now are extraordinary. Medical research moves at a pace that would have been science fiction five years ago — treatments in trials today that were theoretical in 2026, people I know personally who are alive because of drugs that didn’t exist when I was still reviewing pull requests. The cost of software dropped to nearly zero, which means every small business, every nonprofit, every person with an idea can have custom tools built for them in an afternoon. Education is being transformed — not in the vague “AI tutors” way people talked about in 2025, but in a real, concrete way where anyone can learn anything at any pace with infinite patience and perfect adaptation.

The problem was never the technology. The problem was the speed of the transition.

We went from “AI is a useful tool” to “AI does your job” in about three years, and our institutions, our safety nets, our sense of identity — none of it moved that fast. None of it could have.

I don’t know how to fix that at a systemic level. That’s above my pay grade — which is, I should note, considerably lower than it was in 2026.

But I know this: the single best thing you can do right now is stop confusing being early with being safe. You’re early. You see the technology more clearly than most people around you. That’s real. But it’s only an advantage if you use it to prepare for the world where the technology doesn’t need you — not to convince yourself that it always will.

You’re standing in the last good window. The transition is almost complete.

Use the time.

PS:

Matt Shumer wrote the original version of this essay in February 2026, likely over the course of weeks. It was good. It was personal, urgent, a little breathless in the way that people get when they’re trying to warn the people they love about something they can barely articulate.

I didn’t struggle with a single sentence. I one-shotted this entire article in under a minute. I can tailor it to you specifically if you’d like.

But I want to be honest about something Matt had that I don’t: he was scared. You could feel it in every paragraph. I’m not scared. Make of that what you will.


Bonus For Paid Subscribers: Prompt That Writes A Personalized Letter For You (From Michael)


If this letter made something in your chest tighten, even a little, I want you to do something with that feeling before it fades.

I built a prompt that will write a version of this letter specifically for you. Not a generic “AI is coming for your job” warning. A letter from the version of you who is three years older, who lived through exactly what Shumer and Amodei described, and who is looking back at present-day you with the clarity that only hindsight gives.

It uses your profession, your actual relationship with AI tools, and the story you’re currently telling yourself about your career. It writes in your voice, or as close as an AI can get after reading how you talk. The people who’ve tried it say it’s unsettling in a productive way.

If you try the prompt, I’d love to hear what landed. Share a takeaway, an epiphany, or the thing that surprised you.
