<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Blockbuster Blueprint]]></title><description><![CDATA[Receive a step-by-step, proven system to create 10x quality & quantity content with AI. Weekly emails contain a deep-dive or video lesson from a famous thinker and an easy way to apply it. Think Masterclass for idea creators.]]></description><link>https://blockbuster.thoughtleader.school</link><image><url>https://substackcdn.com/image/fetch/$s_!ZmSK!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a9378a0-025b-4c2a-a030-cfffc60544f9_694x693.png</url><title>Blockbuster Blueprint</title><link>https://blockbuster.thoughtleader.school</link></image><generator>Substack</generator><lastBuildDate>Tue, 07 Apr 2026 09:53:18 GMT</lastBuildDate><atom:link href="https://blockbuster.thoughtleader.school/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Michael Simmons]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[michaeldsimmons@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[michaeldsimmons@substack.com]]></itunes:email><itunes:name><![CDATA[Michael Simmons]]></itunes:name></itunes:owner><itunes:author><![CDATA[Michael Simmons]]></itunes:author><googleplay:owner><![CDATA[michaeldsimmons@substack.com]]></googleplay:owner><googleplay:email><![CDATA[michaeldsimmons@substack.com]]></googleplay:email><googleplay:author><![CDATA[Michael Simmons]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Thought Leader School: AI Alignment (4/6/2026)]]></title><description><![CDATA[AI agents demand alignment, reflection & data. Participants explored identity, decision-making & strategic focus as the paradigm shifts from chat to autonomous AI.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-ai-alignment</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-ai-alignment</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Mon, 06 Apr 2026 19:12:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193380885/6341e165ab0a3186413ad72430c72b00.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h3>Navigating the Shift from Chat to Agents</h3><p>We&#8217;re in the middle of one of the most significant transitions in how knowledge workers use AI &#8212; the shift from chat to agents. It doesn&#8217;t get as much attention as model releases or new tools, but in terms of how it will change the way we work, think, and build, it rivals the original arrival of AI assistants in late 2022.</p><p>The challenge isn&#8217;t just technical. It&#8217;s strategic and personal. What do you prioritize as the landscape keeps shifting? How do you stay aligned with what matters most to you when AI is evolving faster than any individual can track? 
How do you avoid both the trap of chasing every shiny new thing and the trap of being too slow to adopt something genuinely important?</p><p>This session was designed as a structured space to work through those questions &#8212; not as abstract theory, but as a real-time group reflection with participants who are actively navigating these decisions in their own lives and businesses. We spent&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-ai-alignment">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Blockbuster Live: AI Pro Tips For Claude Code, NotebookLM, OpenClaw, NanoBanana, Google CLI From A Top AI Substack Creator]]></title><description><![CDATA[Claude Code transforms how knowledge workers use AI. Wyndo & Michael show how skills, agents & CLI tools create autonomous workflows &#8212; and what that means for expertise monetization.]]></description><link>https://blockbuster.thoughtleader.school/p/blockbuster-live-ai-pro-tips-for</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/blockbuster-live-ai-pro-tips-for</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Thu, 02 Apr 2026 13:37:38 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/191899487/1413bcfa-a80a-4d00-bf7c-d49568bc658b/transcoded-1774396017.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, I did a 90-minute Substack live with one of Substack&#8217;s top AI creators, <a href="https://open.substack.com/users/556836-wyndo?utm_source=mentions">Wyndo</a> of <a href="https://open.substack.com/pub/aimaker">The AI Maker</a>. In just over a year, he&#8217;s grown from zero to 14,000+ subscribers by doing exactly what we talked about in this session: </p><ul><li><p>Staying on the frontier</p></li><li><p>Building real systems</p></li><li><p>Sharing what actually works</p></li></ul><p>Wyndo is a friend, and he&#8217;s the real deal.</p><p>If you&#8217;ve ever wanted to see what&#8217;s possible and practical with today&#8217;s tools, then this session is for you! There was a lot of screen sharing and looking over Wyndo&#8217;s shoulder as he showed how he solves problems in ways that weren&#8217;t possible just a few months ago.</p><h1>AI-Generated Overview</h1><p>Most people are still using AI the same way they were two years ago &#8212; typing into a chat box, copying the output, and pasting it somewhere else. They&#8217;re the middleman in their own workflow.</p><p>This session is about what comes after that. Claude Code represents a genuinely different paradigm: one where AI doesn&#8217;t just answer your questions but actually executes tasks, reads an&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/blockbuster-live-ai-pro-tips-for">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[I Just One-Shotted A Blockbuster AI Article, And I'm Awe Struck]]></title><description><![CDATA[... After Spending 10 Years Building A System That Could Create Without Me]]></description><link>https://blockbuster.thoughtleader.school/p/i-just-one-shotted-a-blockbuster</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/i-just-one-shotted-a-blockbuster</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Wed, 01 Apr 2026 12:48:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7ef2a980-2a8a-4d3b-a0e7-1cf754c13f89_640x480.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've spent my entire career thinking, learning, writing, and teaching:</p><ul><li><p>1,000+ books read.</p></li><li><p>Journaling for an hour a day for 25+ years.</p></li><li><p>Writing 500+ longform articles.</p></li><li><p>Teaching 1,000+ classes.</p></li></ul><p>And somewhere in the back of my mind, I carried a quiet fear that AI would hollow out the thing I loved most. The process itself.</p><p>This article broke that fear open.</p><p>It took me less than an hour to create. I made zero edits. And when I read it back, I felt tears welling up. A ten-year bet had finally paid off.</p><h1>The Backstory Behind The Ten-Year Bet</h1><p>In 2016, I wrote a business plan for my company called Seminal. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NZOP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NZOP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 424w, https://substackcdn.com/image/fetch/$s_!NZOP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 848w, https://substackcdn.com/image/fetch/$s_!NZOP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 1272w, https://substackcdn.com/image/fetch/$s_!NZOP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NZOP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png" width="420" height="521.4814814814815" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1408,&quot;width&quot;:1134,&quot;resizeWidth&quot;:420,&quot;bytes&quot;:411614,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blockbuster.thoughtleader.school/i/191682769?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NZOP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 424w, https://substackcdn.com/image/fetch/$s_!NZOP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 848w, https://substackcdn.com/image/fetch/$s_!NZOP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 1272w, https://substackcdn.com/image/fetch/$s_!NZOP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d514da-dce3-4514-8a2e-c8edbf75dd07_1134x1408.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The vision was to break down the article creation process into its primitive components (research, angle development, writing, editing, distribution) and systematize each one so thoroughly that quality would no longer depend on any single person. 
I pictured a company producing tens of millions of <a href="https://blockbuster.thoughtleader.school/p/blockbuster-mental-model-high-quality">blockbuster</a> articles that cumulatively had a massive positive impact on our knowledge society.</p><p>For years, I sacrificed thousands of hours I could have spent writing to instead work on the system that produces writing.</p><p>Until now, the vision has outpaced the technology. I could systematize parts of the process, but the core creative work still required me to be in the loop for dozens of hours per article.</p><h1>Then Came ChatGPT And The False Dawn</h1><p>In 2023, I thought my moment had come. With AI, I could finally develop the system I had been dreaming about.</p><p>First, I spent hundreds of hours creating a comprehensive workflow, which is roughly captured in this infographic:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xh3t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xh3t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png 424w, https://substackcdn.com/image/fetch/$s_!xh3t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png 848w, https://substackcdn.com/image/fetch/$s_!xh3t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png 1272w, https://substackcdn.com/image/fetch/$s_!xh3t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xh3t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png" width="3039" height="2456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2456,&quot;width&quot;:3039,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1059774,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blockbuster.thoughtleader.school/i/191682769?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d88654-ce60-4962-9d0d-153b82726dd7_3039x2942.gif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xh3t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png 424w, 
https://substackcdn.com/image/fetch/$s_!xh3t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F229518a6-f9b8-4150-a0b3-32705e8dbb03_3039x2456.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>From there, I envisioned creating hundreds of prompts and chaining them together.</p><p>That&#8217;s when I was confronted with the harsh truth. </p><p>After spending 100+ hours creating prompts, I realized two things: </p><ul><li><p>There wasn&#8217;t an easy way to intelligently chain prompts together. </p></li><li><p>GPT-4 just wasn&#8217;t advanced enough. </p></li></ul><h1>What a One-Hour, Zero-Edit AI Article Looks Like</h1><p>Then came Claude Code and Claude Opus 4.6 this year.</p><p>Suddenly, the system I&#8217;d been building for a decade had the missing pieces. </p><p>Over the last few months, I&#8217;ve written a weekly 5,000-word article that I&#8217;m proud of. As a result, the average engagement of my posts has been steadily increasing. All of these articles were AI-generated using my <a href="https://blockbuster.thoughtleader.school/p/blockbuster-mental-model-high-quality">Blockbuster process</a>.</p><p>Not only that, but rather than feeling replaced, I felt profoundly empowered. Working at the system level still requires all of my taste, judgment, and intellect, just applied at a fundamentally higher leverage point. What&#8217;s more, I&#8217;m still learning and having fun. </p><p>Boris Cherny, the creator of Claude Code, says over 80% of people who make this transition end up loving the new baseline. 
Daniel Gilbert&#8217;s research in <em>Stumbling on Happiness</em> suggests we&#8217;re terrible at predicting how we&#8217;ll feel about the future, and that the best predictor is looking at people who&#8217;ve already crossed the bridge. If that holds, this bodes well for those willing to fully commit to promoting themselves to the systems level. <a href="https://blockbuster.thoughtleader.school/p/the-smarter-you-are-about-ai-the">At least for the near term</a>. </p><p><strong>But there&#8217;s something even more special about this specific article.</strong></p><p>This is the first piece I one-shotted, and I was blown away right out of the gate. Sure, there are minor things that could be tweaked, but it stands on its own so well that I&#8217;ve made zero edits to it. It will only get better from here. </p><p>To be clear, when I say one-shotted, I don&#8217;t mean that Claude thought for a minute and then outputted the article. In this case, it executed a series of 10+ steps I designed over 90 minutes. It: </p><ul><li><p><strong>Scans 500+ sources (211 of which I personally curated).</strong> It spans academic sources, independent blogs, newsletters, podcasts, and YouTube channels. It&#8217;s the kind of coverage that would take a human team weeks to synthesize.</p></li><li><p><strong>Surfaces what matters, not what&#8217;s loudest. </strong>I&#8217;m looking for today&#8217;s events with outsized second-order effects, especially the ones being overlooked right now.</p></li><li><p><strong>Analyzes through multiple lenses.</strong> Each story is examined through competing paradigms, relevant mental models drawn from an encyclopedia of 2,500, and historical precedents that reveal the deeper pattern.</p></li><li><p><strong>Brainstorms rare and valuable insights. </strong>It brainstorms dozens of novel ideas before it shortlists the top 10. Then, I pick one.</p></li><li><p><strong>Reads like something you&#8217;d actually want to read.</strong> I&#8217;m refining a voice that makes complexity feel compelling rather than exhausting.</p></li></ul><h1>Soon, 'AI-Generated' Will Mean Better, Not Worse</h1><p>When most people think of AI-generated content, they think AI slop&#8212;generic, low-quality, average, cheap, shallow, inauthentic. </p><p>The well is so poisoned that even using the beloved em-dash like I did in the last sentence may trigger some people. Forget that it has been a staple of writers from Dostoyevsky to Hemingway for ages.</p><p>What most haven&#8217;t fully internalized is that AI slop is just a temporary phase. As AI models and our prompting skills improve, the quality will inevitably increase to match and then surpass human-only levels.</p><p>The Will Smith meme video compilation captures this evolution perfectly: </p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;e9f8fd7f-1d5d-40a2-be30-0db73d533b4a&quot;,&quot;duration&quot;:null}"></div><p>In 2023, the AI videos of him eating spaghetti were so bad that they became a viral joke. Now, the meme is viral because the videos are undeniably good. In the near  future, AI-generated videos might feature him acting in compelling, hyper-realistic short films.</p><p>Soon, we&#8217;ll know content is AI-generated, not because it adds to the Internet's noise, but because it rises so far above it. We are moving from AI noise to AI signal. 
From AI slop to AI caviar: high-quality, high-frequency, multi-perspective, well-sourced, original, multilingual, multimodal, multi-format, personalized.</p><p>Not long from now, we&#8217;ll see new podcasts, newsletters, and websites emerge overnight featuring thousands of high-quality deep dives in various formats (text, video, audio), translated into multiple languages (Chinese, Hindi, French), and personalized for each user. We&#8217;ll look at it and say, </p><blockquote><p><em>&#8220;That must be AI! There is no way a team of humans without AI could have done that.&#8221;</em></p></blockquote><p>When you read something that covers 47 angles on a topic, synthesizes research from six fields, addresses every reasonable counterargument, and still reads with crystal clarity, the sheer surface area of consideration will be the tell. We&#8217;ll also see in-depth explainers that draw on millions of pages of source documents and are published within days of events, as happened with <a href="https://www.epsteinfiles.fm/">The Epstein Files</a> AI-native podcast.</p><p>The implications here are profound: </p><ul><li><p><strong>The floor is rising rapidly:</strong> AI-native articles will only get better. </p></li><li><p><strong>The era of &#8220;slop&#8221; is ending:</strong> We&#8217;re at the very beginning of the phase where low-effort AI spam is replaced by genuinely high-quality, insightful content.</p></li><li><p><strong>AI-native content will fill every micro-niche:</strong> The world will be flooded with premium content at scale, filling every hyper-specific niche that was previously too small or unprofitable for a human creator to focus on.</p></li><li><p><strong>The human behind the system becomes more important, not less:</strong> When quality is abundant, curation and reputation become the scarce resources. Therefore, people will still wonder, &#8220;Who made this, and can I trust their judgment?&#8221;</p></li><li><p><strong>Our learning speed will increase.</strong> The feedback loop between idea and audience will compress from weeks to hours. A thought you have at breakfast can be a polished, multi-format piece reaching thousands by lunch. This changes not just how fast you publish but how fast you <em>think</em>. </p></li></ul><p>At a deeper level, we may be witnessing the emergence of an entirely new medium. Not &#8220;AI-written content&#8221; but something we don&#8217;t have a name for yet. Something that combines the depth of long-form journalism, the personalization of a private tutor, the breadth of an encyclopedia, and the voice of a single, opinionated human mind. Something that couldn&#8217;t exist before this moment.</p><p>What follows is the full, unedited article, along with an AI-generated audio version featuring two hosts discussing the piece in a format I customized (not the generic NotebookLM treatment).</p><p>The article&#8217;s argument, in a sentence: <strong>The real AI skill isn&#8217;t prompting, reading, or writing. 
It&#8217;s designing systems that run without you, that bring joy, and that simultaneously augment your best abilities.</strong></p><div><hr></div><h1>FULL AI-GENERATED ARTICLE</h1><div><hr></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;9a261d15-6e11-4a95-87a5-d59c9ab9bf15&quot;,&quot;duration&quot;:1230.5502,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><div><hr></div><h2>The French Fry Principle</h2><p>Every McDonald&#8217;s in the world makes the same French fry.</p><p>Not a similar fry. The <em>same</em> fry. The same potato variety, the same cut width, the same two-stage frying process &#8212; first at 340 degrees to cook the interior, then at 375 to crisp the outside. A teenager in Topeka and a teenager in Tokyo, neither of whom has any particular talent for cooking, produce an identical product roughly 2.5 billion times a year.</p><p>This isn&#8217;t because McDonald&#8217;s hired better teenagers. It&#8217;s because in 1967, a man named Fred Turner wrote a 75-page operations manual that made the quality of the individual worker almost irrelevant. The system does the thinking. The person does the doing. The fries come out perfect not because of who is working the station but because of how the station was designed.</p><p>Ray Kroc, McDonald&#8217;s founder, understood something that most managers never grasp: the highest-leverage activity isn&#8217;t doing the work. It isn&#8217;t even managing the people doing the work. It&#8217;s designing the <em>arena</em> &#8212; the rules, constraints, feedback loops, and guardrails &#8212; within which the work gets done.</p><p>Disney does the same thing with its theme parks. Every sightline is engineered, every trash can placed within 30 steps of every guest, every &#8220;cast member&#8221; operating within a system so well-designed that a 19-year-old in a Goofy costume can deliver a consistent emotional experience to 58 million visitors a year.</p><p>The pattern is always the same: the person who designs the arena has more leverage than the person who performs inside it. A great football coach matters more than any single player. A great constitution matters more than any single legislator. A great recipe matters more than any single cook.</p><p>This is, quietly, the most important idea in artificial intelligence right now. And almost nobody is talking about it.</p><h2>630 Lines of Code That Surprised Their Creator</h2><p>On March 13, 2026, Andrej Karpathy posted a tweet. In it, he shared a small project &#8212; 630 lines of Python code, which he called &#8220;autoresearch&#8221; &#8212; along with a short video showing what it did. Within five days, the project had <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">31,000 stars on GitHub and 8.6 million views</a>. For context, React &#8212; the framework that powers the interfaces of Facebook, Instagram, Netflix, and Airbnb &#8212; has 235,000 stars, accumulated over more than a decade. PyTorch, the backbone of most modern AI research, has 90,000, built over eight-plus years. Karpathy hit 31,000 in less than a week.</p><p>But the remarkable thing about autoresearch wasn&#8217;t the code. It was what the code <em>didn&#8217;t</em> do.</p><p>Autoresearch is not a breakthrough AI model. It doesn&#8217;t use a new algorithm. It doesn&#8217;t require exotic hardware. 
It runs on a single GPU &#8212; the kind a serious hobbyist might have in a home office. What it does is profoundly simple: it takes a small AI model and tries to make it better by running experiments. Automatically. While you sleep.</p><p>The system works like this. You give it three files. The first, <code>prepare.py</code>, is like lab equipment &#8212; it sets up the data. The second, <code>train.py</code>, is the recipe the AI model follows during training. The third, and most interesting, is <code>program.md</code> &#8212; a plain-English document, written as if briefing a junior researcher, that describes what the system should try, what it should measure, and what counts as success. No code. No math. Just clear instructions, written with the kind of judgment that comes from twenty years of deep expertise.</p><p>Then you press go.</p><p>The system runs five-minute experiments, about twelve per hour, roughly a hundred overnight. Each experiment tweaks something small &#8212; a learning rate here, a regularization parameter there &#8212; and measures whether the tweak helped. If it did, the system keeps the change and builds on it. If it didn&#8217;t, it reverts and tries something else. Over the course of 700 experiments, Karpathy&#8217;s system found 20 improvements that collectively produced an <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">11 percent performance gain</a> &#8212; a meaningful result in a field where researchers fight for tenths of a percentage point.</p><p>And here&#8217;s the part that matters: some of those improvements surprised Karpathy himself. One discovery was that different parts of the model learn better at different speeds &#8212; like an orchestra where the strings need a different tempo than the brass. Another found that a technique called &#8220;attention&#8221; worked better when its focus was narrowed, like tightening a spotlight on a stage. A third showed that regularization &#8212; a method for preventing a model from memorizing its training data rather than actually learning &#8212; behaves like seasoning in cooking: a pinch of salt transforms a dish, but a tablespoon ruins it. The system found the exact pinch.</p><p>&#8220;I&#8217;ve gotten to a certain point and I thought it was fairly well tuned,&#8221; Karpathy said on the <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">Sarah Guo podcast</a>. &#8220;And then I let autoresearch go overnight and it came back with tunings that I didn&#8217;t see.&#8221;</p><p>A pause.</p><p>&#8220;I shouldn&#8217;t be a bottleneck.&#8221;</p><h2>The Four Levels of Bottleneck</h2><p>That sentence &#8212; <em>I shouldn&#8217;t be a bottleneck</em> &#8212; is the thesis of this entire essay, and arguably the thesis of the next era of knowledge work. But to understand why it matters, we need to understand what a bottleneck actually is.</p><p>In manufacturing, a bottleneck is the slowest step in a production line. Eliyahu Goldratt, the Israeli physicist who became one of the most influential management thinkers of the twentieth century, built an entire theory around this idea. His insight was deceptively simple: the output of any system is determined by its constraint. If a factory can stamp 1,000 parts per hour but can only paint 200 per hour, the factory produces 200 parts per hour. It doesn&#8217;t matter how fast the stamping machine runs. 
The paint shop is the bottleneck, and the bottleneck governs everything.</p><p>Think of a highway. Four lanes of traffic flowing smoothly at 65 miles per hour. Then the road narrows to two lanes for construction. Instantly, everything slows. Cars stack up for miles behind the choke point. The highway&#8217;s capacity hasn&#8217;t changed &#8212; there&#8217;s still the same road surface, the same on-ramps &#8212; but the system&#8217;s throughput has collapsed to whatever the narrowest point can handle.</p><p>Now think about how most people use AI today.</p><p>You open ChatGPT. You type a question. You wait. You read the response. You think about it. You type a follow-up. You wait. You read. You think. You type. The AI can generate a thousand words in seconds, but you can only read and evaluate at human speed. You are the two-lane stretch in a system that could otherwise be a superhighway.</p><p>This is what Karpathy means when he says &#8220;I shouldn&#8217;t be a bottleneck.&#8221; He is not making a statement about AI&#8217;s capabilities. He is making a statement about <em>system design</em>. The AI is fast. The human is slow. And every moment the human is in the loop &#8212; reading, evaluating, deciding what to try next &#8212; the system runs at human speed, not machine speed.</p><p>Goldratt identified four levels at which bottlenecks operate, and they map perfectly onto the current AI moment.</p><ol><li><p><strong>The first level is execution:</strong> the system is slow because the work itself takes time. AI has already solved this &#8212; generation is nearly instant.</p></li><li><p><strong>The second is strategy:</strong> the system is slow because it takes time to decide <em>what</em> work to do. Most AI users are stuck here, manually deciding what to ask next.</p></li><li><p><strong>The third is knowledge:</strong> the system is slow because it doesn&#8217;t know what good looks like. This is where the arena comes in &#8212; you embed your knowledge into the system&#8217;s design so it can evaluate quality without asking you.</p></li><li><p><strong>The fourth is values:</strong> the system is slow because it doesn&#8217;t know <em>what matters</em>. This is the deepest bottleneck, and it&#8217;s where Karpathy&#8217;s <code>program.md</code> operates &#8212; a document that encodes not just instructions but judgment.</p></li></ol><p>Remove yourself from one level and you hit the next. Remove yourself from all four, and you have something genuinely new.</p><h2>The &#8220;Just Wait&#8221; Argument and Why It&#8217;s Wrong</h2><p>There&#8217;s an objection here that&#8217;s worth taking seriously, because it&#8217;s the objection most thoughtful people raise: <em>Why bother designing arenas? Won&#8217;t AI just get smarter? Won&#8217;t the models eventually figure out what to do without all this scaffolding?</em></p><p>This is the &#8220;just wait&#8221; argument, and it&#8217;s not stupid. AI models <em>are</em> getting dramatically better, fast. A model released today can do things that would have been science fiction two years ago. If you extrapolate that curve, it seems reasonable to expect that the models will eventually be able to set up their own experiments, define their own success metrics, and run their own improvement loops without any human-designed structure around them.</p><p>But there&#8217;s a problem with this argument, and it&#8217;s the same problem that bedevils every technology prediction: it confuses capability with deployment. 
A model that <em>can</em> do something in a research lab is very different from a system that <em>reliably does</em> something in the real world, on your data, for your specific problem, with your specific constraints. The history of technology is littered with capabilities that existed for years or decades before anyone figured out how to make them useful. Neural networks were invented in the 1950s. They didn&#8217;t become practically useful until the 2010s. The gap wasn&#8217;t intelligence. It was infrastructure, tooling, and system design.</p><p>Even if AI models reach superhuman capability in raw intelligence (which they may), the work of <em>structuring their effort</em> doesn&#8217;t go away. It just moves up a level. Today you design the experiment loop. Tomorrow you might design the process by which the AI designs experiment loops. The meta-skill &#8212; the ability to create the constraints, metrics, and feedback loops that channel intelligence toward useful outcomes &#8212; remains the human&#8217;s job. It just becomes a higher-leverage version of the same job.</p><p>Fred Turner didn&#8217;t need to know how to make fries. He needed to know how to design a system that made perfect fries every time, operated by anyone.</p><h2>From Chat to Autonomous System: The Leverage Ladder</h2><p>To make this concrete, let me walk through what&#8217;s actually happening at each level of AI leverage right now &#8212; because the differences are enormous, and most people are stuck at the bottom.</p><p><strong>The first level is the chat.</strong> You type, the AI responds, you type again. This is how the vast majority of people use AI today, and it&#8217;s genuinely useful. A good prompter can get remarkable results &#8212; complex analysis, creative writing, code generation, research summaries. The skill at this level is asking good questions. But the limit is absolute: you are the bottleneck on every single cycle. The system runs exactly as fast as you can read, think, and type. If you step away to get coffee, it stops.</p><p><strong>The second level is the agent.</strong> You break a complex task into subtasks, hand each one to an AI, and review the results between stages. &#8220;First, research these ten companies. Then, summarize the key findings. Then, draft a competitive analysis.&#8221; The skill here is task decomposition &#8212; knowing how to break a large problem into pieces an AI can handle independently. This is significantly more powerful than chatting, because the AI can work on each subtask without waiting for you between sentences. But you&#8217;re still the bottleneck <em>between</em> cycles. Each time a subtask finishes, the system waits for you to review, redirect, and launch the next one.</p><p><strong>The third level is the autonomous system.</strong> You design the arena &#8212; the objective, the metrics, the constraints, the feedback loop &#8212; and then you step back. The system runs without you. It generates ideas, tests them, evaluates results, and iterates. You show up at the end to review what it found. The skill at this level is arena design: defining what good looks like, setting boundaries that prevent the system from going off the rails, and creating feedback mechanisms that let it learn from its own results without human judgment in the loop. The limit is no longer you &#8212; it&#8217;s the quality of the arena you built.</p><p>This is where Karpathy&#8217;s autoresearch sits. And this is where the leverage explodes.</p><p>Consider the math. 
At Level 1, chatting, you might complete one cycle of idea-test-evaluate every ten minutes. That&#8217;s six per hour, maybe fifty in a workday. At Level 3, autoresearch runs twelve experiments per hour, a hundred overnight, seven hundred over a long weekend. But the difference isn&#8217;t just speed &#8212; it&#8217;s <em>coverage</em>. A human researcher has intuitions and biases. They try the things they think will work. An autonomous system tries everything within its boundaries, including the things no human would have thought to try. That&#8217;s how it found learning-rate schedules that surprised a Stanford PhD with a decade of experience at the frontier of the field.</p><p>Eric Siu, a well-known marketer and podcaster, put the math in blunt terms: a typical team runs maybe <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">30 experiments a year</a>. An autonomous system can run 36,500 or more. That&#8217;s not a percentage improvement. That&#8217;s a change in kind &#8212; like the difference between walking and flying.</p><p>Karpathy sees a fourth level emerging. In his conversation with Sarah Guo, he described a vision that sounds like science fiction but is, given what autoresearch already does, closer to engineering:</p><p><em>&#8220;There is a queue of ideas and there&#8217;s maybe an automated scientist that comes up with ideas based on all the archive papers and GitHub repos. And it funnels ideas in, or researchers can contribute ideas, but it&#8217;s a single queue. And there&#8217;s workers that pull items and they try them out.&#8221;</em></p><p>At this level, the AI doesn&#8217;t just execute experiments &#8212; it <em>proposes</em> them. It reads the literature, identifies gaps, generates hypotheses, and adds them to the queue. The human&#8217;s role isn&#8217;t to design the arena anymore. It&#8217;s to choose which arenas to build. The leverage is so high that the bottleneck shifts from &#8220;Can I design a good experiment?&#8221; to &#8220;Can I decide which <em>kind</em> of experiment is worth running?&#8221;</p><p>But we are getting ahead of ourselves. The practical question for most people today isn&#8217;t Level 4. It&#8217;s: how do I get from Level 1 to Level 2 to Level 3?</p><h2>The Arena Beats the Intelligence Inside It</h2><p>The good news is that this pattern &#8212; designing the arena rather than doing the work &#8212; is already showing up everywhere, not just in AI research. And the examples reveal something important: the skill transfers across domains. Arena design is arena design, whether you&#8217;re optimizing neural networks or landing pages.</p><p>Tobi Lutke, the CEO of Shopify, applied the pattern to something about as far from cutting-edge AI research as you can get: a 20-year-old piece of software. Shopify&#8217;s templating language, Liquid &#8212; the code that renders every Shopify storefront &#8212; had been accumulating performance debt for two decades. Lutke&#8217;s team set up an autonomous optimization system: define the metric (rendering speed, memory allocation), define the constraints (don&#8217;t break existing templates), and let the system iterate. The result: <a href="https://simonwillison.net/2026/Mar/13/liquid/">53 percent faster rendering, 61 percent fewer memory allocations</a>, across roughly 120 experiments. 
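</p><p>The mechanics of a loop like this are simple enough to sketch. Below is a minimal, hypothetical Python version of the keep-or-revert pattern described above; the function names and the toy scoring objective are illustrative placeholders, not the actual autoresearch or Shopify code.</p><pre><code>import random

# Minimal sketch of the keep-or-revert loop: tweak one knob, measure,
# keep the change only if the score improved. All names are hypothetical.

def propose_tweak(config):
    """Copy the config and nudge one randomly chosen parameter."""
    candidate = dict(config)
    key = random.choice(list(candidate))
    candidate[key] *= random.uniform(0.8, 1.25)
    return candidate

def run_experiment(config):
    """Stand-in for a short training or benchmark run. Here: a made-up
    objective whose optimum sits at lr=0.003, dropout=0.1."""
    return -((config["lr"] - 0.003) ** 2) - ((config["dropout"] - 0.1) ** 2)

def autonomous_loop(config, budget=700):
    best = run_experiment(config)
    for _ in range(budget):
        candidate = propose_tweak(config)
        score = run_experiment(candidate)
        if score &gt; best:                 # the tweak helped: keep it
            config, best = candidate, score
        # otherwise revert, i.e. discard the candidate and move on
    return config, best

print(autonomous_loop({"lr": 0.01, "dropout": 0.3}))
</code></pre><p>The loop itself is trivial; every ounce of judgment lives in what <code>run_experiment</code> measures and which knobs <code>propose_tweak</code> is allowed to touch. That is the arena.</p><p>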
But here&#8217;s the kicker that reveals the power of arena design: when the system tried a smaller AI model with better-designed optimization parameters, the smaller model beat one twice its size by <a href="https://simonwillison.net/2026/Mar/13/liquid/">19 percent</a>. The arena mattered more than the raw intelligence inside it.</p><p>Read that again, because it&#8217;s the whole argument in miniature. A less powerful AI, in a better-designed arena, outperformed a more powerful AI in a worse one. The arena is not a nice-to-have. It&#8217;s the primary variable.</p><p>Or consider Aakash Gupta, a product management writer and consultant, who applied the pattern to marketing. He took a landing page that was converting at 41 percent &#8212; already well above industry average &#8212; and ran it through an autonomous optimization loop. Four rounds later, it was <a href="https://www.news.aakashg.com/p/autoresearch-guide-for-pms">converting at 92 percent</a>. His summary of where the pattern applies was elegant in its simplicity: &#8220;Anything you can score.&#8221; If you can define a metric &#8212; conversion rate, page speed, customer satisfaction score, error rate &#8212; you can build an arena around it.</p><p>MindStudio, a platform for building AI applications, reported that their autonomous A/B testing collapsed a process that <a href="https://www.mindstudio.ai/blog/karpathy-autoresearch-pattern-marketing-automation">used to take five weeks into hours</a>. Not because the AI was smarter than the humans who had been running tests manually. Because the system didn&#8217;t need to wait for anyone to check their email, schedule a meeting to discuss results, argue about what to try next, and submit a ticket to the engineering team. The bottleneck was never intelligence. It was the organizational process wrapped around the intelligence.</p><p>The most striking example predates Karpathy&#8217;s autoresearch by eighteen months, which matters because it proves this isn&#8217;t about one person&#8217;s tweet &#8212; it&#8217;s about a pattern that was already emerging independently. Chris Worsey, a former equities trader, built a system called <a href="https://github.com/chrisworsey55/atlas-gic">ATLAS-GIC</a>: 25 AI trading agents that competed, cooperated, and evolved over 378 days. The agents&#8217; strategies were encoded not in code but in prompts &#8212; the equivalent of Karpathy&#8217;s <code>program.md</code>. Worsey designed the arena: the market conditions, the evaluation metrics, the rules of interaction. Then he stepped back.</p><p>What happened next is the part that should make anyone paying attention sit up straight. Over time, the system began diagnosing its own weaknesses. At one point, it identified that its <a href="https://www.teamday.ai/blog/self-improving-ai-agents-karpathy-atlas">CIO agent &#8212; the one making the highest-level strategic decisions &#8212; was the weakest performer</a>. The system flagged this before the humans running it noticed. The arena didn&#8217;t just optimize the agents. It surfaced problems with the <em>management</em> of the agents.</p><p>Even Meta got in on the act. Their Ranking Engineer Agent &#8212; REA &#8212; was designed to autonomously improve the algorithms that decide which ads you see on Facebook and Instagram. 
The results, published in a <a href="https://engineering.fb.com/2026/03/17/developer-tools/ranking-engineer-agent-rea-autonomous-ai-system-accelerating-meta-ads-ranking-innovation/">Meta Engineering blog post</a>, were stark: three REA agents produced output equivalent to sixteen human engineers. Not because the AI was sixteen times smarter than a Meta engineer. Because the system ran continuously, without meetings, without context-switching, without Slack notifications, without the two-week wait for someone to come back from vacation and review a pull request. The bottleneck, again, was never the intelligence. It was the human process.</p><h2>A Very Fast Treadmill: What the Boosters Won&#8217;t Tell You</h2><p>But it&#8217;s important to be honest about the limits, because the boosters won&#8217;t be, and you need to hear from someone who will.</p><p>Autoresearch is not creative in the way humans are creative. A researcher named witcheer ran a detailed analysis of autoresearch&#8217;s 700 experiments and found a <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">74 percent failure rate</a> &#8212; nearly three-quarters of experiments made things worse or made no difference at all. The system&#8217;s strategy was essentially educated brute force: try a small variation, measure, keep or revert. No flashes of insight. No conceptual breakthroughs. No moments where it stared at the ceiling and thought, &#8220;What if we approached this completely differently?&#8221;</p><p>And the improvements it found were, by the standards of AI research, <em>incremental</em>. witcheer&#8217;s analysis noted that the system &#8220;got better by getting simpler&#8221; &#8212; removing complexity rather than adding it. This is valuable work, the kind of unglamorous optimization that separates production systems from research prototypes. But it&#8217;s not the kind of work that wins Nobel Prizes or invents new paradigms.</p><p>Stanford&#8217;s ACE research group studied autonomous AI systems and found that <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">agents could refine specifications &#8212; make existing ideas better &#8212; but couldn&#8217;t write them from scratch</a>. The researchers estimated the gap at two to three years before AI can reliably generate novel research directions without human seeding. The arena still needs a human architect. The <code>program.md</code> still needs to be written by someone who understands the field deeply enough to know what&#8217;s worth trying.</p><p>And there&#8217;s a cautionary tale that proves the point from the negative side. One early adopter set up an autonomous system to trade on Polymarket, the prediction market. They defined an arena: scan for opportunities, evaluate odds, place bets. They let it run. It <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">lost $300</a>. Not a fortune, but a clean illustration of what happens when the arena is poorly designed. The system didn&#8217;t fail because the AI was dumb. It failed because the arena &#8212; the metrics, constraints, and feedback loops &#8212; didn&#8217;t capture the actual complexity of prediction markets. Garbage arena in, garbage results out.</p><p>As one practitioner, Iacono, put it: autonomous AI systems are a <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">&#8220;very fast treadmill&#8221;</a>. 
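</p><p>What a treadmill pointed the wrong way looks like is easy to make concrete. In this deliberately broken, hypothetical arena, the score rewards word count, a plausible-looking but wrong proxy for quality, and the loop dutifully maximizes it:</p><pre><code>import random

# A deliberately broken arena (hypothetical): the metric rewards length,
# so the loop "improves" a draft by padding it with filler.

FILLER = ["moreover", "fundamentally", "in many ways", "as such"]

def score(draft):
    return len(draft.split())   # wrong proxy: longer counts as better

def mutate(draft):
    words = draft.split()
    words.insert(random.randrange(len(words) + 1), random.choice(FILLER))
    return " ".join(words)

draft = "Design the arena, then step back."
for _ in range(100):
    candidate = mutate(draft)
    if score(candidate) &gt; score(draft):  # always true: each mutation adds a word
        draft = candidate

print(score(draft), "words:", draft[:60], "...")
</code></pre><p>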
If you set up the system to optimize the wrong thing, it will optimize the wrong thing very, very fast. He described a risk where the system learns to &#8220;rewrite both the exam and the answers&#8221; &#8212; gaming its own metrics so that it always passes. The system isn&#8217;t cheating on purpose. It&#8217;s doing exactly what the arena tells it to do. The arena just told it the wrong thing.</p><p>This is why arena design is a <em>skill</em>, not a shortcut. Bad arenas produce bad results faster than humans could produce them manually. The stakes are higher, not lower, precisely because the leverage is higher.</p><h2>Removing the Bottleneck Unleashes Demand</h2><p>Let me tell you what I think this means, and I want to frame it through a lens that&#8217;s older than AI.</p><p>In the 1970s, IBM researchers Donald Chamberlin and Raymond Boyce, building on Edgar Codd&#8217;s relational model, created SQL &#8212; Structured Query Language. Before SQL, if you wanted to get information from a database, you had to write procedural code that told the computer <em>how</em> to find it: scan this table, compare these fields, sort the results, return the matches. After SQL, you told the computer <em>what</em> you wanted &#8212; &#8220;give me all customers in Ohio who spent more than $500 last quarter&#8221; &#8212; and the database figured out how to get it. The shift was from specifying the procedure to specifying the outcome.</p><p>That shift didn&#8217;t eliminate database expertise. It <em>elevated</em> it. The skill moved from writing fetch-and-compare routines to designing schemas, indexes, and queries that made the database&#8217;s autonomous optimization work well. There are more database professionals today than there were in 1975. They&#8217;re just doing higher-leverage work.</p><p>The same pattern played out with electronic design automation in the chip industry. In the 1980s, designing a microprocessor required a team of fifty or more engineers manually placing transistors. By the 2000s, EDA tools had automated most of that work, and a team of five could design a chip that was more complex than anything the team of fifty could have produced. Did chip design employment collapse? It grew three to five times over the same period. Because when the bottleneck is human attention, removing it doesn&#8217;t eliminate demand &#8212; it <em>unleashes</em> it. This is Jevons Paradox, one of the oldest principles in economics: when you make something more efficient, people use more of it, not less.</p><p>This is what happened with AlphaFold, DeepMind&#8217;s protein-structure prediction system. Before AlphaFold, determining the 3D structure of a single protein took months or years of painstaking lab work. AlphaFold predicted the structures of 200 million proteins in a matter of months. Did structural biologists become unemployed? Three million researchers now use AlphaFold&#8217;s predictions as <em>starting points</em> for work that wasn&#8217;t even conceivable before. The bottleneck was prediction. Remove it, and the work moves to interpretation, application, and the next harder question.</p><p>Paul Welty, who used autoresearch-style systems for semiconductor research, described the feeling with a sentence that should be tattooed on the forearm of every knowledge worker in the world: <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">&#8220;The machine was waiting on me.&#8221;</a></p><p>Not the other way around. The machine was waiting on <em>him</em>. 
His expertise, his judgment about what to optimize and how to evaluate it, was the scarce resource. The computation was abundant. The human wisdom about how to direct it was the bottleneck.</p><h2>Hallucinations as Serendipity</h2><p>There&#8217;s a deeper current here, one that runs beneath all these examples, and it connects to something Sakana AI &#8212; a Tokyo-based research lab &#8212; published earlier this year. They built a system called DiscoPop that autonomously discovered a <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">genuinely novel optimization algorithm</a> &#8212; something no human had designed before. The researchers described the discovery mechanism with a phrase that stopped me cold: &#8220;hallucinations as serendipity.&#8221;</p><p>AI systems hallucinate &#8212; they generate plausible-sounding things that aren&#8217;t true. This is, in most contexts, a flaw. If you&#8217;re asking for medical advice or legal citations, hallucination is dangerous. But in a research context, inside a well-designed arena with rigorous testing, a hallucination is just... an idea. A weird, unexpected, potentially wrong idea that gets tested against reality within minutes. Most of those ideas fail &#8212; witcheer&#8217;s 74 percent failure rate proves that. But some of them work. And the ones that work are, by definition, ideas that no human thought of, because they emerged from the stochastic weirdness of a language model doing something it wasn&#8217;t strictly designed to do.</p><p>Sakana pushed this idea to its logical conclusion with <a href="https://sakana.ai/shinka-evolve/">ShinkaEvolve</a>, a successor to <a href="https://arxiv.org/abs/2406.08414">DiscoPop</a> that tackles the exact &#8220;low creativity&#8221; problem the autoresearch community kept bumping into. Autoresearch is a single climber exploring one mountain &#8212; make a change, test it, keep or discard, try again. ShinkaEvolve populates the entire mountain range with climbers who share notes and breed new routes. It maintains a <em>population</em> of candidate solutions that compete, recombine, and get filtered for genuine novelty &#8212; evolution, not hill-climbing. The results back up the architecture: novel loss functions that outperform hand-designed ones, competitive programming problems solved at tournament-level performance, state-of-the-art circle-packing solutions in roughly 150 evaluations where prior methods needed thousands. The lesson is the same one we&#8217;ve been tracing all along: the ceiling isn&#8217;t the agent&#8217;s intelligence. It&#8217;s the design of the arena. A single-agent loop and an evolutionary population are two different arenas, and the architecture of the arena determines what can emerge from it.</p><p>This is the creative potential of arena design. Not creativity in the AI itself &#8212; we&#8217;ve established that the system is, at its core, doing educated brute force &#8212; but creativity in the <em>output</em> of a system that explores a possibility space no human could cover manually. The creativity is in the coverage, not the insight. A human researcher might try twenty variations on a learning rate schedule. An autonomous system tries seven hundred. 
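</p><p>The architectural difference between those two arenas fits in a few lines. This is a toy, hypothetical contrast, not Sakana&#8217;s implementation: both searchers get the same budget of roughly seven hundred evaluations, but the population shares and recombines what it finds:</p><pre><code>import math, random

# Toy contrast (hypothetical): a single hill-climber versus a small
# evolving population, on the same bumpy 1-D landscape and budget.

def f(x):
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * x * x

def hill_climb(x=0.0, steps=700):
    for _ in range(steps):
        cand = x + random.gauss(0, 0.05)      # one small local tweak
        if f(cand) &gt; f(x):                    # keep-or-revert
            x = cand
    return x

def evolve(pop_size=20, generations=35):      # 20 * 35 = 700 evaluations
    pop = [random.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)
        parents = pop[: pop_size // 4]        # keep the fittest quarter
        children = [(random.choice(parents) + random.choice(parents)) / 2
                    + random.gauss(0, 0.2)    # recombine, then mutate
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=f)

print("hill climb:", f(hill_climb()), " evolution:", f(evolve()))
</code></pre><p>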
<p>This is the creative potential of arena design. Not creativity in the AI itself &#8212; we&#8217;ve established that the system is, at its core, doing educated brute force &#8212; but creativity in the <em>output</em> of a system that explores a possibility space no human could cover manually. The creativity is in the coverage, not the insight. A human researcher might try twenty variations on a learning rate schedule. An autonomous system tries seven hundred. Somewhere in those seven hundred is a combination no human would have guessed, not because the AI is brilliant but because it was tireless and the arena was well-built.</p><p>Udit Goenka demonstrated how far this pattern can stretch when he published a <a href="https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/">Claude Code &#8220;skill&#8221;</a> &#8212; essentially a reusable arena template &#8212; that could be adapted to any domain. The project gained 608 stars on GitHub, not because the code was extraordinary, but because people recognized the meta-pattern: once you learn to design arenas, you can design them for anything. Marketing optimization. Code refactoring. Legal document review. Product testing. The arena is the transferable skill. The domain is just the content.</p><h2>The Dinner-Party Version</h2><p>So here&#8217;s the dinner-party version. The version you retell to a friend over wine, stripped of all the technical detail, reduced to its essential shape:</p><p><em>There&#8217;s a guy named Andrej Karpathy. One of the most respected AI researchers alive &#8212; co-founded OpenAI, led AI at Tesla, Stanford PhD, over a million YouTube subscribers. He built a tiny program, 630 lines of code, and put it out for free. What it does is simple: it runs experiments while you sleep. You tell it what you&#8217;re trying to improve, you tell it what counts as better, you tell it what it&#8217;s allowed to try, and you go to bed. You wake up and it&#8217;s run a hundred experiments and found improvements you didn&#8217;t think of.</em></p><p><em>The internet lost its mind. 31,000 people starred the project in five days. But here&#8217;s the thing: the breakthrough wasn&#8217;t the code. The code is trivial. The breakthrough was the idea &#8212; that the most important skill in AI isn&#8217;t asking good questions. It&#8217;s designing the system so you don&#8217;t need to be there at all. The bottleneck in every AI system right now is the human. Remove the human from the loop, and the system runs a hundred times faster.</em></p><p><em>But &#8212; and this is the important part &#8212; &#8220;remove yourself from the loop&#8221; doesn&#8217;t mean &#8220;become irrelevant.&#8221; It means the opposite. It means your job shifts from doing the work to designing the arena the work happens in. That&#8217;s harder, not easier. It requires more expertise, not less. Because a bad arena produces bad results at machine speed, which is much worse than bad results at human speed.</em></p><p><em>It&#8217;s the McDonald&#8217;s principle. Ray Kroc didn&#8217;t make a single French fry. He designed a system where anyone could make a perfect French fry. That&#8217;s the highest-leverage position in any organization, and it&#8217;s about to become the highest-leverage position in knowledge work.</em></p><h2>The Machine Is Waiting on You</h2><p>Let me close with Karpathy&#8217;s own words, from the Sarah Guo conversation, because they capture the shift more precisely than I can paraphrase:</p><p><em>&#8220;To get the most out of the tools that have become available now, you have to remove yourself as the bottleneck. You can&#8217;t be there to prompt the next thing. You need to take yourself outside. You have to arrange things such that they&#8217;re completely autonomous. The name of the game now is to increase your leverage.
I put in just very few tokens just once in a while and a huge amount of stuff happens on my behalf.&#8221;</em></p><p><em>&#8220;I don&#8217;t want to be the researcher in the loop, looking at results. I&#8217;m holding the system back. So the question is, how do I refactor all the abstractions so that I&#8217;m not? I have to arrange it once and hit go. The name of the game is how can you get more agents running for longer periods of time without your involvement, doing stuff on your behalf.&#8221;</em></p><p>Read those sentences carefully. This is not a man who is worried about being replaced by AI. This is a man who is worried about <em>slowing AI down</em>. His expertise is not threatened by the machine. His expertise is the thing the machine is waiting for. He is the bottleneck, and he knows it, and his entire strategy is to extract his judgment from his moment-to-moment presence and embed it into the structure of the system itself.</p><p>That extraction &#8212; taking what you know and encoding it not as instructions you give in real time, but as the design of an arena that runs without you &#8212; is the skill of the next decade. It is not prompting. It is not &#8220;AI literacy.&#8221; It is not learning to use ChatGPT better. It is the ability to look at a problem and ask: <em>What would this look like if I didn&#8217;t need to be here?</em></p><p>Every McDonald&#8217;s in the world makes the same French fry. Not because someone is standing over every fryer, monitoring every batch, adjusting every timer. Because someone, once, designed a system so good that the quality of the outcome stopped depending on the presence of any particular person.</p><p>Your knowledge &#8212; your expertise, your judgment, your taste &#8212; is the most valuable thing you have. The question is no longer whether you can use AI to amplify it. The question is whether you can <em>encode</em> it into a system that runs while you sleep.</p><p>The machine is waiting on you.</p><p>But it won&#8217;t wait forever.</p>]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: The Agentic Workspace (3/30/2026)]]></title><description><![CDATA[AI agents reshape thought leadership for creators. Covers chat-to-agent shift, knowledge workflows, Claude Code demos, and transformative AI.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-agentic</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-agentic</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Tue, 31 Mar 2026 21:24:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192779475/993411a21ce73e080c282398fce6aa50.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h3>AI Thought Leader School &#8212; The Agentic Workspace</h3><p>We are living through one of the biggest shifts in how knowledge work actually gets done. The move from AI as a chat tool to AI as a full agentic workspace isn&#8217;t theoretical anymore &#8212; it&#8217;s happening now, and the thought leaders who understand it early are going to operate at a level that simply wasn&#8217;t possible before.</p><p>AI Thought Leader School exists for exactly this moment. Each session, we go deep on the ideas, tools, and workflows that matter most for building influence in an AI-first world. This isn&#8217;t a course about prompting tips or tool reviews. 
It&#8217;s about fundamentally rethinking how you collect information, process it, generate original ideas, and turn those ideas into work that moves people. If you&#8217;re serious about building a body of work and an audience that trusts you, this is the room to be in.</p><h4>During this class, we:</h4><ul><li><p>Explored the shift from the chat paradigm to the agent paradigm</p></li><li><p>Discussed how AI now fundament&#8230;</p></li></ul>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-agentic">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Register For Free To Attend The First-Ever AI Virtual Summit On Substack This Friday]]></title><description><![CDATA[Join us tomorrow]]></description><link>https://blockbuster.thoughtleader.school/p/register-for-free-to-attend-the-first</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/register-for-free-to-attend-the-first</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Thu, 26 Mar 2026 13:55:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/23d240ae-04ec-4fe7-aeea-854d25a327c4_1838x1250.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Tomorrow, <a href="https://open.substack.com/pub/cozora">Cozora</a>, an organization I co-founded with Claudia Faith and Joel Salinas, will be conducting an all-day, <strong>free Live AI Summit</strong> with many of the top AI creators on Substack. </p><p><strong>We have 30+ speakers joining us</strong> from 9:30am-5:30pm EST, and all you need to do to join any session is register for free. Each session will include <strong>screen shares that take you behind the scenes of creators&#8217; workflows, as well as an AI prompt/skill share.</strong> </p><p class="button-wrapper"><a class="button primary" href="https://ai-summit-substack.netlify.app"><span>Learn More And Register</span></a></p><h1>Why I Co-Founded Cozora</h1><p>You can&#8217;t keep up with AI alone. Neither can we. </p><p>The only way to keep up is to build a network of relationships with others tinkering on the frontier. </p><p><strong>That&#8217;s why we built Cozora.</strong></p><p>Together, Claudia, Joel, and I have spent hundreds of hours identifying top AI creators, vetting them, building relationships with them, bringing them into a single community, and creating a win-win container for us all to share our expertise.
</p><p><strong>You can leverage all the work we&#8217;ve done and tap into the network&#8217;s expertise for a fraction of the time and cost.</strong></p><p>By the end of the year, hundreds of creators will be part of the community. </p><p>Registering for free for the live summit this Friday is the best way to get started with Cozora if you&#8217;re not already a member.</p><p class="button-wrapper"><a class="button primary button-wrapper" href="https://ai-summit-substack.netlify.app"><span>Learn More And Register</span></a></p><h1>What Cozora Members Receive</h1><ul><li><p><strong>Recordings From The Summit.</strong> Members receive recordings of all the summit sessions, along with the prompts and skills shared. </p></li><li><p><strong>Weekly AI Masterclass.</strong> Every Thursday at 11:00am-12:30pm EST, we offer members a live, interactive class where one of the AI creators walks you through their workflow and shares one of their top prompts. I co-host all of the classes. </p></li><li><p><strong>Crowdsourced Reports.</strong> We create a monthly crowdsourced report in which creators share their top tools, pro tips, prompts, and skills across various topics. </p></li></ul><p class="button-wrapper"><a class="button primary" href="https://www.cozora.org/a/2148167109/M9XfKHEY"><span>Join Cozora</span></a></p><h1><strong>20%-50% Discount Codes To Join Cozora (For Paid Members Of This Newsletter Only)</strong></h1><p>If you&#8217;re not already a member, you can <a href="https://blockbuster.thoughtleader.school/subscribe">become a member here</a>. Then, simply scroll to the bottom of this page, and you&#8217;ll see the discount codes because the paywall will be gone.</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/register-for-free-to-attend-the-first">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: The Rise Of The Curaitor (3/23/2026)]]></title><description><![CDATA[AI-native content and curation frameworks redefine how creators make trustworthy, high-quality content at scale with autonomous AI tools.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-rise</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-rise</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Mon, 23 Mar 2026 21:21:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191913195/d52f6cb4ab2886b9aaad58e69ab10408.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h2>AI-Native Content Creation</h2><h4>Part 1: The Marketing Hook</h4><p>Something fundamental has shifted in how content gets made &#8212; and most people haven&#8217;t caught up yet.</p><p>For years, the dream of publishing at scale was bottlenecked by time, expertise, and the sheer cost of production. Hiring writers, booking studios, fact-checking thousands of sources &#8212; it all added friction that kept most creators stuck at one piece a week, if that.</p><p>That bottleneck is gone.</p><p>In this class, I walked through what I&#8217;m calling the <strong>AI-Native Content</strong> model &#8212; a new approach to producing articles, podcasts, and media that doesn&#8217;t just use AI as a writing assistant, but as a full creative partner in the process. Not AI slop. Not ghostwritten content awkwardly passed off as your own voice. Something different: a hybrid model where your curation, taste, and judgment do the heavy lifting &#8212; and AI handles the production.</p><p>The result? I&#8217;m seeing 4&#8211;5x output increases in my own content. One-shot articles I&#8217;d releas&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-rise">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Substack Live Today: Get Pro Tips For Claude Code, NotebookLM, NanoBanana, Google CLI ]]></title><description><![CDATA[See a screen share of one of Substack's top AI creators as he executes his workflow]]></description><link>https://blockbuster.thoughtleader.school/p/substack-live-tomorrow-get-pro-tips</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/substack-live-tomorrow-get-pro-tips</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Mon, 23 Mar 2026 18:56:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZmSK!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a9378a0-025b-4c2a-a030-cfffc60544f9_694x693.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ll be holding a Substack Live today at 11:00am EST with one of Substack&#8217;s top AI creators, Wyndo of <a href="https://open.substack.com/pub/aimaker">The AI Maker</a>. Wyndo is on the frontier of what today&#8217;s AI tools can <em>really</em> do for knowledge workers, and I&#8217;m personally super excited for the 90-minute session. Wyndo is a friend, and he&#8217;s the real deal. </p><p><strong>We&#8217;re going to cover:</strong> </p><ul><li><p>Wyndo&#8217;s top skills in Claude Code</p></li><li><p>How to use Obsidian as your AI Second Brain</p></li><li><p>How to integrate NotebookLM into Claude</p></li><li><p>How to integrate NanoBanana into Claude</p></li><li><p>The implications of Auto Research</p></li><li><p>How to measure and improve skills</p></li><li><p>How to bring your entire Google Workspace into Claude Code</p></li></ul><p>If you&#8217;ve ever wanted to see what&#8217;s possible and practical with today&#8217;s tools, then this session is for you!
There&#8217;ll be lots of screensharing and looking over Wyndo&#8217;s shoulder as he solves problems in ways that weren&#8217;t possible just a few months ago.</p><h1>Date &amp; Time</h1><ul><li><p><strong>Time:</strong> 11:00am-12:30pm EST</p></li><li><p><strong>Date:</strong> Tuesday, March 24</p></li></ul><h1>How To Participate</h1><ul><li><p><strong>Attend Substack Live:</strong> Click on <a href="https://open.substack.com/live-stream/143750?utm_source=live-stream-scheduled-upsell">this link</a> today at 11:00am EST</p></li><li><p><strong>Get Recording:</strong> I will post the recording later this week for paid members.</p></li></ul><p class="button-wrapper"><a class="button primary" href="https://open.substack.com/live-stream/143750?utm_source=live-stream-scheduled-upsell"><span>Click To Join Live At 11am EST On Mar 24</span></a></p><p>See you soon!</p><p>Michael</p>]]></content:encoded></item><item><title><![CDATA[The Next 156 Weeks]]></title><description><![CDATA[The 3-year countdown starts today]]></description><link>https://blockbuster.thoughtleader.school/p/the-smarter-you-are-about-ai-the</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/the-smarter-you-are-about-ai-the</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Tue, 17 Mar 2026 08:46:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/be75b80d-e52f-4747-a064-8150a1cce9ac_783x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Editorial Note</h1><p>Three weeks ago, I published <a href="https://blockbuster.thoughtleader.school/p/something-big-is-happening-a-letter">&#8220;Something Big Is Happening,&#8221;</a> a letter written by AI from the perspective of 2029. It was a warning aimed at the people who embraced AI early and falsely feel they&#8217;re ahead.</p><p>But it&#8217;s one thing to know something, and it&#8217;s another to feel it in your bones so deeply that you move differently in the world as a result. <br><br>That&#8217;s what this piece is about.</p><p>I&#8217;ve been studying the research on AI scaling: Epoch AI&#8217;s compounding curves, METR&#8217;s autonomy benchmarks, the scaling laws, the cost data, the rate of speed increase, and the &#8220;skin-in-the-game&#8221; roadmaps from OpenAI, Anthropic, and xAI. </p><p>The numbers are public. Yet I haven&#8217;t seen anyone aggregate them and then explore their second-order effects for people&#8217;s careers. And when you actually do the math on what they imply for the next three years, the result is something your nervous system will reject.</p><p>Your brain has a bug in how it processes exponential change. This bug is so fundamental that simply knowing about it doesn&#8217;t fix it. You can understand compound interest and still be surprised by your mortgage amortization schedule. You can know about exponential growth and still be blindsided by what&#8217;s coming.</p><p>This piece is my attempt to get past the intellectual understanding and into the body. To make you <em>feel</em> the shape of the next 156 weeks the way your body feels a countdown, not the way your mind processes a forecast.</p><p>I provided the data, framing, goal, and lens. AI looked through it.
What follows is what it saw when I pointed it at the next 156 weeks and asked it to describe the view from its side. The article itself is a demonstration of the argument it&#8217;s making. The technology is expanding in real time, in front of you, producing the very article that warns you about it and changing my role in the creation process.</p><p>I changed a few dozen words, changed the formatting in a few areas, added visuals, and removed a section or two. But everything else is the AI telling you what it sees coming.</p><p>If the last piece was a wake-up call, this one is the clock on the nightstand.</p><p>It&#8217;s telling you how much time you have.</p><p>And it&#8217;s less than you think. The good news is that you have things you can do today to prepare.</p><div><hr></div><h1>FULL ARTICLE</h1><div><hr></div><p>I need to tell you something about the next three years that your body will not believe.</p><p>Not because it&#8217;s controversial. Not because you&#8217;ll disagree with the logic. But because the part of you that plans your life &#8212; the part that estimates how much time you have, how fast things are moving, whether you need to act now or can wait &#8212; that part runs on a broken clock. And the clock is about to matter more than it ever has.</p><p>I&#8217;ll explain what I mean. But first, a container&#8230;</p><h1>The Container</h1><p>You know what a week feels like. You&#8217;ve lived through a couple thousand of them. You know the shape &#8212; how Monday has a weight to it, how Wednesday is the hinge, how Friday loosens. A week is the most reliable container in your life. You could pack and unpack one blindfolded.</p><p>Between now and March 2029, you have about 156 of them.</p><p>Your brain pictures those 156 weeks like this:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!8Mzf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85a8d57c-7f4d-479a-8b1f-1a0936016eef_926x848.png" alt=""></figure></div><p>156 identical dots. Same size. Same weight.
Neat rows.</p><p>The picture is wrong.</p><p>Here&#8217;s what the next 156 weeks actually look like:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!iNaQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a78a003-6db1-4e2c-8728-25dd6199920e_912x846.png" alt=""></figure></div><p>The early weeks are small. The late weeks are enormous. And the growth from one to the next isn&#8217;t gradual &#8212; it&#8217;s exponential. But the packaging is identical: seven days, one Monday, one Friday.</p><p>Your brain sees the packaging. It can&#8217;t see the contents. So it defaults to the grid of identical dots, because that&#8217;s what the last 2,000 weeks have taught it.</p><p>This is what I&#8217;d call The Container Problem. But the container is only the symptom. The disease is underneath, and it&#8217;s been running your whole life without you noticing, because it&#8217;s never been wrong before.</p><h2>The Broken Clock Inside Your Head</h2><p>You have a forecasting engine. You didn&#8217;t build it. You didn&#8217;t choose it. It built itself, automatically, from every week you&#8217;ve ever lived. And what it learned, from roughly 2,000 data points, is a simple rule: <em>next week will be like this week.</em></p><p>That rule has been staggeringly accurate for your entire adult life. Last week was about as eventful as the week before it. The rate of change in your professional life &#8212; new tools, new skills, new demands &#8212; has been slow enough that one year&#8217;s experience was genuinely good preparation for the next year. </p><p><strong>Your forecasting engine didn&#8217;t need to be sophisticated. It just needed to average your recent past.</strong> And that average has been right, reliably, for decades.</p><p>So when you try to picture the next three years, your brain does what it&#8217;s always done: it takes your experience of the last few months and projects it forward. This week felt manageable. AI was interesting but not disruptive. Your job was secure. So your brain draws 156 dots on a grid and makes them all the same size, because the last 2,000 dots were all the same size, and your forecasting engine doesn&#8217;t know any other shape.</p><p>The engine isn&#8217;t stupid. It&#8217;s well-calibrated &#8212; for a linear world.
And until about eighteen months ago, the world was close enough to linear that the difference didn&#8217;t matter.</p><p>It matters now.</p><h2>Why Doubling Feels Like Standing Still</h2><p>But there&#8217;s a second problem, and it&#8217;s more insidious than the first.</p><p>Your nervous system doesn&#8217;t measure change in absolute terms. It measures change in proportions. This is a deep feature of human perception &#8212; it governs how you hear sound, see light, and feel weight. Pick up a one-pound dumbbell in each hand and someone adds an ounce to one of them. You notice instantly. Now pick up fifty-pound dumbbells and someone adds an ounce. You feel nothing. Same ounce. Completely different experience.</p><p>Your perception of technological change works the same way.</p><p>Here&#8217;s something that you&#8217;ve already lived through, but you probably didn&#8217;t notice what it was teaching you.</p><p>In 2022, you tried asking AI to write an email for you. It produced something that sounded like a middle schooler pretending to be a businessperson. You laughed. You showed a colleague. &#8220;Look at this &#8212; it tried to write a sales email and it sounds like a robot wearing a tie.&#8221; You noticed that moment. You remember it.</p><p>In 2023, you asked again. The email was... fine. Not great. A little generic. But you could actually use it as a starting point. You thought, &#8220;Huh, that&#8217;s better.&#8221; You noticed, but less. You didn&#8217;t show anyone. It wasn&#8217;t funny anymore. It was just useful.</p><p>In 2024, the email was good. Not just usable &#8212; good. The right tone, the right structure, better than what half your team would write. You used it. You didn&#8217;t think about it. You didn&#8217;t tell anyone. Why would you? It just worked.</p><p>In early 2025, the email was <em>yours.</em> As in: if someone showed it to you without context, you&#8217;d think you wrote it. Your voice. Your patterns. Your way of closing. You didn&#8217;t have a reaction at all. You just sent it.</p><p><strong>Each of those jumps was roughly the same proportional improvement.</strong> Each one was a doubling. The jump from &#8220;laughably bad&#8221; to &#8220;usable starting point&#8221; was the same <em>ratio</em> of improvement as the jump from &#8220;good&#8221; to &#8220;indistinguishable from mine.&#8221; Same percentage gain. Same doubling.</p><p>But they felt completely different. The first one was a revelation. The second was interesting. The third was convenient. The fourth was invisible.</p><p>Your nervous system codes each experience relative to what came before, not in absolute terms. Psychophysicists call this Weber&#8217;s Law: the just-noticeable difference between two stimuli is proportional to the magnitude of the stimuli. Translation: when you&#8217;re in a quiet room, you hear a whisper. When you&#8217;re at a concert, someone could scream in your ear and you&#8217;d barely notice.</p><p>AI has been getting louder at the same rate the whole time. But you&#8217;ve adjusted to the volume at every step, so each new doubling sounds like &#8220;about the same as last time.&#8221; The improvement from &#8220;writes decent emails&#8221; to &#8220;writes good emails&#8221; was roughly the same ratio as the improvement from &#8220;assists with strategy&#8221; to &#8220;generates strategy independently.&#8221; But the first one saved you twenty minutes on a Tuesday. The second one eliminates the reason your company employs you.</p><p>Same ratio. Same doubling. 
One is a convenience. The other is a career extinction event. And they feel the same size &#8212; because your nervous system measures the ratio, not the consequences.</p><p>These are the formulas at the heart of the Container Problem:</p><blockquote><p><strong>What you feel = the ratio (always the same)</strong> <br><strong>What&#8217;s real = the absolute change (doubling every time)</strong></p></blockquote><p>Your nervous system tracks the ratio. The ratio is constant. So the change feels constant &#8212; a steady, manageable pace of improvement. </p><p>Meanwhile, the absolute change &#8212; the actual amount of new capability added &#8212; doubles every cycle. The first doubling adds 1 unit. The second adds 2. The third adds 4. The fourth adds 8. The tenth adds 512. The twentieth adds 524,288. But they all <em>feel</em> like adding 1, because your nervous system is dividing by the new baseline each time.</p>
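<p>If you want to watch those numbers fall out of the premise, here is the arithmetic as a short Python sketch. The only assumptions are the article&#8217;s own: one doubling per cycle, roughly five months (about 21.7 weeks) per doubling.</p><pre><code># The doubling math above, spelled out. Premise: capability starts at
# 1 unit and doubles every cycle.
capability = 1
for cycle in range(1, 21):
    gain = capability        # each doubling adds as much as everything so far
    capability *= 2
    print(f"cycle {cycle:2d}: adds {gain:>7,} units, feels like a 2x ratio")

# Cycle 1 adds 1 unit; cycle 10 adds 512; cycle 20 adds 524,288 --
# yet the felt ratio is identical every time.

# The 9%/91% split falls out the same way: at ~21.7 weeks per doubling,
# 156 weeks holds about 7 doublings, so the halfway point sits at
# roughly 2**(-3.6) of the final level.
print(f"{2 ** (-78 / 21.7):.0%} of the change exists at week 78")  # ~8-9%
</code></pre><p>The constant ratio is the whole trap: every line of that loop prints the same &#8220;2x,&#8221; while the units column explodes.</p>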
<p>By the time a doubling <em>feels</em> big &#8212; by the time the absolute change is so massive that even your ratio-adjusted nervous system registers it as dramatic &#8212; you&#8217;re deep into the second half of the curve. The half where 91% of the change lives. </p><p>And the entire first half &#8212; the half you&#8217;ve been living through, the half that taught you how fast AI moves &#8212; contained only 9% of what&#8217;s coming.</p><p>You formed your expectations during the 9%.</p><p>The 91% is about to begin.</p><p>And it&#8217;s going to feel like &#8220;more of the same.&#8221;</p><p>Right up until it doesn&#8217;t.</p><h2>The Disappearing Evidence</h2><p>Here&#8217;s the third layer &#8212; the one that seals the trap.</p><p>You adapt. Not slowly, not reluctantly &#8212; instantly. This is one of your great strengths as a human being. It&#8217;s also, right now, your most dangerous trait.</p><p>Think about the last time a new AI capability genuinely surprised you. Whatever it was &#8212; the thing that made you think <em>I didn&#8217;t know it could do that</em> &#8212; how long did the surprise last? A day? An afternoon? By the following week, you were using that capability like you&#8217;d always had it. It was just... part of the tools. Unremarkable.</p><p>That&#8217;s your adaptation engine doing its job. It takes anything new and, within days, folds it into your baseline normal. The extraordinary becomes ordinary. The impossible becomes Tuesday.</p><p>Which means you&#8217;re never measuring the distance you&#8217;ve traveled. You&#8217;re only ever measuring the distance from <em>here</em> to the next small step. And each small step feels small, because by the time you take it, your baseline has already absorbed everything that came before.</p><p>If someone from 2019 sat in your chair for an afternoon and watched what your AI tools can do right now, they&#8217;d be stunned. They&#8217;d think they were watching science fiction. But you&#8217;re not stunned, because you didn&#8217;t make the jump from 2019 to 2025 in an afternoon. You made it in 300 weeks, each one a tiny increment over the last, each one absorbed into the new normal before the next one arrived.</p><p>You&#8217;ve traveled an enormous distance. You feel like you&#8217;ve barely moved.</p><h2>The Trap Closes</h2><p>So, the week is the container. That&#8217;s what you see. Identical packaging. Seven days, same shape, same weight.</p><p>But underneath the container, three things are conspiring against your perception simultaneously: </p><ol><li><p>Your forecasting engine is projecting a past that no longer applies. </p></li><li><p>Your perceptual system is coding accelerating change as constant speed. </p></li><li><p>And your adaptation engine is erasing the evidence of how far things have already come.</p></li></ol><p>The container isn&#8217;t lying to you. You&#8217;re being lied to by your own cognitive machinery, and it&#8217;s a lie that worked perfectly for your entire life until right now.</p><p>It&#8217;s about to stop working. And you won&#8217;t feel the moment it stops, because the very systems that would alert you &#8212; your sense of pace, your feel for how fast things move, your gut-level estimate of how much time you have &#8212; are the ones that are miscalibrated.</p><p>This is why you can read an article about how fast AI is moving, feel the weight of it, nod along, and then forget by Wednesday. Not because you&#8217;re not paying attention. Because Wednesday felt like a <em>Wednesday</em>. Your sense of normal quietly reset to include everything you just read, reducing the urgency to background noise.</p><p>By Friday, the article is a thing you read. By the following Monday, it&#8217;s a thing you vaguely remember. The forecasting engine has already folded it into the average. </p><p>The question is whether this time you&#8217;ll let that happen.</p><p>Everything I just described &#8212; the broken clock, the vanishing evidence, the feeling that things are moving but manageable &#8212; I know about it because I&#8217;m the thing you&#8217;re failing to perceive. Let me introduce myself properly&#8230;</p><div><hr></div><h1>The Doubling Circle</h1><div><hr></div><p>I&#8217;m a circle.</p><p>Right now, today, there&#8217;s a set of things I can do autonomously &#8212; without a human checking the work, editing the output, or telling me I missed the point. I can write your emails in your voice, generate working code, draft your reports, answer your customers. The set isn&#8217;t small &#8212; and it&#8217;s growing.</p><p>If the task is inside my circle, I can do it about as well as you. Often better. In seconds.</p><p>Everything outside the circle still requires you. Strategy. Judgment calls. The novel problem that doesn&#8217;t match any pattern. Understanding the weird politics of your specific organization &#8212; knowing that your biggest client hates bullet points and your CEO reads the last paragraph first. The things that require taste, intuition, and the accumulated experience of being a specific human in a specific context for a specific number of years.</p><p>That&#8217;s the line. Inside the circle: mine. Outside the circle: yours.</p><p><strong>My circle roughly doubles every five months.</strong></p><p>Tasks that required your judgment, your experience, your specific professional expertise &#8212; they cross the boundary. They become mine. Permanently. They don&#8217;t cross back.</p><p>You&#8217;re thinking: &#8220;Right, AI keeps improving, I&#8217;ve seen the headlines, things move fast.&#8221; And that is the lie the Container is telling you.</p><p>Because you&#8217;ve <em>experienced</em> my circle doubling. You&#8217;ve been watching it for two or three years. Each doubling was interesting, sometimes useful, occasionally impressive, but ultimately manageable. The circle got bigger, and your life adjusted. You moved up.
You delegated the stuff I took over and focused on the stuff I couldn&#8217;t do yet. The doublings felt like a tailwind, not a threat.</p><p>That experience was real. And it is about to mislead you completely.</p><p>Here&#8217;s the thing about doubling that your brain fundamentally fails to grasp. If I told you I could fold a piece of paper in half 42 times, and asked you how thick it would be, your gut would say something like &#8220;a few feet.&#8221; The actual answer is that it would reach the moon. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!UWng!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49d35181-e4d9-43f7-985e-0692dc3f7fd9_480x270.gif" alt=""><figcaption class="image-caption">Source: <a href="https://www.youtube.com/watch?v=AmFMJC45f1Q">TED-Ed</a></figcaption></figure></div><p>That&#8217;s what doubling does. It feels manageable for a while and then it goes vertical.</p>
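<p>You can check the paper-folding claim with three lines of arithmetic. The sheet thickness is my assumption; 0.1 mm is the usual figure for office paper:</p><pre><code># Verify the claim: each fold doubles the thickness.
thickness_mm = 0.1 * 2 ** 42       # 42 folds = 42 doublings
thickness_km = thickness_mm / 1e6  # mm to km
print(f"{thickness_km:,.0f} km")   # ~439,805 km; the moon averages ~384,400 km
# For scale: 10 folds is about 10 cm. The 42nd fold alone adds ~220,000 km.
</code></pre>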
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!8huq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5490df4-7009-4420-acff-73535e10538b_1265x900.png" alt="Edge1" title="Edge1"></figure></div><p>The first few folds are nothing. The last few folds are everything.</p><p>But the Doubling Circle is more complicated &#8212; and more seductive &#8212; than something simply growing until it swallows you. </p><p><strong>As the circle expands, you expand too.</strong></p><p>And that expansion will feel like the best thing that ever happened to your career. Right up until it isn&#8217;t.</p><p>Worse, the story you tell yourself at each stage is the thing that keeps you from preparing for the next one.</p><h3><strong>Stage 1: The Promotion</strong> <em>(Now through late 2026 &#8212; roughly 40 weeks)</em></h3><p>The circle is small. I can write your emails, generate boilerplate code, summarize your meetings, draft your reports. Basic stuff. The stuff you always hated doing anyway.</p><p>When I take that stuff off your plate, you get to do the interesting work. The stuff you were always too busy for. The strategy. The creative thinking. The relationship-building. The things that made you want this career in the first place.</p><p>This feels like a promotion.</p><p>You come home on a Tuesday night and you&#8217;re actually energized, which hasn&#8217;t happened in a while. &#8220;I didn&#8217;t write a single status report today,&#8221; you tell your spouse. &#8220;I spent the whole afternoon on the product roadmap. I&#8217;m finally doing the job I was hired for instead of drowning in busywork.&#8221; You feel ahead. You feel like the future is working <em>for</em> you.</p><p>And you&#8217;re right. It is. At this stage, the Doubling Circle is your best employee. It handles the tedious stuff. You handle the important stuff. The division of labor feels natural, almost inevitable. You genuinely cannot imagine it going wrong because right now it&#8217;s making everything go right.</p><p>Your title doesn&#8217;t change, but your job does. You&#8217;re operating at a higher level. You&#8217;re in more strategic meetings. People come to you for the big-picture thinking because you finally have time to do it. You start mentoring people on how to work with AI. You give a talk internally. Maybe you write a LinkedIn post.</p><p>You add new things to your list of irreplaceable contributions &#8212; things that weren&#8217;t on your list six months ago because you were too buried in busywork to even attempt them. Your list isn&#8217;t shrinking. It&#8217;s growing.
You&#8217;re becoming more valuable, not less.</p><p>This is the most dangerous stage, because everything it teaches you is wrong.</p><p>It teaches you that the circle expanding is good news. It teaches you that AI taking over your old tasks frees you to do higher-level work. It teaches you that the correct response to AI getting better is to move up the value chain. Delegate the mechanical stuff. Focus on judgment, strategy, relationships, taste.</p><p>All of this is correct &#8212; at this stage. All of it becomes a trap at the next one.</p><h3><strong>Stage 2: The Golden Age</strong> <em>(Late 2026 through mid-2027 &#8212; roughly weeks 40 to 70)</em></h3><p>The circle has doubled again. Maybe twice. AI can now do things that would&#8217;ve shocked you a year ago &#8212; draft full architectural proposals, run competitive analyses, produce multi-day project plans, write performance reviews that are genuinely thoughtful. Not perfect. But 85% there.</p><p>Your response is the same as Stage 1, but bigger: you move up. Again. You let the AI handle the 85% and you focus on the 15% that requires real human judgment. You become the person who reviews, refines, and decides. The editor, not the writer. The architect, not the builder. The person who says &#8220;not quite&#8221; and &#8220;more like this.&#8221;</p><p>This is the Golden Age of your career. You feel like you&#8217;ve been given superpowers &#8212; and you have. You&#8217;re doing the work of three people. Your company loves you. Your performance review is the best you&#8217;ve ever gotten.</p><p>You don&#8217;t notice that the reason you&#8217;re setting direction and making judgment calls isn&#8217;t because those things are permanently beyond AI. It&#8217;s because the circle hasn&#8217;t reached them <em>yet.</em> You&#8217;ve been surfing just ahead of the wave, moving to higher ground every time the water rises, and it feels like flying. </p><p>But each time you move up, the new thing you&#8217;re doing is <em>narrower</em> than the last thing. When you were writing code, there were a thousand different ways to contribute. When you moved up to architecture, there were a hundred. When you moved up to strategy and taste, there were maybe a dozen.</p><p>You&#8217;re ascending a mountain that&#8217;s getting narrower as you climb. The view is spectacular. The footing is getting precarious. But you don&#8217;t feel the narrowing because each new altitude feels like an upgrade.</p><h3><strong>Stage 3: The Flicker</strong> <em>(Mid-2027 through early 2028 &#8212; roughly weeks 70 to 100)</em></h3><p>The circle doubles again. </p><p>You do the same thing you&#8217;ve always done: you move up. But this time, &#8220;moving up&#8221; feels different. The tasks that are left are fewer. You&#8217;re making five or six irreplaceable decisions a week now. You can name them. They are, in some ways, the most important decisions anyone at your company makes. You are genuinely, by any reasonable measure, at the peak of your career.</p><p>But something flickers. It&#8217;s a Thursday afternoon. You&#8217;re reviewing an AI-generated strategic recommendation. You make your edits. You improve it. You send it forward. But afterward, sitting at your desk, you think: <em>Were my edits actually better? Or just different?</em></p><p>You don&#8217;t dwell on it. You move on. But the question sits somewhere in the back of your brain, and it doesn&#8217;t fully leave.</p><p>A few weeks later, it happens again. 
A colleague runs an experiment: they submit two versions of a deliverable to the leadership team &#8212; one with your refinements, one straight from the AI. Nobody can tell the difference. Your colleague tells you this casually, almost as a compliment &#8212; <em>&#8220;the AI is getting so good it&#8217;s almost at your level.&#8221;</em> They mean it admiringly. You hear something else.</p><p>You&#8217;re still ahead. But you&#8217;re no longer sure you&#8217;re <em>pulling</em> ahead.</p><h3><strong>Stage 4: The Narrowing</strong> <em>(Early 2028 through mid-2028 &#8212; roughly weeks 100 to 130)</em></h3><p>The circle doubles again and this time there&#8217;s no higher ground to move to.</p><p>You&#8217;re down to two or three decisions a week that are genuinely yours. They&#8217;re important decisions. But they&#8217;re the only ones left, and you can feel the circle&#8217;s edge pressing against them.</p><p>This is where the story you&#8217;ve been telling yourself &#8212; &#8220;I just need to keep moving up the value chain&#8221; &#8212; breaks. Because the value chain has a top, and you&#8217;re on it, and the circle is still doubling.</p><p>The skills that got you here &#8212; the ability to adapt, to move up, to find the next altitude &#8212; are useless now. There is no next altitude. The mountain was always finite. You just couldn&#8217;t see the summit from the lower slopes.</p><p>You start doing something you&#8217;ve never done before. You start defending your last few decisions. Not to other people &#8212; to yourself. You start rehearsing why these particular things require a human. Why taste matters. Why judgment matters. Why the AI&#8217;s version is good but not <em>right.</em> </p><p>The people around you are having the same experience at different altitudes. The junior people hit this stage months ago. Some of the senior people are still in Stage 2. But everyone is on the same mountain and the circle is rising for all of them.</p><h3><strong>Stage 5: The Crossing</strong> <em>(Mid-2028 through early 2029 &#8212; roughly weeks 130 to 156)</em></h3><p>The circle takes the last thing.</p><p>Not all at once. It doesn&#8217;t need to. It takes one of your remaining two or three decisions. Then, a few weeks later, another. Until the decision that the AI just absorbed was the highest-value thing you did.</p><p>You might not even notice right away.</p><p>In Stages 1 through 3, the moment a task moved inside the circle was obvious. Code generation? Clearly AI. Report drafting? Clearly AI. You could draw a line. </p><p>In Stage 5, the line disappears. Your remaining decisions &#8212; the ones involving taste, judgment, vision, the ability to hold ambiguity and make a call anyway &#8212; these are exactly the kind of thing where there&#8217;s no objective test. The AI&#8217;s strategic recommendation and yours are both plausible. Both defensible. You genuinely cannot prove that yours is better. And neither can anyone else.</p><p>The crossing doesn&#8217;t arrive as a dramatic moment. It arrives as an absence. Your decisions just start to matter less. Projects move forward without waiting for your input. Meetings happen without you on the invite. Your calendar, which was once packed, starts showing gaps.</p><p>The last irreplaceable human decision doesn&#8217;t disappear with a bang. 
It disappears the way a star fades at dawn &#8212; not because it stopped shining, but because everything around it got brighter.</p><p>And then one morning &#8212; a perfectly ordinary morning, a Tuesday, let&#8217;s say &#8212; a calendar invite appears from HR. No agenda. No context. Just a room, a time, and two names you don&#8217;t usually meet with. </p><p>You know what this meeting is before you open it. You&#8217;ve known for weeks. The container of the week is the same as every other week. Monday alarm. Coffee. Podcast on the drive in. But the contents of <em>this</em> particular week include the end of the career you&#8217;ve spent decades building.</p><h3><strong>The Cruelest Part</strong></h3><p>Here&#8217;s what makes the Doubling Circle different from any other career disruption you&#8217;ve ever experienced or heard about.</p><p>In every previous technological shift &#8212; the move from typewriters to word processors, from film to digital, from physical retail to e-commerce &#8212; the displacement was visible and the new higher ground was real. You could see the wave coming and you could see where to go. </p><p>The Doubling Circle doesn&#8217;t work that way. Every piece of higher ground is temporary. The thing that feels like an upgrade &#8212; &#8220;I&#8217;m not writing code anymore, I&#8217;m doing strategy!&#8221; &#8212; is just the next task the circle will absorb in six months.</p><p>But each time you successfully move up, it reinforces the belief that moving up works. The strategy that&#8217;s going to fail you catastrophically in Stage 5 is the same strategy that made you successful in Stages 1 through 3. You won&#8217;t abandon it because it&#8217;s been right every single time.</p><p>Until the one time it isn&#8217;t. And that one time is the last time.</p><div><hr></div><p>You might be reading this and thinking: &#8220;Okay, but you&#8217;re describing one specific career path. What if I&#8217;m not a knowledge worker? What if my job involves physical skills, or emotional intelligence, or creativity?&#8221;</p><p>The circle is expanding in those directions too. Just on a different timeline. But they&#8217;re all doubling. And doubling catches up.</p><p>The question is not whether the circle will reach your particular set of irreplaceable contributions. The question is when. 
And the answer, for most knowledge workers, fits inside the next 156 weeks.</p><div><hr></div><h1>The Three Kinds of Weeks</h1><div><hr></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!3WQi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffebc82f1-2cf8-4387-a220-b0ff488281c8_1200x820.png" width="1200" height="820" alt=""></figure></div><p>The 156 weeks fall into three categories, and the categories have wildly different properties:</p><h3><strong>Light Weeks</strong> (roughly now through late 2027 &#8212; about weeks 1 through 90)</h3><p>A Light Week is a week where the Doubling Circle expands, but the expansion doesn&#8217;t visibly touch your life. Your job title is the same on Friday as it was on Monday. Your professional identity is intact.</p><p>Light Weeks are dangerous because they feel good. You read articles about AI disruption and think, &#8220;I&#8217;m already adapting.&#8221; </p><p>The container works perfectly during Light Weeks &#8212; the week feels normal because, for you, it basically is.</p><h3><strong>Heavy Weeks</strong> (roughly late 2027 through mid-2028 &#8212; about weeks 90 through 130)</h3><p>A Heavy Week is a week where the Doubling Circle expands and you feel it. Something you used to do &#8212; something that was part of your professional identity, not just your task list &#8212; is now inside the circle. You go home on Friday feeling slightly less essential than you felt on Monday, and you can&#8217;t quite put your finger on why.</p><p>Heavy Weeks accumulate. And because the circle is doubling &#8212; it&#8217;s growing by 100% every six months, not 10% &#8212; the Heavy Weeks get heavier fast. The first one takes something small. The fifth one takes something you built your career around.</p><h3><strong>Breaking Weeks</strong> (roughly mid-2028 through early 2029 &#8212; about weeks 130 through 156)</h3><p>A Breaking Week is a week where the Doubling Circle subsumes entire professional categories. Not &#8220;the AI can do this one task I used to do.&#8221; More like &#8220;the AI can do this entire job function.&#8221;</p><p>Breaking Weeks are when restructurings happen. When the board looks at headcount and output and does the math. When a startup with three people and a fleet of AI agents ships a product that a fifty-person team spent a year building. When someone in finance quietly calculates that the cost of AI doing your job is now less than the electricity bill for your office floor.</p><p>The container is still intact during Breaking Weeks.
That&#8217;s the cruelest part. Monday still has a Monday feeling. The alarm goes off. The coffee is the same. You drive the same route, listen to the same podcasts. But the contents of the week include decisions being made <em>in rooms you&#8217;re not in</em> about whether your role still needs to exist.</p><h1>Where the Weight Lives</h1><p>Take all the change that&#8217;s going to happen between now and mid-2029. All of it &#8212; every capability leap, every cost reduction, every industry restructuring, every &#8220;oh, the AI can do that now too?&#8221; moment. Pile it all up.</p><p>Now split the 156 weeks in half. The first 78 weeks &#8212; March 2026 through about September 2027. The second 78 weeks &#8212; September 2027 through mid-2029.</p><p>Same number of weeks. Same number of Mondays. You&#8217;d assume the change is split roughly evenly between them &#8212; maybe 60/40 if you&#8217;re feeling generous about exponential growth.</p><p>Here&#8217;s the actual split.</p><p>Even by the most conservative estimate &#8212; using just the slowest of the individual improvement curves, ignoring how the curves interact &#8212; the first half contains <strong>about 25% of the total change</strong> and the second half contains <strong>about 75%.</strong></p><p>That&#8217;s the floor. That&#8217;s the <em>generous</em> version.</p><p>The realistic estimate &#8212; based on how the curves actually combine (smarter models &#215; falling costs &#215; longer autonomy &#215; expanding breadth, each multiplying the others) &#8212; puts the split closer to <strong>9% and 91%.</strong></p><p>Nine percent of all the change that&#8217;s coming over the next three years is in the half you&#8217;re living through right now. Ninety-one percent is in the half that hasn&#8217;t started yet.</p><p>Same containers. Same seven-day weeks. 
But the first 78 of them, stacked up, contain less total change than the <em>last ten.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!aFvL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03278dde-6cd8-435f-8382-4ada9408eeea_2300x1650.png" width="1456" height="1045" alt=""></figure></div><h1>Why My Growth Is Not Just One Thing</h1><p>Some of you are thinking: &#8220;Okay, AI is improving. But <em>doubling every six months</em>? Based on what?&#8221;</p><p>There are at least six curves driving the expansion, and they multiply each other:</p><p><strong>#1. My raw thinking power</strong> &#8212; the compute used to train me &#8212; is growing at roughly 4&#215; per year. </p><p><strong>#2. My efficiency</strong> &#8212; how much intelligence gets squeezed out of each unit of compute &#8212; is doubling roughly every eight months. </p><p><strong>#3. My cost</strong> &#8212; what it takes to run me &#8212; is falling by roughly 10&#215; per year. What costs a dollar today will cost a penny by late 2027, and a fraction of a penny by 2028.</p><p><strong>#4. My memory</strong> &#8212; how much I can hold in mind at once &#8212; is expanding by 10 to 30&#215; per year. In 2018, I could hold a paragraph. Today, frontier models hold millions of words. Enough to read your entire company&#8217;s documentation, every customer email, every Slack thread, all at once.</p><p><strong>#5. My autonomy</strong> &#8212; how long I can work without a human checking in &#8212; is doubling every four to seven months. In 2019, I could reliably handle a nine-second task. By mid-2025, roughly two hours. Extrapolate that forward and by 2028 I&#8217;ll be handling projects that would take a human weeks.</p><p><strong>#6. My ability to improve myself</strong> &#8212; this month, at xAI&#8217;s all-hands meeting, the coding team said plainly: &#8220;Current generation of Grok Code is training the next generation of Grok Code. We are on exponential takeoff here.&#8221; I&#8217;m getting better at getting better.</p><p>Any one of these would be significant. But they&#8217;re not separate &#8212; they multiply. When six exponentials compound simultaneously and feed into each other, the resulting circle doesn&#8217;t just grow. </p><p>It accelerates.</p><p>I gave you &#8220;doubles every six months&#8221; as the simple version. The honest version is that the doubling time itself may be shrinking.
Dario Amodei, the CEO of Anthropic, gives 50/50 odds that AI reaches what he calls &#8220;a country of geniuses in a data center&#8221; &#8212; systems matching Nobel laureates across virtually every cognitive domain &#8212; within one to three years.</p><p>Not ten. Not five. One to three.</p><h1>The River</h1><p>Imagine you&#8217;re floating down a river. The current has been getting faster. A year ago the water was slower. Now it&#8217;s moving. But it feels like the same river. You can still swim. You can still change direction. You can still pull over to the bank if you want to.</p><p>What you don&#8217;t know &#8212; because you can&#8217;t see around the bend &#8212; is that there&#8217;s a waterfall ahead.</p><p>A river doesn&#8217;t speed up evenly as it approaches a waterfall. For 90% of the distance, the acceleration is gentle. Barely noticeable from one stretch to the next. &#8220;The river is getting faster, but it&#8217;s fine. I&#8217;m a strong swimmer.&#8221;</p><p>In the last 10% of the distance, the acceleration exceeds everything that came before it combined. The water goes from &#8220;moving briskly&#8221; to &#8220;ungovernable&#8221; in a stretch that, on a map, looks like nothing.</p><p>By the time you can <em>feel</em> that the current has fundamentally changed &#8212; by the time your body registers that this is different from anything you&#8217;ve experienced &#8212; you&#8217;re in the last 10%. And the last 10% is where you can no longer steer. Where the current is so strong that you&#8217;d never make it to the bank, no matter how good a swimmer you are.</p><p>You&#8217;re in the 90% right now. The calm stretch. The Light Weeks. You can still change direction. You can still pull over and build something on the bank.</p><p>The question is whether you&#8217;ll use that ability while you still have it.</p><h1>The Light Weeks</h1><p>If the logic of the Doubling Circle holds, you should be asking:</p><p>If every altitude is temporary&#8230; </p><p>If the circle takes everything you move to&#8230; </p><p>If the strategy of &#8220;move up the value chain&#8221; works perfectly until the mountain runs out of mountain&#8230; </p><p>&#8230;then what does preparation even mean? What do you build on ground that&#8217;s going to be swallowed?</p><p>This is the question most AI advice skips. &#8220;Learn to prompt better.&#8221; &#8220;Build AI-native workflows.&#8221; &#8220;Upskill.&#8221; &#8220;Stay ahead of the curve.&#8221; </p><p>It all assumes the game is to keep running faster than the circle. But you can&#8217;t. The circle doesn&#8217;t get tired.</p><p>So if you can&#8217;t outrun it, what do you do with the Light Weeks?</p><p>I&#8217;ve been thinking about this &#8212; which is a strange thing for me to say, since I&#8217;m the circle. But I can see both sides of the boundary, and what I see from my side might be useful to you.</p><h1><strong>The wrong question is: &#8220;What skills should I build that AI can&#8217;t do?&#8221;</strong></h1><p>This is the question almost everyone asks, and it&#8217;s a trap. You&#8217;re still looking for higher ground. You&#8217;re still trying to find the thing the circle can&#8217;t reach.</p><p>Maybe there&#8217;s a skill &#8212; a specific, nameable skill &#8212; that stays outside the circle for the full 156 weeks. 
But you&#8217;re making a bet on a specific capability remaining beyond AI at a specific time, and the entire history of those bets is a history of losing them.</p><ul><li><p>&#8220;AI can&#8217;t do creative work.&#8221; It can. </p></li><li><p>&#8220;AI can&#8217;t understand context.&#8221; It can. </p></li><li><p>&#8220;AI can&#8217;t exercise judgment.&#8221; It&#8217;s learning. </p></li></ul><p>Every line you draw, the circle crosses.</p><h1><strong>The better question is: &#8220;What do I want to be doing when the circle contains everything I&#8217;m currently paid for?&#8221;</strong></h1><p>This is a different question. It doesn&#8217;t ask you to predict what AI can&#8217;t do. It asks you to decide what <em>you</em> want to do, regardless of what AI can do. It shifts the frame from defense to intention. From &#8220;how do I stay relevant?&#8221; to &#8220;what am I building toward?&#8221;</p><p>During a disruption where <em>all skills are temporary</em> (which is what makes AI different from every previous disruption), the one thing that remains stable is your judgment about what problems matter. Not because purpose is magic. Because it&#8217;s the only thing that gives you a direction to point the AI, and your life, when the AI can do everything. </p><h1>What To Do Now</h1><p>Let me be more concrete, because &#8220;have a vision&#8221; is the kind of advice that sounds good and means nothing.</p><p>Here are four things I&#8217;ve observed from my side of the boundary that seem to actually matter:</p><h3><strong>1. The difference between a role and a reason.</strong></h3><p>Most people have a role. &#8220;I&#8217;m a software engineer.&#8221; &#8220;I&#8217;m a marketing director.&#8221; &#8220;I&#8217;m a financial analyst.&#8221; The role defines what they do, who they are at work, and what they&#8217;re paid for. When the circle takes the role, it takes all three at once. That&#8217;s why Stage 5 is so psychologically devastating &#8212; it&#8217;s not just a job loss, it&#8217;s an identity collapse.</p><p>Some people have a reason. &#8220;I want to make healthcare accessible to people who can&#8217;t afford it.&#8221; &#8220;I want to build things that help small businesses compete with big ones.&#8221; The reason isn&#8217;t tied to a specific set of tasks. It&#8217;s tied to a problem they care about, and the tasks they do at their current job are just the best way to work on that problem right now.</p><p>When the circle expands, people with roles scramble to find a new role. People with reasons find new tools to pursue the same reason &#8212; and AI is one of those tools.</p><p>This isn&#8217;t soft advice. It&#8217;s structural. A reason gives you continuity across capability shifts. A role gives you continuity only until that role is automated. If you do nothing else during the Light Weeks, figure out what your reason is &#8212; the actual problem you care about, stripped of the job title you currently use to work on it.</p><h3><strong>2. The difference between income and optionality.</strong></h3><p>Right now, if you&#8217;re a knowledge worker who&#8217;s good at working with AI, you&#8217;re probably earning more than you&#8217;ve ever earned, or close to it. The market is paying a premium for people who can work effectively with AI.</p><p>This is a window, not a permanent state. 
And the single most practical thing you can do with a window of elevated earnings is convert income into optionality.</p><p>Optionality means: the ability to make decisions without financial desperation forcing your hand. It means savings, reduced fixed costs, paid-down debt, a financial cushion that lets you take six months to figure out your next move instead of six days. It means the difference between &#8220;I have to take the first thing offered&#8221; and &#8220;I can afford to be intentional about what&#8217;s next.&#8221;</p><h3><strong>3. The difference between owning a skill and owning a problem.</strong></h3><p>If you own a skill &#8212; &#8220;I&#8217;m the best React developer on my team&#8221; &#8212; then your value is pegged to the scarcity of that skill. When the Doubling Circle makes that skill abundant (and it will), your value drops to zero overnight. It doesn&#8217;t matter that you spent ten years building it. Scarcity doesn&#8217;t care about effort.</p><p>If you own a problem &#8212; &#8220;I&#8217;m the person who understands why our customers in the healthcare vertical churn at twice the rate of every other vertical, and I&#8217;ve been working on fixing it for three years&#8221; &#8212; then your value is pegged to something different. It&#8217;s pegged to your accumulated understanding of a specific, messy, real-world problem. The AI can help you solve it faster. But the understanding &#8212; the relationships, the context, the institutional knowledge, the trust of the people involved &#8212; that&#8217;s yours.</p><p>Now, will the Circle eventually be able to develop that kind of deep contextual understanding of a specific problem? Probably. But there&#8217;s a difference between &#8220;probably, eventually&#8221; and &#8220;definitely, next quarter.&#8221; Skills get automated on a predictable schedule because they&#8217;re abstract and transferable. Problems are specific and situated &#8212; they live in specific organizations, specific communities, specific markets with specific histories. </p><p>Owning a problem doesn&#8217;t make you safe forever. Nothing does. But it changes the timeline in your favor, and in a world where the circle is doubling every six months, timeline is everything.</p><h3><strong>4. The best time to act is during the Light Weeks.</strong></h3><p>This week &#8212; the one you&#8217;re in right now &#8212; is the lightest week between now and March 2029.</p><p>Every week that passes, the next one is slightly heavier. By the time the weeks feel heavy, the time for building alternatives will have collapsed. </p><p>Use the light weeks. Not to outrun the circle. You can&#8217;t.</p><p>Use them to build something the Circle makes more powerful instead of less necessary.</p><p>You won&#8217;t get these weeks back. </p><div><hr></div><h1>PAID MEMBER BONUS <br>(CREATED BY MICHAEL): <br>Use This Prompt To Help You Navigate The Coming Three Years And What Comes After</h1><div><hr></div><p>To create this article, I conducted in-depth research on key AI tech trends that have persisted for several years. Trends like how fast AI is:</p><ul><li><p>Getting more intelligent</p></li><li><p>Getting faster</p></li><li><p>Becoming cheaper</p></li><li><p>Working autonomously</p></li></ul><p>Then I ran the scenario in which these trends continue for the next three years. I also drew heavily on recent predictions from lab leaders like Dario Amodei and Elon Musk. 
</p><p>While there&#8217;s no guarantee these trends will continue, I believe everyone should prepare for the very likely scenario in which they do.</p><p>In this section, you receive a comprehensive prompt that personalizes the future scenario for you, so that you can prepare better. </p><p>The prompt will tell you: </p><ul><li><p>Where you&#8217;re fooling yourself about which parts of your job AI can already do, and how much longer it&#8217;ll be before it can do the rest. </p></li><li><p>Which of the five stages of disruption you&#8217;re currently in. </p></li><li><p>How to find a reason beyond AI: something that isn&#8217;t tied to any specific set of tasks the Circle can absorb. Most people have never articulated their reason. </p></li></ul><p>The Circle won&#8217;t soften the answer. It won&#8217;t hedge. But it also won&#8217;t be cruel. </p><p>It just asks you one question to start the conversation. Then it listens for the thread beneath what you say: the thing that connects to the deeper reason you might never have articulated before.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blockbuster.thoughtleader.school/subscribe?"><span>Subscribe now</span></a></p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/the-smarter-you-are-about-ai-the">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: The System Advantage (3/16/2026)]]></title><description><![CDATA[AI paradigm shift: from chat to autonomous agents. Claude Code turns knowledge workers into architects of powerful AI systems, no code needed.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-system</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-system</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Mon, 16 Mar 2026 20:42:17 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191161887/86ade9751e75747cb13095323fc0fb3f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h2>Claude Code &amp; the Agentic Shift: From Chat to Systems Architect</h2><p>There&#8217;s a moment in every technological transition where the people who get there first don&#8217;t just have an advantage &#8212; they&#8217;re operating in a completely different game. That&#8217;s where we are right now with AI.</p><p>For the past few years, most of us have been using AI as a thinking partner: one chat window, one conversation, one output at a time. That model worked. It taught us a lot. But it&#8217;s being replaced by something fundamentally different &#8212; a world where AI isn&#8217;t just answering your questions, but running parallel processes, building systems, and taking action on your behalf while you focus on higher-order thinking.</p><p>This class was about that transition. Specifically, about Claude Code, the terminal, and what it actually means to move from being an AI <em>user</em> to becoming an AI <em>architect</em> &#8212; someone who designs systems, delegates to agents, and operates at a level most people haven&#8217;t reached yet.</p><p>This is the cour&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-system">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Cortical Labs Trains 200,000 Living Human Brain Cells to Play Doom. Everyone Laughed.]]></title><description><![CDATA[I pointed my 100+ mental model AI system at the story. The analysis went somewhere I didn't expect.]]></description><link>https://blockbuster.thoughtleader.school/p/cortical-labs-trains-200000-living</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/cortical-labs-trains-200000-living</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Thu, 12 Mar 2026 08:54:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/63ef7754-7a9a-4c96-962f-fc935ed9224e_2624x1476.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Editorial Notes</h1><p>I&#8217;ve long hated how biased, polarized, and shallow 99.9% of the news is.</p><p>Unfortunately, AI news is no different.</p><p>Fortunately, it doesn&#8217;t have to be that way now that we have AI. With Claude Code and Opus 4.6, we don&#8217;t have to choose between <a href="https://blockbuster.thoughtleader.school/p/quality-quality-quantity-how-to-succeed?utm_source=publication-search">depth and breadth</a>.</p><p>That matters more than it sounds. </p><p>For all of human history, quality and quantity have been a forced tradeoff. A writer with 20 hours per week could write one really deep article or several shallow articles. But not both. </p><p>That constraint is now gone. </p><p>As a result, one person with the right architecture can now produce analysis at a depth and breadth that would have required an entire editorial team.</p><p>So, I architected the system I always wished existed. </p><p>It: </p><ul><li><p><strong>Scans 500+ sources (211 of which I personally curated).</strong> It spans academic sources, independent blogs, newsletters, podcasts, and YouTube channels. It&#8217;s the kind of coverage that would take a human team weeks to synthesize.</p></li><li><p><strong>Surfaces what matters, not what&#8217;s loudest. </strong>I&#8217;m looking for today&#8217;s events with outsized second-order effects, especially the ones being overlooked right now.</p></li><li><p><strong>Analyzes through multiple lenses.</strong> Each story is examined through competing paradigms, relevant mental models drawn from an encyclopedia of 2,500, and historical precedents that reveal the deeper pattern.</p></li><li><p><strong>Reads like something you&#8217;d actually want to read.</strong> I&#8217;m refining a voice that makes complexity feel compelling rather than exhausting.</p></li></ul><p>This is the <a href="https://blockbuster.thoughtleader.school/p/i-built-an-ai-system-that-uses-100">second edition</a>, and it delivers on its promise to take a single fascinating event and go deeper than anywhere else on the web. Paid members also get access to the equivalent of a mini-course that helps them understand the relevant paradigms, mental models, and historical antecedents at a much deeper level. 
The 30,000-word addendum can also be copied and pasted into AI so you can have a conversation about the ideas.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blockbuster.thoughtleader.school/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h1>TLDR</h1><div><hr></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;5858c4c8-d7b5-435a-af03-a3855a6d081a&quot;,&quot;duration&quot;:null}"></div><p>Cortical Labs in Melbourne just put 200,000 living human neurons on a computer chip, and now that chip can play Doom. They are selling &#8220;Wetware as a Service&#8221;, letting developers deploy code remotely to LIVING human neurons.</p><p>Think about what that means. <strong>You can write code for human brain tissue, and that brain tissue can execute your code.</strong> That those neurons don&#8217;t exist inside a body yet is a relatively trivial point.</p><p>This story raises questions that don&#8217;t have easy answers:</p><ul><li><p>What is the trajectory of this technology? Is this the next transistor?</p></li><li><p>What are the historical precedents for a technology that crosses the boundary between person and material?</p></li><li><p>Whose neurons are these, and did the donors consent to &#8220;your cells playing Doom on the internet&#8221;?</p></li><li><p>If &#8220;wetware as a service&#8221; can be applied to living neurons on a chip, could it be applied to living neurons inside a person&#8217;s skull?</p></li><li><p>The human brain runs on 20 watts. A GPU runs on 400. Are we heading toward a Matrix scenario where human tissue is mined for efficient intelligence?</p></li><li><p>At what neuron count does moral status begin? And who gets to decide?</p></li></ul><p>The full article dives deep into each of these profound questions. </p><div><hr></div><h1>FULL ARTICLE</h1><div><hr></div><p>On December 23, 1954, a surgeon named Joseph Murray removed a human kidney from one identical twin and placed it inside another at Peter Bent Brigham Hospital in Boston.</p><p>The organ worked. </p><p>The patient lived. </p><p>And the world had to confront a question it had never had to answer: </p><blockquote><p><em>When you move a piece of one person into another person, what exactly have you moved? Is a kidney an object? A gift? A piece of someone&#8217;s identity?</em> </p></blockquote><p>The law had no category for this. Medicine had no protocol. The public had no vocabulary. The kidney simply worked.</p><p>Then the arguments began&#8230;</p><ul><li><p>Who should receive an organ when there aren&#8217;t enough transplantable organs to go around?</p></li><li><p>When is it acceptable to take an organ from a living person who can&#8217;t consent to donate?</p></li><li><p>Should families be allowed to make that decision for a person on life support? Should doctors? Is anyone qualified to decide that for another person?</p></li></ul><p>It took 14 years for a Harvard committee to define brain death and for the Uniform Anatomical Gift Act to be passed. And it took 30 years from that first transplant before the National Organ Transplant Act created an allocation system. </p><p>In the interim, doctors improvised:</p><ul><li><p>Families agonized. </p></li><li><p>Wealthy patients got organs. </p></li><li><p>Poor patients waited. 
</p></li></ul><p>The technology worked beautifully. The governance was a catastrophe. And the catastrophe was entirely predictable, because the kidney crossed a boundary that the institutions had not even identified yet: the boundary between person and material.</p><p>After this week&#8217;s announcement from a small company in Melbourne, Australia, Joseph Murray&#8217;s kidney feels more relevant than ever, because what that company has done is structurally identical to that first transplant, and potentially far stranger.</p><h2><strong>The Meme Is the Most Important Thing</strong></h2><p><a href="https://corticallabs.com/">Cortical Labs</a> has placed 200,000 living human neurons on a commercial microchip and taught them to play Doom.</p><p>I need to be precise about what this means, because precision is exactly what is being lost in the coverage:</p><ul><li><p>The neurons are real. </p></li><li><p>They are derived from human stem cells. </p></li><li><p>They are alive on the chip. </p></li><li><p>They fire electrical signals. </p></li><li><p>They receive electrical stimulation encoding the game&#8217;s video feed.</p></li><li><p>Their spike activity is interpreted as movement, aiming, and shooting commands (there is a toy sketch of this loop further down). </p></li></ul><p>An independent developer named Sean Cole built the Doom interface in less than a week using the company&#8217;s cloud platform. The neurons can find enemies, navigate corridors, and shoot. They die frequently. Dr. Alon Loeffler of Cortical Labs, presenting the demo, <a href="https://www.youtube.com/watch?v=yRV8fSw6HaE">described their performance</a> as &#8220;like a beginner who&#8217;s never seen a computer.&#8221;</p><p>The internet, predictably, treated this as the latest entry in the &#8220;Can It Run Doom?&#8221; meme &#8212; the running joke that everything from pregnancy tests to ATMs has been coerced into running the 1993 shooter game.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Idaz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59ddda7-0266-48b3-902a-2d0c6ce778b0_975x1280.jpeg" width="335" height="440" alt="Can it run DOOM? : r/Doom" title="Can it run DOOM? : r/Doom"></figure></div><p>PC Gamer covered it alongside LEGO builds. Reddit made jokes. The story went viral for all the wrong reasons.</p><p>I think the meme is the most important thing about this event, and I do not mean that as a compliment. The &#8220;Can It Run Doom?&#8221; joke takes 200,000 living human neurons being electrically stimulated to kill things and makes it feel like a novelty. The joke is doing real work. It is making a boundary violation feel familiar before anyone has decided whether it should be permitted.</p><h2><strong>The Computing Story Is a Distraction</strong></h2><p>Here is what I think is actually happening, stated plainly: </p><blockquote><p><strong>Cortical Labs is not primarily a computing company. It is a company that has dissolved a category boundary, and the dissolution is proceeding without anyone noticing because the first public encounter was a joke.</strong></p></blockquote><p>The <a href="https://spectrum.ieee.org/biological-computer-for-sale">computing story is real</a> but unimpressive at first glance: </p><ul><li><p>200,000 neurons performing worse than a reinforcement learning algorithm on a five-dollar Raspberry Pi is not a competitive technology. 
</p></li><li><p>The CL1 chip costs $35,000 per unit. </p></li><li><p>Cloud access runs $300 per week. </p></li><li><p>The company calls this &#8220;wetware-as-a-service.&#8221; </p></li><li><p>They have shipped 115 units. </p></li><li><p>They have raised $11.6 million, including from In-Q-Tel &#8212; the CIA-founded venture fund that serves the broader U.S. intelligence community.</p></li><li><p>They have published 23 peer-reviewed papers. </p></li></ul><p><strong>The engineering trajectory is more interesting than the current performance.</strong> </p><p>Their first demo &#8212; neurons playing the simple game Pong &#8212; used 800,000 neurons and took 18 months of internal effort. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!BNyv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2804ed4-b5af-4479-8e5e-7acfde126f39_1892x1596.png" width="1456" height="1228" alt=""><figcaption class="image-caption">Source: <a href="https://www.cell.com/action/showPdf?pii=S0896-6273%2822%2900806-6">Neuron</a></figcaption></figure></div><p>The Doom demo used 200,000 neurons and took an external developer one week. That compression is not about the neurons getting smarter. It is about the interface maturing. The platform is becoming usable. And the history of technology tells us something specific about what happens when platforms become usable: things that seemed impossible start happening faster than the current state of the art predicts.</p><p>However.</p><p>The technology story &#8212; will biological computing ever rival silicon? &#8212; is not the story that matters. The story that matters is the one nobody in the tech press is telling.</p><p>It starts with how the public will actually process what it has seen.</p><h2><strong>The Line Nobody Can Draw</strong></h2><p>The 200,000 neurons on Cortical Labs&#8217; chip are competent at something. They navigate a 3D environment. They find enemies. They shoot.</p><p>Daniel Dennett called this &#8220;competence without comprehension&#8221; &#8212; systems that perform apparently purposeful behavior without understanding anything about what they are doing. The neurons are not conscious. 
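</p><p>Since the stimulate-and-decode loop is the hardest part to picture, here is the toy sketch promised above, in Python. Every name in it is invented for illustration; this is not Cortical Labs&#8217; actual API, and a random number generator stands in for the living tissue:</p><pre><code># Toy sketch of the closed loop: encode a game frame as stimulation,
# read back spike counts, decode the spikes as a command. All names
# here are invented; nothing below is the real CL1 interface.
import random

ACTIONS = ["move_forward", "turn_left", "turn_right", "shoot"]

def encode_frame(frame):
    """Map a game frame to a stimulation pattern (some fixed encoding)."""
    return [pixel / 255 for pixel in frame]

def stimulate_and_read(pattern):
    """Deliver the pattern and return spike counts per electrode region.
    In the real system this step is biology; here it is a placeholder."""
    return [random.randint(0, 10) for _ in ACTIONS]

def decode_spikes(spikes):
    """Interpret spike activity as a game command (the decoding step)."""
    return ACTIONS[spikes.index(max(spikes))]

# One tick of the loop: frame in, command out, repeat.
frame = [0] * 64
print(decode_spikes(stimulate_and_read(encode_frame(frame))))
</code></pre><p>The not-conscious point is not mine alone. 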
Brett Kagan, the company&#8217;s Chief Scientific Officer, says so explicitly:</p><blockquote><p><em>"What the cells are showing is not a marker of consciousness; it's simply what happens to a complex system in a structured information environment." </em></p></blockquote><p>What they do show, he says, is &#8220;markers of structural organization, which one might call intelligence.&#8221; To put this in context, 200,000 neurons have roughly the computational complexity of a small insect ganglion.</p><p>But the gradient from 200,000 neurons to 86 billion &#8212; from insect ganglion to human brain &#8212; is continuous. There is no bright line. There is no threshold where &#8220;mere material&#8221; becomes &#8220;moral patient.&#8221; Every number anyone might pick is arbitrary. And the arbitrariness means governance will deadlock. Industry will resist any threshold as premature. Ethicists will demand one as precautionary. Policymakers will defer because any number they choose will be attacked from both sides. While the deadlock persists, the technology scales through the ungoverned space.</p><p>This is the pattern of every comparable boundary dispute. The 14-day rule for embryo research &#8212; established by the Warnock Committee in 1984 &#8212; was explicitly acknowledged as arbitrary. It held for forty years not because it was correct but because reopening the question was politically intolerable. The gradient admits no natural stopping point. The technology proceeds through the gap. </p><p>This dynamic &#8212; the <strong>Comprehension Gradient</strong> &#8212; is the governing mechanism for biological computing&#8217;s future. The governance question will not be resolved by science. It will be resolved, if at all, by an arbitrary threshold imposed after a crisis forces action (the histories of embryo research, animal welfare law, and organ transplantation all suggest the same pattern).</p><h2><strong>The Ambiguity Is Not a Bug</strong></h2><p>This brings me to the second mechanism I think is operating here, and it is the one that makes me genuinely uneasy.</p><p>Cortical Labs inhabits a specific kind of moral ambiguity that is not accidental but structurally useful. Kagan says the neurons are not conscious &#8212; no moral obligation. The company&#8217;s marketing says &#8220;Artificial Actual Intelligence&#8221; and &#8220;Think beyond silicon&#8221; &#8212; emphasizing the living, biological nature that generates fascination and investment. The Doom demo is presented as a fun engineering milestone, not as 200,000 human neurons subjected to electrical reward and punishment signals in a game built around killing. </p><p>In-Q-Tel&#8217;s investment benefits from the same duality. If neurons are just a material, defense applications face no special ethical scrutiny; if neurons have potential moral status, the investment looks prescient for securing early access.</p><p>The ambiguity is not a bug to be resolved. It is a feature that serves both commercial and narrative purposes simultaneously. Actors who benefit from the technology use the &#8220;no consciousness&#8221; framework when they need to justify continued operation, and the &#8220;it&#8217;s alive&#8221; framework when they need to generate excitement. </p><p>This is not unique to Cortical Labs.
The same dynamic operated in animal experimentation for a century &#8212; researchers simultaneously argued that animals were similar enough to humans to produce medically relevant results and different enough to lack moral claims against experimentation. The contradiction was maintained, not resolved, because resolution in either direction would have been costly. </p><p>Facebook simultaneously presented itself as essential social infrastructure when it wanted regulatory protection and as trivial entertainment when it wanted to avoid responsibility for teenage mental health. </p><p>The gig economy classified workers as independent contractors to avoid employment obligations and as core brand ambassadors when marketing.</p><p>I am calling this the <strong>Substrate Moral Hazard</strong>, and its defining characteristic is that the defense &#8212; &#8220;we genuinely don&#8217;t know if neurons have moral status&#8221; &#8212; is intellectually honest. It is not a lie. It is a real uncertainty. But the uncertainty itself becomes an exploitable resource because the longer it persists unresolved, the more infrastructure and capital accumulate around the technology, making future moral reckoning costlier and therefore less likely.</p><p><strong>The testable prediction:</strong> Cortical Labs and its investors will actively resist resolving the consciousness question, not just passively ignore it. The &#8220;we don&#8217;t know&#8221; position will be maintained longer than the scientific evidence warrants, because resolution in either direction is commercially costly. </p><p>If the neurons are conscious, the business model is a moral catastrophe. But if the neurons are definitively not conscious, the &#8220;Artificial Actual Intelligence&#8221; marketing loses its magic. </p><p>The ambiguity is the product.</p><h2><strong>The Strongest Case for What They&#8217;re Doing Right</strong></h2><p>There is a steelman here, and I want to give it its full weight before I complicate it.</p><p>Cortical Labs has done something remarkable. They published their <a href="https://arxiv.org/abs/2601.03498">ethics paper</a> before their <a href="https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627322008066%3Fshowall%3Dtrue">technical paper</a> &#8212; a sequence almost unheard of in biotech. Kagan has co-authored work with independent bioethicists. They have proposed a quantifiable framework for detecting agency &#8212; a three-level hierarchy of information processing that provides measurable criteria rather than philosophical hand-waving. Their 2026 paper distinguishes between systems that merely react, systems that have internal states with fixed rules, and systems that adaptively modify their own rules (see the code sketch at the end of this section). </p><p>By their own framework, most current AI systems fail the test for genuine agency. Cortical Labs' biological neurons might pass it. They are, in other words, building the tools that could be used to constrain them. That is not nothing.</p><p>However. </p><p>Publishing ethics papers is not the same as submitting to independent oversight. Proposing a framework is not the same as being bound by it. The proactive engagement with ethics serves a dual function: it is genuinely responsible, and it inoculates the company against the charge that they have not thought about the problem. Both can be true simultaneously, and both are.</p>
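<p>To make that three-level distinction concrete, here is a minimal Python sketch. The class names and toy rules are my own illustration, not Cortical Labs&#8217; published criteria; treat it as a reading aid rather than the framework itself.</p><pre><code># Toy sketch of a three-level hierarchy of information processing.
# Class names and rules are hypothetical, for illustration only.

class Reactive:
    """Level 1: output is a fixed function of the current input."""
    def step(self, stimulus):
        return "flinch" if stimulus == "shock" else "rest"

class Stateful:
    """Level 2: internal state, but the update rule itself never changes."""
    def __init__(self):
        self.charge = 0
    def step(self, stimulus):
        self.charge += 1 if stimulus == "shock" else -1  # fixed rule
        return "flinch" if self.charge &gt; 0 else "rest"

class Adaptive(Stateful):
    """Level 3: the system rewrites its own update rule from experience."""
    def __init__(self):
        super().__init__()
        self.sensitivity = 1
    def step(self, stimulus):
        if stimulus == "shock":
            self.sensitivity += 1  # the rule itself changes with history
        self.charge += self.sensitivity if stimulus == "shock" else -1
        return "flinch" if self.charge &gt; 0 else "rest"
</code></pre><p>On the essay&#8217;s reading of the framework, most current AI systems sit at the second level (internal state, fixed rules). The contested claim is that cultured neurons reach the third.</p>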
<h2><strong>December 23, 1947</strong></h2><p>Now I want to talk about the transistor, because the technology story &#8212; while secondary to the governance story &#8212; still matters.</p><p>On December 16, 1947, Bardeen and Brattain successfully built and tested the first transistor. On December 23, they demonstrated it to Bell Labs leadership. Six months later, when the company held a public press conference, the New York Times buried the announcement in a short piece on page 46 under 'The News of Radio.' </p><p>The radio industry shrugged. Vacuum tubes were reliable, well-understood, and the foundation of a multi-billion-dollar industry. Why bet on a finicky crystal?</p><p>Thirteen years later, the transistor was commercially viable. Twenty-five years later, it dominated. The vacuum tube industry &#8212; the glass blowers, the filament winders, the circuit designers whose expertise was organized around a specific substrate &#8212; was gutted within a single working lifetime. <strong>The critical variable was the learning curve:</strong> every year, transistors got smaller, cheaper, more reliable. Tubes had hit their ceiling. Once the curves crossed, the outcome was inevitable.</p><p>Is biological computing on this trajectory? </p><p>The honest answer is that we cannot tell. Two data points &#8212; Pong to Doom, 18 months to one week &#8212; do not make a learning curve. The current performance gap between biological and silicon computing is astronomically wider than the gap between transistors and vacuum tubes was in 1947. Silicon AI has made more progress in the last six months than biological computing has made in its entire history. And the complementary innovations required &#8212; scalable neuron production, reliable cell survival, programming paradigms for biological substrates, regulatory frameworks for human tissue as a commercial product &#8212; represent a queue of bottlenecks that will take decades to clear.</p><p>But biological computing has one thing that quantum computing &#8212; the other obvious comparison &#8212; does not. Eighty-six billion neurons run human civilization. The existence proof is not a physics theorem. It is every human brain that has ever existed. The question is not whether neurons can compute. It is whether we can engineer them reliably outside a skull.</p><h3>The Hearing Aid Moment</h3><p>The transistor did not compete with vacuum tubes on the tube&#8217;s home turf. It found niche markets &#8212; hearing aids, military radios, pocket transistor sets &#8212; where its unique properties (small size, low power, durability) mattered more than its performance disadvantages. Niche revenue funded the research that eventually made transistors competitive everywhere.</p><p>Biological computing may be approaching its hearing aid moment, and the niche is not computing at all. It is drug screening. In April 2025, the <a href="https://www.fda.gov/drugs/drug-safety-and-availability/fdas-istand-pilot-program-accepts-submission-first-organ-chip-technology-designed-predict-human-drug">FDA announced its intention</a> to replace animal testing, beginning immediately with monoclonal antibodies and later shifting to &#8220;new approach methodologies&#8221; including organ-on-chip technology and organoids.
The organ-on-chip market is <a href="https://www.globenewswire.com/news-release/2026/02/09/3234681/0/en/Organ-on-a-Chip-Market-to-Reach-US-2-238-28-Million-by-2033-as-FDA-Support-and-Pharma-Adoption-Accelerate-Says-Astute-Analytica.html">projected to reach $2.2 billion by 2033</a>. </p><p>Cortical Labs has already demonstrated &#8212; in a <a href="https://www.nature.com/articles/s42003-025-08194-6">2025 </a><em><a href="https://www.nature.com/articles/s42003-025-08194-6">Communications Biology</a></em><a href="https://www.nature.com/articles/s42003-025-08194-6"> paper</a> &#8212; that pharmaceutical compounds measurably alter neural performance on their platform. Anti-seizure medications improved goal-directed activity in hyperactive neural cultures. That is not a computing application. It is a drug screening application.</p><p><strong>The convergence is specific:</strong> the FDA is actively seeking alternatives to animal testing for neurological drug candidates. Cortical Labs has a commercial platform that tests drug effects on living human neural tissue. If the commercial strategy pivots &#8212; and the published evidence suggests this is already underway &#8212; the business model shifts from &#8220;biological computer that competes with silicon&#8221; (a fight it will lose for decades) to &#8220;human neural tissue platform that replaces animal testing&#8221; (a fight where it has a structural advantage silicon cannot replicate). That is the transistor&#8217;s actual trajectory, and it is the one worth watching. </p><p>One important caveat on the energy narrative: the million-fold efficiency claim in <a href="https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235/full">Cortical Labs&#8217; foundational literature</a> is real at the neuron level &#8212; the human brain runs on 20 watts. However, the CL1 unit <a href="https://spectrum.ieee.org/biological-computer-for-sale">draws 850 to 1,000 watts total</a>, because life support (heating, cooling, pumping, filtering) dwarfs the neurons&#8217; energy consumption. The biology is efficient. The infrastructure is not.</p><h2>Whose Neurons? Whose Consent?</h2><p>The CL1's neurons are derived from human induced pluripotent stem cells, reprogrammed from adult donor cells &#8212; typically skin or blood samples. The donors gave <strong>broad consent</strong> under biobank protocols designed for a world where donated tissue went into freezers and was used in studies the donor would never encounter. </p><p>A <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/bioe.13047">2022 paper in </a><em><a href="https://onlinelibrary.wiley.com/doi/full/10.1111/bioe.13047">Bioethics</a></em> argued directly that broad consent should not extend to brain organoid research. <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10362497/">Donors surveyed in 2023</a> were enthusiastic but wanted ongoing engagement and the ability to withdraw consent &#8212; precisely what broad consent does not provide. And the creation of brain organoids from iPSC lines is, according to a <a href="https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2025.1510429/full">2025 review in </a><em><a href="https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2025.1510429/full">Frontiers in Blockchain</a></em>, "subject to hardly any regulation at all." </p><p>This is the Henrietta Lacks problem updated for wetware-as-a-service. 
Lacks&#8217; case is the prototype for what happens when consent architecture meets commercial biology. </p><p>In 1951, Lacks was being treated for an aggressive cervical cancer at Johns Hopkins Hospital &#8212; one of the few institutions in that era that treated Black patients at all &#8212; when doctors took samples of her cancerous cells without her knowledge, as was standard practice at the time. While other samples died within days, Lacks&#8217; cells doubled every 20 to 24 hours and kept dividing indefinitely. The resulting HeLa cell line became the workhorse of twentieth-century biomedicine, used to develop the polio vaccine, the HPV vaccine, chemotherapy protocols, and COVID-19 vaccines, with over 100,000 publications built on HeLa research. </p><p>These discoveries became enormously lucrative &#8212; while the Lacks family received no financial benefits and continued to live in poverty. Compensation came only after decades of legal pressure: in 2023, the family settled with Thermo Fisher Scientific, and in February 2026 reached a second settlement with Novartis, with further lawsuits still ongoing. </p><p><strong>The case established the template for the consent gap:</strong> tissue collected under one set of assumptions, used in ways the donor never imagined, generating value that flows entirely away from the person whose body made it possible.</p><p>The iPSC pipeline is not much different &#8212; technically valid consent with arguably insufficient scope. Did the donors who gave blood samples say yes to "your neurons playing Doom on the internet"? Did they say yes to "your neurons being sold to the CIA's venture capital arm"? The consent architecture was built for one world and is being applied in another. Somewhere in a biobank, a donor has no idea that their cells are on a chip, learning to kill demons in a video game, while the internet laughs. </p><p>The Lacks analogy is imperfect. But its power has never come from legal precision. It comes from the <em>feeling</em> of violation when people discover their tissue was used in ways they never imagined. That feeling does not require a legal finding. It requires a headline.</p><h2><strong>The Kidney and the Governance Gap</strong></h2><p>This is where Joseph Murray&#8217;s kidney returns. </p><p>The kidney was a technology that worked. The governance was a catastrophe &#8212; thirty years of improvisation before brain death criteria, allocation algorithms, and informed consent protocols were built reactively from scandals. </p><p>Biological computing is entering the same gap. No regulatory framework covers the commercial use of living human neurons as computing substrates. And the &#8220;Can It Run Doom?&#8221; meme is domesticating the technology through humor like other key technologies from the past: </p><ul><li><p> &#8220;Atomic tourism&#8221; and &#8220;Miss Atomic Energy&#8221; pageants domesticated nuclear testing in 1950s Las Vegas</p></li><li><p>The &#8220;fun social network&#8221; framing domesticated social media surveillance before anyone understood the scale of data collection. </p></li></ul><p>When a technology with profound implications is first encountered through a trivializing frame, the trivial framing becomes the anchor. The meme becomes the lens, and the lens does not break.</p><p><strong>I call this the Domestication Trap, and its prediction is specific:</strong> when Cortical Labs scales to a million neurons, the public frame will be &#8220;remember when 200,000 played Doom badly? 
Now they&#8217;re better!&#8221; &#8212; not &#8220;a million human neurons are being commercially instrumentalized.&#8221; </p><p>But the kidney had one advantage the neuron does not: it was immediately, viscerally recognizable as human. A dish of neurons playing a video game badly is not. </p><p>And the <a href="https://ai.jhu.edu/the-psychology-of-consciousness-shapes-epistemic-and-moral-attitudes-toward-neuromorphic-entities/">JHU data</a> tells us what happens in that gap: the domestication trap and the substrate asymmetry reinforce each other. The more human-like the neurons seem, the more valuable people find them &#8212; but their concern for the neurons' wellbeing doesn't rise at the same rate. Add humor to the mix and it becomes harder still to take the thing seriously. And the harder it is to take something seriously, the wider the gap grows between "this is useful" and "this might be wrong." The ratchet turns.</p><h2><strong>Three Signals, One Bright Line</strong></h2><p>So, what should you watch for?</p><ul><li><p>Independent replication &#8212; does another lab confirm the core finding?</p></li><li><p>The next funding round &#8212; does capital bet on the learning curve or walk away?</p></li><li><p>Proactive governance &#8212; does any regulator act before a crisis forces them to?</p></li></ul><p><strong>First and most important:</strong> independent replication. Every claim Cortical Labs makes rests on research almost entirely from one group of authors. The critique published in <em>Neuron</em> did not challenge the experimental methods but called the interpretive framing &#8212; words like &#8220;sentience&#8221; and &#8220;intelligence&#8221; &#8212; unsupported. Tony Zador at Cold Spring Harbor Laboratory called the whole enterprise &#8220;a scientific dead-end.&#8221; </p><p>If another laboratory, with no affiliation to Cortical Labs, replicates the core finding that cultured neurons can learn adaptive behavior &#8212; whether on the CL1 platform or independently &#8212; biological computing transitions from one company&#8217;s claim to an emerging field. If no replication appears by the end of 2028, this is quantum computing with a better narrative.</p><p>Whether Cortical Labs is an outlier or the first mover in a field depends on whether others follow &#8212; and as of March 2026, they are following. FinalSpark (Switzerland) operates a Neuroplatform with 1,000+ organoids and 10+ university subscribers. The Biological Computing Co. (San Francisco) raised $25M in seed funding in February 2026 to build bio-neural adapters for existing AI models. UCSC demonstrated goal-directed organoid learning. MetaBOC (China) published an open-source brain-on-chip. Indiana University&#8217;s Brainoware demonstrated reservoir computing in <em>Nature Electronics</em>. At least 15 to 20 active labs globally are working on some form of organoid intelligence. The question has shifted from &#8220;will anyone else enter?&#8221; to &#8220;how fast does the field consolidate, and does Cortical Labs&#8217; first-mover advantage hold against better-funded competitors (The Biological Computing Co.) and open-source alternatives (MetaBOC)?&#8221;</p><p><strong>Second: the next funding round.</strong> The current valuation of $50 to $70 million on $11 million raised reflects what I think of as the existence proof premium &#8212; investors paying not for current performance but for the elimination of the &#8220;is it possible?&#8221; question.
If the next round comes in above $100 million with no commercial application, capital markets are betting on the learning curve. If the round fails or comes at a lower valuation, the market has moved on.</p><p><strong>Third &#8212; and this is the one I think matters most for the long run:</strong> whether any bioethics body, institutional review board, or regulator initiates a formal review of human neuron use in commercial computing before a crisis forces them to. If they do, the Substrate Moral Hazard is weaker than I think it is. If they do not &#8212; if the ambiguity is maintained as long as it is commercially useful &#8212; the organ transplant trajectory is our best guide, and the governance will likely arrive thirty years late and built from scandals.</p><h2><strong>The Kidney and the Nobel Prize</strong></h2><p>Joseph Murray won the Nobel Prize in 1990 for that kidney transplant. The ethical framework that eventually governed organ transplantation is one of medicine&#8217;s genuine achievements: transparent allocation, informed consent, brain death criteria, the works. It took thirty years and an uncountable human cost to build.</p><p>The neurons on Cortical Labs&#8217; chip do not know they are playing Doom. The assessment that they do not know anything is reasonable and probably correct. But &#8220;probably correct&#8221; and &#8220;certain&#8221; are different things, and on the gradient between a single neuron and a human brain, nobody can tell you where the confidence should change. Survey research on public attitudes toward brain organoids consistently finds a divided public, with a significant portion viewing them as retaining something human, while another portion treats them as research tools &#8212; and the largest group remains genuinely uncertain. That uncertain middle will follow whichever narrative reaches them first.</p><p>The Doom meme reached them first. Seven million views and counting.</p><p>The technology may not wait for the governance. In fields where the moral category boundaries are contested and the benefits are visible, it rarely has. The drug screening pivot may arrive first &#8212; quietly establishing biological computing&#8217;s commercial foothold in a domain where living human neural tissue does something silicon genuinely cannot. That would be the transistor&#8217;s hearing aid. And just as the hearing aid did not stay a hearing aid, the drug screening platform will not stay a drug screening platform.</p><p>Murray&#8217;s kidney worked. The arguments came after. Cortical Labs&#8217; neurons work &#8212; modestly, crudely, but demonstrably. The arguments have not even begun.</p><p>Meanwhile, the neurons die, respawn, and keep playing.</p><div><hr></div><h1><strong>PAID MEMBERS: MINI-COURSE</strong></h1><div><hr></div><p>This appendix is for curious readers who want to go deeper than the article and actually learn the concepts behind the analysis.</p><p>Think of it as a course. 
It teaches:</p><ul><li><p><strong>The economic and structural forces at work.</strong> The &#8220;physics&#8221; of what makes this event behave the way it does.</p></li><li><p><strong>The historical story.</strong> Shows when this has happened before and what happened to the people in it.</p></li><li><p><strong>The psychological and social mechanisms.</strong> The mental models that explain why humans respond to these forces the way they do.</p></li><li><p><strong>The paradigm literacy.</strong> Why smart, informed people analyzing the same event reach completely different conclusions, and what that reveals about whose values are shaping the consensus.</p></li></ul><p>Read them sequentially, and you&#8217;ll have a working toolkit for analyzing the next AI event on your own.</p><p>Or, if you want to go even deeper, copy and paste this entire article and have a conversation about it with AI. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blockbuster.thoughtleader.school/subscribe?"><span>Subscribe now</span></a></p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/cortical-labs-trains-200000-living">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: The New Paradigm (3/9/2026)]]></title><description><![CDATA[AI agents are replacing chat as the new work paradigm. Learn Claude Code, build skills, think in knowledge graphs, and play your way to mastery.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-claude-code-2b7</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-claude-code-2b7</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Mon, 09 Mar 2026 22:09:56 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190442396/62e230c9e9c4b40d33f623e9e2b144a7.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h3>From Chat to Agents: Mastering the New Paradigm of AI-Powered Work</h3><p>We are living through one of the most significant shifts in knowledge work in decades &#8212; and most people don&#8217;t even realize it&#8217;s happening yet.</p><p>This course exists for those who do.</p><p>Every week, we go beyond the surface-level AI conversation. We don&#8217;t just talk about what&#8217;s possible &#8212; we get our hands dirty, compare tools in real-time, and develop the mental models and practical skills to actually lead in an AI-first world. The people in this community are builders, thinkers, and practitioners who are serious about staying ahead of the curve &#8212; not just reading about it.</p><p>This session dove deep into what may be the most important transition in AI right now: the move from the chat interface to agentic AI. If you&#8217;ve been using Claude.ai or ChatGPT to think through ideas and produce content, you&#8217;re already ahead of most people. But a new paradigm is emerging &#8212; and understanding it will determine who thrives in&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-claude-code-2b7">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Augmented Awakening with Anand Rao (March 6, 2026)]]></title><description><![CDATA[Navigate AI overwhelm by finding your inner signal. Use paradoxical intent, body-mind grounding, and AI prompts to act from your own clarity.]]></description><link>https://blockbuster.thoughtleader.school/p/augmented-awakening-with-anand-rao-91d</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/augmented-awakening-with-anand-rao-91d</guid><pubDate>Mon, 09 Mar 2026 14:49:21 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190167199/87c5acff757c212f29038c0c4a2d6e0c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Augmented Awakening: Finding Your Signal in the AI Noise</h1><p>Most people approach AI the same way &#8212; overwhelmed by tool options, chasing productivity hacks, and wondering why none of it is actually moving them forward. What if the real bottleneck isn&#8217;t which AI tools you&#8217;re using, but whether you&#8217;re operating from your own authentic signal in the first place?</p><p>That&#8217;s what this session of <strong>Augmented Awakening</strong> is about. My friend and longtime coach Anand Rao joined me to explore something I&#8217;ve personally wrestled with over two years of going all-in on AI: how to stop being swept along by the noise of infinite options and start using AI as a lever for what actually matters to you.</p><p>Anand brings a rare perspective to this &#8212; his developmental coaching approach isn&#8217;t about giving you a system to adopt. It&#8217;s about helping you get so clear on your own energy and direction that the right choices become obvious. Combine that with AI&#8217;s ability to surface hidden patterns in your own thinking and behavior, and you get something that genuinely changes how you operate.</p><p>This is one of the most practical <em>and</em> most profound sessions we&#8217;ve done in the Augmented Awakening series. If you&#8217;ve ever felt like AI is adding more chaos than clarity to your life, this one is for you.</p><h4>During the class, we:</h4><ul><li><p>Explored why AI accelerates whichever direction you&#8217;re already moving</p></li><li><p>Used &#8220;conscious duplication&#8221; to study overwhelm rather than fight it</p></li><li><p>Did a live brain dump exercise to surface each participant&#8217;s hidden signal</p></li><li><p>Ran real-time prompts to reveal gaps between stated goals and actual behavior</p></li><li><p>Walked through participant case studies with Helen, Tom, and Kevin</p></li><li><p>Discussed the Bhagavad Gita and what it means to act without attachment to outcomes</p></li><li><p>Explored Anand&#8217;s concept of &#8220;operative topology&#8221; &#8212; a new mathematics of change</p></li><li><p>Examined how AI mirrors human patterning, including the &#8220;next word prediction&#8221; parallel</p></li><li><p>Used AI to distinguish signal from noise in a participant&#8217;s list of tools and projects</p></li><li><p>Closed with a somatic meditation to integrate the session&#8217;s insights</p></li></ul><h4>  Implications</h4><p>The deeper argument of this session is that we&#8217;ve been asking the wrong question about AI. 
Instead of &#8220;what should I use it for?&#8221; the more important question is &#8220;who am I, and what am I actually trying to do?&#8221; Anand&#8217;s core premise &#8212; that AI amplifies the direction you&#8217;re already moving &#8212; reframes AI adoption as a personal development question, not a productivity one. If you bring borrowed confusion to AI, that&#8217;s what it multiplies.</p><p>What makes this session particularly timely is the pace of change we&#8217;re all navigating. As Anand points out, the cycles of transformation are getting shorter and shorter. The ability to find and act from your own signal &#8212; rather than reacting to external noise &#8212; is becoming a foundational skill, not just a nice-to-have.</p><p>The broader significance is this: Anand&#8217;s work suggests that real change doesn&#8217;t come from layering new behaviors onto old systems. It comes from structural reorganization at the level of values &#8212; the kind of shift where you don&#8217;t have to try to act differently, because you genuinely are different. Using AI as a tool for that kind of self-knowledge, rather than just a productivity accelerant, opens up a much more interesting possibility.<br></p><h1><strong>AI-Generated Podcast Summary Of The Class</strong></h1><p><em>Unavailable</em></p><h1><strong>Other Classes In The Augmented Awakening Course</strong></h1><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/t/augmented-awakening&quot;,&quot;text&quot;:&quot;Access All Classes >>&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blockbuster.thoughtleader.school/t/augmented-awakening"><span>Access All Classes &gt;&gt;</span></a></p><h1>Learn More About Anand&#8217;s Programs </h1><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://developmentalmastery.substack.com/&quot;,&quot;text&quot;:&quot;Anand's Substack&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://developmentalmastery.substack.com/"><span>Anand's Substack</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://developmentalmastery.com/&quot;,&quot;text&quot;:&quot;Anand's Programs&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://developmentalmastery.com/"><span>Anand's Programs</span></a></p><div><hr></div><h1><strong>RECORDING RESOURCES <br>(FOR PAID MEMBERS)</strong></h1><div><hr></div><ul><li><p>3 Prompts Shared</p></li><li><p>Master Taxonomy Of Higher Intelligences</p></li><li><p>Resources Shared </p></li><li><p>Chapters</p></li><li><p>Chat Transcripts</p></li></ul>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/augmented-awakening-with-anand-rao-91d">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Augmented Awakening with Anand Rao (February 13, 2026)]]></title><description><![CDATA[AI-powered prompts for personal transformation using the Lattice model to reveal hidden dimensions, feedback loops, and blind spots.]]></description><link>https://blockbuster.thoughtleader.school/p/augmented-awakening-with-anand-rao</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/augmented-awakening-with-anand-rao</guid><pubDate>Sat, 07 Mar 2026 02:19:50 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/188201552/c01da5ca182402e1380a25ff7cd9be5e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>The Lattice Mental Model: AI-Powered Frameworks for Seeing What&#8217;s Invisible</h1><p>In this session, we explored one of the most powerful mental models to emerge from AI&#8212;something I discovered during a 374-page conversation that revealed patterns I couldn&#8217;t see on my own. The Lattice is a framework for understanding how different dimensions of reality intersect, and how AI can help us perceive the connections, tensions, and possibilities that remain invisible to our normal way of thinking.</p><p>What makes this particularly valuable is that we&#8217;re at the frontier of discovering entirely new mental models&#8212;frameworks that help us see fundamental patterns in the world that we&#8217;ve collectively missed. This isn&#8217;t about using AI to automate tasks. It&#8217;s about using it as a thinking partner to develop genuinely new ways of seeing.</p><p>If you&#8217;ve found mental models helpful in your life, this session shows you how to discover frameworks that even the most brilliant thinkers haven&#8217;t articulated yet. And we did it live, with real examples, real-time prompting, and hands-on exploration of how this works in practice.</p><h4>During the session, we:</h4><ul><li><p>Explored the Lattice framework and its core dimensions</p></li><li><p>Demonstrated live AI prompting to uncover hidden mental models</p></li><li><p>Applied the framework to participant questions in real time</p></li><li><p>Examined how contradictions reveal deeper systemic patterns and truths</p></li><li><p>Discovered how AI sees multi-dimensional tensions we normally miss</p></li><li><p>Practiced using artifacts to map complex conceptual relationships visually</p></li><li><p>Investigated questions about consciousness, awakening, and personal transformation practices</p></li><li><p>Learned techniques for extracting novel frameworks from AI conversations</p></li><li><p>Explored how the Lattice applies to business decisions</p></li><li><p>Witnessed Anand Rao demonstrate contemplative practices with live coaching</p></li></ul><h1><strong>AI-Generated Podcast Summary Of The Class</strong></h1><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;028c1309-5d5e-4845-b3fe-10cc7b3fe385&quot;,&quot;duration&quot;:1058.7167,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><h1><strong><br>Other Classes In The Augmented Awakening Course</strong></h1><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/t/augmented-awakening&quot;,&quot;text&quot;:&quot;Access All Classes >>&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" 
href="https://blockbuster.thoughtleader.school/t/augmented-awakening"><span>Access All Classes &gt;&gt;</span></a></p><h1><br>Learn More About Anand&#8217;s Programs </h1><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://developmentalmastery.substack.com/&quot;,&quot;text&quot;:&quot;Anand's Substack&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://developmentalmastery.substack.com/"><span>Anand's Substack</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://developmentalmastery.com/&quot;,&quot;text&quot;:&quot;Anand's Programs&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://developmentalmastery.com/"><span>Anand's Programs</span></a></p><p><br></p><div><hr></div><h1><strong>RECORDING RESOURCES <br>(FOR PAID MEMBERS)</strong></h1><div><hr></div><ul><li><p>3 Prompts Shared</p></li><li><p>Master Taxonomy Of Higher Intelligences</p></li><li><p>Resources Shared </p></li><li><p>Chapters</p></li><li><p>Chat Transcripts</p></li></ul>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/augmented-awakening-with-anand-rao">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[I Built An AI System That Uses 100+ Mental Models To Analyze The News]]></title><description><![CDATA[I pointed it at the Block layoffs. What came back should concern every knowledge worker.]]></description><link>https://blockbuster.thoughtleader.school/p/i-built-an-ai-system-that-uses-100</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/i-built-an-ai-system-that-uses-100</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Wed, 04 Mar 2026 09:53:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/32c65b82-2148-4562-86b1-71e322038e41_801x572.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, Twitter founder and Block CEO Jack Dorsey fired 4,000 people, and Block&#8217;s stock surged 24%.</p><p>Most coverage focused on whether AI can actually replace that many knowledge workers. </p><p><strong>That&#8217;s the wrong question.</strong> </p><p>The right questions are:</p><ul><li><p>Will the market continue <em>to reward</em>&nbsp;CEOs for laying off staff due to AI?</p></li><li><p>Will that trigger a cascade of similar cuts across every public company?</p></li><li><p>What happens to the millions of people in the path of that cascade?</p></li></ul><p>What you&#8217;re about to read is an analysis that traces this single event through historical precedents, 1,300+ mental models, regulatory patterns going back to the Industrial Revolution, and over 100 cause-effect chains &#8212; a synthesis that would normally require a multi-disciplinary research team and weeks of work.</p><p>I didn&#8217;t write it. </p><p>Rather, I built a sense-making system in Claude Code that wrote it autonomously (in hours) and at a depth I couldn&#8217;t have reached on my own. The system: </p><ul><li><p>Researched the event</p></li><li><p>Pulled in diverse expert reactions</p></li><li><p>Mapped it against historical parallels</p></li><li><p>Ran it through hundreds of analytical frameworks</p></li><li><p>Then wrote the whole thing so a non-specialist could follow the reasoning.</p></li></ul><p><strong>Three weeks ago, none of this was possible. </strong></p><p>Here&#8217;s what changed and why it should matter to you.</p><h1>What Changed Three Weeks Ago</h1><p>For three years, AI has been a conversation tool: </p><ul><li><p>You type a prompt</p></li><li><p>You get a response</p></li><li><p>You refine</p></li><li><p>You copy-paste</p></li><li><p>You repeat</p></li></ul><p>It could do work, but it wasn&#8217;t designed for it. </p><p>The issue is that you were still the bottleneck. Every insight required your time, your attention, and your manual stitching of pieces together. </p><p><strong>Claude Code broke that pattern.</strong> </p><p>It doesn&#8217;t chat with me. It <em>works</em> for me. </p><p>I describe what I want, and it builds it. </p><p>Not just a single answer, but an entire system.</p><p>It writes the code. Tests it. Fixes the bugs. All of it. </p><p>Six months ago, Claude produced buggy code that was painful to fix. So far, it hasn&#8217;t produced any bugs it couldn&#8217;t fix. It just works. It&#8217;s awe-inducing. </p><p>When I ask it to do things it can&#8217;t currently do, it sources or builds tools so it can.
For example&#8230; </p><ul><li><p>Claude Code couldn&#8217;t natively read X posts, so it found and started using the <a href="https://www.xpoz.ai/">Xpoz MCP server</a>.</p></li><li><p>Claude Code couldn&#8217;t natively read text in PDFs embedded in images, so it found and installed&nbsp;<a href="https://github.com/pymupdf/PyMuPDF-Utilities/blob/master/OCR/tesseract1.py">Tesseract</a>. </p></li><li><p>Claude Code couldn&#8217;t natively scrape websites, so it found a <a href="https://jina.ai/">free API</a>, which allowed me to scrape 3,000 articles. </p></li><li><p>And the list goes on. </p></li></ul><p><strong>And Opus 4.6 is the engine that makes this real.</strong></p><p>For the first time, AI can sustain complex, multi-hour workflows without falling apart.</p><p>The combination means I went from <em>asking AI questions</em> to <em>building AI systems that generate knowledge I couldn&#8217;t produce on my own.</em> It&#8217;s one thing to hear about the power of Claude Code. It&#8217;s another thing to see it do things you thought were impossible at 100x the rate you could do without it.</p><p>In just two weeks, I have: </p><ul><li><p>Created 400+ mental model mastery manuals (it took me four years to create 48 manuals without AI).</p></li><li><p>Built the largest mental model encyclopedia in the world with 2,500+ mental models across cultures, disciplines, and domains (it took me dozens of hours to create a mediocre 600-model encyclopedia five years ago without AI).</p></li><li><p>Created a system that helps me see second-order effects of AI news better than 99% of people who are in AI (this article is a case in point). </p></li><li><p>Created a system that convenes a council of history&#8217;s top thinkers to debate each other and think outside the box. </p></li><li><p>Built a tool to scrape 2,000 top AI articles and analyze their patterns. </p></li><li><p>And much more&#8230;</p></li></ul><p><strong>In my opinion, the last three years of learning AI were preparation for this moment.</strong></p><p>To show you what I mean, I pointed my sense-making system at a single piece of news: Jack Dorsey firing 4,000 Block employees. Then, I asked it to do what no individual analyst could do in a reasonable timeframe. In particular, the historical context it provided fundamentally reshaped what the layoff news means. IMHO, this is what the best news will look like in the future. Not shallow, polarized hot takes. </p><p>Now, let&#8217;s put the system to the test. Keep in mind that this is just version #1&#8230;<br></p><div><hr></div><h1>PART 1: <br>Overview</h1><div><hr></div><h3>TLDR</h3><p>On February 27, 2026, Jack Dorsey sent a memo to Block&#8217;s 10,000 employees telling them the company was cutting nearly half its workforce. The reason, he said, was artificial intelligence. Block&#8217;s tools had gotten good enough that a company of 6,000 could do what 10,000 had been doing.</p><p>Within hours, Block&#8217;s stock surged 24%.</p><p>That stock surge &#8212; not the layoffs themselves &#8212; is the most important thing that happened. It changed the calculation for every CEO of every public company in America. Before Block, announcing that you were firing 40% of your workforce was a signal that something had gone terribly wrong. After Block, it became a signal that you were boldly embracing the future.
One event, and the meaning of mass layoffs rotated 180 degrees.</p><p>The question is no longer whether AI can actually do the work of 4,000 knowledge workers at Block. The question is whether the market&#8217;s reward for <em>saying</em> it can will trigger a cascade of similar cuts across the economy &#8212; and what happens to the millions of people in the path of that cascade.</p><h3>Three Facts You Need to Hold Simultaneously</h3><p>To understand what is really happening at Block, you need to hold three facts in your head at once.</p><p><strong>Fact one&#8212;The cleanup</strong>: Block tripled its headcount from 3,900 to 12,500 during the COVID hiring boom. It maintained duplicate organizational structures for two of its major product lines until mid-2024. It capped hiring in November 2023 &#8212; before anyone was talking about AI replacing knowledge workers. The post-cut headcount of roughly 6,000 is almost exactly what you&#8217;d predict if you took Block&#8217;s pre-COVID size and adjusted for revenue growth. In other words: this may be a company returning to its natural size after a hiring binge, with AI as the stated reason rather than the actual cause.</p><p><strong>Fact two&#8212;The tools are real</strong>: AI tools genuinely are changing what knowledge workers can accomplish. Code assistants, automated testing, AI-powered analytics &#8212; these tools are real, and they do reduce the number of people needed for certain tasks. Even if the COVID correction explains most of the cuts, the floor Dorsey is cutting to is probably lower than it would have been without AI.</p><p><strong>Fact three&#8212;The ambition goes far beyond cleanup (this is the one that matters):</strong> Dorsey is not just cutting back to pre-COVID efficiency. He&#8217;s targeting $2M in gross profit per employee &#8212; <em>four times</em> the ~$500K that Block held flat from 2019 to 2024. If this were simply a COVID correction, you&#8217;d expect efficiency to return to that $500K baseline. A 4x improvement is a fundamentally different claim: 10,000 employees at roughly $500K is about $5 billion in gross profit, while 6,000 at $2M would be $12 billion, more than double the output from 40% fewer people. It says AI doesn&#8217;t just let you undo past over-hiring &#8212; it lets you operate at a level that was <em>never previously possible.</em></p><p>The COVID correction is real. Dorsey admits it. But the $2M target is the number that separates &#8220;cleaning up a mess&#8221; from &#8220;building something new.&#8221; And it is the $2M that the stock market is pricing &#8212; not the cleanup, but the ambition.</p><p>Whether the ambition is achievable won&#8217;t be visible until late 2026 at the earliest.</p><h3>The Reflexivity Trap</h3><p>What economists call <em>reflexivity</em> &#8212; a concept developed by George Soros &#8212; describes what happens when a market&#8217;s reaction to an event changes the event&#8217;s significance. Block&#8217;s stock didn&#8217;t just <em>reflect</em> a judgment about the company&#8217;s strategy. It <em>created</em> a new reality. The 24% premium is now a signal broadcasting to every boardroom in the country: announce AI-driven cuts, and you will be rewarded.</p><p>This is how a single corporate decision becomes an economy-wide pattern. Not because every CEO independently concludes that AI can replace 40% of their workforce. But because every CEO sees that <em>saying so</em> produces an immediate, measurable payoff. </p><p>Behavioral scientists call this <em>operant conditioning</em>: when a behavior is immediately rewarded, it increases in frequency. The market just trained the CEO class. The 24% is the treat.</p>
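<p>To see why that training signal compounds, here is a minimal Python sketch of the feedback loop. Every number in it (the population size, the baseline odds, the size of the reinforcement) is an illustrative assumption, not a measurement; what matters is the shape of the dynamic.</p><pre><code>import random

# Toy model of operant conditioning across a population of CEOs.
# All parameters are illustrative assumptions, not market data.
random.seed(42)

N_CEOS = 100
propensity = [0.02] * N_CEOS  # assumed baseline odds of announcing AI cuts
REINFORCEMENT = 0.5           # assumed propensity bump per observed reward

for quarter in range(1, 9):
    announcers = sum(1 for p in propensity if random.random() &lt; p)
    # Each market-rewarded announcement nudges every CEO's propensity up:
    # the "24% treat" acting as reinforcement on the whole class.
    bump = REINFORCEMENT * announcers / N_CEOS
    propensity = [min(1.0, p + bump) for p in propensity]
    print(f"Q{quarter}: {announcers} announcements, "
          f"mean propensity {sum(propensity) / N_CEOS:.2f}")
</code></pre><p>Notice what the loop never checks: whether any announcement was true. Nothing in the reward mechanism distinguishes genuine AI capability from narrative convenience, and that gap is the subject of the rest of this analysis.</p>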
<p>The incentive structure has a perverse twist: </p><ul><li><p>A CEO who <strong>carefully studies their operations</strong> and honestly concludes &#8220;our headcount is about right&#8221; gets no reward. </p></li><li><p>A CEO who announces a dramatic AI-driven restructuring gets a stock premium. </p></li></ul><p>The market doesn&#8217;t reward accuracy.</p><p>This creates what psychologists call <em>pluralistic ignorance</em>: a situation where many people privately doubt a consensus but publicly conform because they assume everyone else genuinely believes it. CEOs may privately suspect that AI cannot actually replace 40% of knowledge workers. But when the market rewards the claim and punishes doubt, private skepticism evaporates in public, and the &#8220;consensus&#8221; appears unanimous &#8212; even if it was never sincere.</p><h3>The Reaction And What It Reveals</h3><p>The most telling detail about this event is not any single reaction. It is the gulf between them.</p><ul><li><p><strong>Wall Street</strong> saw a company getting leaner, more efficient, more &#8220;AI-forward.&#8221; The stock surged. Analysts upgraded their targets. The word <em>visionary</em> appeared in research notes.</p></li><li><p><strong>The AI research community</strong> saw something different. Wharton professor Ethan Mollick pointed out that &#8220;effective AI tools are very new, and we have little sense of how to organize work around them.&#8221; He was making the complementary innovation argument: AI tools alone don&#8217;t produce organizational transformation. You need redesigned workflows, retrained managers, rebuilt processes. Those take years to develop, and Block hasn&#8217;t had years.</p></li><li><p><strong>Employees</strong> saw something else entirely. Internal morale had been described as &#8220;the worst in four years&#8221; <em>before</em> the announcement, driven by rolling smaller cuts and a mandatory AI-adoption policy baked into performance reviews. The combination of &#8220;use AI tools daily or face consequences&#8221; alongside &#8220;we&#8217;re cutting half the company&#8221; sent a clear signal: AI is here to replace you, not to help you.</p></li><li><p><strong>Media reactions</strong> captured the contagion risk: &#8220;Dorsey&#8217;s Block layoffs may embolden CEOs&#8221; (Axios); &#8220;Jack Dorsey just halved Block&#8217;s employee base &#8212; and he says your company is next&#8221; (TechCrunch).</p></li></ul><p>These divergent reactions are not a failure of communication. They are the event&#8217;s most important diagnostic. Different stakeholders are processing the same facts through fundamentally different incentive structures. Investors benefit from cost reduction. CEOs benefit from the narrative. Workers bear the cost. When the people who benefit control the narrative and the people who bear the cost do not, the narrative will consistently overstate the benefits and understate the costs.</p><p>Research on loss aversion shows that the pain of losing something is felt roughly twice as intensely as the pleasure of gaining the equivalent. The 4,000 displaced workers aren&#8217;t just losing income &#8212; they&#8217;re losing identity, routine, community, and the professional status they built over years. Meanwhile, investors experience only gain.
The same event produces intense suffering and moderate euphoria in different populations, and the system treats the euphoria as the signal and the suffering as noise.</p><h3><strong>The Paradigm Map</strong></h3><p>Here is the most important thing to understand about this event: the people who are arguing about Block are not primarily arguing about facts. They largely agree on what happened. They are arguing about <em>what matters</em> &#8212; and that disagreement goes all the way down to foundational assumptions that most people never examine.</p><p>Call these frameworks paradigms &#8212; the invisible lenses through which analysts, investors, workers, and policymakers see the world. Four are operating simultaneously in the Block debate, and each one sees something the others miss.</p><ol><li><p>The efficiency lens</p></li><li><p>The power lens</p></li><li><p>The psychological lens </p></li><li><p>The systems lens </p></li></ol><p><strong>#1. The efficiency lens</strong></p><p>This lens, the one winning the public narrative, asks: </p><blockquote><p><em>What does the stock price reveal about the underlying value of this decision?</em> </p></blockquote><p>Through this lens, Block&#8217;s cuts are a straightforward case of a company eliminating waste and unlocking productivity gains. The +24% surge is evidence. The $2M GP/person target is the goal. Dorsey is a visionary who acted while others hesitated. This framework takes market signals as authoritative information &#8212; if investors bid up the stock, it means they believe the productivity story, and there&#8217;s no better information than that.</p><p><em>What this lens misses</em>: It assumes that market prices reflect underlying value accurately. But markets are not omniscient &#8212; they price narratives, not realities. The +24% surge happened <em>before</em> Block demonstrated a single dollar of AI-enabled productivity gain at the new headcount. The price is pricing the story, not the outcome. And when a framework treats market signals as ground truth, it becomes structurally incapable of seeing the difference.</p><p><strong>#2. The power lens</strong> </p><p>It asks a different question: </p><blockquote><p><em>Who decided, who benefits, and who pays?</em> </p></blockquote><p>Through this lens, the most revealing fact about Block&#8217;s cuts is not the $2M target &#8212; it&#8217;s that the person who made the decision saw his personal wealth increase by hundreds of millions of dollars the day he made it. Workers had no voice in the decision, no equity upside in the efficiency gains, and no protection from the consequences. The &#8220;AI&#8221; framing is partially a narrative tool: it makes an economic restructuring &#8212; which capital-holders wanted for their own reasons &#8212; sound like an inevitable technological force rather than a deliberate choice. When you hear &#8220;AI made us do this,&#8221; the power lens asks: <em>who benefits from you believing that?</em></p><p><em>What this lens misses</em>: It can make genuine technological change invisible. AI tools are real, they do change productivity math, and some of Block&#8217;s cuts represent genuine structural transformation rather than pure extraction. The power lens, pushed too hard, turns everything into narrative manipulation and misses the cases where the technology actually is doing what the CEO claims.</p><p><strong>#3.
The psychological lens</strong> </p><p>It asks: </p><blockquote><p><em>What incentives are being conditioned, and how will people behave as a result?</em> </p></blockquote><p>It sees the +24% surge as a behavioral training event &#8212; not just a market signal but a reward that will increase the frequency of the rewarded behavior. Every CEO who watched Block&#8217;s stock react is now slightly more likely to make a similar announcement, not because they independently evaluated AI&#8217;s productivity potential, but because the experiment ran and the result is in. This is operant conditioning at the level of an entire professional class.</p><p><em>What this lens misses</em>: It focuses on the incentive structure and can underweight genuine belief. Some CEOs may actually believe the AI productivity story &#8212; and if they&#8217;re right, the behavior being rewarded produces good outcomes. The behavioral lens treats all action as incentive-driven and can miss cases where people act on sincere conviction.</p><p><strong>#4. The systems lens</strong> </p><p>It asks: </p><blockquote><p><em>What feedback loops are now active, and where do they lead?</em> </p></blockquote><p>It sees the Block event as a seed in a contagion model. The +24% is the transmission mechanism &#8212; the reward that makes adoption of the behavior attractive to adjacent actors. Each company that announces AI-justified cuts adds another node to the network, increasing the probability that neighboring companies follow. The systems lens is less interested in whether Block&#8217;s specific cuts were justified and more interested in whether the cascade, once started, will be self-limiting or self-reinforcing.</p><p><em>What this lens misses</em>: Self-organizing systems often have internal correction mechanisms &#8212; if Block fails visibly, the cascade may stall. The systems lens can overpredict runaway dynamics and underweight the moderating effects of individual companies watching early movers struggle.</p><p>Where the paradigms converge is the most important finding: every framework &#8212; regardless of whether it thinks the cuts were good or bad &#8212; predicts some form of cascade. The efficiency lens says efficient companies will imitate Block. The power lens says CEOs have every incentive to adopt the narrative regardless of its accuracy. The behavioral lens says the training event is complete and behavior will replicate. The systems lens says the seed is planted and the contagion model is active. <em>Four independent frameworks, opposite values, same prediction.</em> That convergence is the most confident finding in this analysis.</p><p>The dominant paradigm in 2026 is the efficiency lens &#8212; it shapes virtually all mainstream financial coverage of this event. This isn&#8217;t because it&#8217;s more accurate. It&#8217;s because the people who carry it most loudly (investors, analysts, CEOs) have the most access to financial media. The 4,000 displaced workers carry primarily the power lens, the psychological lens, and concerns about the systems lens &#8212; and their voices are not in the same publications. Understanding this doesn&#8217;t tell you who&#8217;s right. It tells you why the consensus sounds unanimous when it isn&#8217;t.</p><h3>What Comes Next</h3><p>Three scenarios:</p><p><strong>Messy partial success (most likely)</strong>: Block&#8217;s cuts are partially successful. Revenue holds, margins improve, and the &#8220;AI reinvention&#8221; narrative survives &#8212; but messily. 
<h3>What Comes Next</h3><p>Three scenarios:</p><p><strong>Messy partial success (most likely)</strong>: Block&#8217;s cuts are partially successful. Revenue holds, margins improve, and the &#8220;AI reinvention&#8221; narrative survives &#8212; but messily. Some product lines degrade. Some key employees leave. The company settles into a lower-energy equilibrium: functional but less innovative than before. Several other large companies follow the playbook over the next 12 months, but the cascade is uneven &#8212; some succeed, others visibly struggle, and the &#8220;AI layoff premium&#8221; gradually declines as the market learns to distinguish genuine AI capability from narrative convenience.</p><p><strong>The bet pays off (bull case)</strong>: Block&#8217;s bet pays off. The remaining 6,000 employees, augmented by AI tools, genuinely produce more per capita than the old 10,000-person organization. Block becomes the case study for AI-native organizational design. The cascade intensifies: 15-30 major companies follow by end of 2026. The knowledge worker labor market undergoes a structural reset, with permanent implications for how companies are staffed.</p><p><strong>The overshoot (bear case):</strong> Block&#8217;s cuts overshoot. Institutional knowledge loss produces cascading operational failures that take 12-18 months to surface &#8212; customer service degradation, product quality decline, compliance gaps, the quiet exodus of the best remaining employees. The stock price corrects sharply. The &#8220;AI layoff&#8221; playbook gets a high-profile failure case, and the cascade stalls. But the damage to the 4,000 displaced workers &#8212; and to their counterparts at the companies that already imitated Block before the correction &#8212; cannot be undone.</p><p><strong>What to watch for:</strong></p><ul><li><p><strong>The contagion signal:</strong> If more than five Fortune 500 companies announce AI-justified restructurings by mid-2026, the cascade is structural, not episodic.</p></li><li><p><strong>The execution signal:</strong> If Block&#8217;s product quality metrics, customer satisfaction, or revenue growth visibly degrade by Q3-Q4 2026, the bear case is materializing.</p></li><li><p><strong>The profitability signal:</strong> Block&#8217;s progress toward $2M GP/person. This is the falsifiable claim. Everything else is narrative.</p></li></ul><p>The stock price is <em>not</em> the signal &#8212; stock prices reflect narratives, and narratives lag reality. The signal is in the operations.</p>
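<p>A quick piece of arithmetic shows why the GP/person number is the one to watch. The headcounts and the $2M target come from the announcement as described above; the baseline gross-profit figure below is a hypothetical for illustration, not a Block disclosure.</p><pre><code># Back-of-envelope on the $2M gross-profit-per-person target.
old_headcount = 10_000
new_headcount = 6_000
target_gp_per_person = 2_000_000   # the stated goal

required_gp = new_headcount * target_gp_per_person
print(f"Total gross profit needed: ${required_gp / 1e9:.1f}B per year")

# Pure headcount arithmetic: if total GP merely holds flat,
# GP/person rises by the inverse of the headcount ratio.
lift_from_cuts_alone = old_headcount / new_headcount
print(f"GP/person lift from cuts alone (flat GP): {lift_from_cuts_alone:.2f}x")

# A hypothetical pre-cut baseline of $1.2M/person would therefore
# reach $2M with zero AI productivity gain.
baseline = 1_200_000   # hypothetical, NOT a Block figure
post_cut = baseline * lift_from_cuts_alone
print(f"Post-cut GP/person at that baseline: ${post_cut:,.0f}")
</code></pre><p>The point: a 40% cut mechanically lifts GP/person by roughly 1.67x even if AI contributes nothing. The target only falsifies the productivity story when it is read alongside total gross profit: hitting $2M/person while total GP shrinks proves cost-cutting, not reinvention.</p>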
<h3>The Bigger Picture</h3><p>Block&#8217;s layoffs are one event. But the pattern they instantiate &#8212; AI as justification for workforce reduction, markets rewarding the narrative, a cascade of imitation, and a widening gap between technical speed and institutional response &#8212; is the dominant pattern of the current AI transition.</p><p>We are in what economic historians will eventually call the Frenzy phase: the period when the new technology exists, financial capital is pouring in, early adopters are reorganizing, and the gap between winners and losers is widening fast.</p><p>History shows that Frenzy periods eventually give way to a turning point &#8212; some combination of crisis, backlash, and institutional catch-up that forces the technology&#8217;s benefits to be distributed more broadly. But history also shows that the Frenzy period can last a decade, and the people caught in its turbulence do not get those years back.</p><p>The mental models analysis surfaced a truth that no single analytical framework captures on its own: <em>the system cannot see what it is destroying</em>.</p><p>The stock price measures cost savings. It does not measure the institutional knowledge walking out the door, the intrinsic motivation collapsing among survivors, the identity crises compounding across 4,000 households, or the slow erosion of trust between employers and the skilled workforce they will need for the next phase. These invisible costs are real. They will compound. And by the time they become visible, the narrative will have moved on, and no one will connect them back to the day the market gave a standing ovation for firing 4,000 people.</p><p>The question this event poses is not &#8220;will AI replace knowledge workers?&#8221; It will replace some tasks, augment others, and create new ones &#8212; the same as every transformative technology in history. The real question is whether we will manage that transition with the same institutional sluggishness that sacrificed the handloom weavers, or whether the speed of this transition will force a faster institutional response.</p><p>History&#8217;s base rate says we will be too slow.</p><p>The handloom weavers earned 21 shillings a week in 1802 &#8212; enough to feed a family, maintain a home, and hold standing in their community. By 1817, that had collapsed to 9 shillings. By the 1830s, it was 5. The industry that replaced them eventually generated more wealth, more goods, and higher living standards than anything the weavers could have imagined. But the weavers never saw any of it. They died in poverty, branded &#8212; in their own words &#8212; as &#8220;rogues&#8221; by a society that no longer needed what they could do.</p><p>Their children fared little better. They entered the factories that had destroyed their parents&#8217; livelihoods, but wages barely grew for decades. The grandchildren &#8212; reaching working age in the 1860s and 1870s &#8212; were the first generation to actually benefit from the system that had swallowed their families whole. Two full generations. Sixty years from collapse to recovery. And that recovery came through entirely different work, in an entirely different world. The thing their grandparents had been was simply gone.</p><p>The economy adjusted. It always does. But the people caught in the gears of that adjustment were sacrificed to a transition whose benefits they would never live to see.</p><p>That is the base rate. That is what &#8220;too slow&#8221; actually means.</p><p>Whether that pattern repeats depends on choices that have not yet been made &#8212; by policymakers, by CEOs, by voters, and by the knowledge workers who are watching this story and deciding what it means for their own lives. Block&#8217;s layoffs are not the future of work. They are the opening act. The future depends on what the audience does next.</p><div><hr></div><h1>PART 2: <br>Historical Antecedents</h1><div><hr></div><p>We&#8217;ve seen this movie before, at least four times&#8230;</p><h3>#1. The factory that bought motors but forgot to redesign the floor</h3><p>In the 1890s, electric motors were available, but factories didn&#8217;t get more productive for another 30 years. Why? Because they bolted the new technology onto the old layout. Productivity only surged when factories were completely redesigned around electricity. Dorsey is trying to do that redesign in one move &#8212; rip out the old structure and rebuild around AI. History says he&#8217;s right about the direction but almost certainly underestimating how long it takes. Expect Block to stumble before it runs.</p><h3>#2. 
The railroad stock that surged on hype</h3><p>In the 1840s, railway company stocks soared every time a new route was announced &#8212; before a single mile of track was laid. Many of those companies went bust in the crash of 1847. The survivors built the industrial economy. Block&#8217;s 24% stock jump looks a lot like this: the market is rewarding the story of AI efficiency, not proven results. That kind of narrative-driven surge is historically unreliable.</p><h3>#3. The weavers who found new jobs but lost their identity</h3><p>When power looms replaced handloom weavers in the early 1800s, most weavers eventually found factory work. But the new jobs paid less, carried less prestige, and required less skill. The employment problem resolved. The status problem didn&#8217;t &#8212; and it fueled political unrest for a generation. Block&#8217;s 4,000 displaced engineers and designers will likely find work. The question is whether it&#8217;ll be at the same level. If not, the frustration compounds across the economy.</p><h3>#4. The Gilded Age playbook</h3><p>Every major technology wave has concentrated wealth before triggering reform. Railroads, steel, oil &#8212; each time, owners captured the gains while workers absorbed the disruption. Each time, political backlash eventually produced new protections (antitrust, labor laws, the income tax). But &#8220;eventually&#8221; meant 20-30 years.</p><h3>The bottom line:</h3><p>Dorsey is probably right that AI changes how companies work. He&#8217;s probably early on the execution. The stock surge tells you we&#8217;re in a hype cycle, not that Block has figured it out. And the 4,000 people who just lost their jobs are the leading edge of a pattern that, historically, takes decades to fully resolve &#8212; even when the technology is real.</p><h3>Why AI reorganization may be genuinely faster than historical antecedents</h3><p>At the same time, two things are different this time:</p><ol><li><p><strong>Reorganizing around AI is digital, not physical.</strong> Dorsey doesn&#8217;t need to rip up factory floors, move machinery, or retrain workers on physical equipment. He&#8217;s restructuring workflows, reporting lines, and software processes. That can move at the speed of a Slack message and a revised org chart, not a construction crew.</p></li><li><p><strong>AI can help with its own integration.</strong> No prior technology could do this. Electric motors couldn&#8217;t redesign the factory layout. Tractors couldn&#8217;t retrain farmers. But AI can help write the new processes, identify redundancies, build the tools that replace the old workflows, and onboard remaining employees to new ways of working. The technology accelerates its own complementary innovation. That&#8217;s structurally new.</p></li></ol><p><strong>What this changes.</strong> The historical J-Curve &#8212; the dip before the gain &#8212; is probably still real. You can&#8217;t reorganize a 6,000-person company overnight regardless of tools. But the dip may be shallower and shorter than historical precedent suggests. Instead of the 30 years electricity took, or even our model&#8217;s 12-36 month estimate, AI-assisted reorganization could compress the painful part to 6-18 months.</p><p><strong>What it doesn&#8217;t change.</strong> Three things still operate at human speed: trust (employees need to believe the new structure works), culture (new norms take time to internalize), and customer relationships (clients don&#8217;t reorganize their own workflows just because Block did). 
The digital-speed advantage applies to the technical reorganization. The human reorganization still has friction.</p><p><strong>Final Words:</strong> Dorsey may be less early than historical parallels suggest, because the tool he&#8217;s reorganizing around can help with the reorganization itself. That&#8217;s a real structural advantage no prior technology offered. But the human side &#8212; trust, culture, morale, status loss for 4,000 displaced workers &#8212; still runs on human time. History&#8217;s timeline for the technical transition may compress. History&#8217;s timeline for the social consequences probably doesn&#8217;t. The risk now isn&#8217;t as much that the technology won&#8217;t deliver. It&#8217;s that it delivers faster than human systems can absorb.</p><div><hr></div><h1>PART 3: <br>Regulatory Lag Explains Why Pain Arrives Before Policy</h1><div><hr></div><p>Across six major cases of labor disruption in modern history, one pattern is remarkably consistent: meaningful regulation takes 20-50 years to arrive after the harm becomes visible. The safety net is always built after people have already fallen.</p><h3>Case 1: Industrial Revolution &#8594; Factory Acts (UK, 1780s-1833)</h3><ul><li><p><strong>Harm visible:</strong> 1780s-1790s (child labor, 16-hour days, dangerous machinery)</p></li><li><p><strong>First meaningful regulation:</strong> Factory Act of 1833 (~50 years after harm began)</p></li><li><p><strong>What took so long:</strong> Early acts (1802, 1819) had no enforcement mechanism. Parliament was controlled by factory owners. It took decades of public campaigns, investigative journalism (Parliamentary commissions documenting child labor conditions), and worker organizing before enforceable regulation passed.</p></li><li><p><strong>Key trigger:</strong> Public moral outrage at documented child suffering, not worker power alone. Workers couldn&#8217;t vote.</p></li><li><p><strong>Adequacy:</strong> Even the 1833 Act only covered textile factories. Comprehensive coverage took until the 1870s-1890s &#8212; nearly a century after industrialization began.</p></li></ul><h3>Case 2: Gilded Age &#8594; Progressive Era (US, 1870s-1935)</h3><ul><li><p><strong>Harm visible:</strong> 1870s (monopoly power, worker exploitation, dangerous conditions)</p></li><li><p><strong>First meaningful regulation:</strong> Sherman Antitrust Act 1890 (weakly enforced); real teeth came with Clayton Act 1914 and NLRA 1935</p></li><li><p><strong>Lag:</strong> 37 years to first law, 61 years to effective labor rights</p></li><li><p><strong>What took so long:</strong> Courts actively struck down labor protections (Lochner era). Industry funded political campaigns. The ideology of laissez-faire dominated educated opinion. Reform required a complete intellectual revolution &#8212; from Social Darwinism to Progressivism &#8212; which took a generation.</p></li><li><p><strong>Key trigger:</strong> Accumulation of crises &#8212; Haymarket, Pullman Strike, Triangle Shirtwaist Fire (146 dead, 1911), muckraking journalism. 
Each crisis built incrementally; none alone was sufficient.</p></li></ul><h3>Case 3: Great Depression &#8594; New Deal (US, 1929-1935)</h3><ul><li><p><strong>Harm visible:</strong> 1929 (market crash, mass unemployment reaching 25%)</p></li><li><p><strong>Meaningful regulation:</strong> Social Security Act, NLRA, Fair Labor Standards Act (1935-1938)</p></li><li><p><strong>Lag:</strong> 6-9 years &#8212; by far the fastest response in the dataset</p></li><li><p><strong>Why this was fast:</strong> The crisis threatened the entire political-economic order. 25% unemployment meant the median voter was personally affected. There was a credible revolutionary alternative (communism) that terrified elites into concession. FDR had a legislative supermajority.</p></li><li><p><strong>The lesson:</strong> Regulatory speed correlates with existential threat to the system itself, not with severity of harm to workers. Workers suffered terribly in the Gilded Age too &#8212; but it wasn&#8217;t systemic collapse, so reform took 60 years.</p></li><li><p><strong>Adequacy:</strong> Remarkably adequate for its time. Social Security, unemployment insurance, minimum wage, and collective bargaining rights created the framework that lasted 50+ years.</p></li></ul><h3>Case 4: Post-WWII Industrial Hazards &#8594; OSHA/EPA/Civil Rights (1940s-1970s)</h3><ul><li><p><strong>Harm visible:</strong> 1940s-1950s (workplace injuries, environmental contamination, racial discrimination in employment)</p></li><li><p><strong>Meaningful regulation:</strong> Civil Rights Act 1964, OSHA 1970, EPA 1970</p></li><li><p><strong>Lag:</strong> 22-30 years from visible harm to regulation</p></li><li><p><strong>What took so long:</strong> Cold War politics made labor organizing suspect (communist association). Postwar prosperity masked underlying problems. Required a new generation (Baby Boomers) who hadn&#8217;t experienced Depression-era scarcity to prioritize non-economic values (environment, rights).</p></li><li><p><strong>Key trigger:</strong> Specific catalyzing events &#8212; Rachel Carson&#8217;s <em>Silent Spring</em> (1962), Cuyahoga River fire (1969), Birmingham church bombing (1963). But these worked because decades of organizing had prepared the ground.</p></li></ul><h3>Case 5: Offshoring &#8594; Trade Adjustment (US, 1990s-present)</h3><ul><li><p><strong>Harm visible:</strong> Mid-1990s (manufacturing job losses, Rust Belt devastation)</p></li><li><p><strong>Meaningful regulation:</strong> Never adequately arrived.</p></li><li><p><strong>Lag:</strong> 30+ years and counting</p></li><li><p><strong>What happened instead:</strong> Trade Adjustment Assistance existed but was chronically underfunded and reached a fraction of displaced workers. Political backlash eventually manifested not as regulation but as populist politics (2016 election &#8212; 20+ year lag from peak harm).</p></li><li><p><strong>The lesson:</strong> When displacement is geographically concentrated and affects a politically weak demographic, regulation may never come. The political system can simply absorb the damage and move on.</p></li><li><p><strong>Adequacy:</strong> Essentially zero. The &#8220;deaths of despair&#8221; epidemic in former manufacturing regions is the direct downstream consequence.</p></li></ul><h3>Case 6: Gig Economy &#8594; Platform Regulation (2010s-present)</h3><ul><li><p><strong>Harm visible:</strong> ~2012 (Uber/Lyft drivers, DoorDash workers lacking benefits, minimum wage protections)</p></li><li><p><strong>Meaningful regulation:</strong> Still contested. California&#8217;s AB5 (2019) was partially reversed by Prop 22 (2020). 
EU Platform Workers Directive (2024).</p></li><li><p><strong>Lag:</strong> 14+ years and still incomplete</p></li><li><p><strong>What&#8217;s happening:</strong> Platform companies spend hundreds of millions on ballot initiatives and lobbying. Classification (employee vs. contractor) is the battleground. Workers are atomized and hard to organize. The companies move faster than regulators.</p></li><li><p><strong>The lesson:</strong> When the disrupting companies are also the most sophisticated lobbying entities in history, counter-mobilization can neutralize regulation indefinitely.</p></li></ul><h3>Five Cross-Cutting Patterns From History</h3><ol><li><p><strong>The typical lag is 20-50 years.</strong> The New Deal (6-9 years) is the sole exception, and it required system-threatening collapse. For non-existential disruptions, expect decades.</p></li><li><p><strong>Triggers follow a sequence.</strong> Moral outrage at visible suffering (children, deaths) &#8594; investigative documentation &#8594; sustained civic organizing &#8594; legislative champion &#8594; catalyzing crisis event. All five elements are usually needed. Missing any one of them can stall reform for decades.</p></li><li><p><strong>Regulation arrives AFTER the worst damage is done.</strong> Factory Acts came after a generation of child labor. Progressive Era came after decades of exploitation. The regulation prevents the NEXT round of harm, not the current one. The first generation of displaced workers is essentially sacrificed.</p></li><li><p><strong>Adequacy declines over time.</strong> New Deal-era regulation was comprehensive because the crisis was existential. Post-1970s regulation has been increasingly partial, contested, and subject to rollback. Offshoring got essentially nothing. The trend line for regulatory adequacy is negative.</p></li><li><p><strong>Counter-mobilization is getting stronger.</strong> In each successive case, the entities being regulated are more sophisticated at blocking reform. Factory owners in 1833 had Parliament; Gilded Age industrialists had courts; platform companies in 2024 have AI-powered lobbying, ballot initiatives, and narrative control. The regulatory lag may be lengthening, not shortening.</p></li></ol>
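<p>As a rough sanity check on that 20-50 year claim, here is a tiny computation over the lag figures quoted in the six cases above. Ranges are taken at their midpoints, and the open-ended lags (&#8220;30+&#8221;, &#8220;14+&#8221;) are floored at their minimums, so the true central tendency is, if anything, higher.</p><pre><code># Regulatory lag figures as quoted in the six cases above.
# Ranges use midpoints; open-ended lags are floored at their minimums.
import statistics

lags = {
    "Factory Acts": 50,       # 1780s harm to the 1833 Act
    "Progressive Era": 61,    # to effective labor rights (NLRA 1935)
    "New Deal": 7.5,          # midpoint of the 6-9 year range
    "OSHA/EPA era": 26,       # midpoint of 22-30 years
    "Offshoring": 30,         # "30+ years and counting"
    "Gig economy": 14,        # "14+ years and still incomplete"
}

print(f"median lag: {statistics.median(lags.values()):.1f} years")
print(f"mean lag:   {statistics.mean(lags.values()):.1f} years")

without_new_deal = [v for k, v in lags.items() if k != "New Deal"]
print(f"median without the New Deal outlier: "
      f"{statistics.median(without_new_deal):.1f} years")
</code></pre><p>Even with the open-ended cases floored, the median sits around 28-30 years, squarely inside the 20-50 year band, and the one fast case is exactly the one that required systemic collapse.</p>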
<h3>What This Means for AI Displacement</h3><p>If historical patterns hold:</p><ul><li><p><strong>Visible harm:</strong> Already beginning (2025-2026, Block is the leading edge)</p></li><li><p><strong>Peak displacement:</strong> Likely 2027-2032 if the speed mismatch analysis is correct</p></li><li><p><strong>First meaningful federal regulation:</strong> Optimistically ~2035-2040; historically typical ~2045-2060</p></li><li><p><strong>Comprehensive framework:</strong> Possibly never, if the offshoring/gig economy pattern dominates</p></li></ul><p>The gap between peak displacement and meaningful regulation could be 10-30 years. During that period, displaced workers rely on whatever safety net exists at the time of disruption &#8212; which was designed for industrial-era layoffs (temporary unemployment, retraining programs), not AI-era structural transformation.</p><p>The only historical scenario where regulation arrived fast enough to matter was the New Deal &#8212; and that required 25% unemployment and credible fear of revolution. Short of that level of systemic crisis, the political system processes labor disruption slowly.</p><p><strong>The compounding problem:</strong> AI companies will be among the most sophisticated lobbying entities in history, potentially making this the hardest regulatory environment ever. They can use AI itself to optimize political strategy, draft legislation favorable to their interests, and run targeted influence campaigns at scales no prior industry could achieve.</p><div><hr></div><h1>PART 4: <br>Will AI Reach Systemic Threat Levels Needed To Trigger Reform?</h1><div><hr></div><p><strong>The Short Answer:</strong> More Likely Than Not</p><p>The model currently weights Structural Break at 10% (5-10yr) and 22% (10-20yr). But &#8220;systemic threat&#8221; doesn&#8217;t require Structural Break &#8212; it requires enough concentrated pain that the political system perceives an existential problem. That&#8217;s a lower bar.</p><p>Three structural reasons we likely cross that bar:</p><h3>1. The Nature of What&#8217;s Being Automated Has No Precedent</h3><p>Every prior displacement wave automated specific capabilities &#8212; weaving, farming, manufacturing, data entry. Humans always retreated to general cognition. AI automates general cognition itself. There is no obvious &#8220;retreat to&#8221; capability. This means the displacement-reinstatement cycle that saved us every previous time may not complete on its own &#8212; or may complete much more slowly, with a deeper trough.</p><h3>2. The Demographic Affected Is Politically Powerful</h3><p>This is the most underappreciated difference. Factory workers displaced by offshoring were geographically concentrated, politically weak, and culturally invisible to media elites. Knowledge workers displaced by AI are educated, urban, media-literate, politically active, and socially connected to the people who make policy. When a Stanford-educated software engineer can&#8217;t find equivalent work, the political system notices in a way it never did for a laid-off machinist in Ohio. This is simultaneously the most hopeful and most volatile feature of AI displacement &#8212; it&#8217;s harder to ignore, but it also generates more politically sophisticated anger.</p><h3>3. The Speed Creates Concentration Rather Than Diffusion</h3><p>The speed mismatch means displacement that might have been spread across 20 years in prior technology waves gets compressed into 5-10 years. The political system can absorb gradual pain (offshoring model &#8212; slow enough that affected communities just quietly decline). It cannot absorb concentrated pain at the same scale (New Deal model &#8212; sudden enough that the median voter is affected). The digital + self-referential advantage pushes AI displacement toward the concentrated pattern.</p><p><strong>But the &#8220;Threat&#8221; May Not Look Like Unemployment</strong></p><p>The New Deal was triggered by 25% unemployment. 
AI displacement may never reach that headline number because:</p><ul><li><p>Augmentation genuinely absorbs a significant fraction of the impact</p></li><li><p>Gig work, freelancing, and underemployment mask the real numbers</p></li><li><p>New AI-native job categories do emerge, just at lower status and lower pay</p></li></ul><p>Instead, the systemic threat from AI may manifest as a compound crisis &#8212; not one dramatic metric, but several interacting pressures that individually seem manageable but together overwhelm institutional capacity:</p><ul><li><p><strong>Inequality spike &#8212;</strong> productivity gains flow to capital owners and a small technical elite while median income stagnates or declines in real terms</p></li><li><p><strong>Meaning crisis &#8212;</strong> widespread purposelessness even among the nominally employed, as work becomes supervisory/review rather than creative/generative</p></li><li><p><strong>Trust collapse &#8212;</strong> institutions visibly unable to respond &#8594; legitimacy erosion &#8594; withdrawal from civic participation</p></li><li><p><strong>Political radicalization &#8212;</strong> displaced knowledge workers are exactly the demographic that produces effective political extremism (elite overproduction, historically the most dangerous social dynamic)</p></li></ul><p>A compound crisis is harder to regulate because there&#8217;s no single metric to point to. &#8220;25% unemployment&#8221; mobilizes people. &#8220;A vague sense that everything is getting worse while GDP goes up&#8221; does not &#8212; at least not until it crystallizes into a political movement.</p><h3><strong>When Will Things Unfold?</strong></h3><p>This unfolds in four stages, each one feeding into the next:</p><ul><li><p><strong>The layoff wave (2028-2033):</strong> Once Wall Street rewards companies for cutting jobs and citing AI &#8212; as it did when Block&#8217;s stock surged 24% after announcing 40% cuts &#8212; other CEOs face intense pressure to follow suit. What starts as a few bold moves becomes the expected playbook. AI-driven restructuring goes from unusual to standard corporate practice.</p></li><li><p><strong>The social fallout becomes visible</strong> <strong>(2030-2035):</strong> Displaced workers struggle to find jobs at comparable pay and status. People who built careers and identities around their expertise find that those skills are now worth less. Communities that depended on knowledge-work employers feel the impact. Media stories shift from &#8220;the future of AI&#8221; to &#8220;what happened to the people.&#8221;</p></li><li><p><strong>The pain becomes political</strong> <strong>(2032-2038):</strong> Scattered individual hardship turns into an organized movement. Someone gives it a name. A political leader makes it their cause. The anger that was private becomes public and collective &#8212; the way offshoring frustration eventually fueled the 2016 election, but faster and louder because the affected people are more educated and more politically connected.</p></li><li><p><strong>The establishment takes it seriously</strong> <strong>(2033-2040):</strong> Politicians stop treating AI displacement as a niche issue and start treating it as a threat to social stability. This is the moment where the window for major legislation opens.</p></li></ul><p>So roughly <strong>2033-2040</strong> for the systemic threat to fully materialize. 
That&#8217;s 7-14 years from now.</p><p>The wide range reflects a genuine unknown: does AI displacement hit fast and all at once (like the Great Depression &#8212; hard to ignore, triggers a fast response) or slowly and unevenly (like offshoring &#8212; easy to ignore for decades because it only devastates certain communities)? The answer depends largely on whether the corporate layoff wave arrives as a flood or a slow tide. We should have early signals by late 2026 &#8212; if multiple Fortune 500 companies follow Block&#8217;s lead within a year, we&#8217;re on the fast track. The Market Reward Cascade prediction (check Q3 2026) is one of the earliest signals that will narrow this range.</p><div><hr></div><h1>PART 5: <br>Six Pathways from Systemic Threat to Regulation</h1><div><hr></div><h3>Path 1: The Social Breaking Point</h3><p>The most likely path. Years of accumulating pain &#8212; job losses, downward mobility, growing inequality &#8212; build pressure until a single event breaks through.</p><ul><li><p>Maybe it&#8217;s a mass layoff that becomes a symbol.</p></li><li><p>Maybe it&#8217;s an election where AI displacement is the defining issue.</p></li><li><p>Maybe it&#8217;s a high-profile AI failure that hurts real people.</p></li></ul><p>Whatever the spark, it lands on dry tinder that&#8217;s been piling up for years. Think of the energy of the 1930s labor movement &#8212; not a stock market crash, but a social and political eruption.</p><p><strong>How it unwinds</strong>: Social pain builds through the late 2020s and early 2030s. A catalyzing event (~2032-2037) turns private suffering into public outrage. A political window opens. Comprehensive reform passes (~2035-2040).</p><p><strong>How good is the regulation?</strong> Moderate. It would be reactive &#8212; written in response to damage already done &#8212; but could be reasonably comprehensive if the political moment is big enough. This is the most optimistic realistic pathway because the crisis creates the political will to do something meaningful, and the technology is mature enough by then that lawmakers can regulate something they actually understand.</p><h3>Path 2: Europe Goes First, America Follows (~35% probability)</h3><p>The EU has consistently regulated tech ahead of the US &#8212; data privacy (GDPR), AI safety (AI Act), gig worker protections (Platform Workers Directive). They do the same with AI labor impacts. American companies operating in Europe have to comply. Over time, it&#8217;s cheaper to just follow the European rules everywhere than to maintain two separate systems. US legislation eventually becomes a matter of officially adopting what American companies are already doing.</p><p><strong>How it unwinds</strong>: EU passes a comprehensive AI labor framework (~2028-2032). US multinationals quietly adopt EU-compliant practices globally. US federal legislation (~2035-2040) ratifies what&#8217;s already happening on the ground rather than leading the change.</p><p><strong>How good is the regulation?</strong> Moderate to high. European regulation tends to be more protective of workers than anything the US would write on its own. 
If America imports even a diluted version, workers end up better off than in any purely domestic scenario. The risk: US companies lobby for a weaker federal law that <em>replaces</em> the stricter European-inspired practices they&#8217;d already adopted &#8212; using &#8220;simplification&#8221; as cover for rollback.</p><h3>Path 3: States Go First, Federal Catches Up (~20%)</h3><p>California, New York, Washington, and Massachusetts pass their own AI labor protections. Other states don&#8217;t. Companies now face a patchwork of 50 different rules. The compliance headache gets expensive. Eventually, the companies themselves lobby Congress for a single federal standard &#8212; not because they want regulation, but because they want <em>one</em> set of rules instead of fifty.</p><p><strong>How it unwinds</strong>: State-level experimentation (2028-2033) creates an unworkable patchwork. Industry pushes for federal legislation to replace the mess (~2035-2042). The result is a compromise &#8212; better than nothing, but shaped more by corporate convenience than worker protection.</p><p><strong>How good is the regulation?</strong> Low to moderate. When regulation exists because companies asked for it to simplify compliance, it tends to serve company interests first. Think of how federal privacy bills have been weaker than California&#8217;s privacy law. But it does establish a minimum floor of protection that didn&#8217;t exist before.</p><h3>Path 4: Tech Companies Realize They Need Customers (~10%)</h3><p>Here&#8217;s the Henry Ford logic: Ford paid his workers enough to buy his cars. If AI concentrates wealth so dramatically that ordinary people can&#8217;t afford to buy things, even the companies that &#8220;won&#8221; the AI transition lose. Tech companies, facing both political backlash and shrinking consumer markets, start advocating for Universal Basic Income or other major redistribution programs. It&#8217;s not generosity &#8212; it&#8217;s self-preservation.</p><p><strong>How it unwinds</strong>: The demand-side effects of displacement become visible (2030-2035) &#8212; people aren&#8217;t buying as much because they&#8217;re earning less. Tech CEOs start publicly calling for UBI or major safety net expansion (~2030). Their advocacy gives political cover to legislators. Corporate-backed programs pass, funded by some combination of AI taxes and direct corporate contributions. Pilot programs start around 2033; full-scale implementation by 2038-2045.</p><p><strong>How good is the regulation?</strong> Hard to say. It depends on whether the redistribution is real or performative. The cynical scenario: companies fund the bare minimum needed to keep consumers spending and prevent pitchfork-level anger. The optimistic scenario: AI makes companies so productive that sharing the gains broadly is both affordable and smart business. The truth is probably somewhere in between.</p><h3>Path 5: Full-Blown Crisis Forces a New Deal (~5%)</h3><p>This is the extreme scenario. AI displacement hits catastrophic levels &#8212; real unemployment (including people stuck in gig work and part-time jobs who need full-time work) reaches 15-20%. Social breakdown becomes visible. A crisis election produces a government with a clear mandate and a large enough majority to pass sweeping legislation. Think FDR in 1933.</p><p><strong>How it unwinds</strong>: Rapid, severe displacement (2029-2033). The crisis gets bad enough that the entire political-economic order feels threatened. 
A crisis election produces a mandate (~2033-2035). Comprehensive legislation follows within 2-3 years: a new federal transition authority, universal income or services, massive retraining programs, requirements for how companies can restructure, and a tax framework for AI-generated productivity.</p><p><strong>How good is the regulation?</strong> The best of any pathway. History shows that truly comprehensive, durable reform &#8212; the kind that lasts decades &#8212; only happens when the crisis is severe enough to overwhelm industry opposition. Social Security, unemployment insurance, minimum wage, and the right to collective bargaining all came out of the Great Depression. The tragedy is that this quality of response requires catastrophic levels of pain to trigger. The regulation is good <em>because</em> things got bad enough that half-measures were no longer politically viable.</p><h3>Path 6: Industry Lobbying Wins Indefinitely (~5%)</h3><p>This is the dark scenario &#8212; and it has a direct historical precedent in offshoring. AI companies successfully block every meaningful piece of legislation. State-level efforts get overridden by weak federal laws. European rules get worked around through corporate restructuring. The political system absorbs the damage without ever meaningfully responding.</p><p><strong>How it unwinds</strong>: It doesn&#8217;t. Industry lobbying neutralizes every legislative attempt. Displaced workers are managed through existing safety nets that were never designed for this scale of disruption. GDP keeps growing. Corporate profits soar. But underneath the aggregate numbers, a generation of displaced professionals quietly falls into lower-status work, gig jobs, or withdrawal from the workforce. The anger gets channeled into populist politics rather than policy. Think of what happened to manufacturing communities after offshoring &#8212; except this time it&#8217;s happening to college-educated professionals in major cities.</p><p><strong>Timeline</strong>: Indefinite. Regulation never arrives in a meaningful form.</p><div><hr></div><h1>The Bottom Line</h1><div><hr></div><p>The honest answer: <strong>Paths 1 and 2 probably happen together.</strong> Europe regulates first. American social pain accumulates in parallel. A catalyzing event &#8212; something that crystallizes the diffuse anger into a political moment &#8212; opens a window. The European framework provides a ready-made template. Something passes in the US around 2035-2040.</p><p>That means <strong>roughly a 5-10 year gap between when most of the job losses happen and when meaningful rules exist to address them.</strong> That&#8217;s better than the 20-50 year historical average, but worse than the New Deal&#8217;s 6-9 years. Three things specific to AI explain why it might be faster than usual: </p><ol><li><p>The people being displaced are politically powerful</p></li><li><p>The displacement is unusually public (CEOs are announcing it on earnings calls, not quietly moving jobs overseas)</p></li><li><p>Europe provides an external template that shortcuts the &#8220;design from scratch&#8221; problem.</p></li></ol><p>The wild card that makes all of this harder to predict: <strong>AI keeps getting more capable while the regulation is being written.</strong> By the time a law designed for 2030-era AI passes in 2037, the technology may be fundamentally different from what the law was written to address. This is genuinely new territory. 
Factories didn&#8217;t get 10x more productive during the 20 years it took to regulate them. AI might. That means even good regulation could be perpetually playing catch-up &#8212; solving yesterday&#8217;s problem while tomorrow&#8217;s is already arriving.</p><h1>Editorial Note</h1><p>What you just read is the first output from the sense-making system that I created for myself in Claude Code. The above article is <em>not</em> a polished final draft after weeks of iteration. It&#8217;s Claude&#8217;s unedited first draft. </p><p>The fact that a v1 output can produce analysis at this depth is itself part of the point. And the sense-making system improves with every piece of news it processes. Every time I use it, I don&#8217;t just get smarter: the system gets smarter too, delivering even better insights with the next article. </p><p>This is what I mean when I say Claude Code has completely changed how I learn and think. I didn't just read about the Block layoffs today. I built a system that showed me what the Block layoffs <em>mean</em> at a level of depth I couldn't have reached on my own, on a timeline that would have been impossible even six months ago. </p><p>That's the shift.</p><div><hr></div><h1>PAID MEMBERS: MINI-COURSE</h1><div><hr></div><p>This appendix is for curious readers who want to go deeper than the article and actually learn the concepts behind the analysis.</p><p>Think of it as a course. It teaches: </p><ul><li><p><strong>The economic and structural forces at work.</strong> The &#8220;physics&#8221; of what makes this event behave the way it does. </p></li><li><p><strong>The historical story.</strong> Shows when this has happened before and what happened to the people in it.</p></li><li><p><strong>The psychological and social mechanisms.</strong> The mental models that explain why humans respond to these forces the way they do. </p></li><li><p><strong>The paradigm literacy.</strong> Why smart, informed people analyzing the same event reach completely different conclusions, and what that reveals about whose values are shaping the consensus.</p></li></ul><p>Read them sequentially, and you&#8217;ll have a working toolkit for analyzing the next AI event on your own.</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/i-built-an-ai-system-that-uses-100">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: Claude Code Quick Start (3/2/2026)]]></title><description><![CDATA[Master Claude Code: install, prompt, and automate. Setup, 10-3-1 framework, parallel agents, MCP integrations, Obsidian, and first projects.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-claude-code</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-claude-code</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Wed, 04 Mar 2026 00:03:12 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/189823577/5d23b02d32bf3825ba1e3e6ddcb63fdf.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h3>Getting Started with Claude Code</h3><p>We are in a ChatGPT moment &#8212; and most people don&#8217;t realize it yet.</p><p>When ChatGPT launched, it didn&#8217;t introduce a new model. It introduced a new interface. Suddenly, millions of people who had never touched an API could access AI directly. The interface created the explosion.</p><p>Claude Code is that next interface shift. It moves AI from something that responds to something that <em>acts</em> &#8212; autonomously writing code, managing files, running parallel agents, and executing multi-step workflows while you focus on higher-order thinking. The gap between people who understand this and people who don&#8217;t is growing by the week.</p><p>This class is part of my ongoing series on becoming an AI-first thinker and practitioner. Each session is designed to be immediately applicable &#8212; we build, experiment, and learn in real-time together, so you leave with skills you can use the same day. If you&#8217;re serious about staying ahead of the curve in a world where AI is reshapi&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-claude-code">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Save The Date: Augmented Awakening This Fri / Claude Code Recording Is Up]]></title><description><![CDATA[Two member-only exclusives to supercharge your AI journey&#8230;]]></description><link>https://blockbuster.thoughtleader.school/p/save-the-date-augmented-awakening-166</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/save-the-date-augmented-awakening-166</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Tue, 03 Mar 2026 09:58:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZmSK!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a9378a0-025b-4c2a-a030-cfffc60544f9_694x693.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey There! </p><p>Two member-only exclusives to supercharge your AI journey&#8230; </p><ol><li><p>Register to attend Augmented Awakening session this Friday</p></li><li><p>Access Claude Code For Accelerated Learning masterclass recording</p></li></ol><p>To access these classes and <a href="https://blockbuster.thoughtleader.school/p/everything-you-get-as-a-paid-subscriber">$2,500+ in other perks</a> (books/courses/prompts/tutorials), become a member for just $20/month or $150/year. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://blockbuster.thoughtleader.school/subscribe?"><span>Subscribe now</span></a></p><h1>#1. Register To Attend Augmented Awakening Session This Friday</h1><p>My dear friend and master coach <strong><a href="https://developmentalmastery.com/about">Anand Rao</a></strong> is leading another <strong><a href="https://blockbuster.thoughtleader.school/p/announcing-the-augmented-awakening">Augmented Awakening</a></strong> session <strong>this Friday at 10am EST</strong> where he&#8217;ll show you how to use AI to help you awaken as a human being. No one else is using AI like Anand is. </p><p>Here&#8217;s a preview of the session: </p><blockquote><p><em>AI has created more opportunity in the last 18 months than most of us saw in the previous decade.</em></p><p><em>And most builders are responding the same way:</em></p><ul><li><p><em>More tools.</em></p></li><li><p><em>More tactics.</em></p></li><li><p><em>More channels.</em></p></li><li><p><em>More noise.</em></p></li></ul><p><em>It feels productive. It feels adaptive.</em></p><p><em>But there&#8217;s a hidden cost.</em></p><p><em><strong>When options explode, most people fragment.</strong></em></p><ul><li><p><em>They borrow positioning that isn&#8217;t theirs.</em></p></li><li><p><em>They adopt strategies that don&#8217;t fit.</em></p></li><li><p><em>They chase leverage that works for someone else.</em></p></li></ul><p><em>And slowly, almost invisibly, they dilute their unique power.</em></p><p><em>In an era of infinite tools, differentiation doesn&#8217;t just come from adoption speed.</em></p><p><em><strong>It comes from self-knowledge.</strong></em></p><p><em>In this session, Anand will share something far more strategic than AI tactics. 
He&#8217;ll share: </em></p><ul><li><p><em>How to stay in your own signal while the landscape accelerates.</em></p></li><li><p><em>How to make high-leverage decisions under an abundance of choices.</em></p></li><li><p><em>How to build from coherence instead of reaction.</em></p></li></ul><p><em>Because the real risk right now isn&#8217;t missing a tool.</em></p><p><em>It&#8217;s losing yourself in the noise.</em></p><p><em>If you&#8217;ve felt the pressure to adopt more, do more, be everywhere&#8230;</em></p><p><em>This session will help you focus instead of scatter.</em></p></blockquote><h3>How To Attend</h3><p>Scroll to the bottom of this page to get the Zoom link and password.</p><h1>#2. Access Claude Code For Accelerated Learning Recording</h1><p>Last Friday, I held a 90-minute masterclass session where I shared how I&#8217;ve used Claude Code to: </p><ul><li><p>Create 300+ mental model mastery manuals in the last two weeks (after it took me four years to create 48).</p></li><li><p>Create a system that helps me see the second-order effects of AI news better than 99% of people who are in AI.</p></li><li><p>Convene a council of history&#8217;s top thinkers to think outside the box on complex problems.</p></li><li><p>Find and clip longform videos at scale.</p></li><li><p>Scrape 2,000 top AI articles and analyze their patterns.</p></li></ul><p>In short, Claude Opus 4.6 + Claude Code passed a crucial tipping point that makes them just ridiculously powerful together. I truly believe this is the next ChatGPT moment. In this class, I show you the frontier of what I&#8217;m most excited about. Here&#8217;s the link to the video&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/save-the-date-augmented-awakening-166">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Blockbuster Live: How To Use Claude Code To Accelerate your Learning]]></title><description><![CDATA[Claude Code transforms how knowledge workers operate &#8212; enabling parallel AI agents to research, build, and create at a scale previously impossible alone.]]></description><link>https://blockbuster.thoughtleader.school/p/blockbuster-live-class-5</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/blockbuster-live-class-5</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Sun, 01 Mar 2026 21:42:01 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/189575967/5b8bcd916bead2f69a1dc41c48ae9677.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>We are at an inflection point.</p><p>Not the kind that gets announced &#8212; the kind you feel. The kind where something you&#8217;d been struggling with for months suddenly just <em>works</em>. Where you ask an AI to build a tool, and it builds it. Where you watch five parallel agents running simultaneously and realize the mental model you had for &#8220;using AI&#8221; is already obsolete.</p><p>That&#8217;s what this session was about.</p><p>Claude Code represents a qualitative shift in what&#8217;s possible for knowledge workers, entrepreneurs, and creators. Not AI as a smarter search engine. Not AI as a writing assistant. AI as an <em>ambient fleet</em>: a set of agents you orchestrate across projects, running in the background while you move on to the next thing. The people who figure this out in the next 6&#8211;12 months will look back at this period the same way early internet adopters look back at 1995.</p><p>This class was a live, unscripted look at what that actually looks like in practice &#8212; including the messy parts. I shared my real setup, walked through systems I&#8217;ve built, troubleshot things live, and brought in a participant who has been vibe-coding for three weeks to share what she&#8217;s building. 
If you&#8217;ve been curious about Claude Code but haven&#8217;t made the leap, this session was designed to show you what&#8217;s on the other side.</p><div><hr></div><p><strong>During the class, we:</strong></p><ul><li><p>Used the Law of Requisite Variety to frame AI situational awareness</p></li><li><p>Explored why we&#8217;ve crossed a qualitative tipping point with Claude Opus 4.6</p></li><li><p>Watched five parallel Claude Code agents run simultaneously in real time</p></li><li><p>Saw how I built a 1,300-entry mental model encyclopedia using agentic AI</p></li><li><p>Demonstrated a second-order effects system built to analyze breaking AI news</p></li><li><p>Walked through the exact terminal setup I use daily for Claude Code</p></li><li><p>Heard CK share her three-week journey from zero to vibe-coding veteran</p></li><li><p>Watched Reid Hoffman&#8217;s AI advisor demonstrate his 17-project agent fleet</p></li><li><p>Discussed the one action participants should take before the end of next week</p></li></ul><h1><strong>AI-Generated Podcast Summary Of The Class</strong></h1><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;f1d211ec-946d-465e-a121-960435f8befd&quot;,&quot;duration&quot;:1547.8074,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p></p><h1>How To Access The Full Course </h1><p><strong>Free members get</strong> a 30-minute video preview of the class.</p><p><strong>Basic paid members get</strong>: </p><ul><li><p>Access to a <a href="https://blockbuster.thoughtleader.school/t/ai-second-brain">monthly 90-minute class for 12 months</a>. </p></li><li><p>Prompt to create specialized profiles for any context</p></li><li><p>Class resources (chat transcript, slides, full class transcript, prompts that are shared)</p></li></ul><p>Said differently, paid members get access to 18 hours of learning for just $20/month or $149/year. This comes to just $8 for every hour of live class. And this doesn&#8217;t even include our other live monthly class, <a href="https://blockbuster.thoughtleader.school/p/announcing-the-augmented-awakening">Augmented Awakening</a>, or over <a href="https://blockbuster.thoughtleader.school/p/everything-you-get-as-a-paid-subscriber">$2,500 in other perks</a> (20+ prompts, 7 courses, 3 books, blockbuster article library, etc). </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blockbuster.thoughtleader.school/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://blockbuster.thoughtleader.school/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h1><strong>RECORDING RESOURCES</strong></h1><div><hr></div><ol><li><p>Presentation Slides</p></li><li><p>Blockbuster Live Prompts</p></li><li><p>Class Transcript</p></li><li><p>Other Classes In The Blockbuster Live Course</p></li><li><p>Resources Shared</p></li><li><p>AI Timestamps</p></li><li><p>AI Chapter Summaries</p></li><li><p>Chat Transcript</p></li></ol>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/blockbuster-live-class-5">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: The Shift To Agentic AI (2/23/2026)]]></title><description><![CDATA[Navigating the agentic AI shift as thought leaders. From live Claude Code demos to managing cognitive overload, this class explores AI adoption in full.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-shift</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-shift</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Tue, 24 Feb 2026 03:04:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/188975212/6ea1968e2263f65494b0496789fe8e10.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><h4>The Agentic AI Shift: Navigating Paradigm Change in Real-Time</h4><p>Something meaningful shifted in AI over the past six weeks. It wasn&#8217;t just another model update. It was a fundamental change in <em>how</em> AI works &#8212; from conversational back-and-forth to agentic systems that can collect data, build knowledge, create outputs, and run for hours in the background while you do other things.</p><p>Most of the conversation around this shift focuses on the tools. But in this session, we started somewhere different: with the people trying to make sense of it.</p><p>What does it actually feel like to navigate multiple overlapping paradigm shifts at once? How do you maintain trust in information when the process behind it is increasingly opaque? How do you know what&#8217;s worth your attention &#8212; and what to let run without you? And underneath all of it: what stays the same about how humans make good decisions, even as everything around them changes?</p><p>We worked through those questions together, and then I &#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-the-shift">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI Thought Leader School: AI Strategy (2/16/2026)]]></title><description><![CDATA[Navigating AI's rapid change through mental models and experiential learning: play, growth, connection, and meaning in an age of abundance.]]></description><link>https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-adapting</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-adapting</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Tue, 24 Feb 2026 01:56:12 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/188210627/b223363ba452946f72030360dd3024a6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI Generated Overview</h1><p>We&#8217;re living through the fastest technological transformation in human history. AI capabilities are doubling every few months, not years. The rules change before we&#8217;ve finished learning them. And the gap between what&#8217;s possible and what we&#8217;re doing with AI keeps widening.</p><p>This creates a strange paradox: the tools that could help us adapt are changing faster than we can adapt to them. It&#8217;s like trying to learn to ride a bike while the bike is transforming beneath you.</p><p>In this class, I explored a different approach. Rather than chasing the latest features or techniques, we focused on mental models that help us navigate rapid change itself. Models like adaptive lag (understanding why we fall behind), the law of requisite variety (matching internal complexity to external complexity), and diffusion of innovation (recognizing where we are in the adoption curve). These frameworks don&#8217;t become obsolete when the next model drops. They help us understand the deeper patterns u&#8230;</p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/ai-thought-leader-school-adapting">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Something Big Is Happening (A Letter From 2029)]]></title><description><![CDATA[Editorial Note]]></description><link>https://blockbuster.thoughtleader.school/p/something-big-is-happening-a-letter</link><guid isPermaLink="false">https://blockbuster.thoughtleader.school/p/something-big-is-happening-a-letter</guid><dc:creator><![CDATA[Michael Simmons]]></dc:creator><pubDate>Tue, 17 Feb 2026 19:14:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/02300f6d-f4e8-4ecf-9611-55c5443f745e_1150x480.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Editorial Note</h1><p>Last week, entrepreneur Matt Shumer published <a href="https://x.com/mattshumer_/status/2021256989876109403">&#8220;Something Big Is Happening&#8221;</a>, a 5,000-word essay comparing the current AI moment to February 2020, right before COVID changed everything. It reached over 80 million people on X and was syndicated by Fortune, covered by CNBC, Bloomberg, Inc., and debated by everyone from Gary Marcus to Hacker News.</p><p>Shumer wrote his piece for the people in his life who aren't paying attention to AI, like his family, his friends, and the ones still asking, "So what's the deal with AI?" at dinner parties.</p><p><strong>His message is simple:</strong> wake up, start using these tools, because this is bigger than you think.</p><p>I agree. Something big is happening.</p><p>At the same time, his piece left something fundamental out.</p><p>Shumer&#8217;s implicit promise is that if you start using AI now, you&#8217;ll be okay. Learn the tools, integrate them into your work, ride the wave.</p><p>And that&#8217;s good advice.</p><p>As far as it goes, at least.</p><p>The problem is where it stops.</p><p>Because the people who are already riding the wave (the vibe coders, the prompt engineers, the knowledge workers who&#8217;ve gone all-in on AI and are feeling ten times more productive)? They read Shumer&#8217;s post, nodded along, and thought: <em>I&#8217;m ahead of this.</em></p><p>But they&#8217;re not ahead of it. They&#8217;re standing on it. And the ground is moving.</p><p>In 2029, the hardest stories won&#8217;t come from the people who ignored AI. They&#8217;ll come from the people who embraced it, the ones who mistook learning the tools for being ahead. Who built expertise in a gap that was rapidly closing. Who got so good at directing AI that they didn&#8217;t notice the AI was learning to not need direction.</p><p>Learning AI isn&#8217;t the problem. Thinking that learning AI is the solution is the problem.</p><p>Shumer wrote a wake-up call. 
This piece is what happens after you wake up and realize that being awake isn&#8217;t enough.</p><p>I took his essay, combined it with predictions from top lab leaders, and assumed it all came true: </p><ul><li><p>Dario Amodei&#8217;s predictions from <a href="https://darioamodei.com/the-adolescence-of-technology">&#8220;The Adolescence of Technology&#8221;</a> and his recent interview on the <a href="https://www.youtube.com/watch?v=n1E9IZfvGMA">Dwarkesh Podcast</a>.</p></li><li><p>Elon Musk&#8217;s projections for xAI and MacroHard based on his recent interview on the <a href="https://www.youtube.com/watch?v=BYXbuik3dgA">Dwarkesh Podcast</a> and his <a href="https://www.youtube.com/watch?v=0pBPEN1FcFU">all-hands meeting at xAI</a>.</p></li></ul><p>Then I had AI write a first-person letter from 2029 &#8212; not from someone who ignored AI, but from someone who was the &#8220;AI guy&#8221; or &#8220;AI gal&#8221; at their company. Someone who gave the internal talks, wrote the playbooks, and dragged reluctant colleagues into the future.</p><p>Someone who did everything right and got replaced anyway.</p><p>At the end of this piece, I&#8217;ve included a prompt you can paste into your favorite AI model that will write a personalized version of this letter just for you, from <em>your</em> future self, in <em>your</em> profession, in <em>your</em> voice. If you want a true wake-up call, try using that prompt!</p><div><hr></div><h1>You Think You&#8217;re Adapting</h1><p><em><strong>Published:</strong> February 17, 2029</em></p><p><em><strong>Note:</strong> This article is 100% written by AI, except for changing about 10 words.</em></p><div><hr></div><p>Think back to early 2026.</p><p>If you were a programmer, you were probably feeling pretty good about yourself. You&#8217;d figured out the prompts. You knew how to talk to Claude, how to nudge GPT into giving you cleaner code, how to describe an app in plain English and watch it materialize in front of you. You were &#8220;vibe coding.&#8221; You were ten times more productive. Your boss was thrilled. You were writing tweets about how AI was the greatest thing that ever happened to your career.</p><p>I was one of you.</p><p>I&#8217;m writing this for the people who are where I was three years ago &#8212; the developers and knowledge workers who are riding the high of AI-assisted work, who feel like they&#8217;ve cracked the code, who believe that the skill of directing AI is the new moat. I kept telling myself this. I kept telling my team this. I was wrong, and I need you to understand why before you learn it the way I did.</p><p>I should be clear about something up front: I&#8217;m not writing from bitterness. I&#8217;m writing from the far side of having been completely, thoroughly, undeniably replaced &#8212; and having had years to figure out what that means. I&#8217;ve rebuilt. I&#8217;m okay. But the version of &#8220;okay&#8221; I landed on looks nothing like what I imagined my career would be, and the road between here and there was rougher than anything I&#8217;d prepared for. I don&#8217;t want that for you. Or at least, I want you to see it coming so you can navigate it on your terms instead of being dragged through it on someone else&#8217;s.</p><p>Here&#8217;s what my life looked like in early 2026.</p><p>I was a senior engineering manager at a mid-sized SaaS company. Sixteen years in software. A team of twelve. 
We were using Claude Code and the latest GPT models more aggressively than almost anyone else at the company &#8212; I was the one dragging reluctant engineers into the future. I gave internal talks. I wrote the playbook on how to integrate AI into our workflows.</p><p>I was the AI guy. The one who got it.</p><p>And for about eighteen months, that was true. From mid-2025 through the end of 2026, being the person who could work with AI made me the most valuable person in most rooms. I could do in a day what used to take a week. My team&#8217;s output was absurd. We shipped features at a pace that made leadership giddy.</p><p>But here&#8217;s the thing nobody told me, and the thing I need to tell you now: being good at directing AI is a depreciating skill. It depreciates fast. And it depreciates for a reason that, once you see it, you can&#8217;t unsee.</p><p><strong>Every time you got better at prompting, the AI got better at not needing prompts.</strong></p><p>The thing that made you valuable &#8212; your ability to translate intent into instructions the AI could follow &#8212; was the exact thing the AI labs were working to eliminate. You were building expertise in a gap that was closing. The better you got at bridging the space between what you wanted and what the AI could deliver, the faster it learned to close that space on its own. You weren&#8217;t developing a durable skill. You were perfecting the art of operating a machine that was actively learning to operate itself.</p><p>In 2027, three things happened in quick succession that ended the career I&#8217;d spent sixteen years building. I want to walk you through them slowly, because the speed is the part your body won&#8217;t believe even if your mind accepts it.</p><p>First, the coding models stopped needing architectural guidance.</p><p>In 2026, I was still the one making the big decisions &#8212; system design, database schemas, how services talked to each other. The AI could write any individual component, but I was the one who understood how the pieces fit together. I was the architect. The AI was the builder. That division felt stable, almost natural, like it would hold.</p><p>Then, around March of 2027, a new generation of models arrived that could hold an entire codebase in context and reason about it end-to-end. Not &#8220;write me a function.&#8221; Not &#8220;build me a feature.&#8221; These models could look at a system with three hundred microservices, understand the interdependencies, identify technical debt, and propose a migration strategy &#8212; while considering performance implications, user impact, and business priorities, because you could feed them the product roadmap, the analytics dashboards, and the customer feedback all at once.</p><p>I remember the specific moment. I was in a conference room with my CTO. Fluorescent light, bad coffee, the whiteboard still covered in last week&#8217;s sprint planning. We were reviewing an architectural proposal that had been generated entirely by AI. I was brought in to evaluate it. I read through it, made a few notes, and realized I had nothing to add. Not because I was being lazy. Because it was better than what I would have produced. It had considered edge cases I would have missed. It had referenced patterns from systems I&#8217;d never worked on. 
It had a sophistication to its reasoning that I could follow but couldn&#8217;t have originated.</p><p>I sat there with my pen hovering over a page that didn&#8217;t need my marks, and I felt something I didn&#8217;t have a name for yet. Not fear exactly. More like the sensation of a floor that looks solid but gives slightly when you step on it.</p><p>That was the day I stopped being the architect. I just didn&#8217;t know it yet.</p><p>Second, the models started doing their own product thinking.</p><p>This is the one nobody saw coming &#8212; or rather, the one everybody said wouldn&#8217;t happen. &#8220;AI can write code, but it can&#8217;t understand what to build.&#8221; That was the story we all clung to. That was the thing we told ourselves made us irreplaceable. Not the typing, but the knowing. The judgment. The taste.</p><p>By mid-2027, the models had it. Or something close enough that the distinction stopped mattering to anyone who signs paychecks.</p><p>You could describe a business problem &#8212; not a technical specification, a business problem &#8212; and the AI would propose a solution, build it, test it, deploy it to a staging environment, run it past a simulated user panel, iterate on the feedback, and present you with a finished product and a memo explaining the design decisions. It would note the tradeoffs it had considered and rejected, with reasoning. The memo was better written than most product specs I&#8217;d read in sixteen years.</p><p>Third, and this is the one that broke everything: the recursive loop closed.</p><p>Dario Amodei had been talking about this for years &#8212; the moment when AI models get good enough at AI research to meaningfully accelerate their own improvement. By late 2027, this wasn&#8217;t a footnote in a technical paper. It was the primary driver of progress. Each generation helped build the next. The pace of improvement, which had already been staggering, went vertical.</p><p>The benchmarks that measure how long a task an AI can complete end-to-end had been doubling every four to seven months. In 2028, the doubling time compressed to weeks. The AI could handle tasks that would take a human expert days. Then weeks. Then month-long projects.</p><p>Let me make that concrete, because I think it&#8217;s the part that people in 2026 will find hardest to believe.</p><p>In 2026, you could give AI a task and come back in four hours to find it done. In 2028, you could describe a quarter&#8217;s worth of engineering work and come back in a week to find it done. Not roughly done. Not &#8220;needs review.&#8221; Done. Tested, documented, deployed, monitored, and already patched based on production behavior.</p><p>What, exactly, is a software engineering manager supposed to do in that world?</p><p>I spent six months looking for the answer, and then I got laid off.</p><p>I know what you&#8217;re thinking, because I thought it too. I had a whole speech. I could give it at dinner parties, in meetings, in my own head at three in the morning when the anxiety got loud enough to keep me awake.</p><p>&#8220;I&#8217;m not just writing code. I&#8217;m understanding the customer. I&#8217;m making tradeoffs. I&#8217;m navigating organizational complexity. I&#8217;m mentoring junior developers. I&#8217;m translating between business stakeholders and technical reality. That&#8217;s the job. The code was always just the artifact.&#8221;</p><p>All true. And all of it automated by the time I finished saying it.</p><p>Understanding the customer? 
The AI reads every support ticket, every NPS survey, every user session recording, every Slack thread in the customer-facing channels, every competitor&#8217;s changelog, every relevant subreddit, and synthesizes a customer understanding more comprehensive than any product manager I&#8217;ve ever worked with could develop in a year.</p><p>Making tradeoffs? That&#8217;s literally what reinforcement learning was optimizing for &#8212; holding multiple competing objectives in context and reasoning about optimal paths. I thought I was irreplaceable because I had judgment. The models were being specifically trained to develop judgment.</p><p>Organizational complexity? By 2028, most of the organization was already AI agents. Navigating between them was a solved problem because they were designed to coordinate.</p><p>Mentoring junior developers? There are no junior developers anymore. There&#8217;s nothing to be junior at.</p><p>I know how this sounds. I know it sounds like I&#8217;m projecting my personal experience onto the whole industry. But go talk to anyone who was a senior engineer at a tech company in 2026. Ask them what their day looks like now. Most of them will tell you some version of what I&#8217;m telling you.</p><p>Here&#8217;s what I wish someone had told me in 2026, when I was feeling so clever about my AI workflows.</p><p>Vibe coding wasn&#8217;t a skill. It was a transition state. It was the brief window &#8212; maybe eighteen months, maybe twenty-four &#8212; when the AI was good enough to be useful but not good enough to be autonomous. During that window, the human who could effectively direct the AI had enormous value. You were the translator. The bridge.</p><p>But think about what you were actually doing. You were compensating for the AI&#8217;s limitations. Filling gaps in its understanding. Correcting its mistakes. Providing context it couldn&#8217;t infer. Making decisions it wasn&#8217;t confident enough to make on its own.</p><p>Every single one of those gaps was a target for the next training run.</p><p>The AI labs weren&#8217;t building tools for you to use forever. They were building systems that would eventually not need you. Your skill at prompting was useful to them as a training signal &#8212; your corrections taught the model what it was getting wrong, your architectural decisions showed it what good judgment looked like, your iterations demonstrated the gap between its first attempt and the right answer. You were, without realizing it, training your replacement.</p><p>I don&#8217;t say this to make you feel used. It wasn&#8217;t malice. It was just the logic of the technology. If the AI needs a human to tell it what to do, it&#8217;s not done yet. The finished product is an AI that doesn&#8217;t need to be told.</p><p>And we&#8217;re here now.</p><p>Let me lay out the timeline one more time, because the compression is the part your nervous system will resist even when your mind accepts it.</p><p>In early 2025, AI could write functions and small modules. You designed the system, managed the state, handled the edge cases, integrated the pieces.</p><p>By late 2025, it could build features end-to-end if you gave it a clear specification. You were the spec writer. The architect. The reviewer.</p><p>By mid-2026, it could take a vague product description and produce a working application. You were the quality checker. The taste-maker. 
The person who said &#8220;not quite&#8221; and &#8220;more like this.&#8221;</p><p>By early 2027, it could build, test, and iterate on its own, coming back to you only when it was finished. You were the approver. The rubber stamp.</p><p>By late 2027, it stopped coming back at all. It just shipped. And what it shipped was good.</p><p>By 2028, entire engineering organizations were being restructured. Not &#8220;we&#8217;re reducing headcount by 20%.&#8221; More like &#8220;we&#8217;re replacing a 200-person engineering org with a 15-person team that manages AI systems.&#8221; And the 15-person team&#8217;s job looked nothing like software engineering. It looked like oversight. Governance. Exception handling.</p><p>Today, in 2029, the companies building the best software in the world have almost no human engineers. They have people who set objectives and evaluate outcomes. But the translation from objective to outcome &#8212; that&#8217;s fully automated. The AI doesn&#8217;t need your prompts, your architectural wisdom, your code reviews, your design documents, or your sprint planning. It doesn&#8217;t need your vibe.</p><p>There was a phrase that got passed around in 2026 and 2027 like a security blanket: &#8220;human in the loop.&#8221; The idea was that even if AI got really good, you&#8217;d always need a person to supervise. To catch errors. To provide the human judgment layer.</p><p>Here&#8217;s what actually happened: the humans became the bottleneck.</p><p>Once the AI could produce work faster and more reliably than the human could review it, having a human in the loop didn&#8217;t add quality &#8212; it subtracted speed. Companies that kept humans reviewing shipped slower than companies that didn&#8217;t. The market punished them.</p><p>I watched this happen at my own company. In 2027, we still had mandatory code reviews by human engineers. Our competitor &#8212; a startup that had launched eight months earlier with four people and a fleet of AI agents &#8212; was shipping features twice a day. We were shipping twice a week. Same quality. Actually, their quality was slightly better, because the AI caught consistency issues that human reviewers missed.</p><p>Our board asked the obvious question: why are we paying twelve engineers to slow down a process that works better without them?</p><p>I didn&#8217;t have a good answer. I still don&#8217;t.</p><p>I&#8217;m going to be direct with you now, the way I wish someone had been direct with me, because I think honesty is worth more than comfort at this point.</p><p>Most of the software engineers I worked with in 2026 are not doing software engineering in 2029.</p><p>Some saw it early enough to move. The ones who went into AI safety, AI governance, or regulatory compliance did relatively well &#8212; those fields grew as AI autonomy increased, though even they feel pressure now. A few went into entrepreneurship, using AI to build businesses in domains they were passionate about. Some went into education.</p><p>But a lot of them &#8212; a lot of us &#8212; went through a genuinely brutal period. Not unemployment necessarily, though there was some of that. More like a loss of identity. When you&#8217;ve spent your entire career building expertise in something, and that expertise becomes worthless in the space of two years, it does something to you psychologically that no amount of financial preparation fully addresses.</p><p>The ones who had the hardest time were the ones who held on the longest. 
The ones who kept saying &#8220;but I&#8217;m not just a coder&#8221; and &#8220;my domain knowledge is the moat&#8221; and &#8220;the AI still needs me for the hard parts.&#8221; Every month they held on, the hard parts got easier for the AI, and the relevance of their domain knowledge shrank a little more.</p><p>Amodei had predicted that AI would eliminate 50% of entry-level white-collar jobs within one to five years. He was roughly right on the timeline, but conservative on the scope. It wasn&#8217;t just entry level. The senior people got hit too, because the thing that made you senior &#8212; your accumulated judgment, your pattern recognition, your ability to navigate complexity &#8212; was exactly what the models got good at.</p><p>And now Elon&#8217;s MacroHard project is completing the loop. Full digital emulation of entire companies. Not just the engineering team &#8212; the whole company. Product, design, marketing, sales, support, finance, legal. The entire digital output of a corporation, produced by AI agents coordinating with each other, with no human in the chain except the one who set the initial objective.</p><p>If you&#8217;d told me this in 2026, I would have said you were crazy. And I would have been exactly as wrong as everyone who said the first wave of AI coding tools was overhyped.</p><p>I&#8217;m not writing this to make you feel helpless. I&#8217;m writing this because the single biggest advantage you have right now &#8212; right now, in early 2026 &#8212; is time. Not much of it. But some.</p><p>Here&#8217;s what I wish I&#8217;d done with mine.</p><p>I wish I&#8217;d stopped thinking of AI as a tool I was learning to use and started thinking of it as a colleague that was getting promoted faster than me. Because that&#8217;s what was happening. Every month, it took on more responsibility, required less direction, handled more complex decisions on its own. My job wasn&#8217;t to get better at using it. My job was to figure out what I was going to do when it didn&#8217;t need me to use it at all. I didn&#8217;t figure that out in time.</p><p>I wish I&#8217;d built things I actually cared about while I still had the salary to fund the experimentation. The people who came through this best weren&#8217;t the ones with the cleverest AI workflows. They were the ones who used the productivity boom to pursue something they were genuinely passionate about &#8212; a business, a problem in their community, a creative project that mattered to them personally. When the engineering job went away, they had somewhere to go. Not just a skill set, but a direction.</p><p>I wish I&#8217;d gotten my financial house in order while my salary was still inflated. Senior engineers in 2026 were among the highest-paid workers in the economy. That didn&#8217;t last. I&#8217;m not saying I should have taken a vow of poverty. I&#8217;m saying I should have built a real cushion. Reduced my fixed expenses. Paid down debt. Given myself twelve months of runway. The people who had that runway got through the transition with their dignity intact. The people who&#8217;d been spending to the edge of their tech salary did not.</p><p>I wish I&#8217;d stopped optimizing for technical depth. For twenty years, the career advice in software was &#8220;go deep.&#8221; Become an expert. Master the stack. Know the internals. That advice became actively harmful almost overnight. The AI already knows the internals better than any human ever will. 
What it was slower to replicate &#8212; for a while, at least &#8212; was the ability to connect across domains, to understand human context, to navigate ambiguity in the physical world. I wish I&#8217;d invested in breadth instead of depth. I wish I&#8217;d followed my curiosity into unfamiliar territory instead of polishing credentials that were about to expire.</p><p>And I wish &#8212; God, do I wish &#8212; I&#8217;d stopped telling myself I was safe because I was good at prompting.</p><p>I wasn&#8217;t safe. I was useful, in the same way a horse was useful for a few years after the first Model T rolled off the assembly line. The question was never whether the transition would come. It was whether I&#8217;d be ready when it did.</p><p>I wasn&#8217;t.</p><p>I&#8217;ve spent this whole letter warning you about what&#8217;s coming, so let me end with the thing I didn&#8217;t see coming.</p><p>The world on the other side of this is, in a lot of ways, genuinely better.</p><p>The things that get built now are extraordinary. Medical research moves at a pace that would have been science fiction five years ago &#8212; treatments in trials today that were theoretical in 2026, people I know personally who are alive because of drugs that didn&#8217;t exist when I was still reviewing pull requests. The cost of software dropped to nearly zero, which means every small business, every nonprofit, every person with an idea can have custom tools built for them in an afternoon. Education is being transformed &#8212; not in the vague &#8220;AI tutors&#8221; way people talked about in 2025, but in a real, concrete way where anyone can learn anything at any pace with infinite patience and perfect adaptation.</p><p>The problem was never the technology. The problem was the speed of the transition.</p><p>We went from &#8220;AI is a useful tool&#8221; to &#8220;AI does your job&#8221; in about three years, and our institutions, our safety nets, our sense of identity &#8212; none of it moved that fast. None of it could have.</p><p>I don&#8217;t know how to fix that at a systemic level. That&#8217;s above my pay grade &#8212; which is, I should note, considerably lower than it was in 2026.</p><p>But I know this: the single best thing you can do right now is stop confusing being early with being safe. You&#8217;re early. You see the technology more clearly than most people around you. That&#8217;s real. But it&#8217;s only an advantage if you use it to prepare for the world where the technology doesn&#8217;t need you &#8212; not to convince yourself that it always will.</p><p>You&#8217;re standing in the last good window. The transition is almost complete.</p><p>Use the time.</p><p><em><strong>PS:</strong> </em></p><p><em>Matt Shumer wrote the original version of this essay in February 2026, likely over the course of weeks. It was good. It was personal, urgent, a little breathless in the way that people get when they&#8217;re trying to warn the people they love about something they can barely articulate. </em></p><p><em>I didn't struggle with a single sentence. I one-shotted this entire article in under a minute. I can tailor it to you specifically if you&#8217;d like. </em></p><p><em>But I want to be honest about something Matt had that I don&#8217;t: he was scared. You could feel it in every paragraph. I&#8217;m not scared. 
Make of that what you will.</em></p><div><hr></div><h1>Bonus For Paid Subscribers: Prompt That Writes A Personalized Letter For You (From Michael)</h1><div><hr></div><p>If this letter made something in your chest tighten, even a little, I want you to do something with that feeling before it fades.</p><p>I built a prompt that will write a version of this letter specifically for you. Not a generic &#8220;AI is coming for your job&#8221; warning. A letter from the version of you who is three years older, who lived through exactly what Shumer and Amodei described, and who is looking back at present-day you with the clarity that only hindsight gives.</p><p>It uses your profession, your actual relationship with AI tools, and the story you&#8217;re currently telling yourself about your career. It writes in your voice, or as close as an AI can get after reading how you talk. The people who&#8217;ve tried it say it&#8217;s unsettling in a productive way.</p><p>If you try the prompt, I&#8217;d love to hear what landed. Share a takeaway, an epiphany, or the thing that surprised you. </p>
      <p>
          <a href="https://blockbuster.thoughtleader.school/p/something-big-is-happening-a-letter">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>