I Built An AI System That Uses 100+ Mental Models To Analyze The News
Last week, Twitter founder and Block CEO Jack Dorsey fired 4,000 people, and Block’s stock surged 24%.
Most coverage focused on whether AI can actually replace that many knowledge workers.
That’s the wrong question.
The right questions are:
Will the market continue to reward CEOs for laying off staff due to AI?
Will that trigger a cascade of similar cuts across every public company?
What happens to the millions of people in the path of that cascade?
What you’re about to read is an analysis that traces this single event through historical precedents, 1,300+ mental models, regulatory patterns going back to the Industrial Revolution, and over 100 cause-effect chains — a synthesis that would normally require a multi-disciplinary research team and weeks of work.
I didn’t write it.
Rather, I built a sense-making system in Claude Code that wrote it autonomously (in hours) and at a depth I couldn’t have reached on my own. The system:
Researched the event
Pulled in diverse expert reactions
Mapped it against historical parallels
Ran it through hundreds of analytical frameworks
Then wrote the whole thing so a non-specialist could follow the reasoning.
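For the technically curious, the workflow above reduces to a simple pipeline. The sketch below is mine and purely illustrative: the function names and stub bodies are hypothetical, and the real system delegates each stage to Claude Code rather than to hand-written logic.

```python
# Hypothetical sketch of the sense-making pipeline described above.
# All function bodies are stubs; each stage is really an agent task.

from dataclasses import dataclass, field

@dataclass
class Analysis:
    event: str
    expert_reactions: list[str] = field(default_factory=list)
    historical_parallels: list[str] = field(default_factory=list)
    framework_notes: dict[str, str] = field(default_factory=dict)

def gather_reactions(event: str) -> list[str]:
    # Stage 2: pull in diverse expert reactions (stubbed here).
    return [f"analyst take on: {event}", f"researcher take on: {event}"]

def find_parallels(event: str) -> list[str]:
    # Stage 3: map the event against historical parallels (stubbed here).
    return ["electrification (1890s)", "railway mania (1840s)"]

def apply_framework(framework: str, event: str) -> str:
    # Stage 4: run the event through one analytical framework (stubbed here).
    return f"{framework} applied to '{event}'"

def run_pipeline(event: str, frameworks: list[str]) -> Analysis:
    """Research the event, collect reactions, map parallels, run frameworks."""
    analysis = Analysis(event=event)
    analysis.expert_reactions = gather_reactions(event)
    analysis.historical_parallels = find_parallels(event)
    for fw in frameworks:
        analysis.framework_notes[fw] = apply_framework(fw, event)
    return analysis

result = run_pipeline("Block cuts 4,000 jobs", ["reflexivity", "operant conditioning"])
print(len(result.framework_notes))  # one note per framework
```

The final write-up stage (turning the Analysis into prose a non-specialist can follow) is the part only the model can do, which is why it is not sketched here.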
Three weeks ago, none of this was possible.
Here’s what changed and why it should matter to you.
What Changed Three Weeks Ago
For three years, AI has been a conversation tool:
You type a prompt
You get a response
You refine
You copy-paste
You repeat
It could do work, but it wasn’t designed for it.
The issue is that you were still the bottleneck. Every insight required your time, your attention, and your manual stitching of pieces together.
Claude Code broke that pattern.
It doesn’t chat with me. It works for me.
I describe what I want, and it builds it.
Not just a single answer, but an entire system.
It writes the code. Tests it. Fixes the bugs. All of it.
Six months ago, Claude produced buggy code that was painful to fix. Since then, it hasn’t produced a bug it couldn’t fix itself. It just works. It’s awe-inducing.
When I ask it to do things it can’t currently do, it sources or builds tools so it can. For example…
Claude Code couldn’t natively read X posts, so it found and started using the Xpoz MCP server.
Claude Code couldn’t natively read text in PDFs embedded in images, so it found and installed Tesseract.
Claude Code couldn’t natively scrape websites, so it found a free API, which allowed me to scrape 3,000 articles.
And the list goes on.
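To give a flavor of the "find a tool" pattern, here is a minimal sketch (mine, hypothetical, not the system's actual code) of how an OCR step like the Tesseract one might be wrapped: check for the binary first, and fail loudly if it is missing, which is exactly the situation where Claude Code went and installed it.

```python
# Hypothetical sketch: wrap the Tesseract CLI for OCR, failing loudly if the
# tool is missing. The error message's install command is one example, not
# the only way to get Tesseract.
import shutil
import subprocess

def ocr_image(path: str) -> str:
    """Return the text Tesseract recognizes in an image file."""
    if shutil.which("tesseract") is None:
        raise RuntimeError(
            "tesseract not found; install it first (e.g. `apt install tesseract-ocr`)"
        )
    # `tesseract <image> stdout` writes the recognized text to stdout.
    result = subprocess.run(
        ["tesseract", path, "stdout"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```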
And Opus 4.6 is the engine that makes this real.
For the first time, AI can sustain complex, multi-hour workflows without falling apart.
The combination means I went from asking AI questions to building AI systems that generate knowledge I couldn’t produce on my own. It’s one thing to hear about the power of Claude Code. It’s another to see it do things you thought were impossible, at 100x the rate you could manage without it.
In just two weeks, I have:
Created 400+ mental model mastery manuals (it took me four years to create 48 manuals without AI).
Built the largest mental model encyclopedia in the world, with 2,500+ mental models across cultures, disciplines, and domains (it took me dozens of hours to create a mediocre 600-model encyclopedia five years ago without AI).
Created a system that helps me see second-order effects of AI news better than 99% of people who are in AI (this article is a case in point).
Created a system that convenes a council of history’s top thinkers to debate each other and think outside the box.
Built a tool that scraped 2,000 top AI articles and analyzed their patterns.
And much more…
In my opinion, the last three years of learning AI were preparation for this moment.
To show you what I mean, I pointed my sense-making system at a single piece of news: Jack Dorsey firing 4,000 Block employees. Then, I asked it to do what no individual analyst could do in a reasonable timeframe. In particular, the historical context it provided fundamentally reshaped what the layoff news means. IMHO, this is what the best news will look like in the future. Not shallow, polarized hot takes.
Now, let’s put the system to the test. Keep in mind that this is just version #1…
PART 1:
Overview
TLDR
On February 27, 2026, Jack Dorsey sent a memo to Block’s 10,000 employees telling them the company was cutting nearly half its workforce. The reason, he said, was artificial intelligence. Block’s tools had gotten good enough that a company of 6,000 could do what 10,000 had been doing.
Within hours, Block’s stock surged 24%.
That stock surge — not the layoffs themselves — is the most important thing that happened. It changed the calculation for every CEO of every public company in America. Before Block, announcing that you were firing 40% of your workforce was a signal that something had gone terribly wrong. After Block, it became a signal that you were boldly embracing the future. One event, and the meaning of mass layoffs rotated 180 degrees.
The question is no longer whether AI can actually do the work of 4,000 knowledge workers at Block. The question is whether the market’s reward for saying it can will trigger a cascade of similar cuts across the economy — and what happens to the millions of people in the path of that cascade.
Mechanism
To understand what is really happening at Block, you need to hold two contradictory facts in your head simultaneously.
Fact one: Block tripled its headcount from 3,900 to 12,500 during the COVID hiring boom. It maintained duplicate organizational structures for two of its major product lines until mid-2024. It capped hiring in November 2023 — before anyone was talking about AI replacing knowledge workers. The post-cut headcount of roughly 6,000 is almost exactly what you’d predict if you took Block’s pre-COVID size and adjusted for revenue growth. In other words: this may be a company returning to its natural size after a hiring binge, with AI as the stated reason rather than the actual cause.
Fact two: AI tools genuinely are changing what knowledge workers can accomplish. Code assistants, automated testing, AI-powered analytics — these tools are real, and they do reduce the number of people needed for certain tasks. Even if the COVID correction explains most of the cuts, the floor Dorsey is cutting to is probably lower than it would have been without AI.
Both facts are true. But notice which one produces a 24% stock surge and which one produces a shrug.
Reflexivity — a concept developed by George Soros — describes what happens when a market’s reaction to an event changes the event’s significance. Block’s stock didn’t just reflect a judgment about the company’s strategy. It created a new reality. The 24% premium is now a signal broadcasting to every boardroom in the country: announce AI-driven cuts, and you will be rewarded.
This is how a single corporate decision becomes an economy-wide pattern. Not because every CEO independently concludes that AI can replace 40% of their workforce. But because every CEO sees that saying so produces an immediate, measurable payoff. Behavioral scientists call this operant conditioning: when a behavior is immediately rewarded, it increases in frequency. The market just trained the CEO class. The 24% is the treat.
The incentive structure has a perverse twist. A CEO who carefully studies their operations and honestly concludes “our headcount is about right” gets no reward. A CEO who announces a dramatic AI-driven restructuring gets a stock premium. The market doesn’t reward accuracy. It rewards the narrative. So the rational CEO, regardless of what they actually believe about AI’s capabilities, will choose the narrative that produces the premium.
This creates what psychologists call pluralistic ignorance: a situation where many people privately doubt a consensus but publicly conform because they assume everyone else genuinely believes it. CEOs may privately suspect that AI cannot actually replace 40% of knowledge workers. But when the market rewards the claim and punishes doubt, private skepticism evaporates in public, and the “consensus” appears unanimous — even if it was never sincere.
The Reaction And What It Reveals
The most telling detail about this event is not any single reaction. It is the gulf between them.
Wall Street saw a company getting leaner, more efficient, more “AI-forward.” The stock surged. Analysts upgraded their targets. The word visionary appeared in research notes.
The AI research community saw something different. Wharton professor Ethan Mollick pointed out that “effective AI tools are very new, and we have little sense of how to organize work around them.” He was making the complementary innovation argument: AI tools alone don’t produce organizational transformation. You need redesigned workflows, retrained managers, rebuilt processes. Those take years to develop, and Block hasn’t had years.
Employees saw something else entirely. Internal morale had been described as “the worst in four years” before the announcement, driven by rolling smaller cuts and a mandatory AI-adoption policy baked into performance reviews. The combination of “use AI tools daily or face consequences” alongside “we’re cutting half the company” sent a clear signal: AI is here to replace you, not to help you.
Media reactions captured the contagion risk: “Dorsey’s Block layoffs may embolden CEOs” (Axios); “Jack Dorsey just halved Block’s employee base — and he says your company is next” (TechCrunch).
These divergent reactions are not a failure of communication. They are the event’s most important diagnostic. Different stakeholders are processing the same facts through fundamentally different incentive structures. Investors benefit from cost reduction. CEOs benefit from the narrative. Workers bear the cost. When the people who benefit control the narrative and the people who bear the cost do not, the narrative will consistently overstate the benefits and understate the costs.
Research on loss aversion shows that the pain of losing something is felt roughly twice as intensely as the pleasure of gaining the equivalent. The 4,000 displaced workers aren’t just losing income — they’re losing identity, routine, community, and the professional status they built over years. Meanwhile, investors experience only gain. The same event produces intense suffering and moderate euphoria in different populations, and the system treats the euphoria as the signal and the suffering as noise.
What Comes Next
Three scenarios:
Most likely: Block’s cuts are partially successful. Revenue holds, margins improve, and the “AI reinvention” narrative survives — but messily. Some product lines degrade. Some key employees leave. The company settles into a lower-energy equilibrium: functional but less innovative than before. Several other large companies follow the playbook over the next 12 months, but the cascade is uneven — some succeed, others visibly struggle, and the “AI layoff premium” gradually declines as the market learns to distinguish genuine AI capability from narrative convenience.
Bull case: Block’s bet pays off. The remaining 6,000 employees, augmented by AI tools, genuinely produce more per capita than the old 10,000-person organization. Block becomes the case study for AI-native organizational design. The cascade intensifies: 15-30 major companies follow by end of 2026. The knowledge worker labor market undergoes a structural reset, with permanent implications for how companies are staffed.
Bear case: Block’s cuts overshoot. Institutional knowledge loss produces cascading operational failures that take 12-18 months to surface — customer service degradation, product quality decline, compliance gaps, the quiet exodus of the best remaining employees. The stock price corrects sharply. The “AI layoff” playbook gets a high-profile failure case, and the cascade stalls. But the damage to the 4,000 displaced workers — and to their counterparts at the companies that already imitated Block before the correction — cannot be undone.
What should you watch for?
Two early signals will tell you which scenario is unfolding.
Track the contagion: if more than five Fortune 500 companies announce AI-justified restructurings by mid-2026, the cascade is structural, not episodic.
Track Block’s execution: if product quality metrics, customer satisfaction, or revenue growth visibly degrade by Q3-Q4 2026, the bear case is materializing. The stock price is not the signal — stock prices reflect narratives, and narratives lag reality. The signal is in the operations.
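If you want to track this yourself, the two signals reduce to a trivial decision rule. Here is a sketch (the thresholds come straight from the text above; the function name and inputs are mine):

```python
# Hypothetical scorecard for the two early signals: Fortune 500 contagion
# and Block's operational execution. Thresholds are from the analysis above.

def which_scenario(ai_restructurings_f500: int, block_ops_degrading: bool) -> str:
    """Classify which scenario is unfolding from the two early signals."""
    if block_ops_degrading:
        # Operations, not the stock price, are the signal for the bear case.
        return "bear case materializing"
    if ai_restructurings_f500 > 5:
        # More than five Fortune 500 announcements by mid-2026 => structural.
        return "cascade is structural"
    return "cascade is episodic (so far)"

print(which_scenario(ai_restructurings_f500=3, block_ops_degrading=False))
# → cascade is episodic (so far)
```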
The Bigger Picture
Block’s layoffs are one event. But the pattern they instantiate — AI as justification for workforce reduction, markets rewarding the narrative, a cascade of imitation, and a widening gap between technical speed and institutional response — is the dominant pattern of the current AI transition.
We are in what economic historians will eventually call the Frenzy phase: the period when the new technology exists, financial capital is pouring in, early adopters are reorganizing, and the gap between winners and losers is widening fast.
History shows that Frenzy periods eventually give way to a turning point — some combination of crisis, backlash, and institutional catch-up that forces the technology’s benefits to be distributed more broadly. But history also shows that the Frenzy period can last a decade, and the people caught in its turbulence do not get those years back.
The mental models analysis surfaced a truth that no single analytical framework captures on its own: the system cannot see what it is destroying.
The stock price measures cost savings. It does not measure the institutional knowledge walking out the door, the intrinsic motivation collapsing among survivors, the identity crises compounding across 4,000 households, or the slow erosion of trust between employers and the skilled workforce they will need for the next phase. These invisible costs are real. They will compound. And by the time they become visible, the narrative will have moved on, and no one will connect them back to the day the market gave a standing ovation for firing 4,000 people.
The question this event poses is not “will AI replace knowledge workers?” It will replace some tasks, augment others, and create new ones — the same as every transformative technology in history. The real question is whether we will manage that transition with the same institutional sluggishness that sacrificed the handloom weavers, or whether the speed of this transition will force a faster institutional response.
History’s base rate says we will be too slow.
The handloom weavers earned 21 shillings a week in 1802 — enough to feed a family, maintain a home, and hold standing in their community. By 1817, that had collapsed to 9 shillings. By the 1830s, it was 5. The industry that replaced them eventually generated more wealth, more goods, and higher living standards than anything the weavers could have imagined. But the weavers never saw any of it. They died in poverty, branded — in their own words — as “rogues” by a society that no longer needed what they could do.
Their children fared little better. They entered the factories that had destroyed their parents’ livelihoods, but wages barely grew for decades. The grandchildren — reaching working age in the 1860s and 1870s — were the first generation to actually benefit from the system that had swallowed their families whole. Two full generations. Sixty years from collapse to recovery. And that recovery came through entirely different work, in an entirely different world. The thing their grandparents had been was simply gone.
The economy adjusted. It always does. But the people caught in the gears of that adjustment were sacrificed to a transition whose benefits they would never live to see.
That is the base rate. That is what “too slow” actually means.
Whether that pattern repeats depends on choices that have not yet been made — by policymakers, by CEOs, by voters, and by the knowledge workers who are watching this story and deciding what it means for their own lives. Block’s layoffs are not the future of work. They are the opening act. The future depends on what the audience does next.
PART 2:
Historical Antecedents
We’ve seen this movie before, at least four times…
#1. The factory that bought motors but forgot to redesign the floor
In the 1890s, electric motors were available, but factories didn’t get more productive for another 30 years. Why? Because they bolted the new technology onto the old layout. Productivity only surged when factories were completely redesigned around electricity. Dorsey is trying to do that redesign in one move — rip out the old structure and rebuild around AI. History says he’s right about the direction but almost certainly underestimating how long it takes. Expect Block to stumble before it runs.
#2. The railroad stock that surged on hype
In the 1840s, railway company stocks soared every time a new route was announced — before a single mile of track was laid. Many of those companies went bust in the crash of 1847. The survivors built the industrial economy. Block’s 24% stock jump looks a lot like this: the market is rewarding the story of AI efficiency, not proven results. That kind of narrative-driven surge is historically unreliable.
#3. The weavers who found new jobs but lost their identity
When power looms replaced handloom weavers in the early 1800s, most weavers eventually found factory work. But the new jobs paid less, carried less prestige, and required less skill. The employment problem resolved. The status problem didn’t — and it fueled political unrest for a generation. Block’s 4,000 displaced engineers and designers will likely find work. The question is whether it’ll be at the same level. If not, the frustration compounds across the economy.
#4. The Gilded Age playbook
Every major technology wave has concentrated wealth before triggering reform. Railroads, steel, oil — each time, owners captured the gains while workers absorbed the disruption. Each time, political backlash eventually produced new protections (antitrust, labor laws, the income tax). But “eventually” meant 20-30 years.
The bottom line:
Dorsey is probably right that AI changes how companies work. He’s probably early on the execution. The stock surge tells you we’re in a hype cycle, not that Block has figured it out. And the 4,000 people who just lost their jobs are the leading edge of a pattern that, historically, takes decades to fully resolve — even when the technology is real.
Why AI reorganization may be genuinely faster than historical antecedents
At the same time, two things are different this time:
Reorganizing around AI is digital, not physical. Dorsey doesn’t need to rip up factory floors, move machinery, or retrain workers on physical equipment. He’s restructuring workflows, reporting lines, and software processes. That can move at the speed of a Slack message and a revised org chart, not a construction crew.
AI can help with its own integration. No prior technology could do this. Electric motors couldn’t redesign the factory layout. Tractors couldn’t retrain farmers. But AI can help write the new processes, identify redundancies, build the tools that replace the old workflows, and onboard remaining employees to new ways of working. The technology accelerates its own complementary innovation. That’s structurally new.
What this changes. The historical J-Curve — the dip before the gain — is probably still real. You can’t reorganize a 6,000-person company overnight regardless of tools. But the dip may be shallower and shorter than historical precedent suggests. Instead of the 30 years electricity took, or even our model’s 12-36 month estimate, AI-assisted reorganization could compress the painful part to 6-18 months.
What it doesn’t change. Three things still operate at human speed: trust (employees need to believe the new structure works), culture (new norms take time to internalize), and customer relationships (clients don’t reorganize their own workflows just because Block did). The digital-speed advantage applies to the technical reorganization. The human reorganization still has friction.
Final Words: Dorsey may be less early than historical parallels suggest, because the tool he’s reorganizing around can help with the reorganization itself. That’s a real structural advantage no prior technology offered. But the human side — trust, culture, morale, status loss for 4,000 displaced workers — still runs on human time. History’s timeline for the technical transition may compress. History’s timeline for the social consequences probably doesn’t. The risk now isn’t as much that the technology won’t deliver. It’s that it delivers faster than human systems can absorb.
PART 3:
Regulatory Lag Explains Why Pain Arrives Before Policy
Across six major cases of labor disruption in modern history, one pattern is remarkably consistent: meaningful regulation takes 20-50 years to arrive after the harm becomes visible. The safety net is always built after people have already fallen.
Case 1: Industrial Revolution → Factory Acts (UK, 1780s-1833)
Harm visible: 1780s-1790s (child labor, 16-hour days, dangerous machinery)
First meaningful regulation: Factory Act of 1833 (~50 years after harm began)
What took so long: Early acts (1802, 1819) had no enforcement mechanism. Parliament was controlled by factory owners. It took decades of public campaigns, investigative journalism (Parliamentary commissions documenting child labor conditions), and worker organizing before enforceable regulation passed.
Key trigger: Public moral outrage at documented child suffering, not worker power alone. Workers couldn’t vote.
Adequacy: Even the 1833 Act only covered textile factories. Comprehensive coverage took until the 1870s-1890s — nearly a century after industrialization began.
Case 2: Gilded Age → Progressive Era (US, 1870s-1935)
Harm visible: 1870s (monopoly power, worker exploitation, dangerous conditions)
First meaningful regulation: Sherman Antitrust Act 1890 (weakly enforced); real teeth came with Clayton Act 1914 and NLRA 1935
Lag: 37 years to first law, 61 years to effective labor rights
What took so long: Courts actively struck down labor protections (Lochner era). Industry funded political campaigns. The ideology of laissez-faire dominated educated opinion. Reform required a complete intellectual revolution — from Social Darwinism to Progressivism — which took a generation.
Key trigger: Accumulation of crises — Haymarket, Pullman Strike, Triangle Shirtwaist Fire (146 dead, 1911), muckraking journalism. Each crisis built incrementally; none alone was sufficient.
Case 3: Great Depression → New Deal (US, 1929-1935)
Harm visible: 1929 (market crash, mass unemployment reaching 25%)
Meaningful regulation: Social Security Act, NLRA, Fair Labor Standards Act (1935-1938)
Lag: 6-9 years — by far the fastest response in the dataset
Why this was fast: The crisis threatened the entire political-economic order. 25% unemployment meant the median voter was personally affected. There was a credible revolutionary alternative (communism) that terrified elites into concession. FDR had a legislative supermajority.
The lesson: Regulatory speed correlates with existential threat to the system itself, not with severity of harm to workers. Workers suffered terribly in the Gilded Age too — but it wasn’t systemic collapse, so reform took 60 years.
Adequacy: Remarkably adequate for its time. Social Security, unemployment insurance, minimum wage, and collective bargaining rights created the framework that lasted 50+ years.
Case 4: Post-WWII Industrial Hazards → OSHA/EPA/Civil Rights (1940s-1970s)
Harm visible: 1940s-1950s (workplace injuries, environmental contamination, racial discrimination in employment)
Meaningful regulation: Civil Rights Act 1964, OSHA 1970, EPA 1970
Lag: 22-30 years from visible harm to regulation
What took so long: Cold War politics made labor organizing suspect (communist association). Postwar prosperity masked underlying problems. Required a new generation (Baby Boomers) who hadn’t experienced Depression-era scarcity to prioritize non-economic values (environment, rights).
Key trigger: Specific catalyzing events — Rachel Carson’s Silent Spring (1962), Cuyahoga River fire (1969), Birmingham church bombing (1963). But these worked because decades of organizing had prepared the ground.
Case 5: Offshoring → Trade Adjustment (US, 1990s-present)
Harm visible: Mid-1990s (manufacturing job losses, Rust Belt devastation)
Meaningful regulation: Never adequately arrived.
Lag: 30+ years and counting
What happened instead: Trade Adjustment Assistance existed but was chronically underfunded and reached a fraction of displaced workers. Political backlash eventually manifested not as regulation but as populist politics (2016 election — 20+ year lag from peak harm).
The lesson: When displacement is geographically concentrated and affects a politically weak demographic, regulation may never come. The political system can simply absorb the damage and move on.
Adequacy: Essentially zero. The “deaths of despair” epidemic in former manufacturing regions is the direct downstream consequence.
Case 6: Gig Economy → Platform Regulation (2010s-present)
Harm visible: ~2012 (Uber/Lyft and DoorDash workers lacking benefits and minimum-wage protections)
Meaningful regulation: Still contested. California’s AB5 (2019) was partially reversed by Prop 22 (2020). EU Platform Workers Directive (2024).
Lag: 14+ years and still incomplete
What’s happening: Platform companies spend hundreds of millions on ballot initiatives and lobbying. Classification (employee vs. contractor) is the battleground. Workers are atomized and hard to organize. The companies move faster than regulators.
The lesson: When the disrupting companies are also the most sophisticated lobbying entities in history, counter-mobilization can neutralize regulation indefinitely.
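To make the pattern concrete, the six cases above reduce to a handful of lag figures (taking the text's stated numbers, using the upper bound where a range or "and counting" is given; this is a back-of-envelope sketch, not a dataset):

```python
# Lag from visible harm to meaningful regulation, in years, as stated in the
# six cases above. "30+" / "14+" entries are floored at the stated number.
from statistics import median

lags_years = {
    "Factory Acts (UK)": 50,
    "Progressive Era (US)": 61,       # to effective labor rights (NLRA)
    "New Deal (US)": 9,
    "OSHA/EPA/Civil Rights (US)": 30,
    "Offshoring (US)": 30,            # "30+ years and counting"
    "Gig economy": 14,                # "14+ years and still incomplete"
}

print(median(lags_years.values()))           # → 30.0
print(min(lags_years, key=lags_years.get))   # → New Deal (US)
```

A 30-year median, with the New Deal as the lone single-digit outlier, is the arithmetic behind the cross-cutting patterns that follow.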
Five Cross-Cutting Patterns From History
The typical lag is 20-50 years. The New Deal (6-9 years) is the sole exception, and it required system-threatening collapse. For non-existential disruptions, expect decades.
Triggers follow a sequence. Moral outrage at visible suffering (children, deaths) → investigative documentation → sustained civic organizing → legislative champion → catalyzing crisis event. All five elements are usually needed. Missing any one of them can stall reform for decades.
Regulation arrives AFTER the worst damage is done. Factory Acts came after a generation of child labor. Progressive Era came after decades of exploitation. The regulation prevents the NEXT round of harm, not the current one. The first generation of displaced workers is essentially sacrificed.
Adequacy declines over time. New Deal-era regulation was comprehensive because the crisis was existential. Post-1970s regulation has been increasingly partial, contested, and subject to rollback. Offshoring got essentially nothing. The trend line for regulatory adequacy is negative.
Counter-mobilization is getting stronger. In each successive case, the entities being regulated are more sophisticated at blocking reform. Factory owners in 1833 had Parliament; Gilded Age industrialists had courts; platform companies in 2024 have AI-powered lobbying, ballot initiatives, and narrative control. The regulatory lag may be lengthening, not shortening.
What This Means for AI Displacement
If historical patterns hold:
Visible harm: Already beginning (2025-2026, Block is the leading edge)
Peak displacement: Likely 2027-2032 if the speed mismatch analysis is correct
First meaningful federal regulation: Optimistically ~2035-2040; historically typical ~2045-2060
Comprehensive framework: Possibly never, if the offshoring/gig economy pattern dominates
The gap between peak displacement and meaningful regulation could be 10-30 years. During that period, displaced workers rely on whatever safety net exists at the time of disruption — which was designed for industrial-era layoffs (temporary unemployment, retraining programs) not AI-era structural transformation.
The only historical scenario where regulation arrived fast enough to matter was the New Deal — and that required 25% unemployment and credible fear of revolution. Short of that level of systemic crisis, the political system processes labor disruption slowly.
The compounding problem: AI companies will be among the most sophisticated lobbying entities in history, potentially making this the hardest regulatory environment ever. They can use AI itself to optimize political strategy, draft legislation favorable to their interests, and run targeted influence campaigns at scales no prior industry could achieve.
PART 4:
Will AI Reach Systemic Threat Levels Needed To Trigger Reform?
The Short Answer: More Likely Than Not
The model currently weights Structural Break at 10% (5-10yr) and 22% (10-20yr). But “systemic threat” doesn’t require Structural Break — it requires enough concentrated pain that the political system perceives an existential problem. That’s a lower bar.
Three structural reasons we likely cross that bar:
1. The Nature of What’s Being Automated Has No Precedent
Every prior displacement wave automated specific capabilities — weaving, farming, manufacturing, data entry. Humans always retreated to general cognition. AI automates general cognition itself. There is no obvious “retreat to” capability. This means the displacement-reinstatement cycle that saved us every previous time may not complete on its own — or may complete much more slowly, with a deeper trough.
2. The Demographic Affected Is Politically Powerful
This is the most underappreciated difference. Factory workers displaced by offshoring were geographically concentrated, politically weak, and culturally invisible to media elites. Knowledge workers displaced by AI are educated, urban, media-literate, politically active, and socially connected to the people who make policy. When a Stanford-educated software engineer can’t find equivalent work, the political system notices in a way it never did for a laid-off machinist in Ohio. This is simultaneously the most hopeful and most volatile feature of AI displacement — it’s harder to ignore, but it also generates more politically sophisticated anger.
3. The Speed Creates Concentration Rather Than Diffusion
The speed mismatch means displacement that might have been spread across 20 years in prior technology waves gets compressed into 5-10 years. The political system can absorb gradual pain (offshoring model — slow enough that affected communities just quietly decline). It cannot absorb concentrated pain at the same scale (New Deal model — sudden enough that the median voter is affected). The digital + self-referential advantage pushes AI displacement toward the concentrated pattern.
But the “Threat” May Not Look Like Unemployment
The New Deal was triggered by 25% unemployment. AI displacement may never reach that headline number because:
Augmentation genuinely absorbs a significant fraction of the impact
Gig work, freelancing, and underemployment mask the real numbers
New AI-native job categories do emerge, just at lower status and lower pay
Instead, the systemic threat from AI may manifest as a compound crisis — not one dramatic metric, but several interacting pressures that individually seem manageable but together overwhelm institutional capacity:
Inequality spike — productivity gains flow to capital owners and a small technical elite while median income stagnates or declines in real terms
Meaning crisis — widespread purposelessness even among the nominally employed, as work becomes supervisory/review rather than creative/generative
Trust collapse — institutions visibly unable to respond → legitimacy erosion → withdrawal from civic participation
Political radicalization — displaced knowledge workers are exactly the demographic that produces effective political extremism (elite overproduction, historically the most dangerous social dynamic)
A compound crisis is harder to regulate because there’s no single metric to point to. “25% unemployment” mobilizes people. “A vague sense that everything is getting worse while GDP goes up” does not — at least not until it crystallizes into a political movement.
When Will Things Unfold?
This unfolds in four stages, each one feeding into the next:
The layoff wave (2028-2033): Once Wall Street rewards companies for cutting jobs and citing AI — as it did when Block’s stock surged 24% after the company cut 4,000 jobs — other CEOs face intense pressure to follow suit. What starts as a few bold moves becomes the expected playbook. AI-driven restructuring goes from unusual to standard corporate practice.
The social fallout becomes visible (2030-2035): Displaced workers struggle to find jobs at comparable pay and status. People who built careers and identities around their expertise find that those skills are now worth less. Communities that depended on knowledge-work employers feel the impact. Media stories shift from “the future of AI” to “what happened to the people.”
The pain becomes political (2032-2038): Scattered individual hardship turns into an organized movement. Someone gives it a name. A political leader makes it their cause. The anger that was private becomes public and collective — the way offshoring frustration eventually fueled the 2016 election, but faster and louder because the affected people are more educated and more politically connected.
The establishment takes it seriously (2033-2040): Politicians stop treating AI displacement as a niche issue and start treating it as a threat to social stability. This is the moment where the window for major legislation opens.
So roughly 2033-2040 for the systemic threat to fully materialize. That’s 7-14 years from now.
The wide range reflects a genuine unknown: does AI displacement hit fast and all at once (like the Great Depression — hard to ignore, triggers a fast response) or slowly and unevenly (like offshoring — easy to ignore for decades because it only devastates certain communities)? The answer depends largely on whether the corporate layoff wave arrives as a flood or a slow tide. We should have early signals by late 2026 — if multiple Fortune 500 companies follow Block’s lead within a year, we’re on the fast track.
The Market Reward Cascade prediction (check Q3 2026) is one of the earliest signals that will narrow this range.
PART 5:
Six Pathways from Systemic Threat to Regulation
Path 1: The Social Breaking Point
The most likely path. Years of accumulating pain — job losses, downward mobility, growing inequality — build pressure until a single event breaks through.
Maybe it’s a mass layoff that becomes a symbol.
Maybe it’s an election where AI displacement is the defining issue.
Maybe it’s a high-profile AI failure that hurts real people.
Whatever the spark, it lands on dry tinder that’s been piling up for years. Think of the energy of the 1930s labor movement — not a stock market crash, but a social and political eruption.
How it unfolds: Social pain builds through the late 2020s and early 2030s. A catalyzing event (~2032-2037) turns private suffering into public outrage. A political window opens. Comprehensive reform passes (~2035-2040).
How good is the regulation? Moderate. It would be reactive — written in response to damage already done — but could be reasonably comprehensive if the political moment is big enough. This is the most optimistic realistic pathway because the crisis creates the political will to do something meaningful, and the technology is mature enough by then that lawmakers can regulate something they actually understand.
Path 2: Europe Goes First, America Follows (~35%)
The EU has consistently regulated tech ahead of the US — data privacy (GDPR), AI safety (AI Act), gig worker protections (Platform Workers Directive). They do the same with AI labor impacts. American companies operating in Europe have to comply. Over time, it’s cheaper to just follow the European rules everywhere than to maintain two separate systems. US legislation eventually becomes a matter of officially adopting what American companies are already doing.
How it unfolds: The EU passes a comprehensive AI labor framework (~2028-2032). US multinationals quietly adopt EU-compliant practices globally. US federal legislation (~2035-2040) ratifies what’s already happening on the ground rather than leading the change.
How good is the regulation? Moderate to high. European regulation tends to be more protective of workers than anything the US would write on its own. If America imports even a diluted version, workers end up better off than in any purely domestic scenario. The risk: US companies lobby for a weaker federal law that replaces the stricter European-inspired practices they’d already adopted — using “simplification” as cover for rollback.
Path 3: States Go First, Federal Catches Up (~20%)
California, New York, Washington, and Massachusetts pass their own AI labor protections. Other states don’t. Companies now face a patchwork of 50 different rules. The compliance headache gets expensive. Eventually, the companies themselves lobby Congress for a single federal standard — not because they want regulation, but because they want one set of rules instead of fifty.
How it unfolds: State-level experimentation (2028-2033) creates an unworkable patchwork. Industry pushes for federal legislation to replace the mess (~2035-2042). The result is a compromise — better than nothing, but shaped more by corporate convenience than worker protection.
How good is the regulation? Low to moderate. When regulation exists because companies asked for it to simplify compliance, it tends to serve company interests first. Think of how federal privacy bills have been weaker than California’s privacy law. But it does establish a minimum floor of protection that didn’t exist before.
Path 4: Tech Companies Realize They Need Customers (~10%)
Here’s the Henry Ford logic: Ford paid his workers enough to buy his cars. If AI concentrates wealth so dramatically that ordinary people can’t afford to buy things, even the companies that “won” the AI transition lose. Tech companies, facing both political backlash and shrinking consumer markets, start advocating for Universal Basic Income or other major redistribution programs. It’s not generosity — it’s self-preservation.
How it unfolds: The demand-side effects of displacement become visible (2030-2035) — people aren’t buying as much because they’re earning less. Tech CEOs start publicly calling for UBI or major safety-net expansion (~2030). Their advocacy gives political cover to legislators. Corporate-backed programs pass, funded by some combination of AI taxes and direct corporate contributions. Pilot programs start around 2033; full-scale implementation by 2038-2045.
How good is the regulation? Hard to say. It depends on whether the redistribution is real or performative. The cynical scenario: companies fund the bare minimum needed to keep consumers spending and prevent pitchfork-level anger. The optimistic scenario: AI makes companies so productive that sharing the gains broadly is both affordable and smart business. The truth is probably somewhere in between.
Path 5: Full-Blown Crisis Forces a New Deal (~5%)
This is the extreme scenario. AI displacement hits catastrophic levels — real unemployment (including people stuck in gig work and part-time jobs who need full-time work) reaches 15-20%. Social breakdown becomes visible. A crisis election produces a government with a clear mandate and a large enough majority to pass sweeping legislation. Think FDR in 1933.
How it unfolds: Rapid, severe displacement (2029-2033). The crisis gets bad enough that the entire political-economic order feels threatened. A crisis election produces a mandate (~2033-2035). Comprehensive legislation follows within 2-3 years: a new federal transition authority, universal income or services, massive retraining programs, requirements for how companies can restructure, and a tax framework for AI-generated productivity.
How good is the regulation? The best of any pathway. History shows that truly comprehensive, durable reform — the kind that lasts decades — only happens when the crisis is severe enough to overwhelm industry opposition. Social Security, unemployment insurance, minimum wage, and the right to collective bargaining all came out of the Great Depression. The tragedy is that this quality of response requires catastrophic levels of pain to trigger. The regulation is good because things got bad enough that half-measures were no longer politically viable.
Path 6: Industry Lobbying Wins Indefinitely (~5%)
This is the dark scenario — and it has a direct historical precedent in offshoring. AI companies successfully block every meaningful piece of legislation. State-level efforts get overridden by weak federal laws. European rules get worked around through corporate restructuring. The political system absorbs the damage without ever meaningfully responding.
How it unfolds: It doesn’t. Industry lobbying neutralizes every legislative attempt. Displaced workers are managed through existing safety nets that were never designed for this scale of disruption. GDP keeps growing. Corporate profits soar. But underneath the aggregate numbers, a generation of displaced professionals quietly falls into lower-status work, gig jobs, or withdrawal from the workforce. The anger gets channeled into populist politics rather than policy. Think of what happened to manufacturing communities after offshoring — except this time it’s happening to college-educated professionals in major cities.
Timeline: Indefinite. Regulation never arrives in a meaningful form.
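The pathway probabilities above can be sanity-checked with a few lines of arithmetic. Path 1 is called "the most likely path" but never given an explicit number; if the six paths are treated as exhaustive and mutually exclusive (an assumption — the article doesn't state this), subtraction implies what's left over for Path 1:

```python
# Back-of-envelope check on the stated pathway probabilities.
# Paths 2-6 have explicit estimates; Path 1 does not. Assuming the six
# paths cover all outcomes exactly once, Path 1's implied probability
# is whatever remains after summing the others.
stated = {
    "Path 2: Europe goes first": 0.35,
    "Path 3: States go first": 0.20,
    "Path 4: Tech needs customers": 0.10,
    "Path 5: Crisis forces a New Deal": 0.05,
    "Path 6: Industry lobbying wins": 0.05,
}
residual_path_1 = 1.0 - sum(stated.values())
print(f"Implied Path 1 probability: {residual_path_1:.0%}")  # → 25%
```

Under that assumption Path 1 lands at roughly 25%, which sits below Path 2's 35% — a hint that the paths overlap rather than exclude each other, which is consistent with the conclusion that Paths 1 and 2 probably happen together.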
The Bottom Line
The honest answer: Paths 1 and 2 probably happen together. Europe regulates first. American social pain accumulates in parallel. A catalyzing event — something that crystallizes the diffuse anger into a political moment — opens a window. The European framework provides a ready-made template. Something passes in the US around 2035-2040.
That means roughly a 5-10 year gap between when most of the job losses happen and when meaningful rules exist to address them. That’s far better than the 20-50 year historical average, and roughly in line with the New Deal’s 6-9 years. Three things specific to AI explain why it might be faster than usual:
The people being displaced are politically powerful
The displacement is unusually public (CEOs are announcing it on earnings calls, not quietly moving jobs overseas)
Europe provides an external template that shortcuts the “design from scratch” problem
The wild card that makes all of this harder to predict: AI keeps getting more capable while the regulation is being written. By the time a law designed for 2030-era AI passes in 2037, the technology may be fundamentally different from what the law was written to address. This is genuinely new territory. Factories didn’t get 10x more productive during the 20 years it took to regulate them. AI might. That means even good regulation could be perpetually playing catch-up — solving yesterday’s problem while tomorrow’s is already arriving.
Editorial Note
What you just read is the first output from the sense-making system that I created for myself in Claude Code. The above article is not a polished final draft after weeks of iteration. It’s Claude’s unedited first draft.
The fact that a v1 output can produce analysis at this depth is itself part of the point. And the sense-making system improves with every piece of news it processes. Every time I use it, I don’t just get smarter: the system gets smarter too, delivering even better insights with the next article.
This is what I mean when I say Claude Code has completely changed how I learn and think. I didn't just read about the Block layoffs today. I built a system that showed me what the Block layoffs mean at a level of depth I couldn't have reached on my own, on a timeline that would have been impossible even six months ago.
That's the shift.
PAID MEMBERS: DEEP DIVE
What follows is the AI model's reasoning for generating the post above. If you want to go really deep on this story and learn the related mental models, the following section is for you…

