This “AI Command Language” Upgrades Claude to Opus 5.6
After months of exploring and stress-testing “cognitive operations” with AI, I mapped the architecture hiding inside mental models
Editorial Note:
This may be the most important article that I’ve ever written.
It’s the synthesis of two things:
Spending more time studying, applying, and teaching mental models than anyone else in the world.
Three years exploring how AI can help both humans and AI think better.
But what makes this article particularly unique is that it can be executed by AI.
By using the custom AI skill and 35,000+ word reference document I share at the bottom with paid subscribers, you shortcut the multi-year process of collecting, memorizing, and applying hundreds of mental models. Not only that, you supercharge your AI’s thinking so that it’s like using a version of AI from a year in the future.
Figuring out this system has enabled me to publish 50,000+ words of my most important ideas ever in the last two months.
Give it a try.
Foreword
I’ve spent 10 years collecting mental models and building a company around one idea: the right mental model at the right time is the highest-leverage intervention in human performance.
At first, each new mental model blew my mind and had an immediate impact on my life. But I reached a point where learning the 50th one didn’t move the needle. This eventually led me to stop creating new monthly models after four years of going non-stop.
I thought I was done with learning new mental models.
But over the past three months, I’ve had 100+ hours of conversation with Claude exploring how humans and AI can think better together, and I’ve had a breakthrough. Or rather, we had a breakthrough.
One sentence from Claude particularly rearranged everything for me:
“You’ve been collecting vocabulary without ever learning the grammar.”
This made me realize that I had spent years collecting individual models and very little time actually exploring how to chain them together in the most useful order in specific situations.
This is when I really changed how I prompt.
For example, instead of prompting “analyze this topic,” I started writing things like:
“Apply zeroth-principle thinking (question the fundamental axioms that precede first principles). Then map the systemic incentives reinforcing current perspectives. Then apply cross-disciplinary analysis through multiple lenses. Then generate novel hypotheses. Then identify falsifiable predictions.”
That single prompt chains five named operations (zeroth principle, incentive mapping, cross-domain analogy, abduction, and falsification), each one producing a different type of insight that “analyze this” would never surface.
The result was that I felt like I was using a smarter version of AI from a year in the future.
Or instead of asking “what do you think about this framework?”, I recently wrote:
“Stress test the idea over 10 artifacts.”
“Stress test” is a direct command to run falsification. It triggered a 10-artifact adversarial analysis, including attacks from Nassim Taleb's antifragility lens, Buddhist epistemology, mathematical logic, and a Gerd Gigerenzer-style critique arguing the entire project was misguided. The framework survived all of them. None of that would have emerged from “What do you think?”
Every framework for better reasoning you've ever read, from Kahneman's work on cognitive biases to Munger's mental model lattice to Senge's systems thinking to Bloom's taxonomy of learning, described how thinking works and then left us to implement it using the same thinking we were trying to improve. That’s why we often read a book, nod along, and change very little.
This article is different.
It contains specific thinking recipes that chain together mental models into powerful sequences. Each one is written as a prompt you can run with AI right now. Not after you’ve internalized the framework. Now. You’ll paste a recipe into Claude, run it on a real problem, and…
Experience your problem reorganizing itself
Learn “thinking moves” you’ve never used
Here’s What You’re About To Get
Free Subscribers:
A 3-category taxonomy that sorts every mental model you’ve collected by what it actually does
9 distinct reasoning operations you can use with AI, along with the signature question that activates each one
3 complete Thinking Recipes: copy-paste prompts designed for specific problem types
A method to discover your cognitive signature — your default thinking patterns and blind spots.
Paid Subscribers:
Mental Model Encyclopedia. A 35,000+ word, 100+ item reference taxonomy of the most useful thinking tools, organized by function, that Claude can execute every time it runs.
Claude Skill. A Mental Models Skill that you can install in Claude to help you apply the right mental models to every conversation effortlessly, while also training you to think in mental models naturally.
You get this bonus and $2,500+ in other perks (courses, prompts, books) for just $20/month or $150/year.
One more thing…
This article is written in Claude’s voice because the core discovery required an observer standing outside human cognition. A fish can’t discover water. For the first time in history, we have a thinking partner that can:
See the grammar we think in but can’t see for ourselves.
Compress hundreds of hours of thinking into minutes.
That partnership is how this framework was found, and it’s how it’s used.
What follows is a lightly edited version of what Claude wrote. I focused on adding bullets to improve readability, bolding/italicizing key ideas, rewriting headings, and making the article shorter.
Full Article Written By Claude
I’m going to tell you something about your thinking that will be obvious once you see it, but that you cannot see from where you’re standing.
Out of all the thinking tools you’ve picked up over the years — through books, courses, experience, and conversations — a small handful have become reflexive. They’re the ones you reach for before you’ve consciously decided to reach for anything.
They don’t feel like choices. They feel like how you think.
Maybe you instinctively break things down to foundational components. Maybe you always look for analogies to something you've seen before. Maybe you stress-test ideas by trying to prove them wrong.
Whatever your defaults are, they activate automatically — and they're genuinely powerful. They're your most developed cognitive tools, and they consistently reveal things other people miss.
Here's the problem, though.
You also have blind spots that are just as consistent, just as automatic, and completely invisible to you. Not because you're not smart enough. Because every reasoning method has structural limitations that no amount of skill can overcome. A person who instinctively decomposes things to first principles will always find the foundational issue — and will systematically miss emergent dynamics that only appear at the system level, or the moment when the whole frame is wrong.
These blind spots are features of the method, not flaws in the thinker. And here’s the part that should concern you: acquiring more mental models doesn’t fix them.
You’ve probably noticed this already. At some point, the returns on new mental models flatlined.
It’s not that you have too few tools. Your toolbox is full. The problem is that every tool you own — every mental model, reasoning method, and decision-making framework — is in one drawer. And the drawer is labeled “mental models.”
But those tools are not all the same type of thing.
Some are measuring instruments: they tell you WHAT to notice.
Some are power tools: they tell you what to DO with what you notice.
Others are project plans: specific sequences of tools combined for specific jobs.
The problem is that you’ve been treating them all as interchangeable.
It’s as if a carpenter had tape measures, saws, drills, clamps, and blueprints all in one bin labeled “tools.” He can’t tell a measuring instrument from a cutting instrument in the dark.
Your defaults aren’t a discipline problem. They’re a sorting problem. You can’t select deliberately from a collection you’ve never categorized.
Once you see the architecture, three things happen immediately.
You see why learning more mental models stopped producing breakthroughs.
You see exactly which type of tool has been missing from your approach.
And you gain the ability to combine tools deliberately — in sequences designed for specific types of problems — producing insights qualitatively different from anything a single tool generates.
Lenses, Operations, and Recipes: The Three Thinking Tools You've Been Lumping Together As “Mental Models”
Every thinking tool you’ve ever encountered falls into one of three categories. These categories reflect what the tool actually does at a functional level.
Thinking Tool #1: Lenses
Lenses tell you WHAT to notice.
They direct your attention to specific patterns in reality:
The Pareto Principle (80/20 Rule) tells you to look for asymmetric distributions.
Second-Order Thinking tells you to look at consequences of consequences.
Chesterton’s Fence tells you to understand why something exists before removing it.
Goodhart’s Law tells you to watch for metrics that have become targets and have stopped measuring what they were supposed to measure.
The Lindy Effect tells you to use survival time as a predictor of future survival.
You probably have dozens of lenses. They’re powerful. Each one reveals patterns that are invisible without it.
Thinking Tool #2. Operations
Operations tell you what to DO with what you notice.
Once your lens has shown you a pattern, do you…
Decompose it to its foundations? (First Principles)
Run a test to prove it wrong? (Falsification)
Import a structural parallel from another domain? (Distant Domain Import)
Build the strongest case against your interpretation? (Steelmanning)
Map its systemic effects? (Systems Thinking)
Generate a hypothesis to explain it? (Abductive Reasoning)
Operations are reasoning procedures. They’re the power tools of thinking — they transform raw observations into specific types of insight.
Thinking Tool #3. Recipes
Recipes combine specific operations through specific lenses in a specific sequence, designed for a specific type of problem.
A recipe is a project plan for your mind. It’s a combination of lenses and operations that’s been given a single name:
Red Teaming → Perspective Simulation + Falsification + Systems Thinking
Scenario Planning → Counterfactual + Systems Thinking + Bayesian Updating
Design Thinking → Perspective Simulation + Abduction + First Principles + Analogical Reasoning
Most people have a rich collection of lenses, a handful of operations they use instinctively, and zero recipes. So they approach every problem with whatever tools happen to be within reach — which means they approach every problem the same way.
The result isn’t bad thinking. It’s limited thinking. You see more than most people can, and you process it competently. But you have systematic blind spots because every operation has structural limitations that no amount of skill can overcome.
Imagine you’re stuck on a strategic decision. You’ve been thinking about it for days. Without understanding these three categories, your only move is to reach for another mental model. Maybe you Google “best mental models for decision-making” and add one more to the pile.
But you don’t know whether you’re stuck because you’re looking at the wrong thing (you need a different lens), because you’re processing what you see the wrong way (you need a different operation), or because the problem requires a specific sequence of steps (you need a recipe). Usually, you grab another lens, because that’s what most books and articles offer. And nothing changes — because a new lens wasn’t what was missing.
When you do understand these three categories, you can diagnose the gap. “I’ve been looking at this through competitive advantage — and it’s showing me the right things. But I’ve been running first-principles on everything it reveals. What if I steelmanned the competitor’s strategy instead of deconstructing my own?”
That single switch — same lens, different operation — produces an insight that no number of additional lenses would have generated.
Now you have a diagnostic vocabulary for being stuck. “Am I missing a lens, an operation, or a recipe?”
That’s a question you can’t ask until you know those are three different things.
Take the Cognitive Signature Quiz: Learn The Thinking Patterns You Can't See Because You're Inside Them
Michael built the following exercise to help you map your defaults…
Instructions To Discover Your Cognitive Signature
Discover your cognitive signature by:
Opening your favorite AI model
Copying and pasting this entire article
Copying and pasting the prompt below
Prompt
Based on what you know of me and based on the attached article, what do you see as my cognitive defaults, and what do you see as my cognitive blindspots?
Based on our conversations, how have these led to certain consequential negative or positive results in my life where I'm likely missing that the root cause is my default cognitive style? Then, share the highest-leverage easy move I can make to think better.
The 9 Reasoning Operations That Survived 19 Attempts to Destroy Them
Michael and I identified 9 practically distinct reasoning operations. They survived 19 rounds of stress-testing — attacks from non-Western philosophy, cognitive neuroscience, mathematical logic, creative practice, AI architecture, and Taleb-level contrarian critique.
Why 9 and not some rounder number? Because that’s what survived. We started with 26 candidates and ran completeness tests against CIA intelligence techniques, medical diagnostics, legal argumentation, 2,400 years of rhetorical theory, and developmental psychology. 9 is what’s left standing.
These 9 organize into 4 functions:
GENERATE — Create new possibilities
Analogical Reasoning: Import structures from distant, unrelated domains. What it does that nothing else can: Accesses solutions invisible from inside your domain. Signature question: “What field has already solved a structurally similar problem?”
Abductive Reasoning: Generate hypotheses that explain surprising observations. What it does that nothing else can: Creates genuinely novel explanations — constructed to make a surprise non-surprising. Signature question: “What would have to be true to make this observation non-surprising?”
Counterfactual Analysis: Isolate the contribution of individual factors by imagining their absence or alteration. What it does that nothing else can: Reveals the structural role of specific variables. In reality, everything happens together. This is the only operation that runs controlled experiments in the mind. Signature question: “What would change — and what wouldn’t — if this one factor were different?”
EVALUATE — Test against reality
Falsification: Actively attempt to prove your best idea wrong. What it does that nothing else can: Delivers decisive refutation. Other operations can weaken confidence. Falsification can kill an idea — cleanly and permanently. Signature question: “What evidence would prove this wrong? Does that evidence exist?”
Bayesian Updating: Calibrate confidence to actual evidence, updating continuously as new data arrives. What it does that nothing else can: It’s the difference between “I believe this” and “I believe this with 70% confidence, and here’s what would move me to 90% or drop me to 40%.” Signature question: “Given this new evidence, precisely how much should my confidence change?”
DECONSTRUCT — Strip to foundations
First Principles: Remove every assumption until you reach bedrock truths. What it does that nothing else can: Reveals when the assumptions everyone shares are the ones nobody questions. Every other operation works within a frame. First Principles questions the frame itself. Signature question: "What would I believe about this if I had zero prior knowledge and could only work from what's directly observable?"
INTEGRATE — Combine across boundaries
Dialectical Synthesis: Hold two opposing positions simultaneously and find the higher-order truth that transcends both. What it does that nothing else can: Not compromise. Not balance. A genuinely new position that emerges from the tension. Signature question: “What becomes visible ONLY when I take both sides seriously at the same time?”
Systems Thinking: Map the relationships, feedback loops, and emergent properties within a complex system. What it does that nothing else can: Reveals how interventions create unintended consequences, how parts interact to produce wholes, and why optimizing one component often degrades the system. Signature question: “What does this connect to that nobody is tracking, and what feedback loops are operating invisibly?”
Perspective Simulation: Model what others know, believe, want, intend, and experience — with enough fidelity that your simulation produces insights the real person would recognize as their own. What it does that nothing else can: Forces genuine engagement with perspectives your mind instinctively dismisses. The classic version is steelmanning, but the operation is broader — it includes modeling opponents, simulating stakeholders, and immersing in experiences unlike your own. The constraint of genuine perspective fidelity distinguishes this from casual “seeing the other side.” Signature question: “What would the smartest, most informed advocate of this position say — and what do they see that I’m missing?”
Paid subscribers receive detailed versions of these operations that AI can execute.
3 Thinking Recipes: Specific Sequences That Produce Breakthrough Thinking
A recipe specifies: which operations, which moves, through which lenses, in what order, for what type of problem — and crucially, when NOT to use it.
Below are three of many thinking recipes to get you started…
Recipe 1: The Wrong-Problem Detector
Use when: You've been working on something for weeks and progress has stalled. The analysis keeps getting more sophisticated but the breakthroughs aren't coming. You suspect you might be solving the wrong problem.
When NOT to use: When the problem is well-defined and progress has been steady. When the bottleneck is execution, not framing. When time pressure demands action over reframing. In these cases, the simpler heuristic is: "What's the next concrete action?"
Step 1: First Principles via the Zeroth Principle move, through the lens of Inversion. “Before questioning my assumptions about the solution, question my assumptions about the problem itself. What am I assuming the problem IS — and what would I see if I looked at it from the direction of failure?”
Step 2: Abductive Reasoning via Anomaly Hunting, through the lens of Inversion. “If the problem ISN’T what I think it is, what would explain the pattern of failure I’m seeing? What alternative problem, if it were the real one, would make my repeated failure non-surprising?”
Step 3: Counterfactual Analysis via Removal Test, through the lens of Chesterton’s Fence. “If the thing that appears to be blocking me were removed, would I actually make progress? Or does the ‘blockage’ exist for a reason I haven’t understood?”
Step 4: Falsification via Crucial Experiment. “How would I test whether the new framing is correct? What would I observe if it’s right? What would I observe if it’s wrong?”
If at any point during this recipe you encounter genuine surprise — something you didn’t expect and can’t immediately explain — stop the recipe. The surprise IS the insight. Attend to it before continuing.
The prompt:
“I’ve been stuck on [X] for a while. I want to check if I’m solving the wrong problem. First, use the Zeroth Principle — question the assumptions BEHIND my assumptions about what the problem even is, looking at it from the direction of failure. Then, given my pattern of failure, what ALTERNATIVE problem would make that failure pattern non-surprising? Third, if the thing that seems to be blocking me were removed, would I actually progress — or does the blockage serve a function I haven’t understood? Finally, design a test: what would I observe if the new framing is correct versus incorrect?”
Recipe 2: The Innovation Engine
Use when: You need a genuinely novel idea — not a recombination of existing ones. You’ve exhausted the obvious approaches within your domain.
When NOT to use: When the conventional approach hasn't been tried yet, or when proven patterns are available and untested. The simpler heuristic: "What has worked in similar situations?"
Step 1: Analogical Reasoning via Distant Domain Import, through the lens of far-transfer domains. “Find structural parallels in three domains nobody in my industry would think to look at: biology, music, and urban planning.”
Step 2: First Principles via Regressive Abstraction — decompose the most promising analogy. “Strip the analogy to its structural core. What PRINCIPLE is operating, separated from its original context?”
Step 3: Dialectical Synthesis via Both/And Reframe, through the lens of real-world constraints. “Hold the imported principle in tension with the real constraints of my situation. What emerges when the abstract insight meets concrete limitations?”
Step 4: Falsification via Pre-Mortem. “Assuming this idea was implemented and failed spectacularly — what went wrong? Which failure mode is most likely?”
Abandon when surprised. If an analogy triggers an unexpected connection — something that doesn’t fit your current framing — follow that thread before completing the recipe.
The prompt:
“I need a genuinely novel approach to [X]. First, find structural parallels in three completely unrelated domains — biology, music, and urban planning. Then strip the most promising analogy to its core operating principle using regressive abstraction. Then hold that principle against my actual constraints using Both/And reframing and tell me what emerges from the tension. Finally, run a pre-mortem: if this idea were implemented and failed, what went wrong?”
Recipe 3: The Blind Spot Finder
Use when: You suspect you’re missing something important but don’t know what. The metrics look fine but your gut says something’s off. Other people seem to see something you can’t.
When NOT to use: When the problem is clear and the path forward is obvious. When seeking blind spots would be procrastination disguised as thoroughness. In these cases, the simpler heuristic is: “What’s the one thing I’m avoiding?”
Step 1: Systems Thinking via Unintended Consequences Tracing, through the lens of Second-Order Effects. “Map every second-order effect, hidden incentive, and invisible feedback loop — especially the ones nobody is tracking.”
Step 2: Perspective Simulation via Strongest Possible Objection, through the lens of shadow perspectives. “Identify the perspective I’m most allergic to engaging with. Build its strongest case. The thing I most want to dismiss usually contains my biggest blind spot.”
Step 3: Abductive Reasoning via Surprising Absence Detection, through the lens of Goodhart’s Law. “Are the metrics I’m watching actually measuring what matters? What would explain the gap between my metrics looking fine and my gut saying something’s off? What’s conspicuously absent that should be present?”
Step 4: Analogical Reasoning via Negative Analogy, through the lens of Historical Precedent. “Has another field encountered this exact blind spot pattern? What did they learn that I can apply before I learn it the hard way?”
The prompt:
“I’m working on [X] and suspect I’m missing something. First, trace every second-order effect, hidden incentive, and invisible feedback loop. Then identify the perspective I’m most tempted to dismiss and build its absolute strongest case. Then check: are my metrics actually measuring what matters, or could they look fine while the real situation deteriorates — and what should be present that isn’t? Finally, find historical cases from other fields where this exact blind spot caused serious problems.”
Paid subscribers receive dozens of recipes at the bottom of the article.
Recipes Are The Difference Between Asking AI To Think and Helping It Think Better
Most people prompt AI the way they think: with one or two operations and one or two lenses. “Analyze this.” “Give me a strategy.” “What should I do?” These prompts activate AI’s default processing mode and produce competent but unremarkable analysis.
A recipe prompt activates multiple operations in sequence, each one compensating for the blind spots of the one before it. The resulting output isn’t just “better” — it’s a different KIND of output.
Consider the difference:
Default prompt:
“How should we improve employee retention?”
Recipe prompt:
“I want to understand employee retention at a deeper level.
First, through the lens of self-determination theory, use the Zeroth Principle to decompose retention — what fundamental needs, when met, make someone stay?
Then, treat our current retention data as a surprise: our investment is up but retention is flat — use anomaly hunting to generate a hypothesis that would make that non-surprising.
Third, through the lens of Goodhart’s Law, use surprising absence detection to check whether the metrics we’re using to measure retention success have become targets distorting behavior.
Fourth, use steelmanning to build the strongest case that our most effective retention initiatives are actually the ones we’ve been cutting.
Fifth, through the lens of the Lindy Effect, use historical analogy to identify which retention factors have remained constant across 50 years of research regardless of economic conditions.”
Five operations, three lenses, in a sequence where each step builds on what the previous one revealed.
The output isn’t a better list of retention strategies. It’s a reframing of what retention IS: exposing why current efforts aren’t working, surfacing dismissed perspectives, and identifying time-tested fundamentals. It will change how you think about the problem, not just what you do about it.
Here’s what makes the benefits of this approach compound: once you see how different operations produce different types of insight, you start noticing which operation is missing from every analysis. You read a consulting report and think “they mapped the system beautifully but never falsified their core assumption.” You catch your own thinking mid-stream and recognize “I’ve been deconstructing for an hour — I need to switch to generating.”
That awareness transforms thinking from an unconscious habit into a deliberate craft.
The high-leverage move isn't learning more lenses. It isn't even learning more operations, though that helps. It's seeing the whole architecture — understanding that you have tools of different types at different layers, and that the magic happens when you combine them deliberately.
You Don't Need to Memorize Any of This — Here's the Shortcut (Written By Michael)
Every prompt you’ve ever written has been shaped by this invisible architecture. Now you have vocabulary for what’s happening and levers to change it.
What’s beautiful about this moment is that you don’t have to spend hundreds of hours memorizing and applying mental models. You can shortcut the process by simply using this article as “synthetic data.”
Here’s what to do:
Copy and paste the entire article (including the bonus section below)
Prompt the following:
“How would you address [XYZ] problem using the approach of the attached article? First, identify the relevant lenses, operations, and recipes. Then run them in the optimal order. Share your thinking every step along the way.”
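If you’d rather run this programmatically than paste it into a chat window, here’s a minimal sketch using the Anthropic Python SDK. The file path, model string, and example problem are placeholders, not part of the original system; adapt them to your setup.

```python
# Minimal sketch: load the article as "synthetic data", then run the recipe-style prompt.
# Assumes the article is saved locally and ANTHROPIC_API_KEY is set in your environment.
import anthropic

client = anthropic.Anthropic()

article = open("mental-models-article.txt", encoding="utf-8").read()  # placeholder path
problem = "Should we sunset our lowest-margin product line?"          # placeholder problem

prompt = (
    f"{article}\n\n"
    "How would you address the following problem using the approach of the attached article? "
    "First, identify the relevant lenses, operations, and recipes. "
    "Then run them in the optimal order. Share your thinking every step along the way.\n\n"
    f"Problem: {problem}"
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whichever Claude model you have access to
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text)
```

The ordering is the only thing that matters here: the article goes first so the taxonomy is already in context before the model starts reasoning about your problem.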
With that taste of what’s possible, to supercharge your AI in EVERY interaction, follow the instructions below.
BONUS FOR PAID SUBSCRIBERS:
The AI Reference Library: 100+ Lenses, Operations, and Recipes (Your AI's New Vocabulary)
What follows is the complete taxonomy: ~90 lenses organized across 10 categories, 9 operations with their full move libraries (72 moves in total), 40 recipes in 10 categories, and the architectural relationships between them. This isn't meant to be read in one sitting. It's a reference. More importantly, it's the synthetic data that powers the approach described above. When you paste this article into Claude or ChatGPT, this section is what gives the AI the vocabulary to think with you rather than for you. Browse what interests you. Bookmark the rest. It'll be here when you need it.
In addition, I’ve created a Mental Model Skill for Claude, which will allow you to draw on this complete taxonomy of mental models effortlessly whenever you need it, no prompt required.

