Blockbuster Blueprint

1.5-Hour Artificial Super Intelligence Book Group On December 23 At 11:30am EST

Michael Simmons
Dec 19, 2025

This week, I published a 13,000-word Manifesto. I think it is critical for anyone interested in the future of AI to read it…

The Largest Religion In 10 Years Won’t Be Christianity, Islam, Buddhism, Atheism

Michael Simmons · Dec 15

Purpose Of The Book Group Discussion

The New Year is the time of year when we take a step back and reflect on:

  • Where we are

  • Where we want to be in the future

As AI continues to make profound progress in the coming years, it will fundamentally disrupt every field of knowledge work. Right now, it just feels like a nice-to-have. But if it continues at the same pace, it will inevitably become a must-have.

Most knowledge workers focus on how they can use AI to be more productive in the short term. This is important. It’s where most people should spend their AI time.

But, I also believe that it’s important to spend at least 10% of your time thinking 3-5 years ahead.

Don’t you?

This strategic thinking is especially important with AI, because one year in AI time is like five years in normal time. What AI will be in 3-5 years will be completely different from what it is today. As a result, many of the things you do with AI today will not be relevant in five years. Different skills, mindsets, and strategies will matter. So, if you want a strategy that will work for years to come, you need to think longer term too.

That’s where Artificial Super Intelligence comes in. This is the north star of where all the major AI companies are focused.

In The Intelligence Age, Sam Altman says,

“This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”

Ilya Sutskever (co-founder of Safe Superintelligence and co-founder of OpenAI) is widely regarded as one of the world’s top AI researchers. And that’s where he is focused as well:

“Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

So, if you want to understand how to thrive in the future, you need to understand what Artificial Superintelligence is and its implications.

That’s what our call will be about.

Agenda

During this call:

  • I will share the most interesting stories and ideas from the 13,000-word Manifesto.

  • I will share a hand-picked selection of the 20+ curated video clips that appear in the Manifesto.

  • We will discuss these stories, ideas, and videos together.

The Manifesto took me 75+ hours to write, and this 1.5-hour call distills it down to the best nuggets. I’ve done all the work; the call delivers it to you in condensed form.

And, it will be fun.

Save The Date

  • Date: Tuesday, December 23

  • Time: 11:30am-1:00pm EST

Who Can Attend

The live call and on-demand recording will be available exclusively to paid subscribers. Becoming a paid subscriber is just $20/month or $149/year. When you become a paid subscriber, you get the full Manifesto (not just the first two chapters) and $2,500+ in other perks.

How To Join The Zoom Call

This post is for paid subscribers
