WhatsApp Group Launch Day Is Today / Free Full Preview
Two weeks ago, I announced the launch of a private AI WhatsApp group for basic paid subscribers. To summarize the offering:
I spend 3+ hours per day consuming super-curated, high-quality AI podcasts, X posts, and newsletters.
Most of the best clips never make it into the long-form articles in this newsletter.
I’ll share the best clips in the WhatsApp group to save you research time.
It’ll also be a way for us to connect on a more personal level.
With that said, I’m excited to share that today is the official launch day!
As a preview of what I’ll be sharing in the WhatsApp group, I’m publishing today’s content on Substack.
To get access to the group, simply become a basic subscriber ($20/month or $149/year):
And then follow the instructions below.
I hope you enjoy.
Why ‘Vertical Agents’ Might Be The Big AI Opportunity You’ve Been Looking For (via a16z podcast)
Contents: Episode Description | Why I Liked This Episode | My Favorite Clips | Full Episode Link | Full Transcript | How To Access The WhatsApp Group
EPISODE DESCRIPTION
Podcast Info
The a16z Podcast is the official podcast of the venture capital firm, a16z.
Episode Info
In this episode, a16z general partners Erik Torenberg and Martin Casado sit down with Aaron Levie (CEO, Box) and Steven Sinofsky (a16z board partner; former Microsoft exec) to unpack one of the hottest debates in AI right now. They cover:
Competing definitions of an “agent,” from background tasks to autonomous interns
Why today’s agents look less like a single AGI and more like networks of specialized sub-agents
The technical challenges of long-running, self-improving systems
How agent-driven workflows could reshape coding, productivity, and enterprise software
What history — from the early PC era to the rise of the internet — tells us about platform shifts like this one
The conversation moves from deep technical questions to big-picture implications for founders, enterprises, and the future of work.
Follow Guests On Social Media
Find Aaron Levie on X: https://x.com/levie
Find Martin Casado on X: https://x.com/martin_casado
Find Steven Sinofsky on X: https://x.com/stevesi
Related X Posts From Podcast Guest Aaron Levie
I particularly recommend Aaron Levie’s X account. He posts deep thoughts on AI in the enterprise every single day. Here are a few of his recent great X posts related to the topic of this podcast episode:
Subscribe To The a16z Podcast
WHY I LIKED THIS EPISODE
In 2023, the consensus was that the future of AI would be one huge, general-purpose AI with unlimited context, IQ, and the ability to recursively self-improve. This possibility fueled narratives around AI changing the world overnight.
When you look at the reality over the last two years, this narrative isn’t playing out.
Instead, the exact opposite is happening for several reasons:
Cost: The cost of running AI grows quadratically as its context window (its working memory) grows.
Context Rot: AI performance degrades as the context window fills up.
Jagged Frontier: The evolution of AI performance isn't smooth. It's becoming superhuman at some tasks while stalling in others.
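The cost point above can be sketched with a toy calculation. This is a rough illustration of the standard explanation, not a benchmark: the function name and numbers are mine, and it models only the attention score matrix, ignoring everything else in a real model.

```python
def attention_cost(n_tokens: int, d_model: int) -> int:
    """Rough cost of one self-attention pass.

    Every token attends to every other token, so the score matrix
    alone has n_tokens x n_tokens entries, each costing roughly
    d_model multiply-adds. Cost therefore grows with the square of
    the context length.
    """
    return n_tokens * n_tokens * d_model

base = attention_cost(1_000, 64)
print(attention_cost(2_000, 64) // base)   # 4: doubling context quadruples cost
print(attention_cost(10_000, 64) // base)  # 100: 10x context means 100x cost
```

This is why simply giving a model "unlimited memory" is not a free lunch: each doubling of context quadruples the attention bill.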
As a result of these factors, we're currently witnessing an explosion of specialized, narrow AI agents, mirroring the historical pattern by which technologies specialize over time.
This episode takes a deep dive into the topic with experts who have a profound understanding of the history of enterprise technology.
The inspirational takeaway is that there is an enormous opportunity to create specialized vertical agents based on your expertise and then turn those agents into a company.
Caveat: While the full-on Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) narratives aren't currently unfolding, that doesn't mean they won't in the future as AI research companies learn how to build more effective AI systems.
MY FOUR FAVORITE CLIPS
Experts Get The Best Returns On AI
History Lesson On How Technology Changes Workflows
History Lesson On How Technology Products Specialize Over Time
The Case For Vertical Agents In Every Niche
#1. Experts Get The Best Returns On AI
#2. History Lesson On How Technology Changes Workflows
#3. History Lesson On How Technology Products Specialize Over Time
#4. The Case For Vertical Agents In Every Niche
FULL EPISODE
FULL TRANSCRIPT TO COPY & PASTE INTO AI IN ORDER TO GO DEEPER
(00:00) We thought that we were looking at the form factor of AI, which is you're talking back and forth to something. The real ultimate end state of AI and thus AI agents is these are autonomous things that run in the background on your behalf and executing real work for you. The more work that it's doing without you having to intervene, the more agentic it's becoming.
(00:19) Somehow it produces output that it feeds back into itself. It's literally just the ampersand in Linux, which is it's a background task. Okay. And it's like the worst assistant in the world. And agentification is just hiring a lot of these really bad interns. I thought I'd start this wide-ranging podcast by asking the very simple but very provocative question, what is an agent? Oh boy. To who? Steven. Okay. Exactly. Go for it.
(00:49) So, so I I actually have a very old person view of what an agent is, which is it's literally just the ampersand in Linux, which is it's a background task. Okay. Because like you type something into o3 and then it's like, "Hey, I'm trying this out. Oh, wait. I need a password. Can't do that." And it's like the worst assistant in the world.
(01:08) And really, it's just because they need to entertain you while it's taking a long time to answer your prompt. And so that's my old person view of what an agent is, and agentification is just hiring a lot of these really bad interns. The interns, they're getting better.
(01:28) They are getting better, but they still don't remember if I have a password to Nature, you know, like it's just Is it possible you guys just had bad interns in like the '80s and '90s? We were We had terrible interns. I have like a very high esteem for interns. So, but now a real answer. No, no. I mean, uh I I think I I think collectively we're seeing what what these are becoming.
(01:45) So if you think about two years ago the you know post-ChatGPT moment we we thought that we were looking at the form factor of AI which is you're talking back and forth to something and I think to Steven's point the real you know ultimate end state of AI and thus AI agents is these are these are autonomous you know uh uh things that run in the background on your behalf and executing real work for you and you're ideally in a in an ideal world interacting with them actually relatively little uh relative to the amount of value that they're creating and so so there's some kind of, you
(02:14) know, metric where the more work that it's doing without you having to intervene, the more agentic it's becoming. And and I think that's that's sort of the paradigm that we're seeing. Yeah. The only addition I'd have in addition to long-running, which I agree, is that somehow it produces output that it feeds back into itself as input, which you can actually do long-running inference. Like you can make a video that's really long-running, but it's just basically a single shot video and you just throw more compute at it. I
(02:39) I think there's like technical limitations. um you know if you start feeding the input back in because we're not quite sure how to contain that too and so you know I think you can do it I think you can measure things based on how long they run and you could also measure it by how many times it's actually taken its own guidance which would be kind of more of an agency.
(03:02) Yeah, because I think I do think it's important that in this transition, look, we are what Aaron described is where we're going to be. It it's it's just that what are the interesting steps that happen along the way because we are going to need for the time being it to stop and say, am I heading in the right direction or not? because you you really putting aside all the horror stories about you know taking um action without consent and using accounts and data or whatever there is this thing where like you just don't want to waste your time on the clock while it's churning away way off in the wrong direction. Yeah. So the question is to what extent do they have their own agency which to me means they've spit something out and
(03:35) they've kind of consumed it back up again and it's still a sensible thing which by the way as you start thinking of these things in distribution it's actually a very difficult thing to do because it doesn't know if it's going to be spitting something out that's still in distribution when it brings it back in like they don't have that self-reflection so I I think there's actually a very kind of technical question here of to what extent we can make these things have independent agency but we can make them long run pretty easily. Yeah. Yeah. We're good at the long run. The long running you get back is Yeah.
(04:00) Yeah, I mean I think the um uh the interesting thing is how the ecosystem is sort of solving um or or mitigating then you know the issues like you're seeing sort of this logical division of the agents. So they might be long running but they're not actually trying to do everything and so so the more that you subdivide the tasks out then actually the more that they they can go pretty far on on a single task without without getting kind of totally lost on what they're what they're working on. Well, Unix is going to prove to be right, which is like
(04:30) you're gonna you're gonna want to break things up into much smaller granularity and tools. And I think to other points that you've made on X, like you're going to want to divide things up so that it's like an expert in this thing. Yeah. And and then it might be a different, let's just say, body of code where you go and ask like ask, you know, are you good at this thing? Let me get your answer on on this part of the problem. Yeah. Um it's it's kind of interesting.
(04:55) I I don't know um how much you've plotted this but like the conversation on AGI has sort of evolved you know very clearly in the past like six months and and I think that the consensus was maybe not even consensus what what some of the view was let's say two years ago was is this sort of monolithic system that's just super intelligent and it solves you know all things and now if you kind of fast forward to today and let's say whatever we agree kind of state-of-the-art is it's sort of looking like that's probably not going to work and um for for a variety of reasons at least in in today's architecture. So
(05:28) then what do you have is maybe a system of many agents and those agents have to become very very deep experts in a particular set of tasks and then somehow you're orchestrating those agents together and then you know now you have two different types of problems. One has to go deep the other has to be really good at orchestration.
(05:45) Um and and that maybe is is how you end up solving you know some of these some of these issues over the long run. I I just think it's very difficult to think cleanly about this. Like I've still yet to see a system where you you they perform very well and you don't draw a circle that doesn't have a human being in it somewhere. Oh yeah.
(06:02) So in a sense like the G like often seems to be coming from like the general seems to so like I just listen these things are tremendously good at increasing productivity of humans. At some point maybe they'll increase productivity without humans but until then it's just very hard for me actually to talk clean. Well, and it's just it's it's so important for people to get past sort of the anthropomorphization of AI because that's what's holding everybody back.
Like AGI is about robot fantasy land and it and that leads to all the nonsense about destroying jobs and blah blah blah. None of that is helpful because it you have to then you dig yourself out of that hole to just explain, wow, you know, it's really really good at writing a case study, right? Which like it writes a better case study than all the people that work for you, but it doesn't know who to write it about.
(06:46) It doesn't know what necessarily you want to emphasize. It doesn't know what's what the budget is, what's needed, how many words, right? But it also turns out like AGI just does an awful lot of work. You know what? So, for example, someone asked me recently and they say, "Well, um, you know, are you worried that like if we have uh AGI, then you'll no longer be investing in software companies?" I'm like, "Well, I mean, you're AGI, right? I'm still investing in software companies, right?" And so, like, just because you're AGI says nothing about economic equilibrium or economic feasibility, etc. So, like just the term
(07:16) AGI does basically infinite work for every kind of fear we have and maybe every hope that we have. And the more we tie it down to like not only it solves a class of problems, but the economics pencil out yes or no, we can actually have a more sensible discussion, which I actually I think is finally entering the discourse.
(07:32) I think we're actually talking a lot more sensibly now than we were a year ago. And so when you hear when people say things or the AI 2027 paper when they talk about sort of automated research or recursive self-improvement does that feel like fiction or fantasy or does it feel like or is it thinking that even with those things we're you know sort of nowhere near um you know peak software and there would just be unlimited uh sort of demand. I think you got to go first for each question. I don't want to stop.
(07:58) I need you to anchor us in reality and then and then we can deviate. Well, I look I I think that first I I would not I'm just not a fan right now of buying any into anything by year because whatever year you want to buy into I in that in 2027 we're just going to be having a fight over what we meant by by the metrics and it just turns into like OKRs for an industry which is just like a ridiculous place to be.
(08:26) But but I I I think that everything takes 10 years and and the thing is but you can't predict anything in 10 years. So, how do you even reconcile that? And and I think that it there you you know, you just have to re recognize that the we're on an exponential curve. So, no one's predictive powers work, right? And and it's it's just going to keep happening. It's not going to plateau.
(08:49) It's not going to, you know, all of a sudden we're done. And that's what makes this a different kind of platform shift. Like you if you just you look at the progress and that's the same that went through with storage that went through with bandwidth that went through with productivity on on computing on on connectivity around the world like you because it's exponential you can't predict it and it's just folly to sit around and try to predict. Now you could do science fiction and you could say in the future when we all have our personal AI with all this other stuff and then
(09:15) that's great but then you say it's going to happen in 2029 you're an idiot. Yes. And so that that sounds totally correct, right? Because basically uh three years ago you would not have been able to conceive of Claude Code. So or Cursor or you know name your your background agent writing code.
(09:36) So it's like what is the point of having some date at which you're you're naming something? And um and so we've actually seen probably vastly more progress in the past just two years of of actual applied AI than we would have thought. And yet does it matter that one or two of the predictions didn't play out? Like no.
(09:56) Um, so, so I think it's probably more interesting to think about like where is the technology from more of a classic Moore's law standpoint and like how much compute do we have, how much data are we working through, um, how powerful these models. I mean, just let me ask you like as semi- old like well I mean like like nobody after AI collapsed and machine translation and and machine vision failed. Yeah. there.
(10:20) You couldn't find anybody who thought that those would become solved problems or like or after neural nets imploded and like literally you were teaching or expert system or expert system but you were teaching and like like if you tried to teach neural nets like the students would rebel because you were wasting everybody's time you know in in like in 1999 like Hinton couldn't get funded trying to to do neural nets.
(10:44) I I took like I grad school was this three volume history of artificial intelligence thing. Neural nets was like eight pages. You know, ironically, I remember when ML was the cool thing and neural nets was the old thing and now like you know ML is like the old thing and neural nets are the cool thing, right? or NLP like and so so the fact that like so we will return to all of these problems that couldn't be solved like even like this the everyone's favorite one oh it doesn't understand math right like okay that is a solvable problem because math is solvable like there's
just no one put the math layer in to understand what a number was and to you know hardcode it and just build in an expert system for math which is actually a well understood thing because we've had Maxima since like 1975 you know I I think it's important to like maybe for us to describe how hard it is to predict anything, right? So let's take recursive self-improvement. This is one of my favorite ones.
(11:38) So the theory of recursive self-improvement is you have a graph where you have a box which is the thing and then there's an arrow that goes back to the box which says improve and then of course you look at that and you're like works right. So I guess you know like from an intuitive lay perspective every time you have a box with an arrow back in it you're like okay we're we're done.
(11:58) Right? But like if you know anything about nonlinear control theory, answering that question is one of the most difficult questions that we know in all of technical sciences, right? Like does it converge? Does it diverge? Like does it asymptote? Right? So for example, you could recursively self-improve if you're doing basic search, but you asymptote, right? And so like saying recursive self-improvement from like a deeply technical perspective says almost nothing. Mhm.
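Casado's point that "a box with an arrow back into itself" says nothing about convergence can be made concrete with two toy feedback loops. This illustration is mine, not from the episode: both loops "self-improve" at every step, yet one asymptotes while the other diverges, even though the diagram is identical.

```python
def run_loop(step, x=1.0, iters=50):
    """Feed each step's output back in as the next input."""
    for n in range(1, iters + 1):
        x = step(x, n)
    return x

# Loop A: each round of "self-improvement" adds half the previous
# gain. It improves forever but never passes 2.
asymptoting = run_loop(lambda x, n: x + 0.5 ** n)

# Loop B: each round compounds on the last. Same box, same arrow
# back into the box, but it blows up.
diverging = run_loop(lambda x, n: x * 1.5)

print(asymptoting < 2.0)  # True: bounded despite improving every step
print(diverging > 1e6)    # True: unbounded with the same feedback shape
```

The shape of the feedback diagram tells you nothing; only the dynamics inside the box determine whether improvement compounds or stalls.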
(12:23) It says, but but but unfortunately because we tend to anthropomorphize AI, we say recursive self-improvement and all of a sudden we're like and then it like overcomes energy boundaries and human intelligence. Well, that's how it goes from being a toddler to being like an 8-year-old. It just because it it figured out, right? And so, I mean, the reality is like nonlinear control systems, which are feedback loops that are adaptive, we don't even have the math for for for relatively simple systems to understand what happens.
(12:46) You have to actually know the distributions that come out and go into them. And so these things are going to improve. They're going to continue to improve. Maybe they'll improve themselves, but just because they do improve themselves doesn't mean they can continue to do it. And this is kind of part of this entire journey as we're learning about these systems.
(13:02) Again, the good news is I think we're talking a lot more sensibly now than we were a year ago. And hopefully that will continue. I hopefully hopefully the discourse can recursively self-improve. So we're just more Well, the good news is that's involving humans. So we don't actually model. But I I thought that I mean you you must be seeing this even with with customers.
(13:18) I mean like take the conversation about like hallucinations and things like that. How how dramatically that's altered in just the past two years say. Yeah. In in on two dimensions actually. So on one dimension the the problem of hallucinations has improved.
(13:36) So the as the models get better as our understanding of how do you you know whether it's rag or whatever what you know even the even the the problem of uh of of actually the efficacy of the context window has has improved. So you have the technical improvements um you know kind of across the stack and equally you have a kind of a cultural understanding to some degree within the enterprise as to like okay actually know these are these are non-deterministic systems they're probabilistic.
(14:00) So, so you're starting to see almost a culture shift which is okay, uh you can you can actually uh implement AI in in essentially more and more critical use cases because the employees that are using those systems understand that they do actually have to do the work to verify it.
(14:18) And then the only question is is what is that ratio of of time it took to verify versus if I had done it myself and how much efficiency gained for whatever that workflow is. Um but we are we're going from probably like two and a half years ago where there was you know this instant excitement as as to oh my god this is going to be the greatest thing of all time to a reality check within 3 to 6 months because everybody was like hallucination is going to be the the massive you know kind of problem to now a couple years later after that which is like okay like we're we're seeing the hallucination rates shrink
(14:49) we're seeing the quality of the outputs increase and we understand that you do have to go and review the work that these AI you know agents are doing and that that takes on a different form based depending on the use case.
(15:01) So in the form of coding that means just like you just had to go review the code in the in the uh which you had to do anyway. Seems people seem to be forgetting you you had to do anyway but but like there was probably at least a little bit of like theory as to like what part you you should go review with extra level of detail because you kind of knew the person you were working with.
(15:20) It also implicitly limits the value of AI which people are uncomfortable with which it just basically says it helps people that will know more than the AI does. And as soon as it knows more than you know like it starts to actually kind of bisect the utility. Yeah. Yeah. Basically it's it's super interesting which is the experts are now becoming the the productivity of an expert is is outpacing everything else which is which was this you know I think we could have probably predicted it based on historical events and I think you've got some good theories about how you know the skill the the type of skills um that
(15:44) that are that you know that kind of the right user for these models for the kind of use case. So we're seeing that, you know, where the expert engineers are like like I don't mind that it's a slot machine where I'm pulling it and I see what comes out because I know I can I can still get 10x productivity. It gives me good ideas and I get it good enough that that that it's worth that that productivity gain.
(16:03) Whereas if you were like not an expert engineer and you did this slot machine, you probably would try and go and deploy, you know, all the ones that were also wrong and you don't actually don't know which lever to pull, which is a big thing is like literally knowing the like what to ask for and what language to use.
We'll get to a better Well, that I I think that this is just an incredibly important point that you're making and it it really gets to the heart of what it means to use a tool. Like you know, you put me in front of like a 12-inch chop saw and say like go fix the fence. Really really bad idea. I mean, I could go buy one. I I could go cruise the hungry and I'm like, "Dang, man. I would have a DeWalt and I could buy it, but it's really not a particularly good idea, right?" And and I think that how these platform shifts happen and why there's so much excitement over coding is that well, the best way for a platform shift to take hold is it's the the experts that are the the closest you have to an
expert in the new platform. It's who becomes the most enthusiastic. Yeah. And the biggest users overall. Like I I've been practicing yoga over at um the Cubberley Community Center in Palo Alto because the studio is closed for a remodel. But what's neat is that was like the OG place for computer clubs like in the early 1990s and the late 80s like if you ever wanted to meet the computer and you would go and like this is like Halt and Catch Fire like you and it's like like a bunch of people with soldering irons and and like they're that's who and and you know when a when it didn't work when
(17:32) something was broken that wasn't like oh man these things are terrible I'm wasting all my time. That was like the whole meeting was like who could get like one of these new discrete graphics cards to actually work and debug the driver? Can anyone print? Is there anyone in this room who can print in this new thing called PostScript? And I I think that's what's really happening right now.
(17:55) And so first it's obvious it should happen with development and coding first because they're the most forgiving. Yeah. And the most understanding of like what's a bug, what's a thing that can never get fixed. And the thing to watch for is no one is saying that coding can't get fixed, right? Like whatever it's been generating that's bad for like a 2x coder rather than a 10x coder, no one is saying, "Well, that'll never be fixed." Right.
(18:19) And then the next thing that's going to happen is going to be what I think is just going to be like the the creation of words like the the marketing document, the positioning document, all of this long form stuff where if you're really good at that job, you you can you know the right questions to ask, you know what looks good and then you can you can get really domain specific like on the next you know the next level is like oh I need to understand like um a competitor which then is using real information from the internet in real time not just statistical and then you're like They already know what the competitor does, right? Like they're they're And then my
(18:47) favorite scenario is the one that just constantly just has these aha moments is attack this thing I just wrote. I'm not interested in you getting adding m dashes and making it a little bit better. I just want to know what did I miss? You said one I think recently on this last one about like here's my earning statement.
(19:05) Yeah. the the for people that's the thing you read that you read after to the analyst that now like attack it like an analyst and there's like 6,000 hours per company of analyst questions. it knows what they're gonna they only ask like three questions anyway expense line you know and I I feel like this is the thing that do not watch this if you're an analyst at this and this is not any advice about being an analyst but I this is what what's really going to happen with with writing and then it's going to happen with PowerPoint and slides and then it's going to happen with video and it
(19:40) but it's but it's really important to call out um which is you're getting the consensus mean response and so in the limit it it's offloading a lot of kind of busy work if you're a professional like you're a professional you actually know all of these things you just don't have the time to go through all of it and you may not remember it.
(19:59) So so in a way it's it's it's it's productivity helpful but it's not you know solving some problems that where you know you are a particular expert in this is maybe why for those that are non-expert it's a little bit more threatening because it can do that job. Yeah. Well, the the maybe to to bridge a view and probably throw in a different tangent like so so Stephen you're asking like so where is the enterprise now? So so that was the coding piece.
(20:24) I think what you're seeing this is you know kind of clear understanding which is okay the what I'm going to get out will be correlated to what I put in. So how precise I put the prompt what what like like I think prompting doesn't go away anytime soon simply because the leverage you get on the set of instructions you're going to give the AI at the start is still going to be massive.
(20:41) So what if the prompting went away? What would you end up with? Well, I mean two years ago, I think the like people were like like you'll just tell the AGI what you wanted to produce. Oh, just there's just one prompt. Like you unbox it and you say go do something a be a software engineer.
(20:58) No, literally that was like that was like an open debate and it was like no, you're probably missing the fact that what is in my head is going to be unbelievably germane to the thing that I'm trying to produce and like I have to somehow give you that context. Like there's no world where you have that context without me telling it to you. And now you're seeing it like you're seeing these incre in incredibly unhinged prompts which are like pages long. Yeah.
(21:16) And the output you're getting from that is actually like way better than if you didn't give it that context. So I think there's a clear understanding of of that side on the enterprise use cases and then a clear understanding that you've got to go and review it.
(21:29) And then and then on on this point about like well you know you know what what is I just have to say we forget that formal languages came out of natural languages for a reason. We didn't start we didn't like start with like we didn't start with like formal English like oh it's much easier to speak in English just speak in English. is the opposite is like we have this natural language we're like it's very tough to convey the information that I want to you and I are both experts we understand the solution space so let's communicate more efficiently right so to think that this somehow wouldn't happen and that's what jargon is
(21:53) of course like jargon is just a formalized way that people who have domain expertise talk to each other so so the thing that that is kind of kind of the most like fun to kind of think about right now at least is and and maybe you could give us a little history lesson on this in in kind of interesting parallel So when does the style of work change because of the tool versus the tool sort of adapted to the style of work? And so what I'm starting we're like only in day one of this.
(22:23) But what I'm starting to see kind of some some patterns emerge which is we thought agents would go and learn how we work and then automate that and then the question and so basically agents conform to how we work. The question is when is the moment when we conform to how agents are best used? Yeah. And you're you're seeing this in a couple areas.
So you're seeing this in engineering to start with which is like people are saying okay I'm gonna have agents and then sub agents for parts of the codebase and then I'm going to give them kind of README files that the agents read and then and then I'm going to actually optimize my codebase for the agent as opposed to the other way around in other forms of knowledge work.
(22:58) So within how we use Box, with our AI product, you're starting to see people basically tell the agent its complete job, and the agent is almost dictating the workflow, as opposed to just mapping onto the existing workflow.
(23:15) So I don't know what the history is on this — when does the work pattern itself shift because of what the technology is capable of? I think where this goes has to be some version of that: it's not going to just be that agents plop into how we currently do our work and then automate everything.
(23:34) I think you start to change what the work itself is, and then agents go in and accelerate that. — Well, as important as that is, it's actually more important than that. What happens — to reuse the word in a different sense, this anthropomorphization of work — is that the first tools actually anthropomorphize the work. And this is every single evolution of computing.
(24:00) I mean, how long did it take for Steve Jobs to get rid of the number buttons on a smartphone? They still had number buttons. Or look at cars: until Elon got rid of all the controls, everybody kept all the controls. — I don't want to get into that fight.
(24:18) But what happened with every technology shift — look at what accounting software looked like in the '60s, before IBM said stop. We all use double entry, but we need people skilled in how computers can do the accounting, not how people can. Because we're never going to figure out how to close the books if we have to automate this whole room of people in green eyeshades running a manual process based on how far apart the desks were. And everything that happened with the rise of PCs and personal productivity started off the same way. I always use this example because I've watched it
(24:50) happen five times now. On the first PCs that did word processing, the biggest request was: how do I fill in expense reports? So this whole world grew up of tractor-fed paper pre-printed with the expense report, and then we wrote all this software — are you using an Avery 2942 expense report, or a New England Business Systems A397? — and you had these adjustments in the print dialog, 0.208 inches, and you moved little things around, and then you would
(25:24) print out "ate dinner, $22," and that was all you printed. Then someone said: you know, we could use the computer to actually print the whole thing. Fast-forward, and finally Concur said: why not just take a picture of the receipt, and then we could do all of it? And then the whole thing gets inverted — and every single business process ended up being like that.
(25:51) And then there are things that really do change the tools. When email came along — it used to be that to prepare an agenda for a meeting, somebody would open up Word, type in all the items, print it out, and everybody would show up to the meeting with this very well-formatted document. Then email came out, and that whole use case for Word just evaporated. Yeah.
(26:14) And an email agenda became no formatting, nothing — just "here are the eight things we're going to talk about," and you show up and everybody asks, "did you get the agenda?" — You know what's interesting about the AI one: we're seeing the same thing, but vis-à-vis AI. Nobody really predicted the generative stuff, and we've had AI for a very long time.
(26:32) We had chatbots; there have been these AI-shaped holes in the enterprise for a long time. And a lot of the mistakes we see today come from people taking the generative stuff and trying to cram it into the old models, when it's really a new behavior that's emerging. It used to be that you'd centrally sell AI to some platform team, and they would try to get the NLP thing to work, or the voice thing for support calls — it was very
(26:57) central. A lot of the adoption we see now is much more individual, for example, so I think there's a bit of a mismatch that's getting ironed out. — So I think the question is: are we in the phase where we're trying to graft agents onto what we've been doing for 30 or 40 years of software, and is this going to be the first real step-function shift in what the workflow itself should look like? — Oh, we are. I mean, remember — I tried
(27:27) to jam the internet into Office, right? And it was fun to watch — you were not watching it, but everybody around was trying to jam the internet into their product, because that's the only way you could envision it. And it didn't really work. You'd say: well, where else would the internet go? There's no word processor on the internet.
(27:52) There's no spreadsheet on the internet. And then other people would say: well, let me just try to implement Excel using these seven HTML tags, with no script. That turns out not to be a really good idea either. The best was: let's do PowerPoint. How? You give them five edit controls, tell them they're bullet points, and we'll generate a GIF on the back end and send it back to you as the slide. Yeah. That was not it. So maybe the main point is
(28:16) just the durability of Office. It transcends all disruptions. I like to think it pretty much rises above everything. But that's where we are now. — Do you think — just to dig in a little bit —
(28:33) do you think this is similar to the internet in that it's a consumption-layer change? I always viewed the internet as very much a consumption-layer change: instead of going to my computer, I go to the internet, but otherwise things stayed kind of the same. Whereas AI has this weird quirk: for the first time I can recall, programs are abdicating logic to a third party.
(28:52) We've always abdicated resources, right? Okay, I'll use your disks or whatever — but I'm writing the logic. This time it feels like we're changing the consumption layer — when my son talks to an AI character, he's not going to wellsfargo.com,
(29:11) he's going to an AI character — so that's changing how we interact with the computer. But these programs are also no longer written by a human in the same way. So I feel like the change is maybe a bit more sophisticated. — Oh, I think this is why it's a platform shift and not just an application shift. Each platform shift changes the abstraction layer with which you interact with computing, but it also changes what you write the programs to.
(29:37) Do you remember ever abdicating logic? — Oh, here's a great example of how disruptive this can be. The first word processors in the DOS era — the character-mode era — all implemented their own print drivers and clipboard. So if you were Lotus and you wanted to put a chart into a memo, you couldn't, because you didn't sell a word processor.
(30:02) So you actually made a separate program to produce something the leading word processor could consume. And if you were WordPerfect, your ads said, "We support 1,700 printers" — and you won reviews because you had 1,700 and Microsoft had 1,200. So then Windows comes along, and if you were trying to enter the word processing business, step one was: I need to hire a team of 17 people to build device drivers for Epson and Okidata and Canon printers, because you can't get them anywhere. Microsoft came along and, for Windows, built print
(30:32) drivers and a clipboard — and Macintosh did it too — and all of a sudden there was a way for two applications that had no a priori knowledge of each other to interoperate. Of course, if you were WordPerfect or Lotus, that was a disruption — you got creamed, because controlling your information had been your advantage. And what happened was a bunch of developers said: wow, this is cool. When we did C++ for Windows, I would go to the Cubberley Community Center and show brand-new Windows programmers, in 1990: hey, you don't
(31:08) have to write print drivers, and you can use the clipboard — and literally a standing ovation from the ten people there. And they were more than happy to let data interchange between products, because that was nothing but opportunity for them. They probably felt, emotionally, exactly the way a vibe coder does today: you've just given me this platform. The Programming Windows book was this big, but writing a device driver
(31:38) for an Epson printer was this big, and writing one for a Canon printer was this big. But the paradigm shift is the same: there have been many times we've reduced the amount of work a developer has to do. I just don't remember the programmer ever abdicating logic. For example, SDN did not abdicate logic — I would always say what is correct and what's not correct, right? — I think you undersold it, though. — No, this is the thing, by the way —
(32:06) everybody — Martin invented and worked on that stuff, and it's a big deal. — Maybe we should postmortem your pitch at the time. — Well, no, let me define logic specifically. I am writing an app. My app is some vertical SaaS app for a certain customer base.
(32:31) The answer the app gives is based on logic that I've written — historically, right? If I run it on the cloud, the cloud is not producing an answer; it's providing resources. If I'm using your device driver, it's providing access to a device's resources. But if I say, "hey, large model, tell me the answer here," I'm actually abdicating the application logic. — Maybe you're right. But I think you're almost playing the incumbent, trying to decree that this is abdicating the logic and this isn't — when in fact it really was a huge competitive advantage for WordPerfect, and they didn't want to give it up, and they fought against
(33:06) it. And the next example, of course, is the browser, where people literally gave up rendering. In Windows, or on the Mac, you could rasterize anything you wanted. You wanted a button that you pushed and it spun and animated like a rainbow?
(33:24) You could do that in your product. Then the web came along and you went: wow, I have to use a gray button that says Submit. — Yeah, because the point is we do use a bunch of third-party things. — Well, but it took a long time for those to show up.
(33:42) And early in the internet, the magazines in particular — the printed media — were the ones who absolutely wouldn't go to the internet, because they would not give up their ability to format. And this is another part about the tooling and what's going to happen with AI: a huge amount of the productivity software space today is the preparation of output. Office is basically a format debugger, right? All it is is 7,000 commands for how to do kerning and bold and italic. And it turns out AI not only doesn't care — you can ask it to make whatever you want. You
(34:14) could just say, "I'd like this to be a double-index pie chart." That's not a thing, but you can ask, and it will figure out something that looks like that, and you'll go: cool. And this is where it gets to disempowering the experts — and who's not an expert?
(34:32) When productivity software arose, the big thing about it was that there were people who figured out how to make killer charts. Like Benedict Evans — killer chart guy. Yes. And there were people where every meeting started with "how did you make that chart?" I could be on an airplane and somebody next to me would be making one — so interesting.
(34:51) So in this case, the abdication is actually deciding how to visually represent the data. — Which is absolutely right, because 90% of people never really got to be expert at that task, even though 90% of the tool is about it. — But the programmer didn't abdicate the logic in this case; this is the user. — Well, what's the user and what's the programmer there? In fact, what the programmer was doing was inventing things like wizards, which would make a whole bunch of choices for you —
(35:17) style sheets and so on. So in a sense we were making a bunch of choices for the user, which, to the experts who were tweaking everything, looked like disempowering the experts. There's a Schopenhauer line Steve Jobs loved about how, once you've seen the conjurer, it's not a trick anymore. And I really feel like this is the third or fourth time this has happened just in my lifetime of watching it. — So something that's really caught my attention — because it's the most senior people I know — is that a lot of
(35:49) very senior developers are spinning up lots of background agents — code agents — and they're interfacing at the GitHub PR level, right? And it's not obvious to me why you run a bunch as opposed to one, and it's not obvious why you wouldn't interact directly. So it feels like something's going on here, but I'm not quite sure what, and I would love your thoughts.
(36:06) Well, my read on it — and then I'd throw out what happens next as a result — is that to me it's a bit of an epiphany about what future work design could look like in this world, because engineers, back to the prior conversation, are just the first to experience this.
(36:25) My read, from talking to folks who are all-in on this, is that it's basically the context-rot problem: the more we put in the context window, the more the model gets confused and the lossier the answers get.
(36:44) So you have to have some way to partition what an agent should work on. And we see this in building agents internally. The panacea we might have hoped for is that you just put a million tokens into the context window and then, obviously — So you're saying this is almost a counter-trend to AGI. It's almost the opposite.
(37:03) It is the opposite — but it only works because the models are so good. Yeah. You're giving more things more specific tasks, rather than one thing less specific tasks, right? And I think this is why it's happening. The craziest version of this: I was talking to somebody in startup land, and to your point, they have all these sub-agents — but what's amazing is that they map one-to-one to each microservice in the codebase. So they have an agent per microservice, effectively a README for each agent, and that agent owns
(37:34) the microservice. I don't know the specific number, but say you could have dozens or hundreds of these things going at once, and you're effectively mitigating this issue. If you just said "here's my entire codebase, go run wild" to one agent, it would produce worse and worse code over time, because it's going to have context rot.
(37:56) It's not going to know exactly what you're trying to do in that one area of the codebase — but the sub-agent model seems to be working for that paradigm. — I love this counter-pattern, because everybody says models will get smarter and you'll give them higher-level tasks and they'll run longer. This is a counter-example.
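One way to picture the agent-per-microservice setup just described is a thin dispatcher that hands each task to the sub-agent owning the relevant service, so no single context window ever sees the whole codebase. The keyword routing below is a deliberately naive stand-in (a real system might use an LLM to route), and the service names are made up.

```python
# Illustrative dispatcher for the agent-per-microservice pattern.
def route(task: str, owners: dict[str, str]) -> str:
    """Pick the sub-agent whose service the task mentions.

    `owners` maps service name -> that sub-agent's README/instructions.
    """
    task_lower = task.lower()
    for service in owners:
        if service in task_lower:
            return service
    return "triage"  # no match: escalate to a general-purpose agent

# Hypothetical registry: one sub-agent per microservice.
owners = {
    "billing": "How agents should work on the billing service ...",
    "auth": "How agents should work on the auth service ...",
}
```

Each routed task would then be prompted only with `owners[service]` plus the task itself — which is exactly what keeps context rot at bay in the setup Levie describes.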
(38:12) I want to tweet that, but you have more Twitter followers. — We can collectively do it. — But then the question is: let's assume this works in engineering. You have this interesting dynamic where some coding practices will be pretty different in the future.
(38:29) We've talked about this idea of the individual engineer becoming the manager of agents — that was already a well-understood path, and this is a supercharger of that concept. Then the question is how it translates to almost every form of work. If I'm now the lawyer working on cases, and I can have 20 sub-agents that each work a different case and come back in some kind of task queue that I go through — obviously, the sheer leverage you get is going to be insane. But I do think the way you might organize the work, and what the
(39:03) workflows within an organization are, are inevitably going to change as a result. — I think this gets right to the point that the flow in the workflow has been serialized — linearized — sometimes based on knowledge, but other times on tooling.
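The "twenty sub-agents feeding a task queue" idea can be sketched with nothing more than stdlib concurrency. `work_on` here is a placeholder for a real agent call (say, one long-running LLM session per case); everything in this sketch is illustrative.

```python
# Fan out one worker per case; collect results as a review queue.
from concurrent.futures import ThreadPoolExecutor, as_completed

def work_on(case: str) -> str:
    # Stand-in for an agent working one case end to end.
    return f"draft memo for {case}"

def fan_out(cases: list[str], max_workers: int = 8) -> list[str]:
    # Results land in completion order -- the queue the human
    # (the "manager of agents") then walks through.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(work_on, c) for c in cases]
        return [f.result() for f in as_completed(futures)]
```

The design point matches the conversation: the human's job shifts from doing each case serially to reviewing a queue of results that arrive in parallel.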
(39:28) And what happens when the tooling changes is that you get this realignment of what's truly serial and what's not. If you're planning an event for a company — which is still going to keep happening — you have to book the venue, invite all these people, create all these materials. They're actually not particularly gated on each other, right? But if you have one events person, they're gated. Now an events person can spin up all of these different elements, and they'll come back: I've gotten as far as I can on collateral until I get a logo for this event; I've
(39:59) gotten as far as I can on invites until I get the date, the time, and the venue. There's no reason you can't spin all of those up in parallel. And how does that happen today? Well, if you're a company that uses Box and this is your 58th event, you have a folder called Event, and people copy it to Event 59 with all the stuff in it. And if you think about that workflow, it's exactly what a
(40:27) series of different background tasks or agents could go do. And I think the reason you can already do all that in coding is that there was a natural way to break the work up, because there are a bunch of program boundaries. — But there's also a bit of an indictment here of your ability to give it a high-level goal. It suggests the human needs to give more granular orders — otherwise, to start a company, you'd issue one prompt, go to the beach for six months, come back, and you'd have a full
(40:57) company. — Which is this almost re-anthropomorphizing effect: it turns out we did figure out division of labor. We figured it out in the context of a lot of physical, analog limits that agents clearly won't have — but there's no total free lunch.
(41:21) You still have this context-rot issue, which means you do actually have to subdivide the tasks at some point. So the question is what the right subdivisions are. — It may not even be context. The Occam's razor here is that you need to give them specific instructions for specific tasks; if you give them higher-level instructions independent of context, they just don't know what you want. And this gets back to the formal-language part: at some point, if you tried to use the über frontier model to get the whole thing done, you'd have to tell it the whole thing
(41:50) exactly — and that just seems like a lot of work, whereas you have to tell it less when the part of the model you're using knows more, right? It's basically a different way of thinking about templates, or starting artifacts, or scoping the context in a generic world.
(42:07) Well, it might also be the right architecture in general, if you assume we're never going to reach a point where the model is 100% perfect. You don't want an agent, or a set of agents, to go too far down a path when there was a step it needed to check in with you on, because of the compounding effect of that. So you do need to subdivide the work, also
(42:35) because if you do have gating moments with a bunch of dependencies, the agent needs to know at what point to roll things back up to the user. — Yeah. Against the common narrative, now that I think about it, the trend seems to be that prompts are getting more complex, not less, and we're seeing more agents, not fewer, doing narrower tasks — which is almost a counter-AGI narrative. These agents are much more specialized and much deeper, working with much more specific instructions.
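The "check in with the user at gating steps" behavior discussed here reduces to a small control loop. The step names and the approval predicate below are hypothetical; the point is only that the run halts and rolls control back up at steps the agent cannot safely pass alone, instead of letting errors compound.

```python
# Sketch of an agent plan with gated, user-approved steps.
def run_plan(steps, execute, needs_approval, approve):
    """Run steps in order; stop at a gated step the user hasn't approved."""
    completed = []
    for step in steps:
        if needs_approval(step) and not approve(step):
            break  # roll control back up to the user instead of compounding
        completed.append(execute(step))
    return completed

# Example: drafting is safe to automate; sending is gated.
done = run_plan(
    steps=["draft email", "send email"],
    execute=lambda s: f"{s}: done",
    needs_approval=lambda s: s.startswith("send"),  # the gating moment
    approve=lambda s: False,                        # user hasn't signed off
)
```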
(43:04) And there's a sort of history of this — "wow, maybe we can actually solve it if we're specialized." Yeah — a bit like expert systems. At first, people thought expert systems would just be experts and would just know; by the time you got to the actual published research at Stanford, it was: this is an expert system for deciding which type of infectious disease you have, as long as it's one of these seven. No, literally — there was a paper that was an expert system for one particular digestive disorder.
(43:32) But I do want to point out one big difference, which is that somehow the model itself is packing in the inherent intelligence, the capability, to solve all of these — we're benefiting from the fact that you can build all of them on Claude 4 and GPT-5. — All on a computer, too. But let me demonstrate this with an old-person example. Early in the PC era, there were word processors and spreadsheets and graphics and databases, and a lot of
(44:07) people asked: why are there these four programs? There should only be one program. And my answer — which often involved screaming — was: have you been to an office supply store? Because if you go to an office supply store, there's paper with numbers on it, and there are blank rectangles of paper, and there's transparency paper, and this has been around a really long time.
(44:31) There's some reason these are different human contexts. — How many minutes did it take you to know Google Wave wasn't going to work? — Zero. It was instantaneous. — No, I mean, this was the thing.
(44:48) There was an ancient Mac product, lauded by the industry, called ClarisWorks — oh, you could have a spreadsheet inside a word processor! And my first reaction was: have you seen a person use a spreadsheet? Their monitor can't be big enough. They want as many cells as they can possibly have, and you're sitting there saying it has to fit on an 8.5 × 11 sheet of paper on a Mac.
(45:05) And I think one of the things that happens is that these lenses humans bring to specialization really, really matter. If you think about the medical profession — going from a GP to a radiologist to a specialist to a nurse practitioner, through the whole series — they're each going to look at and use AI in a different way. — So then the only question would be: okay —
(45:28) that level of specialization and division of labor emerged over a hundred-year period, alongside tools, but also driven by a lot of the physical constraints and realities of how organizations emerge.
(45:48) So the only question would be: in a post-agent world, ten years from now, do those divisions of labor look exactly the same, or do they shift because agents collapse some of the functions? Is there some blurring, and then a whole new set of roles? Clearly there's a role emerging in a bunch of organizations: "my role is AI productivity person — I have a way of creating all new forms of productivity in the organization with AI."
(46:11) So clearly we'll have a bunch of new roles — but is our current division of labor also going to collapse in some interesting ways because of AI? — Well, I think if you stick with the medical example, we're just going to wake up and there will be way more people with way more specialties, and AI will have created more jobs. — In the interim, you think AI causes more specialization over time? — Absolutely.
(46:35) Because every human is going to be way better, right? And more knowledge will accumulate. I think this is a thing that has really happened with computing that people forget. There used to just be this morass of marketing, and R&D; there used to just be coding — and then there was coding, and testing, and design, and product management, and program management, and usability, and research, all these specialties, and all of them had their own tools.
(47:01) Go to a construction site. I remember growing up, our neighbors built a house — we lived in an apartment, and they built a house — and there was Clem the carpenter. You built a house with a guy named Clem who used all the tools. Now you build a house and it's this 20-person list of subcontractors, each a whole company that does nothing but put in pavers. And that's what it's going to be.
(47:24) I mean, there's been a long disaggregation in the history of IT, right? Everything in the same sheet metal; then you disaggregate the OS and the hardware; then you disaggregate the apps. And then, interestingly, in the last 15 years, we saw the app itself and independent functions get disaggregated: almost every API became a company. You'd have the Twilios — Auth0 became a company, PubNub became a company, et cetera. So it may very well be the case that every agent becomes a whole new vertical and a
(47:55) whole new specialization, and then you can actually build a company around it. Just as with APIs, one company today might run a whole bunch of agents, but in the future a third party might provide that agent independently. — The opportunity is really there for that, because it used to be that the impedance to creating and distributing a company was infinite. It used to be ridiculous to think a single API like Auth0 could become a company, but then of course it did. Or it used to be ridiculous to think you
(48:29) could build a whole company out of signing documents, right? And not just a whole company — all of a sudden you realize, wow, the addressable market for that is huge, way bigger than signing, because of all the stuff baked into companies that caused headcount and waste and fraud and abuse. — Well, I think you can underwrite thousands of these companies emerging.
(48:54) So, Jared Friedman had a tweet about basically: go deep on a workflow — do the job of some part of the economy, like a payroll specialist — and then build an agent for that. And it's not obvious there aren't literally a thousand of those.
(49:12) So — every vertical, every line department. — I just love this, because it's literally the anti-AGI view, basically following the long arc of computer science: as the market grows, the granularity at which you can create a company shrinks. — It's also economic growth. Take that example — Salesforce, which is always my favorite one. The idea of having a productive sales force used to just mean a consultancy: the only way you could ever fix it was to hire consultants to show up, analyze what everybody does, and write a report saying this is how you need to reorganize — usually the opposite of whatever you had — and
(49:43) then they would leave. People tried, but there was no cloud, so to build CRM you had to do all that consulting work and then roll it out, and then it was static and you couldn't maintain it. And then all of a sudden it's: oh, here's Marc Benioff, and here's a whole way to do all of this.
(50:04) And not only that — people actually like it, right? They think they're better at selling because they're using their phone and putting in a few notes about this client, which helps everybody. And I think that's what's really going to happen with all of this. Suddenly something that looks really, really small becomes a whole thing, because there's no problem with distribution and no problem with customization.
(50:22) We'll actually have ways to solve security and privacy, just like we solved reliability. And look at the stuff you're a world expert in — the stack of internet networking technologies.
(50:41) I mean, I you would have asked me 15 years ago, was CDN be companies? I never would have. I'm like, that doesn't make any sense. Like, how could you have a company that's a cash? Yeah. I think that people are probably way too afraid of the model providers kind of eating them. Um, and I I think it was I think it was basically uh a phenomenon in the first wave which was if you were just doing like basic like like if you had figured out that you could do something on GPT you know two and three where it was a text interface that produced more text like yes chatbt ate you like like that that
(51:09) clearly happened. Yeah. But basically since then most enterprises want kind of applied use cases for AI and AI agents. And so so it's not obvious that the current crop of companies if you're doing AI for healthcare, if you're doing AI for life sciences, if you're doing AI for financial services, if you're doing AI for coding at the right parts of the stack, AI AI for coding may be the one asterric area which will be hyperco competitive simply because the model companies like don't want to use somebody else's product to build their own models.
(51:39) The AI-for-coding exception aside, basically we're just in a five-year period right now where you're going to have to build agents for every vertical, every domain, and there's a playbook starting to emerge of what that needs to look like.
(51:57) I mean, I think there was kind of a technical head fake that happened early on, which was pre-training. Pre-training really was a 10-out-of-10 technical innovation. I can't tell you, like two years ago, I had a friend that was building their own aging model, a post-trained aging model, like, "we're going to make it so good at aging." You know, this was a text-to-image model, and they wanted to make it so old people looked really good in it, and then of course the next version of Midjourney or whatever comes out and it does a
(52:21) better job of it. And the thing with pre-training was, you're just kind of consuming all of the world's existing data, you're draining all of that energy, and it perfectly generalized, right? But it feels like, technically, that's passed, and now we're more in post-training and RL, which is a lot more domain-specific. And so, well, the moment that you have access to some set of data that is just for that enterprise, then who gets permission to access that data, who gets permission to do the workflow on it? It's going to be applied companies.
(52:46) Yeah. So if we had an infinite number of tokens, then the models would just continue to generalize, but it's pretty clear that that's not happening. And so now we're going into what we all understand very well, which is that companies have to choose which domains to go into, and they've got to solve the long-tail problems there and get access to the data, etc.
(53:06) And I also think that the shadow cast by large companies, the "we're going to put you out of business and stomp you," is ridiculous. It has never, in any technology wave, lived up to the fear that people have. Look, if you built a new word processor in 1995, you were an idiot. Like, that was not the thing to go build.
(53:24) Yeah. You know, but there was a time just 10 years earlier when companies built standalone spell checkers. It was just a thing. You went to the store and you bought a spell checker, and it had more words than the other spell checker. And so the thing that's not being said now, which we should do a whole episode on, is: what is the actual platform? Yeah. Because it's all well and good to say that the large models will go subsume every application. The thing is, the minute they start doing that, no one
(53:55) will be in their platform, right? Because no developer is going to sit around and build if you're going to subsume them, right? And there's a phrase for this in the Mac and Apple world, "Sherlocking." It has a real chilling effect, and that's one of the things all the model people are going to learn very, very quickly.
(54:14) There's a chilling effect, but I think there's also just a real problem: it's hard to go deep in 50 categories. You just can't. I think everybody is scared because pre-training was actually the one thing that was good at that, and now they have to actually choose. Yeah, I agree.
(54:30) You do have to. At some point it becomes purely an execution issue, which is: I don't know how anybody would set up a company to be able to beat 50 startups across 50 different domains. No, it's ridiculous. And in fact, it's only good, because what happens is the big company raises the awareness of a whole category, and then you just swoop in and go, "To them, I'm just a feature, right? But to you, this is my whole life," right? And you're going to win.
(54:59) Look, I always come back to this: there's a whole company that just signs things, right? I cannot believe there's a whole company that just signs things. I have so much to say about this topic. I mean, even minimally, if you graph the willingness to pay for an inference versus the cost to serve it, for most companies, for most spaces, 20% of the inferences are 80% of the cost. So actually, the problem of the application is just to choose those, which tend to be more domain-specific. Yeah, this is the problem of
(55:30) inviting the three of us on here, which is that we just opened up the next topic. Just getting us to shut up is the trick. Yeah. Guys, thank you so much for coming on. This is fantastic.
HOW TO ACCESS THE WHATSAPP GROUP
Step #1: Become a basic paid member (if you aren’t already)
To get access to the group, you just need to become a basic paid member of this newsletter for $20/month or $149/year.
In addition to the group access, you also get $2,500+ in other perks. This includes:
7 AI courses
Three books from Jay Abraham
Two mental model manuals
Dozens of premium prompts
My entire library of 200+ video lessons and blockbuster articles
Much more
If you’re already a paid subscriber, then go to steps #2 and #3 to join the WhatsApp group right now…
Step #2: Download the WhatsApp mobile or desktop app (if you haven’t already)
Mobile
Desktop
Step #3: Click on the button below to join the group
Only paid members will be able to see the link.
After clicking on it, follow the instructions to officially join.
Once I accept your request to join, then you’ll officially be a member.