Enough to hope
What if the exponentials actually land?
Anthropic announced a model yesterday that it refused to actually ship. Claude Mythos Preview found thousands of zero-day vulnerabilities across major operating systems during evaluation, some hiding for 27 years, and Anthropic decided the capability was too dangerous for general release. They locked it behind a defensive consortium with AWS, Apple, Google, and Microsoft instead.
I’m not going to write another post about Mythos. Plenty of people are doing that. But the time is ripe for normies like me to think about the post-AGI world.
The numbers got weird
The economic backdrop matters because the numbers have stopped making sense in the normal way.
Anthropic has grown revenue 10x per year for three straight years. Dario laid this out on Dwarkesh’s podcast in February: 2023 was zero to $100 million, 2024 was $100 million to $1 billion, 2025 was $1 billion to about $10 billion. As of this week, Bloomberg has Anthropic’s annualized run rate at $30 billion, putting it ahead of OpenAI by some measures. The company was worth $5 billion three years ago. It just raised at $380 billion.
OpenAI is on a similar curve: roughly $2 billion per month in revenue by early 2026, an $852 billion valuation, 910 million weekly active users, and internal projections showing $100 billion in revenue by 2028. Alphabet is doing the quiet incumbent thing, with Google Cloud growing 48% YoY in Q4 2025 and a market cap near $3.9 trillion. Capex for 2026 is $175 to $185 billion, nearly double last year.
What I find more interesting than the headline numbers is what Dario said about them. He worked through the math out loud: “I could assume the revenue will continue growing 10x a year, so it’ll be $100 billion at the end of 2026 and $1 trillion at the end of 2027... If I’m just off by a year in that rate of growth, you go bankrupt.” That’s the CEO of a $380 billion company saying bankruptcy is a live possibility, not because the business is failing, but because compute costs are so enormous that being wrong by twelve months is fatal.
This isn’t a normal industry. It’s a phase transition, and the people running it know it.
The prosperity case
The U.S. carries roughly $39 trillion in debt, with debt-to-GDP around 124%. Net interest payments will exceed $1 trillion this fiscal year. Under current 2% growth projections, the CBO sees debt-to-GDP climbing toward 140% by 2031. Conventional analysis says we’re sleepwalking into a fiscal crisis.
Now the historical parallel. After WWII, U.S. debt-to-GDP was 106%. Over the next 28 years it fell to 23%, not through spending cuts but through sustained growth. The economy nearly tripled between 1950 and 1980. What if AI produces something similar, compressed?
Epoch AI’s GATE model suggests that even when AI automates only 30% of tasks, GDP growth could exceed 20% per year. Erik Brynjolfsson at Stanford has publicly bet that U.S. productivity growth will average above 1.8% through 2029, nearly double the recent decade’s average. He’s already seeing early signs: productivity growth hit about 2.7% in 2025.
Run the debt arithmetic at these rates and the picture changes. At 10% real GDP growth, the economy doubles in about 7 years. A $39 trillion debt against a $60 trillion GDP would be roughly 65% debt-to-GDP, completely manageable. Even Penn Wharton’s much more conservative model estimates AI could cut federal deficits by $400 billion over 2026 to 2035.
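That arithmetic is easy to check. A back-of-the-envelope sketch, assuming roughly $30 trillion of current GDP, debt held flat in real terms at $39 trillion, and a constant 10% real growth rate (all simplifications, not forecasts):

```python
import math

# Illustrative debt arithmetic under sustained high growth.
# Assumptions: GDP ~ $30T today, debt fixed at $39T, 10% real growth.
debt = 39.0    # trillions of dollars
gdp = 30.0     # trillions of dollars
growth = 0.10

# Doubling time at rate g: ln(2) / ln(1 + g)
doubling_years = math.log(2) / math.log(1 + growth)
print(f"Doubling time at 10% growth: {doubling_years:.1f} years")  # ~7.3

# Debt-to-GDP once the economy has doubled to ~$60T
ratio = debt / (gdp * 2)
print(f"Debt-to-GDP at $60T GDP: {ratio:.0%}")  # 65%
```

The whole optimistic case lives in that exponent: hold the numerator still and let the denominator compound.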
And this is before factoring in what Dario calls the “compressed 21st century”, the idea that powerful AI could compress 50 to 100 years of biological progress into 5 to 10 years. In his “Machines of Loving Grace” essay, he projects 95% reductions in cancer mortality, effective treatment for most mental illness, and sub-Saharan Africa reaching China’s current per-capita GDP within a decade. If even 20% of that materializes, the downstream economic effects compound on top of the direct productivity gains.
Part of me wants to believe this. But economic models aren’t economies.
Why the boom might not land cleanly
Tyler Cowen, who coined “The Great Stagnation,” thinks AI adds only about 0.5% excess growth per year in the near term. His key observation: real interest rates and stock prices look startlingly normal. If markets actually expected explosive growth, long-term rates would be rising dramatically. They’re not.
Daron Acemoglu, the 2024 Nobel laureate, is more skeptical. He estimates only about 5% of tasks can be profitably automated within the next decade, yielding a total GDP boost of roughly 1.1%. Goldman Sachs, which originally projected a 7% global GDP boost from generative AI, published an update in early 2026 finding that $700 billion in cumulative AI investment has contributed basically nothing to U.S. GDP at the economy-wide level.
This is the J-curve problem. General-purpose technologies historically take 15 to 20 years to produce significant productivity gains, because the bottleneck isn’t the technology, it’s reorganizing institutions, retraining workers, and redesigning processes. Commercial electric power arrived in the 1880s. It didn’t meaningfully boost factory productivity until the 1920s, after factories had been rebuilt around it.
There’s also a distribution question. If AI productivity accrues primarily to capital owners, and early evidence suggests it might, you can get GDP growth that doesn’t translate into broadly shared prosperity, which in turn means it doesn’t generate the tax revenues you’d need to reduce debt without progressive reform.
The labor situation is already not great
Through Q1 2026, over 52,000 tech jobs have been cut, the most first-quarter cuts since 2023. The share explicitly linked to AI has jumped from under 8% in 2025 to over 20% in early 2026. Duke’s CFO Survey, published in Fortune, found that CFOs privately expect AI layoffs in 2026 to run nine times 2025’s total. That implies roughly 495,000 AI-driven job losses this year alone.
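The 495,000 figure is just the survey multiplier applied to last year’s AI-attributed cuts, a sketch assuming a 2025 baseline of roughly 55,000 (the baseline the projection implies; the survey itself reports only the multiplier):

```python
# Implied 2026 AI-driven job losses from the CFO Survey multiplier.
# Assumption: ~55,000 AI-attributed losses in 2025, back-derived from
# the 495,000 projection rather than reported directly.
baseline_2025 = 55_000
multiplier = 9
projected_2026 = baseline_2025 * multiplier
print(projected_2026)  # 495000
```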
Block laid off 40% of its workforce. Jack Dorsey was unusually candid: “This is not driven by financial difficulty, but by the growing capability of AI tools.” Block’s AI now handles 70 to 80% of customer service inquiries. Atlassian cut 1,600. Oracle is reportedly planning 20,000 to 30,000 cuts.
The hardest-hit demographic is the one with the least political power. U.S. programmer employment fell 27.5% between 2023 and 2025. New-grad hiring at the Magnificent Seven has plunged more than 50% since 2022. Anthropic’s own labor research, published in March 2026, found a 6 to 16% decline in employment among workers aged 22 to 25 in AI-exposed occupations, primarily from slower hiring rather than increased separations. Companies aren’t firing young people. They’re just not hiring them.
How the soft landing might actually happen
On April 6, one day before the Mythos announcement, OpenAI published a 13-page white paper called “Industrial Policy for the Intelligence Age.” Sam Altman compared the moment to the Progressive Era. The proposals are substantive: a Public Wealth Fund seeded by AI companies that distributes returns to citizens, a robot tax where automated systems pay what the replaced workers would have, auto-triggering safety nets, and a subsidized four-day workweek.
You can read this cynically. The company that benefits most from AI expansion is publicly advocating for robot taxes because it knows they won’t pass. But there’s a version where the gesture becomes the policy. Vinod Khosla is separately proposing to exempt everyone earning under $100,000 from federal income tax, funded by eliminating preferential capital gains rates. Anthropic economist Anton Korinek has co-authored a Brookings framework on AI taxation. Even Dario, in his “Adolescence of Technology” essay from January, explicitly calls for “progressive taxation targeting AI firms.”
The soft landing requires the labs to mean it, or at least to follow through out of competitive pressure to look like they mean it. It requires governments functional enough to implement the mechanisms. And it requires the timeline to be measured in years, because every policy I just described requires legislation.
Where I actually think we land
If I’m being honest about my probability distribution, it looks something like this.
The transformative technology is real. The revenue curves are real. But the translation of capability into broad economic prosperity will be slower, lumpier, and more unevenly distributed than the optimists project. GDP growth will accelerate, maybe to 3 or 4% in the U.S. by the late 2020s, but not to the 10 to 20% scenarios the growth-solves-everything crowd fantasizes about. The J-curve is real. Institutions are slow. Regulation is slower.
The debt won’t evaporate, but it won’t crush us either. Some version of AI-enabled productivity growth combined with modest fiscal reform will probably stabilize the debt-to-GDP ratio rather than solve it.
The labor market will be genuinely painful for 3 to 5 years. Entry-level knowledge work will keep hollowing out. Some version of enhanced safety nets will emerge, probably too late and too small, as is the American tradition. The worst-case mass unemployment scenario probably doesn’t materialize because new categories of work will emerge alongside the displacement. They always do. The question is whether the gap between destruction and creation is survivable for the people caught in it.
The compressed century in biology is the wildcard I’m most cautiously optimistic about. AlphaFold already won a Nobel. If even 20% of Dario’s health predictions materialize in the next decade, the quality-of-life improvements alone would justify a generation of investment.
I’d put the genuine best case at maybe 30%. Another 40% on the mixed muddle: net positive but painful, slower than optimists hope and faster than skeptics expect. The remaining 30% on darker outcomes I’d rather not think about on a Tuesday afternoon.
But 30% odds on a genuinely wonderful world are pretty good odds.
Not enough to relax. But maybe enough to hope.

