“Depressions linger, because people are afraid to invest. We get crashes on Wall Street about every 20 years, because that’s how long it takes people to forget what happened the last time. A generation of new guys think they’re as smart as they come, and then it turns out that they’re human, like the rest of us.” John Steele Gordon, author of The Scarlet Woman of Wall Street: Jay Gould, Jim Fisk, Cornelius Vanderbilt, the Erie Railway Wars and the Birth of Wall Street.
In December 1872, James Lees of banking house Lees and Waller wrote a letter warning of the coming railroad investment crash, decrying the enormous amounts of capital invested, often from Europe, “a great deal of which has been wasted in extravagance and ill spent in wildcat enterprises such as railroads through deserts—beginning nowhere and ending nowhere.”
In 1873, greed, speculation and overinvestment in railroads sparked a financial crisis that sank the U.S. into more than five years of misery.
More than 150 years later, history may be repeating itself.
Deep learning, once hailed as the engine of infinite AI progress, is hitting a wall of diminishing returns. Scaling laws—the idea that bigger models, more data, and vast compute yield proportional intelligence gains—are breaking down. A 2024 analysis from DeepLearning.AI notes that major companies are rethinking strategies as these laws falter, with performance improvements tapering off despite exponential resource inputs.
An MIT study warns that the largest, most compute-intensive models may soon deliver vanishingly small benefits relative to their costs.
Evidence from Epoch AI and others points to a “scaling wall,” where further investments yield minimal gains, echoing IEEE Spectrum's 2021 critique of deep learning's escalating computational costs with plateauing results.
Harvard's Kempner Institute highlights how accuracy becomes “extraordinarily insensitive” to resource scaling beyond a point, signaling extreme inefficiency.
This isn't just theoretical; real-world data shows the slowdown. Gary Marcus's visualizations depict progress stalling, with LLMs hitting diminishing returns on metrics like test loss.
METR's 2025 study found experienced developers 19% slower using advanced AI tools, despite perceiving speed gains—AI arbitrages expertise away rather than augmenting it.
DeepSeek's market share plummeted from 70-80% to under 30% in months, as models commoditize and switching costs near zero.
Intelligence is becoming like electricity: cheap, interchangeable, and undifferentiated. Foundation models burn billions in capex only to become obsolete commodities within six months.
As François Chollet observed, post-COVID cuts revealed deep learning's non-essentiality for many firms, marking the first decline in practitioners since 2010.
Meanwhile, venture capital firms thrive on this hype. Global VC poured $192.7 billion into AI startups in 2025—over half of all funding—with U.S. private AI investment hitting $109.1 billion in 2024 alone.
Hyperscalers like Amazon, Google, Microsoft, and Meta committed $364 billion in 2025 capex, much on AI hardware.
Foundation-model companies announced nearly $1 trillion in infrastructure pledges.
VCs skim 2% management fees on these billions, pocketing tens of millions annually regardless of outcomes. Hardware—GPUs, data centers—drives this, with $37 billion spent on AI infrastructure in 2024.
But as inference costs collapse 99% in two years, models commoditize faster.
When the music stops, a pile of junk remains. AI corporations like OpenAI are cash-flow negative, their economics hostage to gouging hardware prices.
The bubble's mechanics—closed-loop financing where Nvidia sells GPUs to Oracle, then buys cloud access—echo dot-com excesses.
WSJ notes AI hardware depreciates in 2-3 years, unlike the dot-com era's enduring fiber optics.
Bain estimates $2 trillion in annual AI revenue will be needed by 2030 to justify the spending—versus roughly $45 billion today—setting up insolvency for debt-laden players.
The market floods with cheap, obsolete GPUs; applications falter, unraveling the chain. What's an Apple A4 (Samsung part number S5L8930) worth today?
VCs walk away rich from fees, leaving startups and investors with depreciated silicon scrap and counties with unclean aquifers. Deep learning's promise fades, but the VC grift endures.
One of the most disturbing consequences of the Panic of 1873 was the loss of trust among the many small, sometimes well-informed investors who had invested in risky Northern Pacific bonds. “Their trust had been violated,” wrote Richard White in a 2003 article for The Journal of American History. “Most disturbingly, those deceived were ‘the intelligent classes, who read newspapers, mingle in affairs and have constant access to information.’” This time it may be the mayors and city halls and farms that relearn the lesson.
Editorial comments expressed in this column are the sole opinion of the writer.
