The risks embedded in US AI-linked equities are no longer theoretical. After years of euphoric pricing, markets are beginning to confront a hard truth: most AI firms remain deeply unprofitable, valuations are built on assumptions rather than earnings, and the entire ecosystem is increasingly circular. Roughly 95% of AI companies have yet to turn a profit, yet capital continues to pour in as if returns were inevitable.
To understand the fragility, consider how tightly intertwined the dominant players have become.
OpenAI now reportedly holds warrants for up to a 10% stake in AMD. Nvidia has announced plans to invest up to $100 billion in OpenAI. Microsoft is both a major OpenAI shareholder and a major customer of AI cloud provider CoreWeave, a company in which Nvidia also holds a significant equity stake. Meanwhile, Microsoft alone accounted for nearly 20% of Nvidia's annualized revenue as of Nvidia's FY2025 Q4.
In less than three years, OpenAI has gone from a novelty to a structural pillar of the global economy.
The question is unavoidable: are we witnessing innovation — or a modern Wild West where equity, revenue, and influence blur to get deals done? One firm grants equity to a chip supplier to finance data centers while simultaneously taking ownership in a rival manufacturer developing similar products. This is not about outsmarting competitors — Jensen Huang and Lisa Su are both exceptional leaders — but about how a small cluster of firms now recycle capital, risk, and valuation among themselves, at a scale measured in hundreds of billions of dollars.
That structure is now being stress-tested.
US tech stocks sold off sharply as investors rotated out of previously untouchable AI-linked names. The Nasdaq Composite fell 1.9%, heading toward its worst week since November, while the S&P 500 dropped 1.5%.
Alphabet led the decline, falling more than 5%, after announcing plans to double capital expenditures to as much as $185 billion — reigniting concerns over when, or if, Silicon Valley’s AI spending will generate sustainable returns. This happened despite strong earnings, underscoring that profitability alone is no longer enough to justify AI valuations.
As Bespoke Investment Group’s George Pearkes put it, this is a “natural correction and a test of the AI story.”
Elsewhere, Qualcomm plunged 12% on warnings about memory chip shortages. Western Digital and Palantir each fell about 4%, Amazon dropped 3.5%, and Tesla slid 1.6%. Software firms and chipmakers have been hit especially hard as investors digest the disruptive implications of new AI coding tools and question whether demand growth can justify the infrastructure being built.
Estimates suggest AI-related capital expenditure surpassed US consumer spending as the primary driver of economic growth in the first half of 2025, contributing 1.1% of GDP growth. JP Morgan Asset Management's Michael Cembalest notes that since ChatGPT launched in November 2022, AI-linked stocks have driven the bulk of the S&P 500's returns, earnings growth, and capital-spending growth.
This level of concentration is historically dangerous.
RBC’s Kelly Bogdanova highlights that after explosive earnings growth in 2023 and 2024, growth rates between the “Magnificent Seven” and the rest of the S&P 500 are expected to converge. Meanwhile, the gap between tech’s share of market capitalization and its share of net income has widened dramatically since late 2022 — a classic warning sign.
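The warning sign Bogdanova describes can be expressed as a simple spread. A minimal sketch, using purely hypothetical figures (the real series would come from S&P 500 sector data, which is not reproduced here):

```python
# Spread between a sector's share of index market cap and its share of
# index net income; a widening spread is the warning sign described above.
# All numbers below are hypothetical, for illustration only.
def share_gap(sector_cap, index_cap, sector_income, index_income):
    """Return (cap share, income share, gap) as fractions of the index."""
    cap_share = sector_cap / index_cap
    income_share = sector_income / index_income
    return cap_share, income_share, cap_share - income_share

# Hypothetical: a sector holding 35% of index market cap but 25% of income.
cap, inc, gap = share_gap(sector_cap=17.5e12, index_cap=50e12,
                          sector_income=0.5e12, index_income=2.0e12)
print(f"Cap share {cap:.0%}, income share {inc:.0%}, gap {gap:+.0%}")
```

When earnings growth converges while the cap share keeps climbing, the gap widens mechanically: prices are discounting profits that have not yet arrived.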
At ESG.AI’s June CEO Summit, David Siegel — MIT-trained computer scientist and co-founder of Two Sigma — delivered a sobering message. AI is undoubtedly transformative, he said, but today’s hype freely mixes fact and speculation, and few people are willing to discuss its limitations.
Apple’s recent research reinforced this concern, suggesting that AI reasoning capabilities may be overstated due to benchmark contamination — where training data includes the answers to the tests used to evaluate models.
Siegel explained it plainly: it’s like giving a student the answers before the exam. The result is inflated confidence in AI’s ability to reason, generalize, and adapt — precisely the assumptions underwriting today’s valuations.
A handful of firms now dominate AI investment, infrastructure, and narrative control. Multibillion-dollar deals involving OpenAI, Nvidia, Microsoft, Google, CoreWeave, and others appear almost daily. Should AI’s bold promises fall short, this interdependence could trigger a chain reaction reminiscent of 2008.
The ambitions are staggering: massive energy and grid buildouts, agentic AI systems, and near-universal adoption — all projected within five years.
One example alone illustrates the scale. OpenAI has committed $300 billion over five years to Oracle for computing power — about $60 billion annually. Yet OpenAI reportedly loses billions each year, with projected 2025 revenues of $13 billion, far short of covering long-term commitments. Oracle’s stock surged over 40% on the announcement, adding nearly $300 billion in market value overnight. OpenAI’s own valuation jumped from $300 billion to $500 billion in under a year.
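The mismatch is stark even as back-of-envelope arithmetic. A minimal sketch using the rounded public figures quoted above (actual contract terms and payment schedules are not public):

```python
# Back-of-envelope check of the commitment vs. reported revenue mismatch.
# Figures are the rounded public estimates quoted above, not contract terms.
total_commitment = 300e9            # $300B to Oracle over five years
years = 5
annual_commitment = total_commitment / years   # $60B per year

projected_2025_revenue = 13e9       # reported ~$13B 2025 projection

coverage = projected_2025_revenue / annual_commitment
shortfall = annual_commitment - projected_2025_revenue

print(f"Annual commitment: ${annual_commitment / 1e9:.0f}B")
print(f"Revenue covers {coverage:.0%} of it; gap of ${shortfall / 1e9:.0f}B/yr")
```

Even before operating costs, projected revenue covers roughly a fifth of the annualized commitment, leaving tens of billions per year to be financed by fresh capital.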
This is valuation reflexivity in action: the announcement of spending itself creates the market value that appears to justify the spending.
The parallels to crypto are unsettling.
Sam Bankman-Fried once promised to revolutionize finance through FTX and Alameda Research — until weak governance and poor oversight revealed systemic fraud. Crypto’s collapse was painful, but its limited scale contained the damage.
AI is different. Its perceived value is exponentially larger, and its governance frameworks are fragmented at best.
Even AI’s most powerful advocates are uneasy. Anthropic CEO Dario Amodei estimates a 25% chance that AI goes “really, really badly.” Elon Musk’s Grok recently demonstrated how quickly model manipulation can produce unintended consequences. A major public AI failure affecting markets or national security could force an immediate global moratorium on comparable systems.
Bethany McLean’s comparison to the 1990s fiber-optic overbuild is instructive. Massive infrastructure investments were rendered redundant when technological breakthroughs dramatically increased capacity.
AI faces the same risk. Advances in chip design or quantum computing could make today’s data centers obsolete before they ever generate returns. Excess compute may be useful eventually — but history suggests that payoff timelines can stretch decades.
Charles Mackay wrote in 1841 that men "go mad in herds, while they only recover their senses slowly, and one by one." From tulips to railroads to dot-coms, the pattern repeats.
The US AI boom increasingly resembles 1929 more than 2008 — fueled by leverage, concentration, weak governance, and unquestioned narratives. A failure to ask obvious questions, demand tangible value, or enforce oversight could once again destabilize the global economy.
At ESG.AI, we built the AI Impact Calculator to challenge exactly this dynamic — to encourage deeper analysis of AI models beyond hype and capital allocation. The most heavily funded models may not be the most effective, safest, or sustainable. In fact, quieter alternatives may outperform them over time.
Before AI is embedded into any organization’s structure, everything must be examined: economics, governance, incentives, energy use, and long-term risk. The future of AI will not be decided by who raises the most money — but by who builds the most resilient systems.
At ESG.AI, we observe a critical disconnect: investment volume is being mistaken for quality, resilience, and long-term viability. Our analysis consistently finds that funding levels are a poor predictor of a model's effectiveness, safety, or sustainability.
The AI Impact Calculator exists to close that gap, moving decision-making beyond hype and toward measurable, multidimensional evaluation of AI systems across economic, environmental, social, and governance dimensions.
For investors, executives, and policymakers, the path forward is clear, though not easy: ask the obvious questions, demand tangible value, and enforce oversight before the market does it for you.
AI will shape the future — but not all AI models should survive, scale, or dominate. The winners will not be those who raise the most capital, but those who align innovation with governance, efficiency, and long-term value.
That is the distinction ESG.AI exists to make visible — before the market is forced to learn it the hard way.