ESGAI Insights

The AI Gold Rush: When Hype, Capital, and Power Collide

Kelly Kirsch
February 5, 2026

by Kelly Kirsch, Managing Director, ESG Europe

Why US AI-Linked Stocks Are Starting to Crack

The risks embedded in US AI-linked equities are no longer theoretical. After years of euphoric pricing, markets are beginning to confront a hard truth: most AI firms remain deeply unprofitable, valuations are built on assumptions rather than earnings, and the entire ecosystem is increasingly circular. Roughly 95% of AI companies have yet to turn a profit, yet capital continues to pour in as if returns are inevitable.

To understand the fragility, consider how tightly intertwined the dominant players have become.

OpenAI now reportedly holds a 10% stake in AMD. Nvidia is investing $100 billion into OpenAI. Microsoft is both a major OpenAI shareholder and a major customer of AI cloud provider CoreWeave — a company in which Nvidia also holds a significant equity stake. Meanwhile, Microsoft alone accounted for nearly 20% of Nvidia’s annualized revenue as of Nvidia’s FY2025 Q4.

In less than three years, OpenAI has gone from a novelty to a structural pillar of the global economy.

The question is unavoidable: are we witnessing innovation — or a modern Wild West where equity, revenue, and influence blur to get deals done? One firm grants equity to a chip supplier to finance data centers while simultaneously taking ownership in a rival manufacturer developing similar products. This is not about outsmarting competitors — Jensen Huang and Lisa Su are both exceptional leaders — but about how a small cluster of firms now recycle capital, risk, and valuation among themselves, at a scale measured in hundreds of billions of dollars.

Markets Are Finally Pushing Back

That structure is now being stress-tested.

US tech stocks sold off sharply as investors rotated out of previously untouchable AI-linked names. The Nasdaq Composite fell 1.9%, heading toward its worst week since November, while the S&P 500 dropped 1.5%.

Alphabet led the decline, falling more than 5%, after announcing plans to double capital expenditures to as much as $185 billion — reigniting concerns over when, or if, Silicon Valley’s AI spending will generate sustainable returns. This happened despite strong earnings, underscoring that profitability alone is no longer enough to justify AI valuations.

As Bespoke Investment Group’s George Pearkes put it, this is a “natural correction and a test of the AI story.”

Elsewhere, Qualcomm plunged 12% on warnings about memory chip shortages. Western Digital and Palantir fell 4%, Amazon dropped 3.5%, and Tesla slid 1.6%. Software firms and chipmakers have been hit especially hard as investors digest the disruptive implications of new AI coding tools and question whether demand growth can justify the infrastructure being built.

AI Is Now Driving the Economy — and That’s the Problem

Estimates suggest AI-related capital expenditure surpassed US consumer spending as the primary driver of economic growth in the first half of 2025, contributing 1.1 percentage points to GDP growth. JP Morgan Asset Management’s Michael Cembalest notes that since ChatGPT launched in November 2022, AI stocks have:

  • Driven 75% of S&P 500 returns
  • Accounted for 80% of earnings growth
  • Represented 90% of capital spending growth

This level of concentration is historically dangerous.

RBC’s Kelly Bogdanova highlights that after explosive earnings growth in 2023 and 2024, growth rates between the “Magnificent Seven” and the rest of the S&P 500 are expected to converge. Meanwhile, the gap between tech’s share of market capitalization and its share of net income has widened dramatically since late 2022 — a classic warning sign.

The Technology Isn’t as Omnipotent as Markets Assume

At ESG.AI’s June CEO Summit, David Siegel — MIT-trained computer scientist and co-founder of Two Sigma — delivered a sobering message. AI is undoubtedly transformative, he said, but today’s hype freely mixes fact and speculation, and few people are willing to discuss its limitations.

Apple’s recent research reinforced this concern, suggesting that AI reasoning capabilities may be overstated due to benchmark contamination — where training data includes the answers to the tests used to evaluate models.

Siegel explained it plainly: it’s like giving a student the answers before the exam. The result is inflated confidence in AI’s ability to reason, generalize, and adapt — precisely the assumptions underwriting today’s valuations.

Concentration Creates Contagion

A handful of firms now dominate AI investment, infrastructure, and narrative control. Multibillion-dollar deals involving OpenAI, Nvidia, Microsoft, Google, CoreWeave, and others appear almost daily. Should AI’s bold promises fall short, this interdependence could trigger a chain reaction reminiscent of 2008.

The ambitions are staggering: massive energy and grid buildouts, agentic AI systems, and near-universal adoption — all projected within five years.

One example alone illustrates the scale. OpenAI has committed $300 billion over five years to Oracle for computing power — about $60 billion annually. Yet OpenAI reportedly loses billions each year, with projected 2025 revenues of $13 billion, far short of covering long-term commitments. Oracle’s stock surged over 40% on the announcement, adding nearly $300 billion in market value overnight. OpenAI’s own valuation jumped from $300 billion to $500 billion in under a year.

This is valuation reflexivity in action.
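The arithmetic above can be made explicit in a few lines. This sketch uses only the figures cited in the article ($300 billion committed over five years, roughly $13 billion of projected 2025 revenue); the function names are ours, for illustration.

```python
# Toy illustration using the figures cited above: compare an AI firm's
# annualized compute commitments with its projected revenue.

def annual_commitment(total_commitment: float, years: int) -> float:
    """Spread a multi-year commitment evenly across its term."""
    return total_commitment / years

def coverage_ratio(revenue: float, annual_cost: float) -> float:
    """Fraction of the annual commitment covered by revenue."""
    return revenue / annual_cost

# $300B over 5 years vs. ~$13B of projected 2025 revenue.
yearly = annual_commitment(300e9, 5)   # $60B per year
ratio = coverage_ratio(13e9, yearly)   # revenue covers only ~a fifth

print(f"Annual commitment: ${yearly / 1e9:.0f}B")
print(f"Revenue coverage:  {ratio:.0%} of annual compute cost")
```

Even on these generous assumptions (flat spending, all revenue available for compute), projected revenue covers barely a fifth of the annual commitment.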

Governance: The Quietest Risk of All

The parallels to crypto are unsettling.

Sam Bankman-Fried once promised to revolutionize finance through FTX and Alameda Research — until weak governance and poor oversight revealed systemic fraud. Crypto’s collapse was painful, but its limited scale contained the damage.

AI is different. Its perceived value is exponentially larger, and its governance frameworks are fragmented at best.

Even AI’s most powerful advocates are uneasy. Anthropic CEO Dario Amodei estimates a 25% chance that AI goes “really, really badly.” Elon Musk’s Grok recently demonstrated how quickly model manipulation can produce unintended consequences. A major public AI failure affecting markets or national security could force an immediate global moratorium on comparable systems.

Innovation Can Obsolete Everything Overnight

Bethany McLean’s comparison to the 1990s fiber-optic overbuild is instructive. Massive infrastructure investments were rendered redundant when technological breakthroughs dramatically increased capacity.

AI faces the same risk. Advances in chip design or quantum computing could make today’s data centers obsolete before they ever generate returns. Excess compute may be useful eventually — but history suggests that payoff timelines can stretch decades.

Herd Psychology Never Changes

Charles Mackay wrote in 1841 that humans “go mad in herds, while they only recover their senses slowly, one by one.” From tulips to railroads to dot-coms, the pattern repeats.

The US AI boom increasingly resembles 1929 more than 2008 — fueled by leverage, concentration, weak governance, and unquestioned narratives. A failure to ask obvious questions, demand tangible value, or enforce oversight could once again destabilize the global economy.

At ESG.AI, we built the AI Impact Calculator to challenge exactly this dynamic — to encourage deeper analysis of AI models beyond hype and capital allocation. The most heavily funded models may not be the most effective, safest, or sustainable. In fact, quieter alternatives may outperform them over time.

Before AI is embedded into any organization’s structure, everything must be examined: economics, governance, incentives, energy use, and long-term risk. The future of AI will not be decided by who raises the most money — but by who builds the most resilient systems.

🔍 ESG.AI Insight

Why Capital Alone Is the Wrong Signal

At ESG.AI, we observe a critical disconnect: investment volume is being mistaken for quality, resilience, and long-term viability.

Our analysis shows that:

  • The most capital-intensive AI models are often not the most efficient, transparent, or adaptable.
  • Energy intensity, governance maturity, data integrity, and incentive alignment vary widely across models — and are rarely reflected in valuation.
  • Smaller, less publicized AI systems often outperform larger models on risk-adjusted, application-specific outcomes.

This is why ESG.AI built the AI Impact Calculator — to move decision-making beyond hype and toward measurable, multidimensional evaluation of AI systems across economic, environmental, social, and governance dimensions.
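The idea of multidimensional evaluation can be sketched as a toy weighted-scoring exercise. The dimensions, weights, and model names below are invented for illustration; this is not ESG.AI's actual AI Impact Calculator methodology.

```python
# Hypothetical sketch of multidimensional model scoring. Dimensions, weights,
# and model names are invented; NOT ESG.AI's actual methodology.

WEIGHTS = {
    "economics": 0.25,    # cost curves, path to profitability
    "environment": 0.25,  # energy intensity
    "social": 0.20,       # data integrity, access
    "governance": 0.30,   # oversight, incentive alignment
}

def impact_score(scores: dict) -> float:
    """Weighted average of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# Two invented models: a heavily funded frontier model vs. a smaller specialist.
frontier   = {"economics": 55, "environment": 40, "social": 70, "governance": 50}
specialist = {"economics": 70, "environment": 85, "social": 65, "governance": 75}

print("frontier:  ", impact_score(frontier))
print("specialist:", impact_score(specialist))
```

The point of the sketch is the inversion: once energy intensity and governance maturity carry weight, the less-funded "specialist" can outscore the frontier model, which is exactly the disconnect described above.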

📌 What to Do Now

For investors, executives, and policymakers, the path forward is clear — though not easy:

  1. Interrogate AI Economics
    • Demand clarity on profitability timelines, cost curves, and capital dependencies.
    • Separate revenue growth from valuation inflation.
  2. Assess Concentration Risk
    • Identify dependencies on a small group of vendors, models, or infrastructure providers.
    • Stress-test portfolios and operations against correlated AI failures.
  3. Evaluate Governance Before Capability
    • Scrutinize oversight, data controls, incentive structures, and accountability mechanisms.
    • Treat governance maturity as a leading indicator of long-term success.
  4. Question Infrastructure Assumptions
    • Model scenarios in which advances in efficiency or alternative architectures reduce the value of current compute investments.
  5. Use Objective Impact Tools
    • Apply frameworks like ESG.AI’s AI Impact Calculator to compare AI systems on sustainability, resilience, and real-world effectiveness — not publicity or funding size.
  6. Slow Down Before Scaling
    • Integration should follow analysis, not headlines. The cost of premature adoption is often invisible until it is irreversible.
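The stress test suggested in step 2 can be sketched minimally. The holdings, weights, and shock sizes below are invented; a real exercise would use actual portfolio exposures and estimated correlations.

```python
# Hedged sketch of a correlated-AI-drawdown stress test. All holdings,
# weights, and shock sizes are hypothetical.

def stress_test(holdings: dict, ai_shock: float, broad_shock: float) -> float:
    """Apply a large shock to AI-linked names, a smaller one to the rest,
    and return the weighted portfolio return."""
    return sum(
        weight * (ai_shock if ai_linked else broad_shock)
        for weight, ai_linked in holdings.values()
    )

portfolio = {
    "MegaChip":  (0.30, True),   # AI-linked
    "CloudCo":   (0.25, True),   # AI-linked
    "Utilities": (0.25, False),
    "Staples":   (0.20, False),
}

# A 40% correlated AI drawdown with a 10% spillover to the rest of the market.
loss = stress_test(portfolio, -0.40, -0.10)
print(f"Portfolio return under stress: {loss:.1%}")
```

Even this crude version makes the concentration point: with 55% of the portfolio in AI-linked names, the correlated drawdown dominates the outcome.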

AI will shape the future — but not all AI models should survive, scale, or dominate. The winners will not be those who raise the most capital, but those who align innovation with governance, efficiency, and long-term value.

That is the distinction ESG.AI exists to make visible — before the market is forced to learn it the hard way.




