Can U.S. AI Companies Survive Without Government Contracts—and Could the European Model Prove More Viable in the Long Term?

By Kelly Kirsch, Director General of ESG Europe
March 9, 2026

Artificial intelligence is no longer simply a race to build the most powerful models. It has evolved into something much larger: a contest over economic architecture, national security, and technological sovereignty. As AI systems move from consumer applications to decision-support tools, infrastructure management, and even military systems, technology companies are increasingly becoming part of national strategic ecosystems.

This transformation raises a fundamental question for the future of the industry: can U.S. AI companies remain independent from government contracts, and could the European model—less tied to defense procurement—prove more sustainable in the long run?

Recent developments involving OpenAI, Anthropic, and the U.S. Department of Defense illustrate the growing tension between innovation, political power, and ethical governance. At the same time, Europe is experimenting with an alternative model built around open architectures, public-private coordination, and technological sovereignty.


The OpenAI Moment: When AI Becomes National Infrastructure

The issue came into focus when OpenAI accepted a Pentagon contract that its rival Anthropic had reportedly declined. Anthropic had sought contractual limits preventing the use of its AI systems for mass domestic surveillance and fully autonomous lethal systems; when those restrictions were rejected, the company chose not to proceed, and OpenAI took the contract instead.

In response to criticism, CEO Sam Altman held a public discussion on X, arguing that decisions about national defense should ultimately be determined by democratic governments rather than private companies.

Altman emphasized that elected officials—not corporate executives—should decide how technologies are used in national security contexts.

Yet the reaction from the public and parts of the AI community suggested deeper discomfort. Many researchers, users, and employees questioned whether companies developing extremely powerful systems should defer entirely to government authority when the implications of those systems may extend far beyond traditional defense technologies.

The episode highlighted a broader transition: AI companies are no longer just startups—they are becoming strategic infrastructure providers.

And that shift carries political and ethical responsibilities that the technology sector has historically avoided.


The Structural Dependence on Government Funding

The economics of frontier AI development make independence increasingly difficult.

Training advanced models requires massive computational resources, specialized chips, large engineering teams, and enormous datasets. The costs of building frontier models are now frequently measured in hundreds of millions or even billions of dollars.

Government contracts offer several advantages that are difficult for companies to ignore:

  • Stable long-term revenue streams that support expensive infrastructure
  • Access to government research funding and strategic datasets
  • Integration into national procurement and technology ecosystems

Historically, industries with similar cost structures—such as aerospace and defense—became closely intertwined with government procurement. Companies like Lockheed Martin and Raytheon evolved within a tightly integrated defense-industrial system that provided predictable funding and regulatory frameworks.

AI companies may be entering a similar dynamic.

Even firms that originally positioned themselves as consumer technology innovators may find that national security partnerships become a primary source of long-term growth.

But dependence on government contracts introduces new vulnerabilities.


The Anthropic Dispute and the Question of AI Ethics

Anthropic, founded by Dario and Daniela Amodei, has built its reputation around AI safety and what it calls “constitutional AI.” Its Claude models incorporate architectural guardrails designed to limit certain categories of misuse.

When the company refused to remove safeguards preventing certain surveillance and autonomous weapons applications, reports suggested that the U.S. Defense Department considered designating Anthropic a “supply chain risk.”

Such a designation could significantly limit a company’s ability to work with defense contractors and infrastructure partners.

Even if such measures were ultimately contested legally, the signal to the industry would be clear: companies that resist government demands may risk exclusion from key markets.

The dispute therefore represents more than a contractual disagreement. It reflects a deeper struggle over who ultimately controls the ethical architecture of artificial intelligence systems—the state or the developers who design them.


The Contradiction in AI Governance

The conflict reveals a fundamental contradiction in the global conversation about AI regulation.

For several years, governments and regulators have argued that technology companies must take responsibility for preventing harmful uses of their systems. AI developers have been urged to incorporate safeguards that limit misuse.

However, when those safeguards apply to government clients, the political dynamic changes.

In practice, the message risks becoming contradictory: AI companies are expected to build ethical constraints—unless governments decide those constraints are inconvenient.

This tension is likely to intensify as AI capabilities expand and become more deeply integrated into national security systems.


The Militarization of Artificial Intelligence

The stakes are amplified by the rapid militarization of AI technologies.

Across major global powers, artificial intelligence is increasingly used in:

  • autonomous drones and robotic systems
  • battlefield intelligence analysis
  • military logistics and targeting systems
  • coordinated drone swarms and autonomous operational systems

China has already deployed vast AI-enabled surveillance systems, with hundreds of millions of cameras linked to facial recognition and data analytics platforms capable of tracking individuals across cities in real time.

For governments, limiting access to advanced AI tools may appear strategically risky in an environment of technological competition.

For developers, removing ethical constraints may risk creating systems capable of large-scale surveillance or automated violence.

The resulting tension between national security imperatives and technological ethics is unlikely to disappear.


The Political Risk of Becoming a Defense Contractor

Entering the defense ecosystem also exposes AI companies to political volatility.

Traditional defense contractors evolved within stable regulatory frameworks designed to buffer them from rapid political shifts. Their long procurement cycles and institutional relationships provided continuity across administrations.

Technology startups operate very differently. They rely on rapid innovation cycles, global markets, and highly mobile talent.

Aligning too closely with government priorities—particularly those of a specific administration—can create reputational risks and internal tensions.

OpenAI now faces pressures from multiple directions:

  • employees advocating for ethical boundaries
  • users concerned about military applications
  • policymakers expecting strategic alignment
  • government agencies demanding operational flexibility

Anthropic faces a different challenge: maintaining ethical safeguards without risking economic marginalization.

Neither position is politically neutral.


Europe’s Alternative Approach

While U.S. AI companies are increasingly tied to national security ecosystems, Europe is developing a different strategic model.

The rise of France’s Mistral AI illustrates this alternative approach.

Rather than attempting to replicate Silicon Valley’s capital-intensive model, European AI development is increasingly based on:

  • open-weight models
  • public funding and institutional coordination
  • academic and research integration
  • alignment with broader digital sovereignty initiatives

France’s strategy reflects deliberate ecosystem design. Universities, research labs, startups, and policymakers are coordinated through industrial policy frameworks rather than purely venture-driven growth.

The objective is not necessarily global dominance. Instead, it is technological optionality—ensuring Europe can participate meaningfully in the AI economy without becoming structurally dependent on external platforms.


Open vs. Closed Models

At the center of this divergence lies a technical and economic question that may shape the next decade: open or closed AI systems?

The dominant U.S. model relies on proprietary systems requiring massive capital investment. Frontier models require:

  • enormous training datasets
  • specialized architectures
  • massive GPU clusters running for months
  • multi-billion-dollar compute infrastructure

To recover those costs, access is restricted through proprietary APIs and usage-based pricing models. The resulting structure resembles cloud computing oligopolies: high margins and strong platform lock-in.

Europe’s open-weight approach operates differently.

By publishing model weights, architectures, and training methodologies, developers allow organizations to run AI systems locally and build services on top of shared technical infrastructure.

The economic analogy resembles the development of Linux: open infrastructure combined with commercial services layered above it.

Open models commoditize inference. Closed models monetize it.

The long-term consequences of this distinction will shape who captures value in the AI economy.
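The economic asymmetry between the two models can be made concrete with a toy break-even calculation. All figures below are hypothetical illustrations, not actual vendor prices: usage-priced API access scales linearly with token volume, while self-hosting open weights carries a roughly fixed infrastructure cost until capacity is exceeded.

```python
# Toy break-even comparison: usage-priced API (closed model) vs.
# self-hosted inference on open weights. All prices are hypothetical.

def api_cost(tokens: int, price_per_million: float = 10.0) -> float:
    """Usage-based pricing: cost grows linearly with tokens processed."""
    return tokens / 1_000_000 * price_per_million

def self_host_cost(monthly_infra: float = 5_000.0, months: int = 1) -> float:
    """Self-hosting open weights: fixed infrastructure cost per month,
    roughly independent of volume until capacity is exceeded."""
    return monthly_infra * months

def break_even_tokens(price_per_million: float = 10.0,
                      monthly_infra: float = 5_000.0) -> int:
    """Monthly token volume above which self-hosting becomes cheaper."""
    return int(monthly_infra / price_per_million * 1_000_000)

if __name__ == "__main__":
    volume = 800_000_000  # hypothetical: 800M tokens per month
    print(f"API cost:       ${api_cost(volume):,.0f}/month")
    print(f"Self-host cost: ${self_host_cost():,.0f}/month")
    print(f"Break-even at:  {break_even_tokens():,} tokens/month")
```

Under these assumed numbers, a high-volume user crosses the break-even point and self-hosting wins, which is exactly why open weights commoditize inference at scale while closed APIs monetize it per call.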


AI as Foundational Infrastructure

The most important shift in the AI era may be conceptual.

Artificial intelligence is no longer simply a technology product. It is becoming foundational infrastructure—similar to electricity networks, telecommunications systems, and cloud platforms.

Whoever controls AI models influences:

  • data flows and digital ecosystems
  • economic productivity and innovation capacity
  • national security capabilities
  • regulatory leverage and technological standards

If frontier AI remains dominated by proprietary systems controlled by a small number of companies, global dependency may increase.

If open models expand successfully, technological power could become more distributed.

The emerging European strategy is an attempt to create that alternative.


🔍 ESG.AI Insight

The growing intersection between artificial intelligence, defense procurement, and geopolitical competition introduces significant ESG governance risks.

Three systemic dynamics are emerging:

Governance Risk (G)
AI companies may face increasing pressure to modify ethical safeguards in order to secure government contracts. This raises questions about corporate independence, transparency, and accountability.

Social Risk (S)
If AI systems are deployed in surveillance or autonomous weapons applications, companies could face employee dissent, reputational damage, and broader societal backlash.

Strategic Dependency Risk
Heavy reliance on government contracts can expose companies to political cycles, regulatory retaliation, and policy shifts across administrations.

At the same time, Europe’s open and sovereignty-driven model introduces its own uncertainties. Open systems may distribute technological power more broadly but may also struggle to mobilize the massive capital required for frontier innovation.

The long-term winner may not be determined by technological performance alone, but by which governance model proves most resilient.


📌 What to Do Now

For AI Companies

  • Develop clear governance frameworks defining acceptable uses of AI systems.
  • Establish internal oversight mechanisms for government partnerships.
  • Maintain transparency with employees, investors, and users regarding defense-related collaborations.

For Policymakers

  • Clarify procurement rules governing AI systems used in national security contexts.
  • Balance national security needs with safeguards protecting civil liberties and democratic oversight.
  • Promote international norms governing AI use in military systems.

For Investors

  • Evaluate AI companies not only on technological performance but also on governance resilience.
  • Monitor exposure to political risk associated with defense contracting.
  • Assess long-term sustainability of business models tied heavily to government procurement.

Artificial intelligence is rapidly becoming one of the most strategically important technologies of the century.

As governments and companies become more deeply intertwined, the central question will not simply be which AI models are most powerful—but which governance systems can sustain innovation, legitimacy, and stability over the long term.

The answer may determine not only the future of the AI industry, but also the balance of technological power in the global economy.

