$400 Million for Public AI, €200 Billion for Infrastructure — But Who's Funding Governance?

The Paris AI Action Summit invested massively in AI capabilities. The investment in making AI trustworthy and accountable is still waiting.

The AI Action Summit wrapped up in Paris on February 11, bringing over 1,000 participants from more than 100 countries to the Grand Palais. The headlines focused on the big numbers: a $400 million "Current AI" initiative for public interest applications, a €200 billion EU "InvestAI" program including €20 billion for four AI gigafactories, and ROOST (Robust Open Online Safety Tools), a coalition including Google, Discord, OpenAI, and Roblox building open-source tools to detect and combat child sexual abuse material.

These are meaningful commitments. They also reveal a pattern that should concern anyone thinking about sustainable AI deployment: the investment in building AI and the investment in governing AI are orders of magnitude apart.

The Funding Asymmetry

Let's do the arithmetic. The Paris summit committed €200 billion to AI infrastructure — compute, data centers, training capabilities. It committed $400 million to public interest AI applications. The governance-specific commitments? Harder to quantify, because they were mostly framed as principles, statements, and future working groups rather than funded programs.
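
For concreteness, the back-of-the-envelope version fits in a few lines of Python. The euro-to-dollar rate here is an assumption (roughly where it stood in early 2025), not a figure from the summit:

```python
# Back-of-the-envelope sketch of the Paris funding asymmetry.
# The conversion rate is an assumption, not a summit figure.
EUR_TO_USD = 1.04  # assumed early-2025 exchange rate

invest_ai_usd = 200e9 * EUR_TO_USD   # EU "InvestAI" infrastructure program
current_ai_usd = 400e6               # "Current AI" public interest initiative

print(f"Infrastructure vs. public interest: {invest_ai_usd / current_ai_usd:.0f} to 1")
# -> Infrastructure vs. public interest: 520 to 1
```

And that 520-to-1 ratio compares infrastructure to public interest applications. The governance side has no funded line item to divide by at all.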

This isn't unique to the Paris summit. It reflects a global pattern. Gartner projects worldwide IT spending of $5.61 trillion for 2025, with generative AI driving double-digit growth. That growth is concentrated in model development, compute infrastructure, and application deployment. AI governance infrastructure — the audit trails, verification systems, compliance tooling, and monitoring platforms that make AI trustworthy — receives a fraction of a fraction.

The analogy that comes to mind is building highways without funding traffic enforcement. You can pour billions into road infrastructure, but if nobody invests in the systems that keep traffic safe and orderly, the resulting chaos undermines the value of the infrastructure itself.

What the Summit Didn't Align On

The Paris summit produced commitments, but it also produced conspicuous non-commitments. The US and UK declined to sign the "Statement on Inclusive and Sustainable AI." The UK cited a lack of "practical clarity" — fair enough, given that the statement was heavy on aspiration and light on mechanism. The US objected to multilateralism itself, consistent with the recent executive order's emphasis on national AI dominance over international coordination.

Anthropic CEO Dario Amodei publicly characterized the summit's approach to risk as a "missed opportunity." That's a notable statement from the CEO of a company whose entire brand is built on responsible AI development. When the safety-focused lab says the summit that dropped "Safety" from its name didn't go far enough, the governance gap is real.

The non-alignment matters because AI governance is inherently an international challenge. Models are trained in one jurisdiction, deployed in another, and serve users in dozens more. A governance framework that works only within national borders is a governance framework that doesn't work.

The Enterprise Implication

For companies deploying AI across European operations, the summit's outcomes create a specific planning challenge: massive investment in AI capabilities is coming, accompanied by uncertainty about governance requirements.

The €200 billion InvestAI program will accelerate AI adoption across European enterprises. More compute infrastructure means more AI deployment. More AI deployment means more governance surface area. But the governance frameworks to manage that deployment are still being developed, debated, and refined.

This creates a window — call it 12 to 18 months — where AI deployment will outpace governance clarity. Organizations deploying AI during this window face a choice: build governance infrastructure proactively based on the direction of travel, or wait for prescriptive requirements and scramble to comply.

The direction of travel is clear enough. The EU AI Act is proceeding on schedule. National competent authorities are being designated. Technical standards are being developed. The specifics will evolve, but the general requirements — transparency, auditability, risk management, human oversight — are established.
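
One lightweight way to act on that direction of travel is to track each general requirement against evidence you could actually produce today. The sketch below is hypothetical: the control names and artifacts are illustrative choices, not taken from the AI Act's text or any official checklist.

```python
# Hypothetical sketch: tracking the AI Act's general requirements
# against concrete evidence. Control names and artifacts are
# illustrative, not drawn from the Act or any official checklist.
from dataclasses import dataclass, field

@dataclass
class GovernanceControl:
    requirement: str                # general requirement to satisfy
    evidence: list[str] = field(default_factory=list)  # artifacts producible on request

    def satisfied(self) -> bool:
        # Crude placeholder: a control counts only if some evidence exists.
        return bool(self.evidence)

controls = [
    GovernanceControl("transparency", ["model cards", "user-facing AI disclosures"]),
    GovernanceControl("auditability", ["append-only decision logs"]),
    GovernanceControl("risk management", ["risk register", "incident runbooks"]),
    GovernanceControl("human oversight", []),  # gap: no documented override process yet
]

for control in controls:
    print(f"{control.requirement:<16} {'ok' if control.satisfied() else 'GAP'}")
```

The point isn't the data structure; it's that a gap list like this is cheap to maintain now and expensive to reconstruct under deadline pressure.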

Building governance infrastructure now, aligned with these general requirements, is a defensible strategic choice. It's also considerably cheaper than building it later under deadline pressure.

The Gigafactory Question

The four AI gigafactories funded by the InvestAI program deserve specific attention. These facilities will provide massive compute capacity for European AI development, potentially reducing dependence on US cloud providers for training and inference.

From a governance perspective, this is a double-edged development. More European compute capacity means more data residency options, which simplifies compliance with EU data protection requirements. But it also means more AI development happening closer to home, which increases the governance surface area that European regulators need to monitor.

For enterprises, the gigafactories signal that European AI infrastructure is being built for the long term. Planning your AI governance infrastructure with a similar time horizon — not just for the next compliance deadline but for the next decade of European AI development — is the strategically sound approach.

Where the Governance Investment Needs to Go

If the Paris summit had dedicated even 1% of the €200 billion InvestAI program to governance infrastructure — €2 billion — it would have represented the single largest investment in AI governance in history. What could that fund?

Open-source governance tooling. The AI development community benefits enormously from open-source models and frameworks. AI governance tooling is far less mature. Funded open-source governance infrastructure (audit trail systems, verification frameworks, compliance automation) would lower the barrier to governance adoption across the ecosystem; a minimal sketch of one such primitive follows this list.

Standards development with teeth. ISO/IEC 42001, the management system standard for AI, exists, but adoption is slow because certification infrastructure is limited. Funding accelerated standards development and certification capacity would create the verification mechanisms that the International AI Safety Report identified as essential.

Regulatory capacity building. National competent authorities need technical expertise to evaluate AI systems. Most are currently understaffed and underfunded relative to the complexity of their mandate. Governance funding that builds regulatory capacity makes the entire system more credible.

Research on governance-preserving AI architectures. How do you build AI systems that are inherently more auditable, more transparent, and more verifiable? This is an engineering research question that deserves the same level of investment as model capability research.
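
To make the first item concrete: below is a minimal Python sketch of the kind of primitive open-source audit trail tooling could standardize, a tamper-evident log where each record is hash-chained to its predecessor. Hash chaining is a standard technique; the record fields and model name are illustrative assumptions, not any existing project's format.

```python
# Minimal sketch of a tamper-evident audit trail for AI decisions.
# Hash-chaining is a standard technique; the record fields and model
# name below are illustrative assumptions, not an existing format.
import hashlib
import json
import time

def append_record(trail: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous record's hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    trail.append(body)

def verify(trail: list[dict]) -> bool:
    """Recompute every hash; editing any past record breaks the chain."""
    prev_hash = "0" * 64
    for rec in trail:
        body = {"ts": rec["ts"], "event": rec["event"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

trail: list[dict] = []
append_record(trail, {"model": "loan-scoring-v3", "decision": "deny", "reviewer": "j.doe"})
append_record(trail, {"model": "loan-scoring-v3", "decision": "approve", "reviewer": "a.roe"})
print(verify(trail))  # True; altering any past field makes this False
```

The design choice that matters is append-only verification: anyone holding the trail can recompute the chain, so an auditor doesn't have to trust the operator's database.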

The Gap Will Close — The Question Is How

The governance funding gap is real, but it's not permanent. As AI systems create more visible failures — security breaches, compliance violations, harmful outputs, liability incidents — the investment case for governance infrastructure becomes undeniable. The question is whether the gap closes proactively, through strategic investment in governance alongside capability, or reactively, through regulation and litigation after failures accumulate.

For individual organizations, the choice is simpler. You don't need to wait for summit commitments or government funding to build governance infrastructure. You need to recognize that the AI systems you're deploying today — systems that will become more capable and more autonomous over the coming months — require governance infrastructure that matches their impact.

The commitments made in Paris put €200 billion behind making AI more powerful. The investment in making AI trustworthy and accountable is still waiting. For engineering leaders, that gap is both a risk and an opportunity: the organizations that fill it first will be the ones that enterprises and regulators trust most.