Federal Preemption Meets State Innovation: The Regulatory Patchwork That Makes Infrastructure-Level Governance Essential

Trump's EO 14365 asserts federal preemption while New York's RAISE Act demands 72-hour incident reporting. Organizations need infrastructure-level governance that works in any jurisdiction.

On December 11, President Trump signed Executive Order 14365, the administration's most aggressive move yet to establish federal dominance over AI regulation. The order creates an AI Litigation Task Force to challenge state AI laws in court, conditions federal broadband funding on states not enforcing AI regulations that conflict with federal standards, and directs the FCC to consider federal AI standards that would preempt state laws entirely.

Eight days later, on December 19, New York Governor Hochul signed the RAISE Act — requiring safety frameworks for frontier models, documented safety protocols, 72-hour incident reporting, and fines up to $3 million for repeat violations. The governors of California, Colorado, and New York all indicated they would continue enforcing their existing AI laws regardless of the executive order.

The message from the states was unambiguous: we're not stopping.

This is not a story about which side is right. It's a story about what happens to every organization that deploys AI across jurisdictions when the rules are actively in conflict.

The patchwork deepens

To understand what EO 14365 means in practice, you need to see it in the context of the regulatory landscape that's been building all year.

In January, Trump's first AI executive order (EO 14179, Article 2) revoked the Biden-era safety framework and signaled a clear innovation-first, regulation-later federal posture. In February, the EU AI Act's first enforcement deadline arrived (Article 5), banning prohibited AI practices like social scoring and untargeted facial recognition scraping. In August, the EU's GPAI rules took effect (Article 27), creating binding obligations for general-purpose AI models placed on the European market. In September, California's SB 53 became the first US state law specifically targeting frontier AI models (Article 34).

Each milestone widened the gap between the US federal approach — minimal regulation, maximum innovation — and the multi-layered reality of state and international rules. EO 14365 doesn't close that gap. It adds another layer of complexity by creating active conflict between federal and state positions.

Consider what a Fortune 500 company deploying AI across the United States and Europe now faces. Federal policy says: move fast, innovate freely, we'll fight the state laws that slow you down. California says: frontier AI models require safety evaluations, and we're not backing down. New York says: 72-hour incident reporting and fines up to $3 million. Colorado says: algorithmic bias testing is mandatory for high-risk decisions, effective 2026. The EU says: all of the above plus comprehensive documentation, human oversight requirements, and conformity assessments for high-risk systems by August 2026.

This isn't regulatory clarity. It's a compliance optimization problem that changes monthly, varies by jurisdiction, and now involves active litigation between levels of government. Think of it as trying to build a house where the building codes change depending on which inspector shows up — and the inspectors are suing each other over whose codes apply.

Why preemption doesn't simplify compliance

The intuitive appeal of federal preemption is simplicity: one set of rules, one compliance framework, one answer to "what do we need to do?" But EO 14365 doesn't actually establish federal standards. It creates mechanisms to challenge state standards while directing agencies to consider what federal standards might look like. The practical effect is a period of uncertainty — potentially years — where state laws remain on the books, enforcement is contested in courts, and organizations have to decide whether to comply with laws that might or might not be enforceable by the time a court reaches a judgment.

For organizations operating internationally, the preemption question is largely moot. The EU AI Act doesn't care about US federal preemption. Any company serving European customers, processing European data, or deploying AI systems that affect European citizens must comply with EU rules regardless of what the US federal government says about state laws. The EU's Digital Omnibus proposal from November may simplify some timelines, but the foundational requirements — risk assessment, documentation, human oversight, incident reporting — remain.

This creates a perverse dynamic: the companies most affected by the US regulatory patchwork are the same companies that already need EU-compliant governance infrastructure. For them, the question isn't whether to build governance infrastructure. It's whether to build it to the lowest common denominator (complying only with the most permissive rules) or to the highest common denominator (complying with everything that might apply).

History provides a clear answer. Previous compliance regimes, from GDPR to SOX to Basel III, resolved in the same direction: organizations that built to the highest applicable standard spent less than those that built to the lowest standard and then retrofitted. The cost of upgrading from minimal compliance to comprehensive compliance consistently exceeds the cost of building comprehensive compliance from the start.

The RAISE Act as harbinger

New York's RAISE Act deserves particular attention because it introduces a requirement that will almost certainly spread to other jurisdictions: 72-hour incident reporting.

When an AI system causes a significant incident — defined broadly enough to cover everything from biased hiring decisions to safety-critical failures — organizations will have 72 hours to report it. Not 72 hours to investigate and report. 72 hours to report, which means having the detection infrastructure already in place to know that an incident occurred and what happened.

This mirrors the EU AI Act's post-market monitoring requirements, where providers of high-risk AI systems must operate continuous monitoring systems and report serious incidents. It also mirrors what cybersecurity regulations already require — CISA's 72-hour reporting requirement for critical infrastructure, GDPR's 72-hour breach notification, and SEC's material incident disclosure rules.

The pattern is clear: regulators are converging on the expectation that organizations can detect, document, and report AI incidents within 72 hours. Meeting this expectation requires infrastructure that most organizations haven't built — not because the technology doesn't exist, but because AI governance has been treated as a compliance exercise (check the boxes once a year) rather than an infrastructure investment (continuous monitoring, automated detection, real-time documentation).
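To make the infrastructure point concrete, here is a minimal sketch in Python of what "the clock starts at detection" implies for record-keeping. Everything in it is hypothetical: the IncidentRecord structure, the system name, and the evidence references stand in for whatever an organization's monitoring stack actually produces, and nothing here is drawn from the text of the RAISE Act or any other statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical window modeled on the 72-hour requirements discussed above.
REPORTING_WINDOW = timedelta(hours=72)

@dataclass
class IncidentRecord:
    """A minimal, jurisdiction-neutral record of an AI incident."""
    system_id: str
    description: str
    detected_at: datetime
    severity: str = "unknown"
    evidence: list[str] = field(default_factory=list)  # references into the audit trail

    @property
    def reporting_deadline(self) -> datetime:
        # The clock starts at detection, so detection latency eats into the window.
        return self.detected_at + REPORTING_WINDOW

    def hours_remaining(self, now: datetime | None = None) -> float:
        now = now or datetime.now(timezone.utc)
        return (self.reporting_deadline - now).total_seconds() / 3600

# Example: an incident flagged by continuous monitoring, with the 72 hours
# counted from the moment of detection rather than the end of an investigation.
incident = IncidentRecord(
    system_id="resume-screener-v4",
    description="Selection-rate gap flagged by the bias monitor",
    detected_at=datetime.now(timezone.utc),
    severity="high",
    evidence=["audit/2025-12-19/run-8841", "monitor/bias/daily-summary"],
)
print(f"Report due by {incident.reporting_deadline.isoformat()} "
      f"({incident.hours_remaining():.1f} hours remaining)")
```

The detail that matters is the deadline property: because the window runs from detection, every hour spent noticing the problem or assembling evidence is an hour subtracted from the time available to report it.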

The companies building incident detection and reporting infrastructure now — regardless of whether a specific state law survives a federal preemption challenge — are building capabilities they'll need everywhere within 18 months.

What infrastructure-level governance means in a patchwork

The regulatory patchwork creates a strong case for governance infrastructure that operates at a level below any specific regulatory requirement — infrastructure that captures what AI does, how it performs, what went wrong, and what was done about it, regardless of which jurisdiction's rules apply.

Think of it as the difference between writing a tax return for one country and maintaining an accounting system. A tax return is specific to a jurisdiction: different forms, different rules, different deadlines. An accounting system captures financial reality in a way that allows you to produce any jurisdiction's required reports from the same underlying data. Organizations don't maintain separate accounting systems for each country they operate in. They maintain one system that captures everything and generates jurisdiction-specific outputs.

AI governance infrastructure works the same way. The underlying capabilities — audit trails of AI decisions, performance monitoring, incident detection, documentation of model behavior — are universal. What changes between jurisdictions is which reports you generate from that data and which thresholds trigger which obligations. A company with comprehensive AI governance infrastructure can generate EU AI Act conformity documentation, RAISE Act incident reports, and California SB 53 safety evaluations from the same underlying system.
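As a rough illustration of the "one system, many reports" idea, here is a short Python sketch. The event log, the field names, and the three report generators are placeholders I am inventing for the example; the real legal definitions of a reportable incident or a required evaluation are jurisdiction-specific and far more detailed than a one-line filter.

```python
from datetime import datetime, timezone
from typing import Callable

# One jurisdiction-neutral event log. The schema is a hypothetical stand-in for
# whatever the underlying governance infrastructure actually records.
AUDIT_LOG: list[dict] = [
    {"ts": datetime(2025, 12, 19, tzinfo=timezone.utc), "system": "credit-model-v2",
     "kind": "incident", "severity": "serious", "summary": "Score drift beyond threshold"},
    {"ts": datetime(2025, 12, 20, tzinfo=timezone.utc), "system": "credit-model-v2",
     "kind": "evaluation", "severity": "info", "summary": "Quarterly safety evaluation passed"},
]

def eu_serious_incident_report(events: list[dict]) -> list[dict]:
    # EU-style post-market monitoring output: serious incidents only.
    return [e for e in events if e["kind"] == "incident" and e["severity"] == "serious"]

def ny_raise_incident_report(events: list[dict]) -> list[dict]:
    # RAISE-style output: all incidents, whatever their severity classification.
    return [e for e in events if e["kind"] == "incident"]

def ca_sb53_evaluation_summary(events: list[dict]) -> list[dict]:
    # SB 53-style output: safety evaluations rather than incidents.
    return [e for e in events if e["kind"] == "evaluation"]

# Each jurisdiction adds a report generator, not a parallel data pipeline.
REPORTS: dict[str, Callable[[list[dict]], list[dict]]] = {
    "eu_ai_act": eu_serious_incident_report,
    "ny_raise": ny_raise_incident_report,
    "ca_sb53": ca_sb53_evaluation_summary,
}

for name, generate in REPORTS.items():
    print(name, generate(AUDIT_LOG))
```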

A company without that infrastructure has to build a separate compliance process for each jurisdiction, which is exactly the kind of bureaucratic overhead that the federal government claims to be trying to prevent.

The convergence thesis

Here's what I believe the next 12 months will reveal: the US regulatory patchwork, the EU AI Act's approaching August 2026 deadline, and the proliferation of state-level AI laws will converge on a common practical requirement. Not a common legal framework — the legal battles will continue for years. But a common infrastructure requirement: organizations need the ability to demonstrate what their AI systems do, verify that they do it correctly, and prove both to any regulator who asks.

The regulators disagree on what the rules should be. They're converging on what the evidence should look like. Audit trails. Performance documentation. Incident records. Human oversight mechanisms. These are infrastructure requirements, not policy positions, and they'll be required by whoever wins the preemption battle.

EO 14365 is an executive order about who makes the rules. But underneath the jurisdiction fight, there's a simpler question: can your organization demonstrate how its AI systems work to any regulator, in any jurisdiction, on any timeline? If yes, the patchwork is manageable. If no, no amount of federal preemption will protect you from the state that didn't get the memo, the EU regulation that doesn't care about US politics, or the incident that requires a 72-hour response you can't produce.