On January 23, 2025, President Trump signed Executive Order 14179, revoking the Biden administration's October 2023 AI Executive Order (EO 14110) and directing federal agencies to remove barriers to AI development. The stated priority: "pro-growth AI policies" over safety measures.
If you're building AI systems in an enterprise context, this creates a strategic question that has nothing to do with politics and everything to do with risk management: what happens to your governance posture when the regulatory floor drops?
What Actually Changed
Let's be precise about what EO 14179 does and doesn't do.
It revokes the previous executive order that had established AI safety testing requirements, watermarking guidelines, and reporting obligations for developers of powerful AI systems. It directs federal agencies to identify and eliminate regulations that "act as barriers to American AI innovation." And it signals a clear policy shift toward innovation-first, regulation-later.
What it doesn't do: it doesn't change the EU AI Act, which is proceeding on its own timeline. It doesn't change state-level AI legislation, which is accelerating. It doesn't change the liability landscape, which is expanding. And it doesn't change the operational risk that AI failures create in your business, which is constant regardless of federal policy.
This distinction matters enormously for any company operating internationally — which, in the context of AI services and data flows, means nearly every enterprise above a certain scale.
The EU AI Act Isn't Optional
While Washington pulls back, Brussels is pushing forward. The first EU AI Act compliance deadline arrives in just nine days, on February 2, 2025, when the AI literacy obligations and the ban on prohibited practices take effect. Violating the prohibited-practices provisions carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
For European companies, this is straightforward: compliance is mandatory regardless of US policy. For American companies with European customers, employees, or operations, the calculus is the same. The EU AI Act applies based on where AI systems are deployed and whose data they process, not where the company is headquartered.
This creates an asymmetry that engineering leaders need to understand: the US regulatory environment just became more permissive, while the EU regulatory environment is becoming more prescriptive. Companies operating in both jurisdictions now face a wider gap between their minimum US obligations and their minimum EU obligations.
The worst response to this asymmetry is to optimize for the lowest common denominator. The best response is to build governance infrastructure that satisfies the strictest requirements you face, because that infrastructure protects you everywhere.
Why This Is an Engineering Problem, Not a Legal Problem
There's a temptation to hand the EO analysis to legal and move on. That would be a mistake.
The reason is that AI governance — the kind that actually works — isn't a legal overlay on an engineering system. It's an engineering capability embedded in your AI infrastructure. Audit trails, verification checks, decision logging, bias detection — these are engineering deliverables, not legal documents.
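To make "engineering deliverable" concrete, here is a minimal sketch of a decision log produced at call time. It assumes a simple JSON-lines sink; the record fields and the logging helper are illustrative, not drawn from any particular framework or standard.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class DecisionRecord:
    """One auditable record per model call (fields are illustrative)."""
    timestamp: float
    model_id: str          # which model and version produced the output
    input_hash: str        # hash of the prompt/features, not the raw data
    output_summary: str    # what the system decided or generated
    verification: str      # result of whatever check ran on the output
    policy_version: str    # which governance rules were in force


def log_decision(model_id: str, raw_input: str, output_summary: str,
                 verification: str, policy_version: str,
                 sink_path: str = "decisions.jsonl") -> DecisionRecord:
    """Build a record for one model call and append it to a JSON-lines sink."""
    record = DecisionRecord(
        timestamp=time.time(),
        model_id=model_id,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output_summary=output_summary,
        verification=verification,
        policy_version=policy_version,
    )
    with open(sink_path, "a") as sink:
        sink.write(json.dumps(asdict(record)) + "\n")
    return record
```

The specific fields matter less than the property: the record is emitted by the system itself at the moment the decision happens, not reconstructed after the fact.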
When the regulatory environment is clear and prescriptive, legal teams can specify what engineering needs to build. When the regulatory environment is ambiguous — as it now is in the US — engineering teams need to make architectural decisions about governance without waiting for legal clarity.
This is uncomfortable for organizations that treat compliance as a checkbox exercise. It's an opportunity for organizations that treat governance as engineering infrastructure.
Consider the parallel to security: no serious engineering team waits for a government mandate to implement authentication, encryption, or access controls. These are baseline architectural decisions driven by operational risk, not regulatory compliance. AI governance is heading in the same direction, and the companies that understand this now will be better positioned regardless of which way the regulatory winds blow.
The Patchwork Problem
EO 14179 creates another challenge that engineering teams will feel before legal teams do: regulatory fragmentation.
With federal regulation pulling back, state-level AI legislation is accelerating. California, Colorado, New York, Illinois, and others are advancing their own AI governance requirements. Each state has different definitions, different thresholds, different reporting obligations.
For engineering teams, this means AI systems deployed across multiple states may need to satisfy different governance requirements depending on jurisdiction. If your governance infrastructure is built as a monolithic compliance layer, you'll need to reconfigure it for each jurisdiction. If it's built as modular infrastructure with configurable rules, you can adapt without rebuilding.
The architecture decision you make today — monolithic compliance vs. modular governance infrastructure — will determine how much pain the patchwork causes you in twelve months.
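To illustrate the modular option, here is a rough sketch in which governance checks are registered per jurisdiction and resolved at call time. The jurisdictions, check names, and example rules are hypothetical placeholders, not legal guidance.

```python
from typing import Callable, Dict, List

# A governance check inspects a model output and returns a list of findings.
GovernanceCheck = Callable[[str], List[str]]

# Hypothetical per-jurisdiction rule sets; the real obligations and
# thresholds come from legal review, not from code.
CHECKS_BY_JURISDICTION: Dict[str, List[GovernanceCheck]] = {}


def register_check(jurisdiction: str, check: GovernanceCheck) -> None:
    """Attach a governance check to one jurisdiction's rule set."""
    CHECKS_BY_JURISDICTION.setdefault(jurisdiction, []).append(check)


def run_checks(jurisdiction: str, model_output: str) -> List[str]:
    """Run every check configured for the deployment's jurisdiction."""
    findings: List[str] = []
    for check in CHECKS_BY_JURISDICTION.get(jurisdiction, []):
        findings.extend(check(model_output))
    return findings


# Same codebase, different rules per deployment (placeholder examples).
register_check("EU", lambda out: [] if "AI-generated" in out else ["missing disclosure"])
register_check("CO", lambda out: [])  # stand-in for a Colorado-specific rule
```

The rules themselves still come from counsel; what engineering owns is the ability to swap them per deployment without touching the rest of the system.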
The Insurance and Liability Angle
There's a dimension to the regulatory pullback that gets less attention but may matter more to CFOs and boards: insurance and liability.
When government sets clear AI governance standards, companies that follow those standards have a defensible position if something goes wrong. They can demonstrate they met the applicable standard of care. When government removes those standards, the standard of care becomes ambiguous — and ambiguity in liability contexts tends to resolve in favor of plaintiffs.
Insurance underwriters are watching this closely. AI liability coverage is already difficult to obtain and expensive to maintain. A regulatory environment that lacks clear standards doesn't reduce the need for governance — it increases the need for demonstrable, auditable governance practices that can serve as evidence of reasonable care.
In practical terms: if you can't show an auditor exactly what your AI system did, why it made a specific decision, and how you verified its output, the absence of a federal mandate won't help you when something goes wrong.
What to Do Now
The executive order changes the political landscape, but it doesn't change the engineering fundamentals. Here's what matters:
Build for the strictest standard you'll face. For most enterprises, that's the EU AI Act. Infrastructure that satisfies EU requirements will satisfy any US requirement that eventually emerges — and protect you in the interim.
Make governance auditable. Whether regulators require it or not, the ability to demonstrate exactly what your AI system did and why is the single most valuable risk management capability you can build. Immutable audit trails, decision logs, and verification records aren't just compliance tools — they're litigation defense and insurance qualification evidence.
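One common way to make an audit trail tamper-evident is to chain each entry to the previous one by hash. The sketch below assumes a plain append-only JSON-lines file and illustrates the property, not any specific logging product.

```python
import hashlib
import json
from typing import Optional


def append_entry(path: str, entry: dict, prev_hash: Optional[str]) -> str:
    """Append an entry whose hash covers its content and the previous
    entry's hash, so silent edits break the chain."""
    payload = json.dumps({"entry": entry, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "a") as log:
        log.write(json.dumps({"entry": entry, "prev": prev_hash,
                              "hash": entry_hash}) + "\n")
    return entry_hash


def verify_chain(path: str) -> bool:
    """Recompute every hash and confirm each link points at the one before it."""
    prev: Optional[str] = None
    with open(path) as log:
        for line in log:
            record = json.loads(line)
            payload = json.dumps({"entry": record["entry"],
                                  "prev": record["prev"]}, sort_keys=True)
            if record["prev"] != prev:
                return False
            if hashlib.sha256(payload.encode()).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
    return True
```

In production you would pair this with write-once storage or periodic external anchoring, but even a simple chain turns "trust our logs" into something an auditor can verify mechanically.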
Treat the vacuum as a window. Regulatory clarity will return, whether through state legislation, EU enforcement, industry standards, or eventual federal action. The window of ambiguity is a window to build infrastructure without the pressure of imminent deadlines. Teams that use this time to establish governance foundations will be ahead when clarity arrives.
Don't assume permanence. Executive orders are policy instruments, not legislation. They change with administrations. Building your governance posture around the assumption that light-touch regulation is permanent is as risky as building it around the assumption that heavy regulation is imminent.
The safest bet — and the most engineering-sound approach — is to build infrastructure that works regardless of the regulatory environment. That means provider-independent, auditable, and configurable governance that can adapt to whatever requirements emerge.
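As a sketch of what provider-independent governance can look like in code, the call path below wraps an abstract completion interface rather than any vendor's SDK. The CompletionProvider protocol and the function signature are assumptions made for illustration.

```python
from typing import Callable, List, Protocol


class CompletionProvider(Protocol):
    """Any model backend; swapping vendors must not touch governance code."""
    def complete(self, prompt: str) -> str: ...


def governed_call(provider: CompletionProvider, prompt: str,
                  checks: List[Callable[[str], List[str]]],
                  audit_sink: Callable[[dict], None]) -> str:
    """Apply the same checks and audit logging no matter which provider runs."""
    output = provider.complete(prompt)
    findings = [finding for check in checks for finding in check(output)]
    audit_sink({"prompt": prompt, "output": output, "findings": findings})
    return output
```

Swapping model vendors then means changing one adapter; the checks and the audit sink stay where they are.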
The Engineering Team's Moment
There's a certain irony in the timing. Just as federal regulation pulls back, the AI models that engineering teams are deploying are becoming more powerful, more autonomous, and more deeply integrated into business-critical workflows. The gap between what AI systems can do and what governance infrastructure can verify is widening.
Someone has to close that gap. In the absence of clear federal guidance, it falls to engineering leaders to decide what "responsible AI deployment" looks like in their organizations. That's a heavy responsibility, but it's also an opportunity to build something durable — governance infrastructure that serves the business regardless of which way policy moves.
The regulatory vacuum is temporary. The infrastructure you build during it is permanent.
