Day One of the EU AI Act: What's Actually Enforced Now and What's Still Coming

The first EU AI Act deadline has passed. Specific obligations are now enforceable, specific practices are banned, and the clock is running on everything else.

February 2, 2025 came and went. The first major compliance deadline of the EU AI Act is now behind us — and if your LinkedIn feed is any indication, you'd think either the sky fell or nothing happened at all.

Neither is true. What actually happened is more nuanced and more important than either narrative suggests. Specific obligations are now enforceable. Specific practices are now banned. And the clock is running on everything else.

Here's what changed on day one, what hasn't changed yet, and what every organization deploying AI in Europe needs to understand about the eighteen months ahead.

What's Enforceable Now

The February 2 deadline activated two categories of requirements under the EU AI Act.

The first is AI literacy obligations. Every organization deploying AI systems must ensure that personnel involved in AI operations have sufficient knowledge and competence. This isn't a suggestion — it's a legally enforceable requirement. The definition of "sufficient" is deliberately flexible, but the obligation is not.

In practical terms, this means that if your team is deploying AI systems and can't demonstrate baseline competence in understanding how those systems work, what risks they present, and how to operate them responsibly, you're exposed. Not tomorrow. Now.

The second category is prohibited practices. As of February 2, the following are banned across the EU:

Social scoring systems — AI that evaluates individuals based on social behavior or personality characteristics for purposes of treating them unfavorably.

Emotion recognition in workplaces and educational settings — AI that infers employees' or students' emotional states and uses that information to influence treatment.

Untargeted facial image scraping — building facial recognition databases through indiscriminate collection of images from the internet or surveillance footage.

Manipulative AI systems — AI specifically designed to distort behavior in ways that cause harm.

These prohibitions carry penalties of up to €35 million or 7% of global annual revenue, whichever is higher. For a company with €500 million in revenue, that's a potential €35 million exposure. For a company with €1 billion, it's €70 million.
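To make the "whichever is higher" rule concrete, here is a minimal sketch of the ceiling calculation in Python. The figures mirror the examples above; actual fines are set case by case by national authorities and can be far lower.

```python
# Maximum penalty ceiling for prohibited practices: the greater of a fixed
# EUR 35 million or 7% of global annual revenue.
FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_penalty(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine for a prohibited-practice violation."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_revenue_eur)

print(f"{max_penalty(500_000_000):,.0f}")    # 35,000,000 -> the fixed cap dominates
print(f"{max_penalty(1_000_000_000):,.0f}")  # 70,000,000 -> the 7% cap dominates
```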

What's Not Enforced Yet (But Is Coming)

The February deadline is the first of several. Here's the timeline that matters:

August 2, 2025: Rules for general-purpose AI models become applicable. This includes transparency obligations, documentation requirements, and obligations for providers of models with systemic risk. National competent authorities must be designated, and EU governance bodies become operational. Penalty frameworks take effect.

August 2, 2026: The full high-risk AI system requirements become applicable. This is the big one — comprehensive obligations for AI systems used in employment, credit scoring, law enforcement, critical infrastructure, and other high-impact domains. Technical documentation, conformity assessments, post-market monitoring, and incident reporting all kick in.

The August 2026 deadline is eighteen months away. That sounds like plenty of time until you consider what it actually requires: classifying all your AI systems, conducting risk assessments, building technical documentation, implementing monitoring infrastructure, establishing incident reporting processes, and potentially undergoing third-party conformity assessments.

Eighteen months for a compliance infrastructure project of this scope is tight. For organizations that haven't started, it's very tight.

The AI Literacy Obligation Is More Significant Than It Sounds

The literacy requirement deserves more attention than it's getting. On its surface, it seems straightforward: make sure your people understand AI. In practice, it creates a cascade of organizational requirements.

Who counts as "personnel involved in AI operations"? In most modern enterprises, that's not just the AI team. It's developers using AI coding assistants. Product managers specifying AI features. Customer service teams using AI-powered tools. Marketing teams deploying AI for content and analytics. Legal teams reviewing AI-related contracts. Risk teams evaluating AI deployments.

What constitutes "sufficient knowledge and competence"? The regulation leaves this deliberately open, which means organizations need to make — and document — their own determination. That documentation becomes evidence in any enforcement action.

How do you demonstrate compliance? Training records, competency assessments, ongoing education programs. The organizations that will be best positioned are those that can show a systematic approach to AI literacy — not a one-time slide deck, but an ongoing program with measurable outcomes.

This is the kind of requirement that sounds soft but has teeth. When a regulator investigates an AI incident, one of the first questions will be: "What training did the personnel involved have?" If the answer is "none" or "we can't demonstrate it," the liability exposure compounds.

The UK's Parallel Move Signals a Trend

While the EU AI Act hit its first deadline, the UK was making its own moves. On February 14, the UK AI Safety Institute was rebranded as the "AI Security Institute" at the Munich Security Conference. Technology Secretary Peter Kyle announced a shift toward focusing on "serious AI risks with security implications" — chemical and biological weapons, cyber-attacks, and fraud — while explicitly deprioritizing bias concerns.

This rebranding matters for two reasons. First, it signals that even in jurisdictions taking a different regulatory approach than the EU, AI governance is being institutionalized. The form differs — safety vs. security framing — but the direction is convergent. Second, the focus on security implications aligns with the infrastructure-level approach to AI governance: audit trails, decision logging, and system integrity become security tools, not just compliance mechanisms.

For organizations operating across both EU and UK jurisdictions, the practical implication is that governance infrastructure needs to serve multiple regulatory philosophies simultaneously. Documentation and audit trails that satisfy EU AI Act requirements should also support UK security-oriented evaluations.

What to Do Before August

The next deadline — August 2, 2025, when general-purpose AI model rules become applicable — is six months away. Here's what engineering and compliance teams should prioritize.

Conduct an AI system inventory. You can't govern what you can't see. Document every AI system in your organization, including third-party AI embedded in tools your teams use. Classify each system's risk level under the EU AI Act framework. This inventory is the foundation for everything else.
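What an inventory record captures matters as much as having one. Here is a minimal sketch of one possible schema in Python; the field names, risk tiers, and the vendor and system names are illustrative assumptions, not anything mandated by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"        # Article 5 practices: must not be deployed at all
    HIGH_RISK = "high_risk"          # high-impact domains: full obligations from August 2026
    LIMITED_RISK = "limited_risk"    # transparency obligations only
    MINIMAL_RISK = "minimal_risk"

@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable team or individual
    vendor: str                      # "in-house" for systems you build yourself
    purpose: str                     # intended use, in plain language
    risk_tier: RiskTier
    embedded_in: list[str] = field(default_factory=list)  # tools or products it ships inside
    last_reviewed: str = ""          # ISO date of the most recent classification review

inventory = [
    AISystemRecord(
        name="resume-screening-assistant",
        owner="talent-acquisition",
        vendor="ExampleVendor",          # hypothetical third-party vendor
        purpose="Ranks incoming applications for recruiter review",
        risk_tier=RiskTier.HIGH_RISK,    # employment use cases are a high-risk domain
        embedded_in=["applicant tracking system"],
        last_reviewed="2025-02-02",
    ),
]

# Everything classified high-risk is what the August 2026 work plan hangs off.
high_risk = [s for s in inventory if s.risk_tier is RiskTier.HIGH_RISK]
```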

Start building your technical documentation. The August deadline will require documentation for general-purpose AI models. But the documentation requirements for high-risk AI systems in August 2026 are far more extensive. Starting now means you're building incrementally rather than scrambling later.

Implement an audit trail. If there's one infrastructure investment that serves every deadline on the EU AI Act timeline, it's a comprehensive, immutable audit trail of AI system activity. Decision logs, input/output records, error traces, and verification results — these serve literacy demonstrations, prohibited practice compliance, GPAI transparency, and high-risk system monitoring.

Think of it as building the foundation before the walls. The audit trail supports every subsequent compliance requirement, regardless of how the implementing regulations evolve.
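As a concrete illustration, here is a minimal sketch of a hash-chained, append-only audit log in Python. The event types and field names are illustrative rather than prescribed by the Act, and a production system would add durable storage, signing, and retention controls. The point it demonstrates is the "immutable" property: each entry commits to the previous one, so after-the-fact edits are detectable.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log where each entry commits to the previous entry's hash."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, event_type: str, payload: dict) -> dict:
        prev_hash = self._entries[-1]["entry_hash"] if self._entries else "genesis"
        entry = {
            "timestamp": time.time(),
            "event_type": event_type,   # e.g. "decision", "input", "error", "verification"
            "payload": payload,
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered or reordered."""
        prev_hash = "genesis"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
                return False
            prev_hash = entry["entry_hash"]
        return True

trail = AuditTrail()
trail.record("decision", {"system": "credit-scoring-model", "output": "approve", "score": 0.91})
trail.record("verification", {"system": "credit-scoring-model", "check": "bias-monitor", "passed": True})
assert trail.verify()
```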

Establish your AI literacy program. The literacy obligation is already enforceable. Design a program that covers the personnel categories in your organization, document participation and competency, and plan for ongoing updates as the regulatory landscape evolves.
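By way of illustration, here is a minimal sketch of what documented participation and competency could look like as data. The schema, role names, modules, and scoring are assumptions for the example, not requirements drawn from the Act; the point is demonstrable, queryable evidence of an ongoing program.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    person: str
    role: str                  # e.g. "developer", "product manager", "customer service"
    module: str                # e.g. "AI risk fundamentals", "responsible deployment"
    completed_on: date
    assessment_score: float    # whatever measurable outcome the organization defines
    refresher_due: date        # an ongoing program, not a one-time slide deck

records = [
    TrainingRecord("A. Developer", "developer", "AI risk fundamentals",
                   date(2025, 1, 20), 0.88, date(2026, 1, 20)),
]

# Evidence a regulator might ask for: who in a given role has been trained, and when.
developers_trained = [(r.person, r.completed_on) for r in records if r.role == "developer"]
```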

Designate internal ownership. The August deadline requires member states to designate national competent authorities. Your organization needs its own equivalent — a clear owner for AI governance who can interface with regulators, coordinate compliance activities, and make decisions about risk classification and mitigation.

The Strategic Frame

Here's the frame that separates organizations treating the EU AI Act as a compliance burden from those treating it as a strategic opportunity.

The compliance burden frame says: "We have to do this. What's the minimum we can get away with? How do we check the boxes most efficiently?"

The strategic opportunity frame says: "We're going to build AI governance infrastructure that satisfies the EU AI Act and also makes our AI systems more reliable, more trustworthy, and more valuable. The regulation is the floor, not the ceiling."

Organizations in the second frame will build infrastructure that serves double duty — satisfying regulators while also catching the silent failures, context losses, and quality regressions that make AI systems unreliable in production. They'll treat February 2 not as a deadline they survived, but as the starting gun for a sustained infrastructure investment that compounds in value over time.

The EU AI Act isn't going away. The deadlines aren't moving. And the requirements will only get more specific as implementing guidance is published.

Day one is done. The real work starts now.