The EU AI Act Gets Specific: What July's Published Instruments Mean for Compliance

The European Commission published GPAI obligation guidelines, documentation templates, and prohibited practices guidance on July 18. The August 2 enforcement deadline is approaching.

The timing is tight: the three instruments arrived just two weeks before August 2, the date on which the GPAI model rules become enforceable.

After months of debate about what the EU AI Act would actually require in practice, the Commission has answered: here are the specific obligations, here are the templates for documentation, and here is the guidance on what's prohibited. The abstract has become concrete.

For every enterprise deploying general-purpose AI models — which, given the adoption rates we've tracked throughout 2025, means most enterprises — July 18 marks the moment when "we'll figure out compliance later" becomes an increasingly untenable position.

What the Three Instruments Contain

The GPAI obligation guidelines clarify what providers of general-purpose AI models must do under Chapter V of the AI Act. This isn't about high-risk AI systems (those obligations arrive in August 2026). This is about the models themselves — the foundation models and general-purpose models that power the AI coding tools, chatbots, content generators, and decision support systems that enterprises deploy.

The documentation templates are perhaps the most operationally significant of the three instruments. Abstract obligations like "provide adequate documentation" become concrete when the Commission specifies exactly what documentation is required, in what format, with what level of detail. These templates define the minimum information that model providers must produce and that downstream deployers can expect to receive.

The prohibited practices guidance clarifies the boundaries established in February when the first enforcement deadline arrived. Certain AI practices — social scoring, emotion recognition in workplaces and education, untargeted facial image scraping, and manipulative AI systems — were banned. The July guidance provides additional detail on how these prohibitions are interpreted and applied.

Together, these three instruments transform the AI Act from a framework into a compliance program with defined requirements, specified deliverables, and approaching deadlines.

Why August 2 Matters

The August 2 deadline is the second major enforcement date in the EU AI Act's phased implementation. The first, on February 2, established the prohibited practices and AI literacy requirements. August 2 makes the GPAI model rules applicable.

Here's what becomes enforceable on August 2. Member states must have designated national competent authorities — the bodies that will enforce the AI Act within each country. EU governance bodies become operational, creating the institutional infrastructure for AI regulation at the EU level. Rules on penalties take effect, meaning violations have defined consequences.

For model providers (OpenAI, Anthropic, Google, Meta, Mistral, and others), this means their GPAI models must comply with Chapter V requirements starting August 2. The documentation templates published on July 18 define what compliance looks like.

For enterprises deploying these models, August 2 is less about immediate obligations — most enterprise-specific obligations don't arrive until August 2026 — and more about the supply chain implications. If your AI provider's models must comply with GPAI rules, and compliance requires specific documentation and technical safeguards, those requirements flow downstream to you as a deployer. You need to understand what your provider is required to deliver, verify that they're delivering it, and factor that documentation into your own compliance planning.

The Documentation Challenge

The documentation templates published on July 18 create a specific, measurable compliance requirement. This is good news for enterprises that have been struggling with the ambiguity of abstract obligations. It's challenging news for enterprises that hoped compliance would be less demanding than it is.

Model documentation under the AI Act is not a marketing data sheet. It includes technical architecture descriptions, training data characteristics, evaluation results, known limitations, intended use cases, and deployment guidance. For general-purpose models, the documentation must be comprehensive enough for downstream deployers to understand the model's capabilities and limitations — and to conduct their own risk assessments.

This creates a documentation chain. The model provider documents the model. The enterprise deployer documents their specific application. The combination creates a compliance record that demonstrates the enterprise understands the model's characteristics and has assessed the risks of their specific use case.
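To make the two-layer chain concrete, here is a minimal sketch of how a deployer might structure such a record. The field names are illustrative and loosely follow the documentation categories described above; they are not the Commission's official template schema.

```python
from dataclasses import dataclass, field


@dataclass
class ProviderModelDoc:
    # Layer one: documentation received from the model provider.
    model_name: str
    provider: str
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: str = ""


@dataclass
class DeploymentRecord:
    # Layer two: the deployer's record for one specific use case,
    # linked to the provider documentation it relies on.
    use_case: str
    provider_doc: ProviderModelDoc
    risk_assessment_done: bool = False

    def chain_complete(self) -> bool:
        # The chain demonstrates understanding of the model only when the
        # provider documentation is substantive and the deployer has run
        # its own risk assessment for this use case.
        return bool(self.provider_doc.known_limitations) and self.risk_assessment_done
```

The point of the structure is the link: the deployer's record is incomplete by definition until it references real provider documentation, which mirrors how the obligations flow downstream.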

For organizations accustomed to the relatively light documentation requirements of most software compliance frameworks, the depth and specificity of AI Act documentation is a meaningful step change. This isn't a checkbox exercise. The templates require substantive technical detail that demands genuine engagement with the model's characteristics — not just an attestation that you've read the vendor's marketing materials.

For enterprises using multiple models — which, based on the provider independence trends we've tracked, is increasingly common — this documentation chain must exist for each model. If you're routing between GPT-4o, Claude Opus 4, and Gemini 2.5 Pro depending on the task, you need documentation covering all three providers, all three models, and all three deployment configurations.

This is where governance infrastructure becomes essential. Manual documentation management — tracking which models are in use, which documentation has been received from providers, which risk assessments have been conducted — doesn't scale across multiple models, multiple use cases, and evolving requirements. Automated governance systems that maintain documentation, track compliance status, and alert when new requirements emerge are the only practical approach for organizations operating at scale.
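The core check such a governance system runs continuously can be sketched in a few lines: compare the set of models in use against the set for which provider documentation has been received, and surface the difference. The model names below come from the routing example above; the checking logic itself is a deliberate simplification of what a production system would do.

```python
def documentation_gaps(models_in_use: set[str], docs_received: set[str]) -> list[str]:
    # Which deployed models still lack provider documentation?
    # A real governance system would also track documentation versions
    # and expiry, not just presence.
    return sorted(models_in_use - docs_received)


gaps = documentation_gaps(
    models_in_use={"GPT-4o", "Claude Opus 4", "Gemini 2.5 Pro"},
    docs_received={"GPT-4o"},
)
# gaps now lists the two models still awaiting GPAI documentation
```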

The Transatlantic Gap Widens Further

The EU's July instruments arrive against the backdrop of the U.S. CAISI rebrand, which we covered last month. While the Commission publishes specific compliance templates and enforcement guidelines, the U.S. has renamed its AI safety institution to focus on innovation.

This divergence creates both risk and opportunity for enterprises.

The risk is regulatory complexity. Enterprises operating in both jurisdictions must comply with EU requirements that have no U.S. equivalent, while monitoring an evolving patchwork of U.S. state-level regulations. California's CPPA finalized regulations on automated decision-making tools just six days after the Commission published its instruments, requiring opt-out rights for employment AI decisions. The White House released "America's AI Action Plan" on July 23 — a policy document emphasizing AI "dominance" through minimal regulation, including an executive order on "Preventing Woke AI in the Federal Government."

The opportunity is competitive differentiation. Organizations that achieve EU AI Act compliance can credibly claim they meet the world's most demanding AI governance standards. In procurement decisions, insurance underwriting, and customer trust assessments, that credibility has commercial value. Swiss and European companies in particular are well-positioned to turn regulatory compliance into a trust advantage — the same dynamic that made GDPR compliance a selling point for European cloud providers in the years after 2018.

What Enterprise Teams Should Do Before August 2

The two weeks between July 18 and August 2 aren't a compliance deadline for most enterprises — your direct obligations under the high-risk provisions don't arrive until August 2026. But they are a preparation window that shouldn't be wasted.

First, audit your AI model inventory. Which general-purpose AI models is your organization using, directly or through third-party integrations? For each model, who is the provider? This is the baseline for understanding your exposure to GPAI rules.
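An inventory audit ultimately produces a structured record like the sketch below. The entries and the grouping helper are illustrative; a real audit would pull from procurement records, API gateway logs, and third-party integration lists rather than a hand-written table.

```python
# Illustrative inventory entries for a hypothetical enterprise.
inventory = [
    {"model": "GPT-4o", "provider": "OpenAI", "via": "direct API"},
    {"model": "Claude Opus 4", "provider": "Anthropic", "via": "coding assistant"},
    {"model": "Gemini 2.5 Pro", "provider": "Google", "via": "third-party SaaS"},
]


def by_provider(entries: list[dict]) -> dict[str, list[str]]:
    # Group deployed models by provider, since GPAI obligations attach
    # to the provider of each model.
    grouped: dict[str, list[str]] = {}
    for entry in entries:
        grouped.setdefault(entry["provider"], []).append(entry["model"])
    return grouped
```

Grouping by provider matters because that is the unit at which you will request GPAI documentation in the next step.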

Second, request GPAI documentation from your providers. The documentation templates now specify what providers must deliver. If your providers haven't yet produced documentation that aligns with these templates, ask when they will. The gap between what's required and what your providers have delivered is your compliance risk.

Third, assess your documentation infrastructure. When your August 2026 obligations arrive, you'll need comprehensive documentation of your AI deployments — models used, risk assessments conducted, safeguards implemented, monitoring results. If you're not capturing this information now, you'll be trying to reconstruct it retroactively, which is both more expensive and less reliable.

Fourth, map your compliance chain. For each AI deployment, trace the compliance requirements from the model provider through your deployment to the end user. Where are the gaps? Where is documentation missing? Where are risk assessments needed? This mapping exercise is the foundation for an August 2026 compliance program, and it's far easier to conduct now — when the pressure is low — than twelve months from now when the deadline is imminent and every competitor is scrambling for the same consulting resources.
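The mapping exercise can be reduced to a checklist trace per deployment. The step names below are a hypothetical checklist mirroring the gaps named above; the AI Act does not define this exact list.

```python
# Hypothetical compliance-chain steps for one deployment.
REQUIRED_STEPS = ("provider_doc", "risk_assessment", "safeguards", "monitoring")


def chain_gaps(deployment: dict) -> list[str]:
    # Trace one deployment against the checklist and return the steps
    # that are missing or incomplete.
    return [step for step in REQUIRED_STEPS if not deployment.get(step)]


gaps = chain_gaps({
    "use_case": "customer support chatbot",
    "provider_doc": True,
    "risk_assessment": False,
})
# gaps flags risk_assessment, safeguards, and monitoring as open items
```

Running a trace like this across every deployment in the inventory is, in essence, the gap map the paragraph above describes.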

The organizations that use this preparation window wisely will have a structural advantage when the high-risk obligations arrive. They'll know their model inventory, have documentation chains in place, and understand the gaps. The organizations that treat July 18 as someone else's problem will discover in 2026 that it was always their problem — they just chose not to look.

Looking Ahead

July 18 was when the EU AI Act stopped being a framework and started being a compliance program. The instruments are published. The templates are defined. The August 2 enforcement date is approaching.

For enterprises that have been waiting for clarity before acting, the clarity has arrived. The compliance requirements are now specific, documented, and enforceable. The only remaining variable is whether your organization starts building the infrastructure to meet them now — or tries to build it under deadline pressure in 2026.

The Commission has shown its cards. The compliance clock is running.