Build Once, Comply Twice: The EU AI Act’s Next Phase is Around the Corner

The European Union has kicked off a new era of AI regulation. With the Artificial Intelligence Act (“the Act”), which entered into force on August 1, 2024, the EU has established the world’s first comprehensive legal framework for artificial intelligence. This landmark legislation is already reshaping how AI is developed, deployed, and governed—not just within Europe, but globally. The Act’s first substantive obligations began to apply in early 2025, and the next critical milestone—imposing sweeping obligations for General Purpose AI (“GPAI”) models and new governance structures—takes effect on August 2, 2025. For AI developers, providers, and deployers—especially those operating across borders—this milestone marks a pivotal shift from preparation to implementation. Understanding what’s coming next is essential for staying compliant and competitive in an increasingly regulated AI landscape.

Phased Implementation

To ease the transition into this new regulatory regime, the EU AI Act is being rolled out in phases. Each stage introduces new obligations, giving organizations time to adapt their operations, governance, and technical infrastructure.

  • February 2, 2025 marked the first major compliance deadline. From this date, the Act’s prohibitions on AI practices posing unacceptable risk—outlined in Article 5—became enforceable. These include bans on manipulative AI techniques, exploitative systems targeting vulnerable groups, and real-time biometric surveillance in public spaces (with narrow exceptions). Organizations also became responsible for ensuring that staff interacting with AI systems possess adequate AI literacy.
  • August 2, 2025 ushers in the next wave of obligations, with a focus on governance and GPAI models. This phase activates the European AI Office and the European Artificial Intelligence Board (“EAIB”), which will oversee enforcement and coordination across member states. National authorities must also be designated by this date. GPAI model providers—particularly those offering large language models (“LLMs”)—will face new horizontal obligations, including transparency, documentation, and copyright compliance. For GPAI models deemed to pose systemic risk, additional requirements such as risk mitigation, incident reporting, and cybersecurity safeguards will apply.
  • August 2, 2026 will see the broader framework come into full effect, including obligations for most high-risk AI systems.
  • August 2, 2027 marks the final compliance deadline, when obligations for high-risk AI systems that serve as safety components of regulated products under Article 6(1) become fully enforceable.

What is the EU AI Act?

Risk-Based Classifications

The EU AI Act (Regulation (EU) 2024/1689) establishes the world’s first comprehensive legal framework for regulating artificial intelligence. The Act introduces a risk-based approach to AI regulation and aims to ensure that AI systems placed on the EU market are safe and respect fundamental rights. The Act classifies AI systems according to the associated risk:

  • Unacceptable Risk. AI practices deemed a clear threat to safety, livelihood, or rights are prohibited under Article 5. This includes systems that employ subliminal techniques to manipulate behavior, exploit vulnerabilities of specific groups, or implement social scoring by public authorities. Additionally, AI systems used for real-time remote biometric identification in publicly accessible spaces are banned, with certain exceptions for law enforcement purposes. Many of these prohibitions went into effect on February 2, 2025.
  • High Risk. High-risk AI systems are those involved in critical functions such as remote biometric identification, infrastructure safety, education, employment, access to private and public services, law enforcement, border control, and the administration of justice and democratic processes. Additionally, an AI system is considered high risk if it is intended for use as a safety component in a product and that product is required to undergo a third-party conformity assessment.
  • Limited Risk. AI chatbots generally fall into the limited risk category and must meet certain transparency obligations. Providers must make users aware that they are interacting with a machine and ensure AI-generated content is identifiable.
  • Minimal or No Risk. The vast majority of AI systems currently used in the EU fall into this category. These systems, such as AI-enabled video games or spam filters, are not subject to additional legal obligations under the Act.

General Purpose AI Models and Systems

The Act distinguishes between GPAI models and GPAI systems. A GPAI model is the underlying foundation or base model that may be integrated into an AI system. Articles 53 through 55 set forth the regulatory obligations for the model provider. A GPAI system refers to an AI system built using a GPAI model. This may include fine-tuning, additional functionality, or integration into specific applications. If the GPAI system is deployed in a high-risk context, such as those listed in Annex III, it may be subject to the full suite of obligations for high-risk AI systems under the Act. Regulatory obligations for GPAI systems fall on the provider or deployer of the system.

The Next Milestone: GPAI Obligations Kick In on August 2, 2025

Slated for implementation on August 2, 2025, Chapter V of the Act sets forth the classification and compliance requirements for GPAI models. Article 3 of the Act defines a GPAI model as an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. If a model that can generate text and/or images uses training compute greater than 10^22 floating point operations (FLOP), the AI Office will presume that the model is a GPAI model; a back-of-the-envelope method for estimating training compute against this threshold is sketched after the list below. GPAI model providers are subject to transparency obligations. Under Article 53, GPAI model providers must:

  • prepare and update technical documentation for the model, including its training and testing process and the results of its evaluation
  • prepare information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI system
  • establish a policy for compliance with EU copyright law
  • publish a sufficiently detailed summary of the content used for training, in accordance with the AI Office template
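
For providers gauging whether the 10^22 FLOP presumption could apply, training compute can be roughly estimated up front. The sketch below is a minimal illustration in Python, assuming the widely used rule of thumb that dense transformer training costs roughly 6 × parameters × training tokens in FLOP; both the approximation and the example figures are our assumptions, not a methodology prescribed by the Act or the AI Office.

```python
# Back-of-the-envelope training-compute estimate for a dense transformer.
# The ~6 * N * D heuristic (N = parameters, D = training tokens) is a common
# engineering approximation, not the Act's prescribed methodology.

GPAI_PRESUMPTION_FLOP = 1e22  # training compute triggering the GPAI presumption

def estimate_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6.0 * n_parameters * n_tokens

# Hypothetical example: a 7B-parameter model trained on 2T tokens.
flop = estimate_training_flop(7e9, 2e12)
print(f"Estimated training compute: {flop:.2e} FLOP")  # ~8.40e+22
if flop > GPAI_PRESUMPTION_FLOP:
    print("Above 10^22 FLOP: presumed to be a GPAI model")
```

On these assumed figures, the estimate lands around 8.4 × 10^22 FLOP, comfortably above the presumption threshold.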

Furthermore, Article 51 defines criteria for determining if a GPAI model should be classified as a GPAI model with systemic risk, requiring additional transparency and oversight. GPAI models with systemic risk are those which (a) have high impact capabilities based on benchmarks or (b) are assessed by the Commission as having high impact capabilities based upon the criteria set forth in Annex XIII. The Act applies a presumption of systemic risk to a GPAI model when the cumulative amount of compute used for its training is greater than 10^25 FLOP. In addition to the above obligations for GPAI models, GPAI models with systemic risk must also (a sketch combining the two compute presumptions follows this list):

  • perform state of the art model evaluation in accordance with standardized protocols and tools, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks
  • assess and mitigate possible systemic risks, including their sources
  • track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national authorities without undue delay
  • ensure an adequate level of cybersecurity protection
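
Taken together, the 10^22 and 10^25 FLOP presumptions suggest a simple first-pass triage. The hypothetical helper below maps an estimated training compute onto the applicable obligation set; the thresholds come from the Act, but the tier labels are ours, both presumptions are rebuttable, and compute is only one trigger alongside the benchmark and Annex XIII criteria.

```python
GPAI_PRESUMPTION_FLOP = 1e22  # presumed GPAI model
SYSTEMIC_RISK_FLOP = 1e25     # presumed GPAI model with systemic risk (Article 51)

def compute_presumption_tier(training_flop: float) -> str:
    """Map an estimated training compute onto the Act's two compute-based
    presumptions. High-impact capabilities shown by benchmarks, or a
    Commission assessment under Annex XIII, can also trigger the systemic
    risk classification regardless of compute."""
    if training_flop > SYSTEMIC_RISK_FLOP:
        return "presumed GPAI model with systemic risk (Articles 53 and 55)"
    if training_flop > GPAI_PRESUMPTION_FLOP:
        return "presumed GPAI model (Article 53)"
    return "no compute-based presumption (case-by-case assessment)"

print(compute_presumption_tier(8.4e22))  # presumed GPAI model (Article 53)
print(compute_presumption_tier(3.0e25))  # presumed GPAI model with systemic risk
```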

Codes of Practice

Article 56 outlines the Codes of Practice (“the Code”), which are expected to enter into force on August 2, 2025. The Code is a set of guidelines for ensuring compliance with the Act between the time when GPAI model provider obligations come into effect in August 2025 and the adoption of harmonized European standards, expected by August 2027. The Code has three sections: Transparency, Copyright, and Safety and Security. Under the Transparency Section, Code signatories commit to drawing up and keeping up-to-date model documentation, providing relevant information to downstream providers and to the AI Office upon request, and ensuring the quality, security, and integrity of the documented information. The Copyright Section commits signatories to drawing up, keeping up-to-date, and implementing a copyright policy. The copyright policy must ensure that signatories only reproduce and extract lawfully accessible copyright-protected content when web-crawling, comply with rights reservations, mitigate the risk of producing copyright-infringing outputs, designate a point of contact, and allow for the submission of complaints concerning non-compliance.

The Safety and Security Section only applies to GPAI models with systemic risk. Signatories commit to adopting and implementing a Safety and Security Framework that will detail risk assessment, mitigation, and governance to keep systemic risks within acceptable levels. Signatories must assess and mitigate risks along the entire lifecycle of the model, identify and analyze systemic risks for severity and probability, and provide risk acceptance criteria. Signatories must report their implementation of the Code through Safety and Security Model Reports for each GPAI model with systemic risk. Additional obligations include allocating compliance responsibility internally, carrying out independent external assessments, implementing serious incident monitoring, adopting non-retaliation protections, notifying the AI Office at specific milestones, and documenting and publishing information relevant to the public’s understanding of systemic risks from GPAI models.

The Code is not legally binding, but GPAI providers can rely on the Code to demonstrate compliance with the GPAI model provider obligations in Articles 53 and 55 until harmonized standards are developed. Adherence to an endorsed Code creates a presumption of compliance with the applicable obligations. Providers that do not adhere to the Code must instead demonstrate compliance with the Article 53 and 55 obligations via other adequate, effective, and proportionate means. They may also face more requests for information and for access to conduct model evaluations.

Extraterritorial Reach

One of the defining features of the EU AI Act is its extraterritorial scope. The Act applies not only to entities established within the European Union, but also to non-EU organizations whose AI systems or outputs are used within the EU. This broad jurisdictional reach reflects the EU’s intent to ensure that AI systems affecting its internal market adhere to its regulatory standards, regardless of origin.

Specifically, the Act applies to:

  • Providers that place AI systems or GPAI models on the EU market or make them available within the EU, irrespective of where the provider is established.
  • Deployers that use AI systems within the EU, even if the system was developed or supplied from outside the Union.
  • Entities whose AI-generated outputs are used in the EU, regardless of where the system producing those outputs is located or operated.

This means that a company based outside the EU—such as a U.S. developer offering an AI-powered recruitment tool—may be subject to the Act if the tool is used to evaluate candidates within the EU. Similarly, a GPAI model trained and hosted abroad may fall within the Act’s scope if it is integrated into applications accessible to EU users.

For multinational organizations, this extraterritorial application underscores the importance of assessing EU exposure early and aligning global AI governance practices with the Act’s requirements. Failure to do so could result in significant compliance risks, including administrative fines and reputational harm.

Strategic Considerations

Failure to comply with obligations under the Act may result in administrative fines of up to €35 million or 7% of global annual revenue for the preceding financial year, whichever is greater. For U.S. AI developers, the Act creates a regulatory perimeter that must be considered in parallel with U.S. legal and ethical frameworks. To avoid the significant financial risk of non-compliance, firms should consider taking several forward-looking actions now:

  • Map EU Exposure. Identify which AI models or services are accessed by EU-based users or entities. This includes analyzing API usage logs, reseller agreements, and customer base composition to determine exposure (a minimal log-filtering sketch follows this list).
  • Classify AI Systems. Undertake a structured risk assessment of each AI system to determine which classification it falls into. A separate analysis should be performed for GPAI models to determine whether they are GPAI models with systemic risk.
  • Review Model Development Practices. For companies offering GPAI models—especially models above the 10^25 FLOP compute threshold—prepare to disclose training data summaries and implement robust risk mitigation strategies. This could require developing new documentation and assurance capabilities that go beyond current industry norms.
  • Plan for Internal Governance. Create or strengthen compliance functions with AI-specific expertise. Legal, product, and engineering teams should collaborate to build risk management and documentation processes that anticipate audit or enforcement scrutiny.
  • Appoint an EU Representative Early. Even before formal enforcement of certain provisions begins, companies should identify and onboard an authorized representative to ensure a smooth transition into compliance.
  • Monitor EU Guidance and Harmonized Standards. The Act anticipates the adoption of harmonized technical standards and secondary guidance documents. Staying engaged with European standard-setting bodies and regulators will be essential to understanding how the law will be applied in practice.
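
As a concrete illustration of the exposure-mapping step above, a team might begin by counting API traffic that originates in EU member states. The sketch below assumes hypothetical log records that already carry an ISO 3166-1 country code; a production pipeline would add IP geolocation, contract and reseller data, and legal review.

```python
from collections import Counter

# ISO 3166-1 alpha-2 codes for the 27 EU member states.
EU_MEMBER_STATES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SE", "SI", "SK", "ES",
}

def summarize_eu_exposure(log_records: list[dict]) -> Counter:
    """Count API requests per EU member state from hypothetical records
    shaped like {"endpoint": ..., "country": ...}."""
    return Counter(
        rec["country"]
        for rec in log_records
        if rec.get("country") in EU_MEMBER_STATES
    )

# Hypothetical sample records.
logs = [
    {"endpoint": "/v1/chat", "country": "DE"},
    {"endpoint": "/v1/chat", "country": "US"},
    {"endpoint": "/v1/embeddings", "country": "FR"},
]
print(summarize_eu_exposure(logs))  # Counter({'DE': 1, 'FR': 1})
```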

For U.S. companies, early alignment with the Act’s risk management and transparency requirements could present an opportunity for a competitive advantage as regulatory regimes converge globally. With its emphasis on documentation, risk mitigation, and model transparency, the EU AI Act could become a global regulatory template. Companies may choose a Europe-facing approach to AI compliance, making EU standards the baseline for product development and risk governance. This could streamline compliance in other jurisdictions where AI regulations are quickly emerging.

Vinson & Elkins is well-equipped to help clients navigate the intellectual property and technical challenges arising under the EU AI Act. The Act’s transparency, documentation, and copyright-related obligations—particularly for general-purpose AI models—raise complex IP considerations. These may include questions about ownership and licensing of training data, compliance with EU copyright laws when web-scraping, risks of generating infringing outputs, and obligations to disclose training data sources. Providers may also need to evaluate whether existing licenses or content use practices are sufficient under the Act’s new standards. Our team can assist in assessing these emerging risks, developing policies to mitigate IP exposure, and aligning AI development practices with evolving regulatory expectations across jurisdictions.

This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.