The EU Takes Another Step Towards Regulating Artificial Intelligence

The EU is one step closer to approving regulation of AI systems that, together with China's rules governing AI design, would be among the first of its kind. On June 14, 2023, the European Parliament voted overwhelmingly to adopt the Artificial Intelligence Act (“AI Act”), along with numerous amendments.1

Draft rules were initially proposed in April 2021 by the European Commission (“EC”), the executive body of the EU.2 Now that the European Parliament has adopted its version of the new rules, it will begin three-party negotiations with the European Commission and the Council of the European Union. These three EU institutions aim to reach an agreement on the final rules by the end of 2023.3

A Risk-Based-On-Use Tiered Compliance Framework

The AI Act, as amended, continues to propose a tiered regulatory framework based on the risk posed by each use of AI. The highest risk tier is reserved for uses considered to pose an “unacceptable risk” to society, including scraping images from social media and other Internet sites to build facial recognition databases, social credit scoring, real-time facial recognition technology, predictive policing, and emotion recognition in governmental, educational, and employment contexts. These uses are banned outright.

Uses of AI that are considered “high-risk,” such as uses in aviation, vehicles, medical devices, and eight other specifically enumerated categories, are permitted, but subject to heavy regulation. Operators will need to register their AI systems in an EU-wide database and will be subject to extensive regulatory requirements around risk management, transparency, human oversight, and cybersecurity, among others.

Uses of AI that are considered “limited risk,” such as systems that interact with humans (like chatbots) and AI systems that could produce “deepfake” content, would be subject to a limited set of transparency obligations. Uses of AI that do not fall into any of the prior categories are considered “low or minimal risk” and are not yet subject to any regulation.4

Generative AI & General Purpose AI Systems

The extraordinary rise of general-purpose generative AI over the last year threw a wrench in the EU’s otherwise tightly calibrated framework. Large Language Models, like OpenAI’s GPT models (including ChatGPT), and generative image models like Midjourney are applicable to a wide array of uses and thus could potentially fall into any category depending on the use to which they are put. Complicating matters further, most providers of these technologies do not primarily intend to make products themselves, but rather to provide their models to other companies for incorporation into downstream uses and applications. Thus, the companies actually putting these models to productive use may lack the visibility into, or control over, how those “foundation models” operate that would be needed to comply with the EU regulatory framework applicable to their use case.

Despite these challenges, the European Parliament adopted numerous amendments relating to general-purpose AI foundation models, categorizing them as “high-risk” systems and proposing that they be subject to additional requirements, including that providers:

  • demonstrate that appropriate measures have been taken to mitigate risks to the public from the model and document any remaining or unmitigable risks;
  • disclose that content was generated by AI;
  • prevent the model from generating content that is illegal under EU law; and
  • publish summaries of any copyrighted data used for training the AI system.5

If a company fails to comply with these regulations, the draft rules impose significant penalties ranging from 2% to 7% of a company’s total worldwide revenue.6

Regulation May Be Coming to the US

In the U.S., some state and local regulation has been adopted for “automated decision-making systems,” a concept that is broader than — but includes — AI systems. The first such regulation was New York City’s Automated Employment Decision Tool Law (AEDT), which will go into effect on July 5, 2023 and restricts the use of automated decision-making in employment.7 Similarly, the California Privacy Protection Agency is actively drafting regulations under the California Privacy Rights Act (CPRA) to regulate automated decision-making systems in California.8

In contrast, the U.S. federal government has not taken any significant steps toward following the EU’s lead and adopting actual legislation to address AI systems. Nonetheless, both the White House and Congress appear interested in the topic. On October 5, 2022, the Biden administration released its “Blueprint for an AI Bill of Rights,” which proposed a broad set of goals for potential AI regulation.9 Similarly, on Capitol Hill, the U.S. Senate held a hearing on AI regulation on May 16, 2023, and on June 13, 2023, the Senate Subcommittee on Human Rights and the Law convened a hearing on the impact of AI on human rights. But despite these early moves, no comprehensive draft legislation has been proposed. Thus, it remains to be seen what steps, if any, the U.S. government will take to regulate AI.10

What This Means for You

The EU’s AI Act is still far from final. Several steps remain in the process, including three-way negotiations to determine the final text of the regulation and then final implementation by the national parliaments of EU member states. Nonetheless, companies making major investments in AI technologies should watch these developments closely. If the current version of the regulation were adopted, the regulatory requirements on high-risk models (including generative AI foundation models) may be so burdensome as to make products impractical or unprofitable. Indeed, Google has already prohibited the use of its Bard large language model in the EU, presumably out of concern over the EU’s current and future regulatory frameworks.11

Still, the EU’s legislative process is a lengthy one, so national governments likely will not begin implementing and enforcing the final regulations until late 2024 at the earliest. We at V&E will continue to monitor these events closely, as well as other efforts to adopt regulations governing AI systems in the United States.

1 European Parliament Press Release, MEPs Ready to Negotiate First-ever Rules for Safe and Transparent AI (June 14, 2023) (available at: https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai).

2 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final (Apr. 21, 2021) (available at: https://artificialintelligenceact.eu/the-act/).

3 European Parliament, EU AI Act: First Regulation on Artificial Intelligence (last updated on June 14, 2023) (available at: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence).

4 See generally European Parliament, Artificial Intelligence Act (January 14, 2022) (available at: https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)698792); see also European Parliament, Texts Adopted Artificial Intelligence Act, COM(2021)0206 (June 14, 2023) (available at: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html) [hereinafter Texts Adopted by European Parliament].

5 Texts Adopted by European Parliament at 200–01.

6 Id. at 306–08. If an individual fails to comply with the regulations, the rules impose a fine ranging from €5,000 to €40,000.

7 Rules of the City of NY Dep’t of Consumer and Worker Protection, Automated Employment Decision Tools (Updated) (effective July 5, 2023) (available at: https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-updated/).

8 See California Privacy Protection Agency, News & Announcements, CPPA Issues Invitation for Preliminary Comments on Cybersecurity Audits, Risk Assessments, and Automated Decisionmaking (February 10, 2023) (available at: https://cppa.ca.gov/announcements/).

9 White House, Office of Science & Technology Policy, Blueprint for an AI Bill of Rights (Oct. 5, 2022) (available at: https://www.whitehouse.gov/ostp/ai-bill-of-rights/).

10 On June 14, 2023, however, a bipartisan bill was introduced in the U.S. Senate that would allow for lawsuits against social media companies for claims based on generative AI technology, including “deepfake” AI-generated photos or videos. S. 1993, 118th Cong. (2023) (available at: https://www.congress.gov/bill/118th-congress/senate-bill/1993?s=1&r=1).

11 See Scharon Harding, Google Bard hits over 180 countries and territories—none are in the EU, arstechnica (May 12, 2023, 1:09 PM) (available at https://arstechnica.com/gadgets/2023/05/google-bard-hits-over-180-countries-and-territories-none-are-in-the-eu/).

This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.