Biden Enlists Tech Titans in AI Safety Push

On July 21, 2023, the Biden administration announced that seven AI companies had made voluntary commitments to adopt measures to develop safer AI technology.1 These companies made the commitments in recognition of the inherent risks that AI technology may pose, including concerns that AI platforms are susceptible to cyberattacks and could aid in the dissemination of disinformation. While compliance with the commitments is voluntary, they mark the administration’s first attempt to regulate AI developers and foreshadow how future legislation and policy may require technology companies to manage AI risks to improve the safety, security, and trustworthiness of AI technology.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI (the “AI Developers”) — agreed to implement several safeguards introduced by the Biden administration when developing AI technology. In announcing the commitments, President Biden noted that each safeguard introduced by his administration promoted one of three principles critical to the future of AI: (1) increasing the safety of AI technology; (2) strengthening the security of AI platforms; and (3) earning and maintaining public trust in AI.2 According to President Biden, companies developing AI technology have a responsibility to the American public to mitigate risks in the technology.

Safety, Security, and Trust

To those ends, the AI Developers have made specific commitments to follow in developing and releasing future generative AI models. They have committed to subjecting their AI products to multiple rounds of internal and third-party external safety testing, and to sharing the results of that testing, along with the safety risks and capabilities of their respective AI technologies, with one another and with other AI developers. To implement the safety-testing commitments, the Biden administration has asked the AI Developers to join or create forums or mechanisms through which they can discuss their practices, standards, and research for maximizing the safety of AI technology. The administration hopes that, through constant information-sharing, AI industry players can quickly identify best practices for ensuring safety.

The AI Developers have also committed to making security a top priority when developing AI technology. To this end, they have agreed to invest in cybersecurity and internal threat-detection programs to decrease the risk of successful cyberattacks. Further, the Biden administration has asked AI Developers to protect their AI models with the same rigor that they would protect valuable trade secrets, with the caveat that they share any security issues or vulnerabilities they discover with each other, the federal government, and the general public.

To maintain user confidence in AI technology, the seven companies have committed to watermarking AI-generated content so that users can readily distinguish AI content from human-made content. The AI Developers have similarly committed to informing their users and the general public of the capabilities and limitations of their AI models by publicly reporting that information. Lastly, each AI Developer has pledged to support research and development of AI systems that avoid harmful bias and discrimination, and to deploy AI to help address societal challenges such as cyberattacks, climate change, and cancer detection and prevention.

Foreshadowing Future Regulation?

Announcement of these commitments comes less than three weeks after the Biden administration met with consumer protection and civil rights leaders to discuss the risks posed to the American public by AI technology. According to President Biden, securing commitments from these seven industry leaders was a critical first step in signaling to all AI developers that they have a duty to minimize these risks by creating AI technology that is safe, secure, and trustworthy.

President Biden noted that securing voluntary commitments was just a first step and that “[r]ealizing the promise of AI by managing [its] risk” would require “new laws, regulations, and oversight.” The administration emphasized that it would continue to encourage congressional leaders to pass legislation regulating AI and noted that the Office of Management and Budget would soon release policy guidance that could permit federal agencies to police the design and development of AI technology. Until such policy is in place, however, compliance with the administration’s safeguards will remain voluntary for the seven industry leaders. Whether the commitments made by these AI Developers will guide industry practices or influence the actions of other technology companies, as the Biden administration predicts, remains to be seen.

What Comes Next?

At V&E, we know that incorporating advanced AI technologies is a priority for many of our clients. These efforts can involve the same risks addressed by this voluntary framework and by the NIST AI Risk Management Framework. Our attorneys are ready and able to assist clients in understanding the risks of adopting AI technologies and in implementing appropriate risk mitigations for their particular applications.

1 Press Release, White House, Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023).

2 Joseph R. Biden, Jr., President of the U.S., Remarks by President Biden on Artificial Intelligence (July 23, 2023).

This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.