
Risk Management in the Artificial Intelligence Revolution

After 18 months of numerous workshops, drafts, and discussions, the National Institute of Standards and Technology (“NIST”) published its inaugural AI Risk Management Framework (the “AI Framework”) in January 2023.

Over the last year, artificial intelligence (“AI”) has become the hot topic of conversation. Recent breakthroughs in generative AI are driving this surge in popularity, giving AI systems the ability to generate human language, images, and even music. These new abilities herald a wave of exciting applications, including automated customer support, document drafting, tax planning, real-time translation, and even AI-assisted computer programming. Only a few years ago, these applications would have been dismissed as pure science fiction.

As companies race to incorporate these AI systems into their businesses, many are grappling with understanding the benefits and risks of the technology. Just as companies realized decades ago that they faced IT risks even though technology was not their core business, many are about to realize that they now face considerable risks from AI. And with the endless possibilities of AI come new and unpredictable challenges. NIST’s AI Framework provides a roadmap for understanding, evaluating, and managing the risks of AI systems.

AI Risk Management Is Unique

While the opportunities created by AI systems are extraordinary, the risks they present differ from those of other software systems. For instance, many AI systems do not leave a clear audit trail. An AI system can behave like a black box: without careful management, it is difficult to understand how a given output was produced. These systems rely on hundreds of billions (if not trillions) of data points, far more than humans or even traditional software applications can process.

Further, all AI systems require mountains of data: training data to build the model, and input data to produce a prediction or work product. This data may include legally sensitive information, such as personal information subject to data privacy laws, privileged or confidential business records, or trade secrets. Ensuring such information is used in a compliant and legally defensible way presents significant challenges, and the use of this kind of data also creates security and intellectual property risks. Even if a model is trained solely on public information, there is no guarantee that it will not yield false conclusions or reproduce copyrighted works in its output. These possibilities could expose businesses to legal liabilities that remain unsettled in the courts.

Additionally, harmful biases and ethical breaches remain a danger. AI systems tend to produce output consistent with their training data, which often inadvertently reflects prejudiced or biased attitudes or opinions. By producing output consistent with such information, AI systems can perpetuate existing societal biases. To be socially and professionally responsible, businesses must analyze and implement these systems carefully.

The NIST Framework

NIST’s AI Framework provides a structure for building business processes to understand, assess, and respond to AI-related risks. The framework is general-purpose and adaptable to any business. It is not an “all-or-nothing” framework or a cumbersome certification; rather, it offers helpful guidance for building an AI management system that is right-sized for any business. The “Core Framework” is divided into four high-level functions: Govern, Map, Measure, and Manage.

Govern: Building core business processes and chains of command to continuously monitor and evaluate the results of the other framework functions.

Map: Understanding where and how AI tools are used throughout the business, and how they connect to the company’s value proposition. This element focuses on the context of AI usage, without which businesses could not weigh risks and rewards appropriately.

Measure: Coordinating quantitative and qualitative methodologies to assess the effectiveness of AI systems. The measurement of AI system performance is not particularly well-understood and is an active and evolving area of research and development. Nevertheless, objective measurement of system performance is an essential part of managing risk.

Manage: With the understanding of AI value formulated in the Map function and the metrics provided by the Measure function, the Manage function explains how potential risks can be weighed and prioritized in accordance with business goals. As more information is gleaned from the other functions, businesses will find it easier to assess and respond to risks and threats during AI implementation.
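
To make the four functions concrete, the sketch below shows one hypothetical way a company might organize an internal AI risk register around them. This is a minimal illustration under our own assumptions: NIST’s AI Framework prescribes no code, data structures, or tooling, and the AIRiskItem class, its fields, and the placeholder values are invented for this example.

```python
from dataclasses import dataclass

# Hypothetical sketch only: NIST's AI Framework does not prescribe code,
# data structures, or tooling. This simply shows one way the four Core
# functions could organize an internal AI risk register.

@dataclass
class AIRiskItem:
    system: str            # Map: which AI tool, and where it is used
    context: str           # Map: the business purpose it serves
    metric: str            # Measure: how performance or harm is assessed
    measured_value: float  # Measure: the latest assessment result
    threshold: float       # Manage: the tolerance set by business priorities
    owner: str             # Govern: the accountable role or committee

    def needs_action(self) -> bool:
        # Manage: flag items whose measured risk exceeds the tolerance.
        return self.measured_value > self.threshold

# Illustrative entry with placeholder values.
register = [
    AIRiskItem(
        system="customer-support chatbot",
        context="first-line support triage",
        metric="rate of factually incorrect responses",
        measured_value=0.07,
        threshold=0.02,
        owner="AI governance committee",
    ),
]

# Govern: surface out-of-tolerance items to the accountable owner.
for item in register:
    if item.needs_action():
        print(f"Escalate to {item.owner}: {item.system} ({item.metric})")
```

In practice, each entry’s metric, threshold, and owner would reflect the company’s own measurement methodologies, risk tolerances, and governance structure rather than the placeholder values above.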

What This Means for You

AI systems will not replace human employees any time soon. Over the next few years, however, companies will need to adopt (or expand) the use of AI in their businesses to stay competitive. As opportunities to incorporate AI systems arise, NIST’s AI Framework can help build appropriate governance structures to ensure those systems deliver on their promise without unjustified risk.

Vinson & Elkins’s Cybersecurity, Data Privacy, and Technology teams have deep experience in AI, cybersecurity, and risk management, and assist clients in evaluating and implementing risk management strategies for emerging AI technologies. For further discussion of AI implementation in the workplace, please contact Palmina M. Fava at pfava@velaw.com or 212-237-0061 and Parker Hancock at phancock@velaw.com or 713-758-2153.

This is the first publication in a series on the impact of AI in the workplace. Please stay tuned for future updates and industry-specific approaches to the risks and rewards of AI.


This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.