Navigating Risks in Technology Transactions Involving AI
Many companies are considering or currently pursuing acquisitions of, or investments in, startups and established providers of artificial intelligence (“AI”) technology. They see opportunities to leverage AI to build and improve their existing businesses or to enter new markets. While AI may present extraordinary opportunities, risks abound.
In the context of M&A and private equity deals, investments and financings, much of the attention has focused on the risks posed by developing AI regulation, including the European Union’s new AI Act and state and municipal laws in the United States. While regulatory risk and compliance are a key focus (understandably, as some activities will be prohibited outright or will require considerable oversight), AI-focused technology transactions raise additional concerns. This article focuses on some of the most important risks in buying or investing in companies that use AI technology in their products, services or operations.
IP and Ownership Risk
Transactions involving AI pose intellectual property (“IP”) and ownership risk in three significant areas: the AI models, the data used to train them, and the output.
Prior to acquiring a business that uses an AI model, decision-makers should conduct diligence to assess how the model was created, by whom and with what resources. AI models can be developed using internal company resources, but many also incorporate open source code or commercial third-party software. The model may be created or modified by employees, contractors or other third parties.
The AI model is then trained, sometimes with large and varied datasets. AI providers may obtain the training data from internal resources, but many also obtain it from customers, vendors, employees, websites (through crawling and scraping), books, photographs, maps or other third-party sources. A model may then be fine-tuned for a specific application. Any use of training data, code or other information obtained from others creates a risk that those parties may have an IP ownership stake in the model. Buyers should also carefully review customer agreements and licenses under which third-party rights are obtained for use in training AI models.
If an AI model is used to generate output, that output will also be subject to IP rights. The owners of the model and training data may have a claim to IP rights in the output, as may anyone prompting or running the AI model. Generative AI models may also generate content that infringes IP rights of third parties who had nothing to do with the AI model, its training or running the model.
It is critical not to assume the model, the training data, or the output is owned by the company being acquired or simply free to use. While IP law as it relates to generative AI is still somewhat unsettled, buyers who want to minimize risk should confirm sellers have either ownership or valid licenses covering any uses of the AI model, the data used to train it, and the output.
Data Privacy and Security Risk
Use of personally identifiable information (“PII”) or sensitive company information to train an AI model raises both data privacy and security concerns in technology transactions.
Buyers should confirm through diligence whether sellers have obtained individual consents covering the use of any PII in AI. Customer agreements, terms of use, privacy statements and other consents may allow certain uses of PII by the acquired company, but they often do not extend to using the PII to build or train AI models, especially for external use or sale. Once PII is used to train a model, there is often no practical way to remove it, so compliance with regulations requiring removal of data at the request of the individual may not be possible. Data privacy laws vary from state to state and around the world, and are being amended and newly adopted on a near-daily basis. It is important to avoid creating future compliance roadblocks by training AI models with PII that cannot be quickly accessed or removed. Buyers should therefore consider seeking a representation that no PII was used to train AI models.
If sensitive company data is used to train a publicly accessible AI model, anyone, including competitors, may have access to it. Just like PII, it may not be possible to later remove sensitive data from an AI model. For these reasons, buyers should conduct diligence to confirm sellers have: (i) avoided using sensitive company data to train an AI model, (ii) not disclosed any AI model trained with sensitive company data, or (iii) limited any training with sensitive data to a fine-tuning layer that was not disclosed. Buyers should also review customer agreements and terms to confirm any uses of customer data to train AI models are permitted by the customers. While most buyers will seek a representation from sellers that there has been no misappropriation of trade secrets, they may also consider a more targeted representation that no sensitive company information or trade secrets were used to train a publicly accessible AI model.
Other Risks
Buyers should also take into account the risk that AI models were trained on inaccurate information. AI output may be biased or incorrect regardless of training. Buyers should ask whether steps were taken to minimize bias and address other ethical concerns with the AI model to be acquired. Buyers typically assume these risks when they purchase AI assets, even though such risks are difficult to assess through diligence.
In addition to the typical IP, data privacy and security representations and warranties in a transaction, buyers may seek other AI-specific representations and warranties. These may include assurances that the seller has implemented, and complies with, commercially reasonable policies for the responsible use of AI, such as policies aimed at mitigating bias and promoting transparency.
* * *
We fully expect these risks, and the ways to address them in transactions, to remain fluid and continue to develop rapidly in the coming years. Vinson & Elkins assists clients in all types of technology transactions, and regularly advises on AI-focused deals.
This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.