
AI Antitrust Issues Checklist


This checklist outlines antitrust considerations for the use of artificial intelligence (“AI”) technology. Practices involving AI that could give rise to antitrust risk include algorithmic collusion, exclusionary practices involving AI services, agreements to limit AI development or deployment, self-preferencing on existing services, and mergers and acquisitions involving companies offering AI services.

AI Background

AI represents one of the most significant forces in technological development for the foreseeable future. Although definitions vary widely, AI is generally used as a catch-all term that refers to the application of certain techniques including generative models, machine learning, neural networks, and complex algorithms to perform a specific task or a generalized set of tasks. Historically, AI tools have been used in specific contexts such as search engines, social media, and autonomous machines, but recent advances in machine learning and natural language processing have led to AI development for broader use cases. Companies like OpenAI made headlines upon releasing AI tools that allow users to engage with a conversational chatbot that uses generative systems to produce natural language content (e.g., ChatGPT). Large technology companies such as Microsoft, Google, Amazon, and Meta, as well as major players across industries and hundreds of new startups, are incorporating AI technology into many aspects of their products and services.

Due to its potential significance to technology markets and consumers, antitrust and competition regulators around the globe are scrutinizing current AI practices to determine the appropriate level of intervention. Regulators that have publicly raised competition concerns about AI include the U.S. Department of Justice (“DOJ”), the U.S. Federal Trade Commission (“FTC”), the European Commission, the U.K. Competition and Markets Authority (“CMA”), the Korean Fair Trade Commission, and the Australian Competition and Consumer Commission. For example, the U.S. FTC held a panel discussion regarding competition in the market for cloud computing services and the effects of that market on AI, and more recently articulated its competition concerns about generative AI in a blog post. Similarly, the U.K. CMA launched an initial review of the effects of AI models on competition and consumer protection.

As regulators around the world increase their focus on AI, businesses should consider the antitrust and competition risks associated with AI activities. At the same time, however, competition regulators may find it challenging to bring successful cases related to their expressed concerns about AI, given the nascent and highly dynamic nature of generative AI development today.

Specific AI issues for consideration

  • Ensure that algorithms do not facilitate unlawful collusion. Capitalizing on developments in AI data processing, companies increasingly use algorithms to analyze large data sets. Global competition regulators have expressed concern about the degree to which AI algorithms may “collude” or facilitate unlawful agreements, either by design or autonomously. Recently, U.S. DOJ officials stated that investigators would likely inquire whether companies enabled their AI to fix prices, whether they enabled the AI to communicate with competitors in ways that abuse market power, and whether the companies trained their AI to prevent price fixing. Companies engaging in algorithmic data sharing or processing can mitigate these risks by (1) closely overseeing the design of their algorithms to ensure they are not designed to overtly collude with any rival’s algorithm and (2) implementing “compliance by design” principles that can reduce even the risk of tacit collusion (e.g., programming the algorithm to never share competitively sensitive pricing data with third parties).
  • Avoid exclusionary practices on existing services. Many technology companies are working to incorporate the latest AI models into their existing products and services. However, companies face antitrust risk if they use an acquisition of, or exclusive arrangement with, an innovative AI company to foreclose rivals’ access in areas where they currently enjoy some degree of market power. For example, tying or bundling a newly released AI tool with an existing product or service could implicate competition law concerns in many jurisdictions. At the same time, such integration may improve service quality to the benefit of consumers. To mitigate these risks, companies should consider whether exclusive access to a particular AI model is essential and document expected customer benefits from any AI integration.
  • Ensure that agreements on AI standards do not set prices or limit innovation. While the benefits of AI are potentially significant, many also believe that unfettered AI development has the potential to inflict significant damage on society. These harms can arise from implementing AI in certain controversial use cases or from deploying the technology without certain safeguards. Across the globe, AI companies, policy makers, regulators, and consumers are debating the proper standards for continued AI development. However, discussions between competitors on the standards for technological development can raise concerns if they cross the line from agreements on safety and ethics to anticompetitive topics like price and output. Horizontal agreements among competitors to fix prices or reduce output are generally unlawful per se. Any potential agreement between AI developers should be narrowly tailored toward technical cooperation on safety standards for AI development, particularly when parties agree to be bound by those standards (rather than adopting them voluntarily). Further, regulators are likely to give particular scrutiny to agreements that may limit innovation in beneficial safety technologies, such as those that reduce hate speech or offensive content in generative AI models. Thus, any such agreement should institute a floor, but not a ceiling, for safety standards.
  • Carefully consider exclusivity. The base level of the technological stack that powers the global deployment of AI systems is cloud infrastructure. It has become increasingly common for cloud infrastructure providers to enter into arrangements to be the exclusive provider of cloud infrastructure to an AI company or service. Regulators may consider whether exclusivity arrangements result in the preferential treatment of the exclusive service over rivals, though the likelihood of foreclosure appears low given the dynamic nature of generative AI development and the large number of cloud service providers. Significant, relationship-specific investments are a standard justification for exclusive dealing arrangements and could apply where a cloud infrastructure provider is making a significant capital investment in the AI service provider. Cloud exclusivity can also be justified by the need to accurately forecast demand.
  • Plan for scrutiny of mergers and acquisitions involving AI. Competition regulators have made it clear that they view AI services as having the potential to disrupt existing industries. The agencies will pay close attention to the acquisition of an AI company by a competitor to determine whether the acquisition substantially lessens competition. The agencies also may consider vertical claims if the company being acquired operates an AI tool or service used by the acquirer’s competitors. Accordingly, businesses interested in acquiring an AI company should assess the antitrust risks, plan for such risks in the transaction agreement, and prepare to advocate for their deal early in the review process.
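The “compliance by design” safeguard mentioned in the first bullet can be illustrated with a minimal, hypothetical sketch: a pre-sharing filter that removes fields a competition-compliance review has flagged as sensitive before any dataset is passed to a third-party or rival-facing algorithm. Every field name and function name below is an illustrative assumption, not a reference to any real system, and this sketch is not legal advice on what constitutes competitively sensitive data.

```python
# Hypothetical "compliance by design" guardrail: strip fields flagged as
# competitively sensitive (e.g., prices, margins, planned output) from
# every record before it leaves the company. All names are illustrative.

# Fields a compliance review has designated as sensitive (assumption).
SENSITIVE_FIELDS = {"price", "margin", "planned_output", "discount_schedule"}

def scrub_for_external_sharing(records):
    """Return copies of the records with all sensitive fields removed."""
    return [
        {key: value for key, value in record.items()
         if key not in SENSITIVE_FIELDS}
        for record in records
    ]

records = [
    {"sku": "A-100", "region": "US", "price": 19.99, "margin": 0.42},
    {"sku": "A-200", "region": "EU", "price": 24.99, "margin": 0.38},
]

# Only non-sensitive fields ("sku", "region") survive the filter, so the
# pricing data can never reach an external or rival-facing algorithm.
shared = scrub_for_external_sharing(records)
print(shared)
```

The point of routing all outbound data through one choke-point function is that the safeguard is structural: the sensitive fields are removed before any downstream code runs, rather than relying on each downstream consumer to behave.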

This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.