The Future of Generative AI Discovery Tools

The legal world recently learned an important lesson about the blind adoption of generative AI when two New York attorneys were sanctioned for using ChatGPT to write a brief that cited entirely fabricated cases.1 The firm admitted it “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”2 Reactions in legal circles have run the gamut, from measured recognition that AI tools, while useful when properly deployed, are not to be blindly trusted, to outright rejection of all generative AI technologies, at least in the short term.3

But the legal profession cannot avoid AI. Software companies will be rolling out AI integrations in every platform, from email and word processing to e-discovery. We are on the cusp of confronting questions that lawyers could not have conceived of a decade ago. How do familiar tests concerning proportionality and undue burden apply in a world in which sophisticated litigants are armed with large language models that can digest and review millions of records cost-effectively and with minimal human oversight? How does the business records exception to the hearsay rule apply to documents that were not created by a human? How do you even authenticate records that were created by an AI model that may be “hallucinating”? Will AI replace court reporting? Could AI play a role in case evaluation and mediation by comparing the discovery record to existing case law? When can attorneys hand off legal research, brief drafting, or the drafting of transactional documents to AI? Attorneys must keep up.

Federal judges’ standing orders are leading the way. Perhaps unsurprisingly, many of the most impactful reactions to the dangers of generative AI have come from the judiciary. Judge Brantley Starr in the Northern District of Texas was one of the first judges in the country to require certification that “no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”4 Similarly, Judge Michael Baylson in the Eastern District of Pennsylvania requires parties to “disclose that AI has been used in any way in the preparation of the filing,” not just generative AI, “and CERTIFY, that each and every citation to the law or the record in the paper, has been verified as accurate.” Magistrate Judge Gabriel A. Fuentes in the Northern District of Illinois requires a party to “disclose in the filing that AI was used, with the disclosure including the specific AI tool and the manner in which it was used” and notes that for certification “Rule 11 of the Federal Rules of Civil Procedure continues to apply.”

Such certifications may become as rote as word count certifications in legal briefs. Unwary pro se litigants, who may be the greatest beneficiaries of generative AI briefing, could be another matter — depending on their access to legal tools and ability to perform the requisite cite-checks. But where this disclosure/certification path leads is less clear. One could see a future where not disclosing that AI was used could seem as peculiar as submitting a handwritten brief. Yet disclosure of the specific AI tool being used may fuel disputes related to the sufficiency of that tool. For example, should a generative AI tool used for legal briefing be able to pass the bar exam?5 What if it can be shown that the tool was trained on a biased data set?6 How much information should a party be required to disclose about its proprietary, in-house generative AI tools? If what appears to be an innocuous disclosure leads to discovery disputes or a perceived disadvantage at trial, attorneys may begin to resist the need for such disclosures, or engage in self-help by carefully wording the disclosures to avoid arming their opponents with new arguments. On the other hand, it could be that disclosing an AI tool becomes no different than declaring whether you prefer Westlaw or LexisNexis.

State bars and other legal organizations are rushing to provide guidance. The Florida Bar recently announced that it is preparing an advisory opinion that will address “[w]hether a lawyer is required to supervise generative AI and other similar large language model-based technology pursuant to the standard applicable to non-lawyer assistants.”7 Similarly, the State Bar of Texas has established a “Taskforce for Responsible AI in the Law (TRAIL)” to investigate “how Texas practitioners can leverage AI responsibly and ethically.”8 And the American Bar Association has recently created the “ABA Task Force on Law and Artificial Intelligence,” which intends to investigate “[e]mergent issues with generative AI” and other AI issues.9 These efforts (and many more like them) will hopefully build consensus around best practices for deploying generative AI tools in legal contexts.

But what happens when attorneys purposefully weaponize AI in ways that fundamentally challenge the sufficiency of the rules of civil procedure? For example, William Eskridge Jr., Alexander M. Bickel professor of public law at Yale Law School, has noted that generative AI is “clearly going to transform the discovery rules like Rule 26(b)’s proportionality requirements.”10 The ability of AI to quickly churn through a party’s entire data store may amplify the importance of the parties’ pre-discovery negotiations (e.g., through a Rule 26(f) conference) over the types of inquiries AI can make on that data store. Some federal courts have already shown a willingness to rein in this behavior by excluding automatic production of metadata, limiting searches to live data (i.e., not backup tapes, etc.), and capping the number of custodians and search terms per custodian.11 Similar judicial governance may be helpful in guiding AI-powered e-discovery, where the technology has the potential to be even more invasive and to draw conclusions from the data it assesses.

1See Mata v. Avianca, Inc., Case No. 22-cv-1461-PKC (S.D.N.Y.).

2Benjamin Weiser, “ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness,” New York Times, June 22, 2023,

3Isha Marathe, “Michael Best’s New AI Officer Discusses New Role, Firm’s ChatGPT Ban, and More,” August 4, 2023,



6 (Standing Order for Civil Cases).

7Bob Ambrogi, “Generative AI, Having Already Passed the Bar Exam, Now Passes the Legal Ethics Exam,” LawSites, November 16, 2023,

8Ken Knapton, “Navigating the Biases in LLM Generative AI: A Guide to Responsible Implementation,” Forbes, September 6, 2023,




12Isha Marathe, “3 Generative AI Impacts E-Discovery Professionals Are Watching Closely,” October 25, 2023,


This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.