The Future of Generative AI Discovery Tools

The legal world recently learned an important lesson about the blind adoption of generative AI when two New York attorneys were sanctioned for using ChatGPT to write a brief that cited entirely fabricated cases.1 Their firm admitted it had “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”2 Reactions in legal circles have run the gamut, from a measured recognition that AI tools, while useful when applied properly, are not to be blindly trusted, to outright rejection of all generative AI technologies, at least in the short term.3

But the legal profession cannot avoid AI. Software companies are rolling out AI integrations in every platform, from email and word processing to e-discovery. We are on the cusp of having to answer questions that lawyers could not have conceived of a decade ago. How do familiar tests concerning proportionality and undue burden apply in a world in which sophisticated litigants are armed with large language models that can digest and review millions of records cost-effectively and with minimal human oversight? How does the business records exception to the hearsay rule apply to documents that were not created by a human? How do you even authenticate records created by an AI model that may be “hallucinating”? Will AI replace court reporting? Could AI play a role in case evaluation and mediation by comparing the discovery record to existing case law? When can attorneys entrust AI with legal research, brief drafting, or the drafting of transactional documents? Attorneys must keep up.

Federal judges’ standing orders are leading the way. Perhaps unsurprisingly, many of the most impactful reactions to the dangers of generative AI have come from the judiciary. Judge Brantley Starr in the Northern District of Texas was one of the first judges in the country to require certification that “no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”4 Similarly, Judge Michael Baylson in the Eastern District of Pennsylvania requires parties to “disclose that AI has been used in any way in the preparation of the filing,” not just generative AI, “and CERTIFY, that each and every citation to the law or the record in the paper, has been verified as accurate.”5 Magistrate Judge Gabriel A. Fuentes in the Northern District of Illinois requires a party to “disclose in the filing that AI was used, with the disclosure including the specific AI tool and the manner in which it was used,” and notes that for certification purposes “Rule 11 of the Federal Rules of Civil Procedure continues to apply.”6

Such certifications may become as rote as word-count certifications in legal briefs. Unwary pro se litigants, who may be the greatest beneficiaries of generative AI briefing, could be another matter, depending on their access to legal research tools and their ability to perform the requisite cite-checks. But where this disclosure-and-certification path leads is less clear. One can imagine a future in which not disclosing that AI was used seems as peculiar as submitting a handwritten brief. Yet disclosure of the specific AI tool being used may fuel disputes over the sufficiency of that tool. For example, should a generative AI tool used for legal briefing be able to pass the bar exam?7 What if it can be shown that the tool was trained on a biased data set?8 How much information should a party be required to disclose about its proprietary, in-house generative AI tools? If what appears to be an innocuous disclosure leads to discovery disputes or a perceived disadvantage at trial, attorneys may begin to resist such disclosures, or engage in self-help by carefully wording them to avoid arming their opponents with new arguments. On the other hand, disclosing an AI tool may come to be no different than declaring whether you prefer Westlaw or LexisNexis.

State bars and other legal organizations are rushing to provide guidance. The Florida Bar recently announced that it is preparing an advisory opinion that will address “[w]hether a lawyer is required to supervise generative AI and other similar large language model-based technology pursuant to the standard applicable to non-lawyer assistants.”9 Similarly, the State Bar of Texas has established a “Taskforce for Responsible AI in the Law (TRAIL)” to investigate “how Texas practitioners can leverage AI responsibly and ethically.”10 And the American Bar Association has recently created the “ABA Task Force on Law and Artificial Intelligence,” which intends to investigate “[e]mergent issues with generative AI” and other AI issues.11 These efforts (and many more like them) will hopefully build consensus around best practices for deploying generative AI tools in legal contexts.

But what happens when attorneys purposefully weaponize AI in ways that fundamentally challenge the sufficiency of the rules of civil procedure? For example, William Eskridge Jr., the Alexander M. Bickel Professor of Public Law at Yale Law School, has noted that generative AI is “clearly going to transform the discovery rules like Rule 26(b)’s proportionality requirements.”12 The ability of AI to quickly churn through a party’s entire data store may amplify the importance of the parties’ pre-discovery negotiations (e.g., through a Rule 26(f) conference) over the types of inquiries AI can make on that data store. Some federal courts have already shown a willingness to rein in this behavior by excluding automatic production of metadata, limiting searches to live data (i.e., not backup tapes, etc.), and capping the number of custodians and search terms per custodian.13 Similar judicial governance may be helpful in guiding AI-powered e-discovery, where the technology has the potential to be even more invasive and to draw conclusions from the data it assesses.

1 See Mata v. Avianca, Inc., Case No. 22-cv-1461-PKC (S.D.N.Y.).

2 Benjamin Weiser, “ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness,” New York Times, June 22, 2023, https://www.nytimes.com/2023/06/22/nyregion/lawyers-chatgpt-schwartz-loduca.html.

3 Isha Marathe, “Michael Best’s New AI Officer Discusses New Role, Firm’s ChatGPT Ban, and More,” Law.com, August 4, 2023, https://www.law.com/legaltechnews/2023/08/04/michael-bests-new-ai-officer-discusses-new-role-firms-chatgpt-ban-and-more/.

4 Standing order of Judge Brantley Starr (N.D. Tex.), https://www.txnd.uscourts.gov/judge/judge-brantley-starr.

5 Standing order of Judge Michael M. Baylson (E.D. Pa.), https://www.paed.uscourts.gov/judges-info/senior-judges/michael-m-baylson.

6 Standing Order for Civil Cases, Magistrate Judge Gabriel A. Fuentes (N.D. Ill.), https://www.ilnd.uscourts.gov/judge-info.aspx?IuUaWzNcEoMNSPjavluUw8PpZUllF4J4.

7 Bob Ambrogi, “Generative AI, Having Already Passed the Bar Exam, Now Passes the Legal Ethics Exam,” LawSites, November 16, 2023, https://www.lawnext.com/2023/11/generative-ai-having-already-passed-the-bar-exam-now-passes-the-legal-ethics-exam.html.

8 Ken Knapton, “Navigating the Biases in LLM Generative AI: A Guide to Responsible Implementation,” Forbes, September 6, 2023, https://www.forbes.com/sites/forbestechcouncil/2023/09/06/navigating-the-biases-in-llm-generative-ai-a-guide-to-responsible-implementation/?sh=1322d1b95cd2.

9 The Florida Bar, proposed advisory opinion on lawyers’ and law firms’ use of generative artificial intelligence, https://www.floridabar.org/the-florida-bar-news/proposed-advisory-opinion-on-lawyers-and-law-firms-use-of-generative-artificial-intelligence/.

10 State Bar of Texas, “To Chat GPT or Not to Chat GPT, That is the Question,” https://sbot.org/wp-content/uploads/2023/10/To-Chat-GPT-or-Not-to-Chat-GPT-That-is-the-Question.pdf.

11 American Bar Association, announcement of the ABA Task Force on Law and Artificial Intelligence, August 2023, https://www.americanbar.org/news/abanews/aba-news-archives/2023/08/aba-task-force-impact-of-ai/.

12 Isha Marathe, “3 Generative AI Impacts E-Discovery Professionals Are Watching Closely,” Law.com, October 25, 2023, https://www.law.com/legaltechnews/2023/10/25/3-generative-ai-impacts-e-discovery-professionals-are-watching-closely/.

13 E-Discovery Order for patent cases, U.S. District Court for the Eastern District of Texas, https://txed.uscourts.gov/sites/default/files/forms/E-Discovery_Patent_Order.pdf.

This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.