The Future Is Here: Early Lessons From Using Generative AI
Generative artificial intelligence (“Generative AI”) is making waves due to its ability to create entirely new sentences, stories, images, and even music. Those waves have also been felt in the legal industry, where Generative AI has extraordinary potential to help deliver better client service. Vinson & Elkins is evaluating how to responsibly incorporate Generative AI into the law firm environment. While the following is not all-inclusive, here are some early lessons for the responsible use of Generative AI in legal practice:
Consider Confidentiality
When interacting with a Generative AI model, lawyers are usually submitting information over the internet to a third-party server outside of their control. This can risk violating a lawyer’s confidentiality obligations: under ABA Model Rule 1.6(c), “a lawyer shall make reasonable efforts” to maintain client confidentiality. The use of similar research technologies, like searches of commercial legal databases or Google, is generally considered acceptable, in part because most providers have privacy policies limiting the use of query information. In contrast, many Generative AI models are still considered “beta” or research projects and may lack some of these privacy protections. For example, OpenAI warns that conversations with ChatGPT may be reviewed by engineering teams to improve the product. The specific effect that this disclosure may have on attorney-client privilege has not yet been explored in court. Until vendors provide better privacy assurances, or law firms are better able to restrict how Generative AI companies use their queries, the assumption should be that any information provided to a Generative AI model is not confidential. Accordingly, inputs to such models should be reasonably abstract or limited in detail to protect client confidentiality, as illustrated in the sketch below.
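To make this concrete, below is a minimal sketch, in Python, of abstracting a query before it leaves the firm’s control. It assumes the pre-1.0 openai client library; the client names, matter details, and redact helper are purely hypothetical illustrations, not firm tooling:

```python
# Minimal sketch: strip client-identifying details before a prompt is
# sent to a third-party Generative AI service. The redaction table and
# helper below are hypothetical illustrations only.
import openai  # assumes the pre-1.0 openai package is installed

openai.api_key = "YOUR_API_KEY"  # placeholder; load from the environment in practice

# Hypothetical mapping of confidential terms to neutral placeholders.
REDACTIONS = {
    "Acme Widgets, Inc.": "Company A",
    "Jane Smith": "Executive 1",
}

def redact(text: str) -> str:
    """Swap client-identifying strings for generic placeholders."""
    for confidential, placeholder in REDACTIONS.items():
        text = text.replace(confidential, placeholder)
    return text

question = (
    "Acme Widgets, Inc. is considering terminating Jane Smith. "
    "What notice obligations could apply under an at-will employment agreement?"
)

# Only the redacted, abstracted question leaves the firm's systems.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": redact(question)}],
)
print(response.choices[0].message.content)
```

A simple substitution table like this is no substitute for attorney judgment about what can safely be shared, but it illustrates the principle of keeping identifying details out of third-party queries.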
Ethics Matter
Any use of Generative AI must be consistent with a lawyer’s legal and ethical duties. First and foremost, reliance on Generative AI alone cannot satisfy a lawyer’s duty to provide competent legal advice. Even if the work product of a Generative AI model seems reasonable and accurate, it remains the lawyer’s responsibility to ensure that the advice is competent. Generative AI can save considerable attorney time, but it cannot substitute for the lawyer’s own understanding of the legal and factual issues. Indeed, the power of Generative AI to produce creative outputs can occasionally lead to answers that rely on cases, articles, and other materials that do not actually exist (so-called “hallucinations”).
Ethics rules also require that lawyers “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” As Generative AI becomes more useful, there may come a time when collaboration with a Generative AI model becomes standard practice, but that time has not yet arrived. In the interim, it is imperative that lawyers consider all of their ethical and professional responsibilities when using new and innovative technologies. Generative AI is no exception.
Review is Essential
One of the values of working with an experienced law firm is the capacity to rely on collaborative teams of attorneys to address an issue. Working in teams gives attorneys sounding boards for ideas, gut-checks on novel or unusual issues, and the ability to research new developments in the law in depth to keep advice current and accurate. Even with the incredible promise of Generative AI, the iterative process of review and evaluation remains essential. As numerous studies have shown, even powerful language models are capable of producing inaccurate answers. For example, when SCOTUSblog asked ChatGPT for a list of Ruth Bader Ginsburg’s most noteworthy opinions, it incorrectly identified her as the author of a dissent in Obergefell v. Hodges (Justice Ginsburg in fact joined the majority in that case). Even OpenAI itself warns that ChatGPT can produce incorrect answers.
In the same way that the work of recent law school graduates is carefully reviewed by more senior attorneys, the work product of Generative AI must be subjected to the same scrutiny, if not more. This can be difficult given the power of current Generative AI models, which can appear nearly omniscient across so many areas. The key to leveraging this technology effectively is to remember that current Generative AI models’ base of knowledge is broad, but often shallow, and the skilled attorneys using these tools frequently know more about their specific area of law than the model does.
Context is Critical
When you ask an experienced attorney for advice on a legal issue, they bring years, or even decades, of experience to the question. In that time, they’ve seen similar fact patterns, observed the law develop and evolve on certain topics, and been in court (or at the negotiating table) working to persuade decisionmakers on the issue. That attorney can draw from this deep well of knowledge to provide meaningful advice to their clients. In the human context, we call this “wisdom” or “experience.” In the world of Generative AI, this is called “context.”
At present, Generative AI models like Microsoft’s Bing Chat, OpenAI’s ChatGPT, and Google’s LaMDA have considerable context to draw upon. Each is trained on a large body of text from which it can draw to answer questions. But just as human wisdom and experience might be limited to a specific field or area of law, there are key limitations on the context available to these large language models. First, these models have a “knowledge cutoff” date. For example, ChatGPT has a knowledge cutoff of September 2021: the model is unaware of anything that happened after that date, including recent caselaw and legislative and regulatory enactments. And because training a large language model is a multimillion-dollar affair, models are infrequently updated to incorporate new information. Second, the training data is general purpose, and thus may lack detailed information about a specific area of law or the facts of a particular matter. The information available to the model may be miles wide, but is often only a few inches deep.
These challenges often can be overcome by providing additional context directly. While most prompts given to Generative AI models are single questions, many models accept much larger prompts, up to about 4,000 tokens (roughly 3,000 words) in ChatGPT’s case. That prompt can be filled with relevant caselaw, the statutes intended to be applied, or other information relevant to producing the response, as in the sketch below. With this context in hand, Generative AI systems can better answer questions in view of the supplied material. Further, Generative AI providers are working on ways to supply better, up-to-date context. For example, Bing Chat can now incorporate results from a current Bing search as supplemental context.
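As an illustration, here is a minimal sketch, again in Python with the same hypothetical pre-1.0 openai client setup as above, of supplying authority directly in the prompt; the statute, case, and question are placeholders:

```python
# Minimal sketch of supplying context in the prompt: prepend relevant
# authority so the model answers from the supplied material rather than
# its (possibly stale) training data. All excerpts here are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from the environment in practice

# Hypothetical excerpts an attorney has already judged relevant.
authorities = [
    "Statute X, section 12 (excerpt): ...",
    "Case Y v. Z (2022) (holding): ...",
]

prompt = (
    "Answer the question below using ONLY the authorities provided. "
    "If they do not resolve the question, say so rather than guessing.\n\n"
    + "\n\n".join(authorities)
    + "\n\nQuestion: Does section 12 reach contracts signed before 2022?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Instructing the model to answer only from the supplied authorities, and to say so when they are insufficient, also mitigates the risk of fabricated citations noted above.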
Conclusion
While Generative AI has tremendous promise, that promise is, for now, limited. Generative AI models are not yet all-purpose Artificial General Intelligences, and even if they were, that would not necessarily make them good lawyers. But as with all new technologies, V&E is continuously evaluating how Generative AI can be used to provide better, more timely, and more accurate advice to our clients. Continue to watch this space for more insights on how V&E is leveraging Generative AI, and the lessons we learn in the process.
Vinson & Elkins’s Cybersecurity, Data Privacy, and Technology teams have deep experience with AI, cybersecurity, and risk management, and assist clients in evaluating and implementing risk management strategies for emerging AI technologies. For further discussion of Generative AI in the practice of law, please contact Palmina M. Fava and Parker D. Hancock.
This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.