Why are Lawyers Afraid of AI?

Generative artificial intelligence and the law: there is no turning back.

[Screenshot of Casetext's CoCounsel AI legal assistant]

Andrew Perlman, dean of the Suffolk University School of Law in Boston, is no stranger to examining innovative legal technology, but his recent experiment with generative artificial intelligence (AI)—Open AI’s ChatGPT, to be precise—led him to think the technology may create bigger changes to the way law is practiced than the Internet itself.

Perlman published one of the legal community’s first evaluations of ChatGPT’s capabilities in creating convincing arguments and answers to typical questions in “The Implications of ChatGPT For Legal Services and Society,” a paper submitted to the Social Science Research Network (SSRN). It would be inaccurate to say Perlman wrote the paper entirely by himself, however; as he stated in its preface, most of its 24 pages were generated in about an hour through prompts to ChatGPT. Only the abstract, preface, outline headers, epilogue, and the prompts for the platform were written by a person.

“ChatGPT generated the rest of the text with no human editing,” Perlman wrote.

“That was my first opportunity to use generative AI, and most of those prompts were given to ChatGPT about a week after it was released,” Perlman told Communications. “So I really hadn’t used it at all before I wrote that paper and, long story short, it blew my mind. I couldn’t believe what I was seeing.”

Perlman categorized the release of user-friendly generative AI alongside three precursor “Aha” moments—the development of the Internet, the release of the Google search engine, and the release of the Apple iPhone. However, he thinks generative AI may have a revolutionary effect on the legal industry, compared to the evolutionary, if profound, effects those earlier landmark technologies had.

As Perlman pointed out in his (and ChatGPT’s) paper, a significant part of lawyers’ work takes the form of written words: emails, memos, motions, briefs, complaints, discovery requests and responses, transactional documents of all kinds, and so forth.

“Although existing technology has made the generation of these words easier in some respects, such as by allowing us to use templates and automated document assembly tools, these tools have changed most lawyers’ work in relatively modest ways,” he wrote in his paper’s preface. “In contrast, AI tools like ChatGPT hold the promise of altering how we generate a much wider range of legal documents and information.”

While respondents to a comprehensive Thomson Reuters survey conducted in the U.S., U.K., and Canada about the potential of generative AI in legal practice showed a significant disconnect between those who thought generative AI could be used in legal work (82%) and those who thought it should be used (51%), legal technology experts say the genie—or genAI, if you will—is most definitely out of the bottle both in North America and in the U.K.

According to the Thomson Reuters research, attitudes as to whether these tools should be applied to legal work varied by both geography and by job title.

“Respondents from Canada felt more positively about generative AI and ChatGPT, with 62% saying they believe that those tools should be applied to legal work, while 15% said they believed they should not be,” the authors found. “On the other end, only 41% of U.K. respondents felt the tools should be applied to legal work. This may be largely due to uncertainty. More than one-third (34%) of U.K. respondents said they did not know whether generative AI or ChatGPT should be applied to legal work, more than 10 percentage points higher than those in the U.S. or Canada.”

Yet the overall sentiment about generative AI’s future impact on the profession seems to be more or less in line regardless of location. A LexisNexis survey of 1,175 U.K.-based lawyers found sentiments similar to those in the Thomson Reuters report, with 87% of those surveyed in the U.K. and the U.S. saying they were aware of the technology in the legal profession. Nearly the same percentages also expressed ethical concerns about its use (90% in the U.K., 87% in the U.S.).

Despite such concerns, however, generative AI’s rollout in many instances is a foregone conclusion, with one U.K.-based respondent to the Thomson Reuters survey saying everyone at her firm would be using it within six months.

“They recognize this technology is so much a part of the zeitgeist and has become so important so quickly, there is no saying ‘no’,” said Christian Lang, founder and CEO of LLM governance technology vendor Lega. “You just can’t tell people ‘we are not going to do that, we’re not ready for that,’ because they will go do it on their own.”

Lang likened the incredible swiftness with which generative AI is being evaluated by law practices to trying to build a dam on a fast-flowing river; you can’t stop the river while you’re doing it, so tools such as Lega, which includes compliance and audit modules to monitor use of the technology, help to keep the flow from becoming an uncontrollable flood.

Other legal technology experts concur with Lang; they say the speed of the advent of generative AI in law and the amount of interest shown is unprecedented in their experience.

“There have been things that enable attorneys to do things in different ways,” said Myka Hopgood, senior director of legal innovation at Detroit-based law firm Dykema Gossett, “but I haven’t seen anything move at the speed generative AI has.” The firm, which employs more than 380 attorneys throughout the U.S., is an early customer of CoCounsel, a generative AI platform built atop the OpenAI GPT-4 model by legal technology vendor Casetext.

Sharon Nelson, president of Fairfax, VA-based digital forensics and cybersecurity consultancy Sensei Enterprises, said the demand for the firm’s continuing legal education (CLE) seminars on generative AI is unprecedented. “We give a lot of CLEs and this is far and away the one everybody wants,” Nelson said, noting in early June 2023 they had booked lectures on the topic into November.

“In all our years of lecturing, we have never seen anything like this. Never,” Nelson said.

Early lesson, careful steps

Ironically enough, the caution the legal profession should exhibit in deploying generative AI may have been reinforced very early on through a worst-case scenario in New York City. Steven Schwartz, an attorney with 30 years’ experience in practicing law but no knowledge of how generative AI works, submitted a brief for which he used the consumer-facing ChatGPT as a research tool; in a filing related to a subsequent disciplinary action, Schwartz’s attorneys claimed he thought of it as a search engine rather than a generative language model. Unfortunately, up to six of the cases the brief cited were created out of whole cloth by ChatGPT, which used snippets of words from cases with no relation to each other or the case at hand to weave convincing—and entirely fictitious—narratives.

Schwartz’s attorneys went on to say the erroneous filing was the result of a perfect storm of adverse circumstances: for instance, they said, due to a billing error, the subscription to Fastcase, the legal database Schwartz’s firm used for fact-checking, locked him out of complete federal case research. Also, as ChatGPT had literally been available for a matter of weeks when Schwartz used it in February 2023, there were simply no “rules of the road” yet in existence upon which he could rely.

“He did not have time to fully research the ‘risks and benefits’ of this new technology,” they wrote. “Indeed, there was no clear body of knowledge that he could call upon at the time to do so. This technology was essentially brand new, not one that Mr. Schwartz had ever used before, and not one that he is likely to use again.”

Dykema Gossett’s Hopgood said the firm’s rollout of CoCounsel will be done gradually, and that the firm will not support the use of any non-authorized generative AI tool with client data or client work.

“We will not do that,” she said. “The guardrails needed are just not there.”

At least one pioneering U.S. federal judge has installed his own guardrails regarding generative AI in his courtroom. In a “judge-specific” announcement, Brantley Starr of the Northern District of Texas mandated that any attorney appearing before him attest to any use of generative AI in a brief filed in that court: “All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”

Lega CEO Lang said that kind of attestation may become common practice within law firms as they begin to deploy generative AI. Such a protocol could strike a balance between the speed with which generative AI is emerging and the traditionally cautious legal industry.

Lang said Lega, which officially launched in May 2023, was ready to “go live” integrating with generative AI applications early on, adding, “What ‘go live’ means in this context is actually quite limited. The vast, vast majority of firms are planning to use these technologies for the next few months to do their trialing and workshopping using non-confidential data, using non-live use cases.

“When you’re in the technology space, people are always saying things like ‘Move fast and break things,’ or ‘Fail fast and learn from your failures.’ And that is very true. You can take that approach and learn a tremendous amount from failure. But people often fail to appreciate that lawyers are members of a professional guild that has professional responsibilities vis-à-vis our clients, and we don’t have the ability to fail.”

Uncertain impact on legal jobs

One of the most pervasive questions among the legal community is what will happen to the job market within the industry as a result of generative AI; such a text-dependent profession may seem to be particularly susceptible to large-scale disruption by profoundly capable language models. For example, studies by financial sector economists and by OpenAI researchers themselves predict a significant percentage of legal work and workers could be exposed to generative AI’s efficiencies; a Goldman Sachs report found 44% of legal industry tasks could be automated by AI, while one model in an OpenAI/University of Pennsylvania paper predicted that legal secretaries could be 100% exposed to generative AI (though the specific tasks that generative AI might assume were not mentioned).

However, experts with whom Communications spoke said it is simply far too early to predict whether attorneys or their support staffs might face reduced employment or unemployment due to generative AI. For instance, Perlman pointed out in his paper that Bing Chat, which answered several questions he posed, was operating at the level of a B to B+ law student, but that at the time he used the generative AI technologies, their knowledge of certain doctrines, such as personal jurisdiction, was problematic and incomplete.

He also said the technology is capable of providing additional details if someone knows how to engineer the appropriate prompts, “But the casual user is unlikely to know what to ask or how to ask it.”

Instead of eliminating legal jobs, Perlman told Communications it may instead change their focus: for instance, a paralegal may have to become an expert in prompt engineering, knowing “what to ask and how to ask it,” in other words.

Sensei Enterprises vice president John Simek said one trend that might emerge with legal generative AI is a curated platform similar to what tax preparation applications such as TurboTax provide: basic professional-level expertise at costs lower than certified accountants’ services. This could be a boon to increasing “justice equity” by making access to legal services more affordable, he said (the American Bar Association recently cited studies estimating 80% of lower-income individuals can’t afford a lawyer and that 40% to 60% of middle-class Americans’ legal needs go unmet).

Dykema Gossett’s Hopgood said employment tasks and the traditional revenue model of law firms, the billable hour, are very likely to be re-examined, but ultimately, generative AI will demonstrate its worth.

“I think there’s a value there, even if the hours are reduced,” she said. “There is a value to using these tools, and sometimes, depending on what you are doing, it will reduce the risk of inaccuracies or human mistakes. There is a value there.”

    • Bishop, L. Can ChatGPT ‘Think Like a Lawyer?’ A Socratic Dialogue, SSRN, Jan. 27, 2023
    • Briggs, J. and Kodnani, D. The Potentially Large Effects of Artificial Intelligence on Economic Growth, Goldman Sachs Economics Research, March 26, 2023
    • Eloundou, T., Manning, S., Mishkin, P., and Rock, D. GPTs are GPTs: An early look at the labor market impact potential of large language models, OpenAI working paper, March 17, 2023
    • Perlman, A. The Implications of ChatGPT for Legal Services and Society, SSRN, Dec. 21, 2022, revised March 10, 2023
    • Steeves, H. Another One on AI: Teaching Legal Citation With ChatGPT, Slaw, June 22, 2023
