In the first semester of my 2022 1L year, I saw several of my classmates’ screens displaying shopping websites, crossword puzzles, Quimbee, and PDFs of textbooks. These days, for better or worse, I see screens displaying cascading lines of text as ChatGPT responds to prompts asking it to clarify the cases we have been assigned for the day.


For Better?

Several professors seem to embrace artificial intelligence, though many do so with an undertone of reluctance and inevitability. UC Berkeley professor Christopher Hoofnagle notes that “generative artificial intelligence is going to be in everything, so it will be impossible to tell students they can’t use it.” In response, UC Berkeley rolled out a policy he drafted that allows the use of AI to conduct research or correct grammar but not on exams or written assignments. Similarly, a few Fordham professors appear to welcome AI due to top-down pressure. They note that because students’ prospective employers are adopting AI, it is critical that law students learn this increasingly marketable skill. To the extent that law schools are professional schools, such an impetus toward more marketable lawyers aligns with a law school’s purpose. In other words, several professors seem to embrace AI because they must, not because they want to.

Several professors also believe that generative AI can level the playing field for first-generation law students and students who may not have had the same preparatory educational experiences and opportunities as their peers. For example, chatbot assistants may “be able to answer basic questions that students often struggle with.” This assistance would be especially helpful to first-generation professionals who may be reluctant to approach professors. Such a democratizing effect would benefit not only students but also the clients those students will go on to serve. Because AI can help lawyers perform legal tasks more efficiently, they may be able to deliver legal services more feasibly to cost-sensitive, vulnerable communities. Professor Linna accordingly notes that law students should be excited about how they can use generative AI to help everyone gain access to legal services and justice. Unlike the professors above, those who embrace AI in legal education on these grounds are driven by optimism about AI’s promise rather than by a response to pressure.


For Worse?

Conversely, the democratizing effects of AI within law school can have ominous dimensions. In a study, “AI Assistance in Legal Analysis,” the researchers noted that the highest-performing students did worse with AI while low-performing students received a substantial boost in exam performance. This would compress grading curves, and the researchers note that the legal profession has a well-known bimodal separation between “elite” and “nonelite” lawyers in pay and career opportunities. Thus, AI would result in a “democratization” of performance. Though such an effect sounds positive, this form of democratization is alarming. Professor Schwarcz, one of the paper’s researchers, notes that AI access could have made high-performing law students somewhat lazier on their exams, or that the technology may have made them less likely to tap their legal reasoning skills. The ceiling of excellence in legal performance may drop, and access to AI might stifle creativity.

AI will also likely have disproportionate impacts across law school courses. In a study, “Lawyering in the Age of AI,” the researchers found that AI enhanced the quality of students’ work product significantly more for contract drafting than for a legal memo. They also found that AI substantially improved average student performance on multiple-choice questions but much less so on essay questions. This likely means that courses that lend themselves to multiple-choice questions, like civil procedure and evidence, may be subject to more change than courses like constitutional law that lend themselves to essay questions.

Enforcing limits on excessive AI use presents its own challenges. While UC Berkeley relies on its honor code to enforce the new AI policy, the researchers in the “Lawyering in the Age of AI” study believe that honor-code enforcement is simply impractical given how widely accessible AI tools are. Additionally, there are currently no reliable tools for identifying content produced by generative AI.


A Middle Approach?

In response to the pros and cons that AI poses for legal education, professors Choi, Monahan, and Schwarcz propose a mixed solution. They suggest that law schools ban, or at the very least substantially limit, the use of generative AI in core first-year classes. They explain that generative AI should not shape the way law students learn to reason. This makes sense: it is the AI that learns how humans reason, not the other way around, and cultivating legal reasoning skills is a fundamental goal of law schools. The professors note that a pedagogical analogue can be found in math, where students are taught to do arithmetic without calculators so that they master those skills themselves.


After the first year, though, those professors propose course offerings that teach students how to use generative AI tools effectively, so that students interested in serving the disadvantaged public may do so feasibly and well. Such an approach will likely allow students who dream of starting their own legal practices to transform those dreams into realities. Instead of the top-down approach that professors at Fordham adopt, perhaps this bottom-up approach promises a better legal future. Rather than subscribing to the existing marketplace of big legal employers, students skilled in both law and AI can build a new marketplace for high-quality services offered at prices affordable to those who could not otherwise pay for them.