The world of legal education was rocked recently when the State Bar of California admitted to using artificial intelligence (AI) to help develop some of the multiple-choice questions for its February 2025 bar exam. The revelation has ignited a heated debate among law students, educators, and legal professionals about the role of AI in high-stakes testing and its implications for fairness, transparency, and trust in the licensing process.
The Unfolding Controversy
It all began when hundreds of aspiring lawyers voiced concerns about technical glitches and irregularities during the February bar exam. Many reported being kicked off the online platform, encountering lagging screens, and facing confusing or error-filled questions. As the dust settled, the State Bar’s admission that a subset of questions had been developed with AI only added fuel to the fire.
For many, the idea that non-lawyers—using AI—were involved in drafting questions for such a critical exam was hard to swallow. Academic experts like Mary Basick from UC Irvine Law School and Katie Moran from the University of San Francisco School of Law expressed shock and frustration, questioning the validity and reliability of the process. Their concerns centered on whether AI-generated questions could truly assess the nuanced legal reasoning required to practice law.
How AI Was Used in the Exam
According to the State Bar, out of 171 scored multiple-choice questions, 100 were created by Kaplan Exam Services, 48 were recycled from a first-year law student exam, and 23 were developed with the assistance of AI by ACS Ventures, the Bar’s independent psychometrician. The State Bar emphasized that all questions, regardless of their source, were reviewed by panels of subject matter experts for legal accuracy, minimum competence, and potential bias.
Despite these assurances, critics pointed out a potential conflict of interest: ACS Ventures, the same company that developed the AI-assisted questions, was also responsible for validating them. Others worried that the speed and method of question development compromised quality, especially compared to the years-long process typically used by the National Conference of Bar Examiners.
The Push for Innovation—and Its Pitfalls
The State Bar’s move to incorporate AI was partly driven by a need to cut costs and modernize the exam process. Facing a significant budget deficit, the Bar had recently switched to a hybrid model of in-person and remote testing, and contracted with new vendors to develop and administer the exam. The California Supreme Court had even encouraged the Bar to explore new technologies like AI to improve reliability and cost-effectiveness.
However, the rollout was anything but smooth. The technical issues, combined with the controversy over AI-generated questions, led to calls for audits, lawsuits, and demands for greater transparency. Some experts argued that reusing questions from a first-year law exam was inappropriate, as the standards for bar admission are much higher and require more advanced legal reasoning.
What’s Next for California’s Bar Exam?
In response to the uproar, the State Bar has asked the California Supreme Court to adjust test scores for those affected by the February exam's problems. The Bar is also reviewing its processes and considering further changes, including whether to continue using AI in future exams. Meanwhile, many in the legal community are calling for the release of all exam questions and a return to more traditional testing methods.
For test takers and educators, this episode serves as a reminder of the importance of transparency, rigorous review, and clear communication—especially when new technologies are introduced into critical assessment processes.
Actionable Takeaways
- If you’re preparing for a high-stakes exam, stay informed about any changes in format or question development.
- Advocate for transparency and fairness in testing, especially when new technologies are involved.
- Engage with professional organizations and provide feedback to help shape future exam policies.
Summary: Key Points
- The State Bar of California used AI to help develop some bar exam questions, sparking controversy.
- Technical issues and concerns about question quality led to widespread criticism and calls for reform.
- According to the State Bar, all questions, including those developed with AI, were reviewed by subject matter experts for accuracy and fairness.
- The Bar is considering adjustments and further changes in response to the backlash.
- The episode highlights the need for transparency and careful oversight when integrating AI into high-stakes testing.