Law Review Puts Out Full Issue Of Articles Written With AI

Do androids dream of electric footnotes?

Jun 9, 2025 - 20:50

While practicing lawyers embrace generative AI as a quicker and more efficient avenue to sanctions, law professors have mostly avoided AI headlines. This isn’t necessarily surprising. Lawyers only get into trouble with AI when they’re lazy. It becomes a problem when someone along the assembly line inserts AI-generated slop without taking the time to properly cite check. Legal scholarship, on the other hand, is all about cite checking — usually to a comically absurd degree.

A 10-page article doesn’t get 250 footnotes because someone’s asleep at the switch.

But that doesn’t mean legal scholarship is somehow shielded from the march of technology. Generative AI will find its way into all areas of written work product eventually.

The Texas A&M Journal of Property Law decided to take the bull by the horns — horns down, as the case may be — and begin grappling with AI-assisted scholarship by publishing a full volume of it.

In the course of publishing the 2024–25 Volume of the Texas A&M Journal of Property Law, we, the Editorial Board, were presented with the opportunity to publish a collection of articles drafted explicitly with the assistance of Artificial Intelligence (“AI”). After some consideration, we made the decision to do so. The following is our endeavor to share with our peers and colleagues—who may soon find themselves in similar situations—what we have learned in this process and, separately, contribute some forward-looking standards that can be implemented in the arena of legal scholarship for the transparent signaling and taxonomizing of AI-assisted works.

A foreword prepared by Spencer Nayar and Michael Cooper, Editor-in-Chief and Managing Editor respectively, laid out the issues the staff encountered in putting together the volume and explained how they addressed them.

The four articles, technically authored by Kansas Law professor Andrew W. Torrance and Bill Tomlinson, a professor of Informatics and Education at UC-Irvine, dealt with biodiversity loss and associated legal issues. But the real action in these articles resides in a footnote:

Portions of this article were drafted and/or revised in collaboration with ChatGPT (GPT-4o, Sept. 2024), Anthropic’s LLM Claude (Sonnet, Sept. 2024). All content was reviewed and verified by the research team. To ensure ethical and responsible use of AI, we engaged with ChatGPT in line with the best practices described by Bill Tomlinson, Andrew W. Torrance, and Rebecca W. Black, as well as the recommendations outlined in Nature Editorials. Bill Tomlinson et al., ChatGPT and Works Scholarly: Best Practices and Legal Pitfalls in Writing with AI, 76 SMU L. REV. F., 108 (2023); Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use, NATURE (Jan. 24, 2023), https://www.nature.com/articles/d41586-023-00191-1 [https://perma.cc/PD4R-2GM8].

In the foreword, Nayar and Cooper identify three key factors in weighing legal scholarship: authorship (can we use this to grant tenure?); reliability (did we put in enough footnotes?); and effort (have we made ourselves miserable enough in writing this to consider it a valuable contribution?). Along with the more elusive category of “merit,” the editors determined that AI won’t undermine, and might even enhance, an article’s value on these factors. Its human authors remain on the professional hook for the output, the potential high-profile embarrassment of hallucinations will keep editors focused on chasing down verification, and while AI will make the slog of writing easier, it can’t replace the soul-sucking draft-turning process. As for “merit,” AI can open up new inquiries that purely human scholarship couldn’t get at:

AI reifies, rather than offends, the value of merit in the legal paradigm because AI carries with it the capacity to help unleash creativity through the automation of various tasks. As scholars endeavor to find the next complex issue in law, their research may require in-depth pattern mining or other form of quantitative analysis or literature review. These are tasks that AI can help with by not only conducting rudimentary research but also by aiding scholars in their quest to find new connections between old dots.

Venturing into this new AI-assisted world, the editors proposed some best practices. It’s largely common sense — edit carefully and look out for unintentional plagiarism — but the journal also proposed a five-level taxonomy for signaling the level of AI involvement in a work: purely human output on one end and purely AI on the other. In between, there are signals for using AI as a research aid, using it to draft outlines or early drafts, and using it to put together substantial chunks of text. They propose disclosing this at the top of the article:

To inform readers of a specific article about the extent of AI use, the author should include a disclosure within their article’s biographical footnote. This disclosure should include a basic description of the AI used and, in brackets, the level of assistance. For example:
John Doe, Professor of Constitutional Law at Arpeggio University.
The Author used Artificial Intelligence in the researching and investigation of this topic. [AI Assistance Level 2].

This is probably overkill and might not even be feasible over the long term. It’s like suggesting authors flag every article based on how much they used the internet — it might’ve been interesting in 1996, but now that it’s fully integrated into daily life, it’s hard to draw a line. It’s also a disclosure that might be counterproductive: if the author intended to create a first draft and happened to prompt the AI well enough that the output only required minor edits, the article would move up the scale unintentionally.

For that matter, what does it mean to move “up” and “down” the scale? Using AI as a research aid clocks in closer to purely human output than using it to draft substantial amounts of text, though as the growing ranks of sanctioned lawyers can attest, research assistance can be a lot more problematic than spitting out filler prose.

But in any event, this is a project that someone needed to take on, so the editors should be commended for taking the initiative here.
