
Can AI Write Case Summaries Like ICLE? Not Really.

By Marissa Navarro posted yesterday


Hi, all! I’m currently a Fellow with ICLE and a 2025 graduate of Michigan State University College of Law, now barred in Illinois. I’ve been really interested in how AI is starting to show up in legal work, especially when it comes to research and writing. 

ICLE wanted to test whether AI tools could generate case summaries similar to the ones we produce in-house. The short answer is no.

For this project, I took ten case summaries from the Legal Updates page and used four different AI tools (Claude, ChatGPT, Gemini, and Copilot) to create their own versions. The goal was to compare how closely they matched ICLE’s summaries. Some of the results were pretty surprising.

The Main Takeaway: The Prompt Really Matters.

The biggest lesson learned is that AI should not be used without a very detailed prompt.

That means including the full opinion, being clear about what you want, and even telling the tool what perspective to take. If you do not do that, it tends to miss what actually matters.

A lot of the time, the points ICLE identified as most important were not the points the AI tools focused on. Instead, they would focus on a secondary issue, default to a common legal topic that did not apply, or completely misread what the case was about. This lines up with the recent ICLE update, “AI Basics for Michigan Attorneys,” which talks about how important it is to get the prompt right.

What I Noticed About Each Tool

It should be noted that I used the free version of each tool, recognizing that not every member will have access to premium subscriptions to the AI tools. I also used a deliberately basic prompt, to demonstrate that the prompt matters. Here is the prompt I developed: “You are a legal editor at the Institute of Continuing Legal Education preparing materials for practicing attorneys. For each case provided, write a concise summary of no more than five sentences that includes (1) the key facts, (2) the holding, and (3) any notable element or significant policy implications. Write in a clear, professional tone appropriate for an experienced legal audience.”

Claude

Claude was decent in one important way: it knew when it did not know something. If it could not access an opinion, it said so clearly. The problem is that when it got something wrong, it was not a small mistake. It would flip a holding or guess what a case was about based only on the caption. That kind of error could cause real problems if relied upon.

ChatGPT

ChatGPT struggled the most with identifying the actual legal issue. It was consistently confident, even when it was wrong. What was interesting is that its most useful output was not the summaries themselves, but the feedback it gave when comparing its work to ICLE’s. That part of the analysis was extremely helpful.

Gemini

Gemini was the most unpredictable. In some cases, it completely changed the type of case it was summarizing. It might take a child welfare case and turn it into a probate issue, or treat a statutory dispute like a tort claim. Some of its commentary was useful, but it was often built on facts that were not right.

Copilot

Copilot came closest to ICLE’s summaries. It would find real legal issues in the case, just not the one that mattered most. It tended to focus on a secondary holding and miss the main point. It might be helpful as a backup check, but not something to rely on by itself.

Patterns That Kept Showing Up

After going through all of the summaries, a few things came up again and again:

·      The tools often got the main legal issue wrong.

·      They defaulted to familiar legal concepts even when they did not apply.

·      They struggled with procedural posture and statutory details.

·      They missed the practical points that matter for attorneys.

In some situations, the summaries were not just slightly off; they described a completely different case.

So What Does This Mean in Practice?

AI can be useful, but it is not a shortcut. It is not a replacement for reading the opinion and definitely should not be used without review. This project made a few things clear:

·      You need to include the full opinion in the prompt, not just the citation.

·      You need to be very specific about what you are asking for.

·      You should treat the output as a starting point, not a final product.

·      Attorneys should still review the information provided.

Final Thoughts

There is still value in these tools: they can help you think through issues or catch something you might have missed. But when it comes to case summaries that attorneys are actually relying on, ICLE will continue to draft its summaries in-house to ensure that the information is reliable and accessible to its members.
