
When the robot does the research

How to respond with restraint to AI-fuelled findings while redirecting the conversation to the real-world decisions clients need to make

A lawyer examines a large pile of research documents stacked on the floor.
iStock/triloks

It happened twice this week. A client presented me with the results of their own research, which consisted of dumping their (privileged) legal advice into the (free, non-confidential version of) ChatGPT and asking me about “all of these issues that AI thinks are important to my case?”

As I stared at these voluminous, robot-generated missives, I struggled with how to advise the client (and their computerized helper) that while this "second opinion" sounded confident and was grammatically correct, it would probably be a waste of my time to unpack all the ways in which it was not helpful to their specific situation.

While sharing this experience with colleagues, I realized how many of us are struggling with the same issue. The modern client increasingly comes armed with their own research, generated by an algorithm that never bills by the hour and never hesitates.

The AI second opinion

Clients have always sought second opinions, but artificial intelligence (AI) has made that instant and almost free. Whether it’s drafting a contract clause, summarizing a case, or exploring a tax structure, AI tools now deliver what appears to be informed legal insight. And often, it sounds authoritative enough to question ours.

The problem is that AI does not know what it does not know. It does not understand nuance, context, or the reasoning that led to a legal conclusion. When it reviews a lawyer’s work, it does not analyze the thought process; it matches patterns in text, and that is where the real tension begins.

One of the most common and frustrating new dynamics occurs when AI reads a legal memo or contract and confidently proposes “additional issues” the lawyer supposedly missed.

Often, these are not new issues at all. They are matters the lawyer has already considered, weighed, and intentionally set aside because they were irrelevant, immaterial, or already resolved through other provisions. AI does not see that logic, however. It only sees patterns.

This creates two immediate problems.

First, the lawyer must engage in a time-consuming explanatory exercise, unpacking why those flagged issues were already considered and dismissed. The human reasoning behind legal judgment (balancing risk, cost, and practical reality) must now be defended against a machine's simplistic certainty.

Second, it subtly undermines trust. When clients see AI-generated findings, they may start to wonder whether their lawyer missed something. Even if the lawyer ultimately proves correct, the process can erode confidence in the relationship.

There’s also a third problem emerging, and it’s economic.

If we are forced to unpack and justify every analytical decision to an algorithmic shadow audience, lawyering itself becomes slower and more expensive. It is a bit like a pilot who has to narrate every decision to passengers mid-flight, explaining why they adjusted altitude, why they changed course, and why turbulence is nothing to fear. The explanations might be educational, but they're also a distraction. They take attention away from what really matters: flying the plane safely.

The same is true in law. If we spend too much time explaining to clients why every potential issue raised by AI is immaterial, we risk losing the efficiency and focus that make professional judgment valuable in the first place.

When explaining becomes exposure

There is another quieter risk that deserves attention: the liability exposure created by debating with AI-generated feedback.

Each time a client forwards AI-generated concerns and the lawyer responds in writing to explain why those points are wrong or irrelevant, a record is created — a growing thread of comments, clarifications, and dismissals. Over time, this back-and-forth can start to look less like professional reasoning and more like a list of client instructions.

That is where the danger lies.

If one of those AI-generated suggestions later proves tangentially relevant, or if a dispute arises, the paper trail can be misconstrued. It may appear as though the client raised an issue and the lawyer ignored it. In reality, the lawyer may have rightfully dismissed a false positive. But in hindsight, and under the harsh light of litigation or a professional negligence claim, the distinction between AI chatter and client instruction can blur.

In other words, the more we debate with the machine, the more exposed we become to it.

This dynamic places lawyers in a difficult position: respond too briefly and you risk appearing dismissive; respond too thoroughly, and you create a longer record of hypothetical issues that can be used later.

Managing that balance will require judgment, discipline, and restraint. Not every AI prompt deserves a written rebuttal. Sometimes the most prudent professional move is to bring the conversation back to where it belongs: the real-world decision the client needs to make.

Problem-solving or educating the machine?

Firms that handle this dynamic well will use AI as a bridge, not a barrier. When a client’s AI flags something, the conversation should not be defensive. It should sound like: “That is a good observation. Let me explain why that issue does not apply in this context, and what we considered before reaching this conclusion.”

Handled well, these interactions deepen trust. They show that the lawyer is not threatened by AI but is operating at a higher level of reasoning.

Handled poorly, they can damage credibility. If lawyers appear dismissive or impatient, clients may see that as evasion. If we over-explain, we risk validating the AI’s false authority. Striking that balance will become a defining skill for the next generation of legal advisors.

There is also a practical dimension that clients need to understand: explanation takes time, and time costs money. Every time AI raises a new issue, the lawyer must analyze it, place it in context, and explain why it may not matter. That is not free, nor is it efficient.

At some point, the conversation has to include a reality check. It is entirely fair to ask the client, candidly and politely: “Do you want to pay me to argue with your robot?”

It is a disarming question, but it cuts to the truth. The lawyer’s job is not to spar with an algorithm; it is to guide human judgment through complexity. Clients should decide whether they want their lawyer focused on solving the problem or educating the machine.

Whenever a client's AI enters the discussion, the lawyer should also set clear boundaries at the outset. It is wise to state explicitly how far we are willing to engage with AI-generated advice. Given the current state of these tools, the best professional response is usually to say, politely, that engaging directly with the AI output is not in the lawyer's or the client's best interest. Instead, we should invite the client to identify any specific items they would like us to explain or defend.

This places the responsibility and cost decisions back where they belong. The client can then choose whether they want to pay for additional engagement.

The real value of judgment

AI can be a powerful tool in the hands of professionals. It can help us narrow long lists of issues, surface comparable precedents, and summarize positions quickly. Used properly, it can make legal analysis faster and more efficient.

However, that power depends entirely on the judgment of the person using it. In the hands of clients who lack the training or instinct to prompt it properly and interpret what it generates, AI can become dangerous and misleading.

Lawyers do not just identify issues. We rank them, discard the irrelevant, and focus on what is truly significant to the client’s position. We factor in the client’s negotiating leverage, timing, and commercial realities — things no algorithm can yet evaluate. 

That is where the real value lies. AI can accelerate the search for possibilities, but it cannot decide which ones matter.

The future of advice

The arrival of AI in the client’s toolkit has changed the rules of engagement. Lawyers are no longer the only ones in the room analyzing the problem, but we are still the only ones capable of understanding it.

AI can surface issues, but it cannot prioritize them. It can find possibilities, but not make judgments. The challenge for lawyers now is to turn those false positives into opportunities to demonstrate precisely what the machines cannot replicate: experience, discernment, and professional judgment.

The future of legal advice will not belong to those who fight against AI, but to those who can integrate it calmly, intelligently, and without losing sight of the fact that, when it comes to navigating complexity, clients still need someone to fly the plane.


This editorial was originally published in the Ontario Bar Association’s magazine, Just.