Everything I need to know about GenAI I learned in kindergarten

It may be new technology, but the old rules still apply

As lawyers in Canada begin using generative AI in their work, it helps to think back to what we learned in kindergarten. Many of those early lessons still matter today, especially when working with new technology. 

The basics, like honesty, safety, and responsibility, apply just as much in the digital world as they did in the classroom.

Don’t tell tales

In kindergarten, we learned that stretching the truth gets you in trouble. In the world of generative AI, we call this hallucination.

Like a youngster with a vivid imagination, AI needs a grown-up (you) to check its homework. You cannot blame the robot for a fake citation. If AI gives you a case that sounds real, you must check it. Always confirm citations with reliable sources before using them in your work.

Share your toys, not your secrets

Remember when your teacher insisted you share the crayons? With AI, this means sharing good practices with your coworkers so everyone uses the tools safely. 

But some things should not be shared. Never put client information into public AI tools. These systems may store or train on your data, potentially exposing confidential or privileged information. Use secure, enterprise-grade tools that protect your data.

Client information deserves careful handling, whether it’s in a filing cabinet or a prompt window. Our ethical duties of competence, candour, and confidentiality don’t vanish just because technology enters the room.

Hold hands and stick together

Holding hands kept us safe on school trips. In legal work, it means reviewing AI output with another person when possible. A second set of eyes can catch errors, including hallucinations or inaccurate facts, before they reach a client or, worse, a court.

No cheating

Children are taught not to cheat, even when nobody is watching. Lawyers should treat AI the same way.

Waiting until you get caught is not the answer, and hallucinated cases are not harmless playground fibs; they’re professional risks. You must review and verify everything the tool produces.

Don’t hit people

When we were told as kids, “don’t hit people,” it meant don’t hurt others. In legal practice, it means not using technology in ways that could harm clients or mislead a court. Many free AI tools store or learn from user data. Sharing privileged information with them can cause serious damage. Your ethical duties come first.

Put things back where you found them

This rule teaches order and responsibility. For AI, it means knowing where your data goes, how it is stored, who can access it, and whether it will be deleted.

It also means keeping good records. If you used AI to help with research or drafting, record what was generated and what you changed. This protects you if questions arise later.

Clean up your own mess

In kindergarten, you cleaned up your own spills. In legal practice, you must clean up AI’s mistakes. Just as the excuse “the dog ate it” rarely explained away a lack of completed homework, courts do not accept the excuse “the AI did it.” There is no passing the buck to the algorithm. 

Remember, AI may sound confident even when it’s wrong. Verify facts and sources, and apply your own legal judgment. If AI makes a mistake, it’s on you to find it.

Review AI-assisted work carefully before relying on it or sending it out. Fix errors, remove biased language, and make sure everything is accurate.

Say you’re sorry and own your mistakes 

Everyone makes mistakes. What matters is how you respond. 

If you used AI and something incorrect ends up in your work, take responsibility and correct it quickly. Professional accountability still belongs to the lawyer, not the tool.

Wash your hands

Just as washing your hands keeps germs away, good cyber hygiene protects you from digital threats. Lawyers are common targets for phishing and deepfake attacks, so use strong security practices, including multi-factor authentication.

Take a nap every afternoon

Rest helps you think clearly. Today, AI can tempt lawyers to rush because it produces quick answers. But speed can hide mistakes. Take time to think, step back, and check whether the AI’s output makes sense.

Keep asking ‘why?’

Kids ask “why” constantly. Lawyers should do the same. Ask the AI why it produced a certain result or why it ranked data a certain way, and ask yourself why and how bias might appear in its output.

Understanding how AI works helps you use it responsibly and explain its limits to clients and courts.

The takeaway

Generative AI gives lawyers powerful new tools, but the basic principles we learned in kindergarten still guide responsible use. Success depends not on the technology itself, but on using it with care, judgment, and respect for ethical boundaries.