Artificial intelligence is quite useful, and it becomes more capable every day. Whether you’ve fully embraced what AI can do for your law firm or you’re holding off until you understand more of the potential ethical implications, this is the right time to create an official AI policy for your firm.
Like most tools, AI is only valuable if it’s used the right way. When wielded irresponsibly, artificial intelligence can create expensive problems.
Consider predictive coding as an example. This is a longstanding application of AI for legal work, and it has been an absolute lifesaver for many legal professionals who used to guzzle gallons of coffee to make it through those last-minute, all-night document review sessions.
Then, think about the lawyers who were fined $5,000 for citing fake cases that they “found” while doing legal research with ChatGPT.
Here’s the challenge:
It doesn’t matter whether you see AI as the next big thing or the next big problem — your law firm employees are going to use it either way. By having an AI policy in place, you can help guide them towards acceptable use and set boundaries to avoid the biggest risks.
Why your firm needs an AI policy
You already know that AI tools can have both big benefits and big risks. This is especially true for generative AI (tools like ChatGPT or Midjourney that use AI to create digital content) and any programs that gather data as you use them.
Why?
Generative AI can produce content that appears to be plausible, even authoritative, yet is still inaccurate. This means that AI output must still be double-checked for accuracy by human attorneys who can apply their knowledge and experience.
There’s also a concern about the way generative AI “creates” its output. Artificial intelligence can’t actually be creative. It can only analyze huge amounts of data produced by real people, then give you a result that’s essentially a remix of those original pieces.
To some people, that’s plagiarism.
Others argue that human intelligence does pretty much the same thing.
As for the question of legal ethics, the jury’s still out, so to speak. Bar associations have so far offered little concrete guidance on the sticky question of AI and intellectual property, so it’s up to you to make your own ethical judgment.
That judgment belongs in your firm’s AI policy.
There’s another big thing to consider, too.
When providing prompts to generative AI, lawyers must be mindful of privileged, confidential, or sensitive information. If it’s protected by attorney-client privilege or it could be used by a hacker to gain access to other sensitive data, it shouldn’t go into an AI tool.
What must be included in your AI policy?
From a lawyer’s perspective, human oversight is likely the most critical factor in the responsible use of AI output.
Your AI policy should mandate a certain level of oversight for different types of activities. Perhaps you can trust your legal document automation tool without close review but require a thorough check of anything generated by an AI writing assistant.
You might also require human review for specific types of work. A motion or appellate brief relying on legal arguments and case citations, for example, will definitely require double-checking by a real, live attorney. Meanwhile, an AI-drafted email is probably safe to send with a simple proofread.
Your AI policy should also create a command structure for your firm’s AI usage.
At least one person, if not a group of people, should be in charge of overseeing your entire AI ecosystem. This AI team will monitor the tools you use, update them as needed, and ensure ongoing training.
Incident response and remediation should also be addressed in your AI policy. Just as your firm needs a plan for responding to cybersecurity incidents, it needs a plan of action for AI-related incidents.
Finally, the most obvious thing to include is a list of tools you absolutely don’t want your employees to use.
ChatGPT tends to be the most controversial.
If you’re going to allow it, make sure to be very clear about how ChatGPT should and shouldn’t be used for legal work. If you don’t want people using it at all, make that very clear in your AI policy.
How will you enforce these guidelines?
Employee training makes your AI policy worthwhile. Without clear communication methods and intentional training, that policy is just some list you wrote for yourself.
Your firm’s entire staff should be trained on how to use AI, with prohibited activities clearly set forth.
In addition, there should be well-delineated consequences for violating the guidelines.
Your firm will be better able to implement its AI guidelines when designated individuals, such as a Chief AI Officer, are responsible for oversight of the firm’s AI systems. These individuals can ensure guidelines are enforced, with punishments for violations meted out appropriately and fairly.
Note: ensure your AI training is robust
While some of your firm’s attorneys and staff will happily dive into new tools, many will want more guidance before they’re comfortable with AI. Accordingly, your firm should train employees on how AI is to be used in their day-to-day work.
If your firm has the resources, it may even be advisable to have designated personnel in charge of AI training specifically.
Keep in mind that many vendors of AI tools will provide their own training, especially if you’re paying for the tool. Just ask your sales rep or send a message to their support team to request training for you and your team.
While you should take advantage of vendor-provided training, remember that your firm is still responsible for defining “proper” AI usage. An AI vendor can’t decide when its tool should be used for legal research, case analysis, or document review; those decisions fall to the firm’s leadership.
A note about third-party usage of AI
A law firm has to be concerned with more than its own internal use of AI. Most legal practices deal with a multitude of vendors and third parties, and some of those probably use tools powered by artificial intelligence.
Logically, this means a legal practice must have a policy regarding third-party usage of AI.
For example, if you don’t want your own employees using generative AI because you’re concerned about questions of copyright and plagiarism, then you won’t want a freelancer or agency you’ve hired creating AI-generated content for your website, either.
The easiest way to address these concerns is to ask vendors what kind of AI-powered tools they use. Discuss your existing AI policy and, if applicable, ask them to sign an agreement to abide by your usage rules.
Final steps
You’re almost there! You know what to include, and now you’re ready to share the new AI policy with your law firm.
Keep that document on a shared drive where everyone in your firm can access it. Part of your training should include instructions on how to find and navigate policy documents without needing to ask a supervisor.
Next, go into your calendar program and set a reminder to review this policy in six months.
Why?
Think about how much artificial intelligence has changed in just the past six months. If you had introduced an AI policy in January 2023, it would have been outdated by July.
It’s also a good practice to update your AI policy whenever you add a new tool to your tech stack.
With that in mind, you’re ready to go. Artificial intelligence can be a real advantage for your firm, and publishing this policy is an important step to make sure you get the most out of it without unnecessary risk.
Author
After a fifteen-year legal career in business and healthcare finance litigation, Mike Robinson now crafts compelling content that explores topics around technology, litigation, and process improvements in the legal industry.