Now that it's been shown that law firms can safely use generative AI in their day-to-day work, there’s a question that all firms should be asking: how do we want to engage with this technology?
Most headlines point to how AI will “disrupt” the legal industry beyond recognition, which may or may not be the case.
Society at large will almost certainly still require a legal function in a world with even more advanced AI, and firms with established experience and reputations will be best placed to serve it.
With that in mind, firms’ leadership can turn their attention to what will be strategically important for them and for their organisation’s longevity.
One option is to take a stance against artificial intelligence and double down on human intelligence.
The growth of Spotify and other digital streaming platforms coincided with a resurgence of physical record stores.
When everyone zigs, there can be benefits in zagging.
If your firm serves clients who are sceptical about artificial intelligence, and are willing to pay a premium for human involvement in legal matters, then this could be a genuine differentiation point.
The viability of this route will likely depend on the addressable market size of clients wanting “human only” legal work when AI involvement becomes more commonplace, and the ability to hire and retain (human) talent to do routine work that AI does elsewhere.
For most, the gateway into using artificial intelligence is a set of interfaces through which human lawyers and employees can ask an intelligent bot for help.
This is the premise of ChatGPT, which is how most people were introduced to the technology.
A human is working on a task, types into a chat window what they want and the AI uses this to do the bulk of the work.
Using this for the first time can be genuinely impressive (“How was it able to draft that so well based on only a few sentences?”), though for anyone who has used AI assistance, the shine can soon wear off when it comes to using it in actual work.
The quality of the final 20% of the output depends largely on the quality of the input (i.e. the prompt), and prompting well is a skill in itself.
LLMs may improve at generating high-quality output from generic prompts, though there will likely always be a somewhat specialist skill in accurately communicating what you would like an AI to do.
In the tight-deadline environment of legal work, this is often not a priority and, in the long run, it’s arguable whether it’s a skill worth investing in over other activities.
Many people can use software programs without needing to know how to code.
Firms should think about what employees actually need to know in order to derive benefit from AI.
A level up from AI assistance is to use AI to augment the work of human lawyers.
This approach is grounded in the idea of viewing (legal) work as a series of processes, and then inserting AI into the relevant steps.
Processes are meant to be repeated, and therefore skilled Legal AI experts (often in Legal Tech or Legal Operations) can invest time and expertise into ensuring that AI is working 97-99% accurately for the specific steps it is required to do.
It’s much more plausible for AI to get 99% accuracy when an expert has rigorously defined its task as part of a process vs. asking it to do something vague like “summarise the key points from this client meeting and compare it to others”.
In this world, the best processes are designed to ensure humans focus on their expertise, and have AI help them do their best work.
The main challenge with AI augmentation is ensuring that the human can easily edit and verify the AI’s results.
The goal of AI augmentation is still for the human to give final approval on the output rather than be in a place where AI is entrusted to make judgement decisions.
When working on client matters, we have found it critical to have a clear audit trail that records the handover from AI to human (i.e. the point at which the law firm takes on liability for the final output), as well as the reasoning behind how the AI came to make its judgement.
If your workflows involve lawyers working with large volumes of documents, then we’d recommend requesting a demo of the WorkflowGPT document abstraction tool to see how quickly its benefits can be realised.
AI Automation involves removing the human from the process and leaving things to AI.
Whilst this might sound dystopian, there are actually places in law firms where this approach can already be valuable and relevant.
Unsurprisingly, the areas where AI automation is best suited are the same as for other forms of automation: high-volume, repetitive tasks where a certain level of error is acceptable.
The difference that generative AI has brought is that automation can perform exceptionally well with a non-standard input.
As an example, “technology” (OCR + machine learning) has for a number of years been able to automate the extraction of information from invoices. This works because most invoices follow a standardised format that pre-GPT-3.5 machine learning models could be trained to “read”.
Applying this approach to documents with greater variability and richer semantic meaning was almost impossible.
The fundamental nature of generative AI means it can produce much better results from a drastically smaller training set. This opens up automation from standardised documents alone to almost any type of legal document.
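As a sketch of what a rigorously defined extraction task might look like in practice, the function below frames a narrow, schema-driven instruction for an LLM rather than a vague request. The function name, prompt wording, and schema fields are all hypothetical, and the call to an actual model is omitted; the point is only to show how tightly a task can be specified.

```python
def build_extraction_prompt(document_text: str, fields: dict[str, str]) -> str:
    """Frame a narrowly defined extraction task for an LLM: name each
    field, describe it, and demand JSON so the output can be verified."""
    field_lines = "\n".join(f'- "{name}": {desc}' for name, desc in fields.items())
    return (
        "Extract the following fields from the document below.\n"
        f"{field_lines}\n"
        "Respond with a single JSON object using exactly these keys; "
        "use null for any field not present.\n\n"
        f"Document:\n{document_text}"
    )

# Illustrative schema for a side letter -- field names are assumptions.
schema = {
    "counterparty": "legal name of the counterparty",
    "effective_date": "effective date in ISO 8601 format",
    "governing_law": "governing-law jurisdiction, if stated",
}
prompt = build_extraction_prompt("…full document text here…", schema)
```

Constraining the model to named keys and a machine-readable format is what makes the output checkable step by step, in contrast to an open-ended summarisation request.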
Again, for automation to be deployed, there needs to be an acceptable level of error. For this reason, the automation projects that Curvestone usually does with clients are around “reading”, categorising and summarising documents for internal use, e.g. the creation of searchable databases of documents that were previously unusable.
If you currently have documents/ datasets within your firm that aren’t being fully utilised, or searchable datasets that you’d like to build, then reach out to set up a call with us.
A subset of AI automation is what you might call AI Judgement: trusting AI not just to extract information from documents, but also to recommend what should be done.
Much talk of the advancement in LLMs points to the scenario where AI can replace the judgement of senior lawyers in decision making by, for example, comparing the matter with previous cases, ingesting other contextual information (perhaps missed by human lawyers) and delivering a verdict.
We’ll leave it to individual firms to decide whether this is a path they want to pursue.
Our stance is that judgement on cases should be (and most likely always will be) a matter of AI augmentation, so firms should focus on building processes whereby AI surfaces its reasoning for a human to make the ultimate call.
This article is premised on the question: how should law firms use AI?
The intention is to show that it’s not a binary “we should”/“we shouldn’t”, but rather a question of where and how to use it.
Curvestone’s WorkflowGPT and support services are designed to help firms find the right places for augmentation and automation. If you’d like to speak with us about how we could help you, then feel free to book a call here.