As the dust begins to (somewhat) settle on how the professional world will interact with generative AI, it's possible to take stock of what firms should consider when choosing a vendor or partner to work with.
From helping dozens of clients start using generative AI, we've seen that the five areas below are what boards and AI working groups weigh up when choosing which path to take the firm down, and which partner(s) to join them on the journey.
Throughout this article we’ll explain some of the different options available in each area, and how Curvestone works with clients, through our WorkflowGPT platform.
Security
This is often the top priority for firms handling sensitive data and documents.
Firms have different risk appetites depending on the sensitivity of their data and documents. Broadly speaking, though, the differences between vendors come down to where data is processed and how that data interacts with the models deployed throughout the solution.
Will LLMs be trained on your data?
If you use the free ChatGPT web interface, anything you put in may be used to train the underlying Large Language Model, and could therefore resurface in results served to other ChatGPT users. In short, don't use free consumer interfaces (like the ChatGPT website) if this risk worries you.
Will the vendor “see” your data?
Even if the vendor has a private copy of an LLM, is their product set up so that data is "processed" on their servers? If so, sensitive data will be residing in a third-party application, which could breach client confidentiality.
Will the vendor use your data to train their own proprietary model?
A number of Legal AI tools in particular are aiming to create a "legally trained LLM" based on the input of the lawyers using their tools. In some cases they may look to anonymise the data; however, it may be that, again, your (client) data is being processed by a vendor and then deployed elsewhere to other clients, which you may decide carries too much risk.
Can you “partition” work with your vendor?
If you operate under a client relationship model where ethical walls must be respected, then it's not just humans but also AI that must be prevented from carrying work done for one client over to another. It's therefore important to ensure that any tooling you use won't, for example, use work from one client to train a model that is then applied to work for another client, as this too could be considered a breach of client confidentiality.
The approach we take with WorkflowGPT is to avoid all of the above issues. We're ISO 27001 certified and our application is built entirely within Microsoft Azure. When we work with clients we deploy the platform in their own dedicated instance, so that no data leaves the environment and no models used with other clients are trained on it. After 6-12 months, we can even move everything into the client's own Azure environment and be given encrypted access if and when necessary.
Flexibility
If there's one thing we feel certain of, it's that the Legal AI market hasn't yet settled.
Despite millions of dollars of venture capital funding going into promoting Legal AI products, in our experience the jury is still out on whether this type of expensive software is the best way to go.
The business model of these investor-backed companies is typically to create a "one size fits all" product that can be replicated across thousands of customers.
With this naturally comes rigidity, because features and upgrades need to make sense across the board, rather than being driven by what's right for an individual firm.
How easy is it to build “mini-applications”?
Most products will let you write and save prompts, but how easy is it to build more 'multi-step' work, which often yields greater value? Will you be using the same tooling as every other customer? Can you easily modify it to your needs?
How easy is it to integrate with other systems?
For POCs, it's fine to show solutions working in isolation; however, for broader adoption, a solution performs better when it's integrated into how the firm already works.
Can you white label the platform?
Related to the above, employees often like to see tooling with a look and feel consistent with the other technology they use.
Can the platform incorporate existing custom work you’ve done?
Most firms with a decent-sized IT or data science team will have experimented with building POCs and solutions themselves within Azure AI Studio. If the vendor doesn't already cover those use cases, it's useful to be able to fold this work into the broader platform rather than maintaining multiple solutions.
WorkflowGPT is built to be modified. The platform has four main underlying modules that can be used to cover the majority of legal use cases. This foundation, along with our support team, makes it 10x faster and cheaper to build solutions than doing so from scratch, and means clients end up with solutions that actually land within their organisation.
Future-proofing
We're at a new frontier, still watching how the AI industry will play out.
Right now there are a few big players with the muscle and deep pockets to develop LLMs and interfaces, and these underpin virtually all early applications of the technology.
The business model for the likes of OpenAI, Google and Anthropic is to undertake a land grab of early adoption and then have companies "locked in" to their LLMs (where they make money on usage) in the years ahead.
As such, be cautious of tooling built on components that only work with certain companies' LLMs, such as Microsoft's Azure AI Studio. If there's a price hike, or alternatives become better, faster or cheaper, it's prudent to ensure you can benefit from such advancements rather than being tied to one LLM provider.
A related consideration is platform choice. Choosing a vendor that is relatively fixed in what it can provide, and that is similar to what other firms will be using (see the next section), could mean the firm ends up feeling constrained in what it can do, and unable to reap the benefits of what one imagines will be falling AI processing fees.
What models can you use with your vendor?
If the answer is just GPT, it's possible the tool is built directly on modules "tied" to OpenAI's LLMs.
Can you deploy third party/ smaller models?
We believe that before long there'll be a well-functioning market for pre-trained LLMs (think: an LLM trained on Scottish or Canadian law). If and when that happens, it'll be useful to be able to benefit from it.
WorkflowGPT is built in Azure for security and compliance reasons, but in a way that is "multi-model". As such, our clients use not just OpenAI's GPT models but also Google's Gemini, Meta's Llama and Anthropic's Claude models on relevant use cases. We are also set up for clients to plug smaller or third-party models directly into their platform.
Competitive edge
As explored in this article, it’s difficult to stand out from your competitors if you’re doing the same as everyone else.
As you think strategically about how the firm is going to interact with this step-change technology, we think it’s prudent to put yourself on a path where you can begin to reap the advantages of the technology before it becomes mainstream.
This comes from having both the technology and the expertise at hand to move quickly and deliver market-leading solutions that benefit you and your clients.
What differentiated results have clients got as a result of your tool?
If the use cases given sound like a generic comparison of the pre- and post-AI world, try to dig a little deeper. Ideally you want to avoid a scenario where the "differentiation" was essentially a well-written prompt, as that isn't particularly defensible against others doing the same.
How have your clients won more business/ delivered higher profitability as a result of your tool/ partnership?
This is a good litmus test for whether the vendor is interested in technology for its own sake, or “gets” that its purpose is to generate business value, and has experience in doing so.
What support do you offer for developing bespoke solutions?
It's likely that for the first few years, your vendor(s) will know more about the intricacies of how generative AI works than you do. Some providers may not be set up to offer "experts for hire", which may or may not matter to you.
As well as providing out-of-the-box solutions for the majority of use cases that benefit a law firm, WorkflowGPT gets clients to bespoke solutions 10x faster and cheaper than building in-house. The platform is born from our services background, and we are used to delivering market-leading service to leading professional services firms (e.g. PwC, Grant Thornton) and top international law firms through our AI strategy and implementation work. You can also read this article on how we advise law firms to make money from AI.
Affordability
Lurking behind all of the excitement of what’s possible to do with generative AI is someone looking at their budget and seeing what this is all going to cost.
Transferring work from humans to technology obviously puts pressure on the billable hour model, and so every firm will likely have to go through a transformation in how it balances compensating people with investing in technology, in a world where client delivery is increasingly undertaken by AI.
In any case, linked to future-proofing and the general lack of stability in the Legal AI market, we see it as prudent to take a path where you can start reaping the benefits of AI without paying high prices, especially while usage and adoption remain unclear.
Beyond understanding the headline price(s), it's good to learn a little more about the pricing model and to get as much predictability as possible about how much you should budget.
What hidden costs might I encounter?
Whilst there's always an element of uncertainty, this question should help unearth some of the less obvious ways your vendor earns money from you.
How do you intend to pass on cost savings if/ when AI processing fees fall?
Some vendors bundle access to and usage of their tool into one price. Whilst this can seem 'simple', it means that it's the vendor, rather than you, who benefits from falling AI processing costs.
How much does it cost to use the tool?
Every time a user asks a question or runs a prompt, there's an associated processing fee to be paid, based on the number of tokens used. Even vendors who offer an "all-in-one" fee will have a cap, after which you'll either be unable to use the tool or need to pay more. Try to understand what this looks like, again to surface any hidden dynamics.
WorkflowGPT makes money from licensing our platform and selling professional services to our clients to modify and build on top of it. Our clients pay for their tokens separately, either directly to e.g. Microsoft or via us. This means that when prices fall, the savings are passed on to clients. Whilst it of course depends, from a budgeting perspective we've found it prudent to budget for £3-8 of token costs per user per month when using a general-purpose tool like ours.
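To make that figure concrete, here is a rough, purely illustrative calculation; the inputs (prompts per day, tokens per prompt, price per million tokens) are assumptions chosen for the example, not a quote, and will vary by firm and model:
20 prompts per user per working day × ~3,000 tokens per prompt × 22 working days ≈ 1.3 million tokens per month
1.3 million tokens × ~£5 per million tokens ≈ £6.60 per user per month
That lands within the £3-8 range above; the actual figure will depend on which models are used and how heavily the tool is adopted.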
In summary
There are of course other considerations in how firms choose a vendor, such as how the interface looks and which use cases are easy to achieve, but these are typically shorter-term considerations, as technology and UIs tend to develop and improve over time.
The five areas mentioned above cover the foundational elements of ensuring a firm is on the right path for them.
We’ve found that the best fit for most mid- to large-size law firms is to have a fully secure, affordable platform that covers most use cases out-of-the-box and is then modified and built upon when specific needs evolve and opportunities arise.
If you’d like to speak with us about how we could help your firm use generative AI in this way, then feel free to book a call with us here.