This week I have an interview with David Wakeling, head of the AI group at A&O Shearman. It was a really fascinating conversation, and I’d encourage you to listen to the whole thing (on 2X it will only take ~15 minutes). We discussed how one of the world’s leading law firms is reshaping its business model around artificial intelligence. The transcript is below, but here are a few key takeaways:
Back in 2022, A&O Shearman became the first law firm globally to roll out generative AI at scale, partnering with Harvey (then a startup, now a multibillion-dollar company). Early applications were modest, saving small increments of time, but they proved the technology could work for legal use cases.
From there, the firm developed Contract Matrix, an in-house system layered on top of foundation models like OpenAI’s. Contract Matrix combines advanced prompt engineering with curated data lakes and proprietary expertise, enabling AI to deliver outputs that sound like the work of highly specialized lawyers. Contract Matrix is now widely adopted at the firm and is even being sold to corporate counsels in partnership with Microsoft.
Wakeling believes that GenAI will have big implications for talent. He foresees lawyers working in hybrid roles—part legal expert, part engineer. He even said that law schools are already training students to think in terms of prompt engineering, validation, and spotting the right use cases for applying AI tools.
Finally, David cautions other professional firms to avoid “innovation theater” that’s more about flashy AI demos than about really changing how they do the work. Harnessing AI for real gains requires significant innovation and change management.
Anyway, I’d suggest you listen for yourself. It’s exciting to see how fast the (supposedly boring) legal profession is moving to adopt AI.
Here’s the full transcript:
Richard Lichtenstein: Hello everyone. I am Richard Lichtenstein. I am the host of Artificial Investment, a Substack and sometime podcast. I am here with David Wakeling, who is the head of the AI group at A&O Shearman, a law firm. And he is here to tell us a lot of interesting stuff about AI and what he’s doing.
David, why don’t you introduce yourself? Tell us a bit about what you’re doing.
David Wakeling: Thanks Richard. Very pleased to be on your podcast. So I am a partner based in London for this very big law firm called A&O Shearman, and we have made it one of our strategic priorities to drive AI across our business.
That means pushing it around our lawyers to make sure there’s good internal adoption to achieve internal efficiencies, and it also means new revenue streams by finding new ways of delivering legal services using software. So my job in the firm (I run a team of lawyers, developers, and data scientists to do this) is to explore that future business model, which is augmented by AI.
Richard Lichtenstein: That’s fascinating. I’d love to separately discuss the internal use cases of how you’re getting lawyers to use AI and the revenue streams, which I think is also super interesting. So why don’t we start with the internal and then we’ll get to the external.
So internal use cases. What have been the most successful use cases that you’ve had so far?
David Wakeling: The most useful use case with limited effort and energy going into it was when we, back in 2022, were the first law firm in the world to roll out generative AI. We did a partnership with Harvey, which was just starting back then, but is now obviously a very successful $5 billion plus business.
But we were right there at the beginning. And what we were doing in those days was rolling out generative AI models which are tuned for law. So Harvey had changed the functioning of the model, changed the weightings, added legal data to make the system behave more like a lawyer compared to ChatGPT and other systems, which were just emerging at that time.
And that was an easy, quick win because we knew we could roll this out in a safe environment ‘cause we found ways of driving it in an InfoSec compliant way. And with good guardrails, we found ways of rolling it out to 4,000 people. By Christmas 2022, we had 4,000 people, and we all kept it a giant secret in the firm.
We had enormous numbers of people using it, and there it was curing writer’s block. So it was saving a bit of time and it was meaningful, but our method of understanding the difference it was making to lawyers’ days was quite anecdotal, so they’d say, yeah, I used it and it was great, and I saved, I know I saved half an hour there.
It wasn’t the best statistical data, so we then started building products which were much clearer on the proposition for the business and the return on investment. We brought in something called Contract Matrix, which really was about making it very simple to stack up a conventional human task (what it costs, the energy it took, how many lawyers, that kind of stuff) versus an AI-augmented one, and really drive cutting the fat.
And so Contract Matrix, we’ve been baking lots of subject matter expertise into it. So the things you can get out of Harvey or Microsoft models or OpenAI models or everyone else, we can actually get it to really speak like a lawyer who’s a very specialist lawyer. And that’s what Contract Matrix is doing---
Richard Lichtenstein: just to be clear, sorry to interrupt but Contract Matrix, you built that yourself or that’s a ---
David Wakeling: ---yeah. We built it as an overlay on top of foundation models and specialist models like Harvey. The first thing it does is it harvests prompts at massive scale and the prompts can get very, very long and give like incredible context to an AI system.
So just to ask one question, in a very complex finance contract, a simple question, does it have this kind of provision? We might have two or three A4 sides, which give context to what the question is, what good looks like, what bad looks like. Examples of what we’ve seen before. What our lawyers know is market practice and then that will drive that single prompt.
So that’s the first thing it’s doing is curating those prompts on top of the foundation model layer. The second thing it’s doing is curating very good data lakes. We use it mainly for RAG.¹ So when the AI model is speaking, it is grounded in a specialist database for the type of question you’re doing.
But we have had the odd foray into tuning of models where we’ve got it to do a little bit more. So that’s what Contract Matrix is about is really looking at that vertical going into law and trying to replicate the level of specialism you’ll find in a law firm like A&O Shearman. You know, we’re one of the biggest law firms in the world.
We will have a New York-based finance specialist, we’ll have a private equity sector-focused specialist in London, we’ll have a life sciences expert in IP only in Paris, and so on. You can bounce around every industry sector, and what we’re trying to do is reflect that level of specialism and deep subject matter expertise as another layer on top of the base models we’re using. Like, we’re using GPT-5 quite a lot at the moment and we’re finding it very good, but with this extra ingredient, this extra sauce that you find in Contract Matrix, it’s excellent for specific areas of law.
Richard Lichtenstein: Super interesting. How did you actually get people to use this? Because I’ve seen many times you can build a tool that’s helpful and it’s hard to get people to use it. And in particular, what I would think about is if I’m a very junior lawyer and I’m reading contracts all day, with this tool, I’m still reading contracts all day, right? Now I’m reading more contracts. But do I care? I still have to bill 10, 12 hours a day. So how do you make me care about reading them faster?
David Wakeling: Yeah. So Richard, you are hitting the big question. This is about incentives, and the way a law firm practices is that a lot of work will be billable hour: get the thing done, get it done quickly. But a lot of work will be fixed fee, price pressure, no specific urgency, and may be quite process orientated, so you can sort of break up law firm work into chunks. And the areas where the incentives align perfectly are areas where there are people doing repetitive process.
It’s complex, but it’s still repetitive, and it is either already done at a fixed fee, or it is done under incredible price pressure. So the proposition to the lawyers in that particular area is: you need to think differently. What you could do is you could bake your methods, your knowledge, and your market practice expertise, which is still relevant to that process.
Because it could be securities issuance out of the Middle East, or, you know, it could be anything where you and I wouldn’t really know how to do it. Whatever you’re trying---.
Richard Lichtenstein: ---Definitely I wouldn’t,
David Wakeling: ---yeah, and nor would I, and so the objective is to get them to bake chunks of their day to day into that system and change the value proposition to clients.
And the value proposition is: I will deliver the same work I was delivering six months ago, but I will do it very quickly. I will still stand behind the outcome. But you recognize that I want to do it at a fixed fee because my objective is achieving tremendous efficiency using software.
So I can preserve margin. As long as we can preserve margin, incentives are right. So in the minds of those lawyers, they’re making an investment of hours upfront to build a system. They’re taking a commercial risk that maybe the client doesn’t want it after all, or maybe they can’t sell it to 15 clients, but if it pays off, they don’t have to work such long hours.
They are instead curating that IP in the software. And that is the fundamental economics of a software product, right: hours spent is irrelevant. You are going for scale. You are going for the ability, with minimum effort, to add another client, and another client.
Richard Lichtenstein: I would agree with that. Just to get back quickly to the technical side of things because I have a mix of different people listening to this, but some of them do care a lot about the technical details because they send me letters.
For your tool, it sounds like what you said for Contract Matrix, you’re not fine tuning a model. Essentially someone types a prompt. You inject into that prompt a whole bunch of information and context to make the prompt better.
David Wakeling: Yes.
Richard Lichtenstein: And you’re using a RAG system to inject. And so when someone says, I wanna understand this term on a securities contract in the Middle East, it’s able to figure out what that is, go into your database, find the relevant documents or information, and inject that into the prompt.
That’s how it’s working essentially.
David Wakeling: Yeah, that’s the guts of it. And also we are curating little data lakes which support the other end. So it’s prompt engineering and the data lakes. We don’t do fine tuning as a matter of course, but when we do it, we’ve done it directly with Harvey.
So we’ve launched a few very intricate workflows at the beginning of the year. They offer even more specialist things where we didn’t feel Contract Matrix was the right forum to do it. We felt we had to do fine tuning. So in that environment, the way that works is, I’ll give you an example ‘cause we’ve been public about this.
With Harvey we started developing four modules. I’ll give you an example of one. The way it works is, if a company’s buying another company, the system will ingest the financial information of the target and acquirer, and then it’ll go through A&O Shearman databases of merger control laws and FDI laws, and it will tell the user which jurisdictions are gonna trigger merger approvals or antitrust filings, which won’t, and those where more information is needed. So it makes a request for information, the user’s expected to put in more financials, and then the system will absorb that and coach the user over the life of the M&A deal. That we couldn’t really build in Contract Matrix.
Instead, we stood up a team of antitrust lawyers, and we’ve got a world leading antitrust team. So they had unusually good data and market expertise. And they did reinforcement learning directly with the data scientists at Harvey. So that is very unusual, but we can’t scale that. It’s very expensive, right?
So Contract Matrix, I see, is the easier entry point. You can get those antitrust lawyers developing some prompts, and we take some of their precedents. And then the Harvey fine tuning was when we’re really going big for something which we think could be industry utility scale; maybe other law firms wanna license it, it’s so good.
So it was worth it. And that is a big ROI question that we have to make. We make some mistakes and some of them pay off. And as a business, our objective is to really define a commercially profitable, sustainable future business for a law firm in a world of AI. That’s my group’s job in the law firm.
Richard Lichtenstein: You mentioned some mistakes, some things that didn’t go well. Can you gimme an example of one you tried to do, and it didn’t work, and why didn’t it work?
David Wakeling: The best one is one you touched on, Richard. So one of your questions was, surely there’s some simple process where you are worried about someone coming in and just being cheaper, and it’s a race to the bottom. We did experience that. One work stream we focused on was quite straightforward data extraction: simple discovery, M&A due diligence, very basic interrogation of large data sets to pull quite simple information you don’t really need a lawyer to do. It became clear to us after a period of time that there was no margin in that. Everyone was doing it. Probably GPT-5 or 6 will do it in 30 seconds with a prompt by someone who has a college degree.
So it was a race to the bottom. We couldn’t quite make it profitable, so we pivoted away from that, all of our resources really, into: no, let’s go for subject matter expertise. That’s one of the threshold questions for any project. And if we don’t have something in the firm which is a little bit special, and you couldn’t find in lots of other places, then we’re not doing that project.
And that’s become a barrier to any new investment by my group into an AI product.
Richard Lichtenstein: There’s a couple of takeaways I’ve got in terms of your internal use cases: that it has to drive positive commercial outcomes for the people, that you have this proprietary data lake that you’re building off of, and at this point it needs to be specialized and complex enough that there’s a reason why you’re buying the service from a premium law firm, right? You’re not a software company, right?
And you’re not gonna be a software company, and you have to accept that at some level, right? Would love to make sure we have time to talk about the other stuff you mentioned, which is the revenue generating stuff. Tell me more about it. What does that look like and how does it work?
David Wakeling: So there are three areas where we’re looking at revenue generation born from our AI expertise. One of them is we are licensing some of our technology as SaaS directly to clients. So Contract Matrix without all of this expertise baked into it.
Lots and lots of clients are licensing. They’re paying a per annum license fee for legal tech, AI, augmented from a law firm, and we entered into partnership with Microsoft to make that happen. Harvey is also in that partnership for some of the AI pieces. And that’s a good product, which makes money.
Richard Lichtenstein: Sorry, just to be clear, are you licensing that to other law firms or just to corporate counsels?
David Wakeling: Yeah, to corporate counsels. There are a few small law firms who will license it. But I think it’s a big order for another massive, premium law firm to license from us.
I think it’s a point of pride. I don’t think that’s gonna happen quickly, but smaller law firms will do it and a lot of clients will do it. That’s the client base now. So that’s the first area. It’s like pure SaaS. Broadly speaking, that works because our systems are quite ergonomic for the way lawyers work.
Lawyers have been working in Microsoft Word for 30 years, and that shows no sign of changing, even though the rest of the world probably doesn’t like Microsoft Word. We’re in it all the time. So the whole thing is built around the way people operate with that Microsoft software. Hence we did the partnership with Microsoft. And they also gave us scale and a robust way of deploying in B2B. That’s the first area of SaaS.
Now, because we’re doing the SaaS, clients say, okay, that’s great. Can you now custom-build additional layers to Contract Matrix where I want you to bring in your practice group expertise?
So for example, we recently did a system for a top three US bank, a global bank, and they wanted us to bake in a regulatory compliance element because they needed to demonstrate compliance with European regulators in this case. And so we devised the system with lots of expertise from our regulatory knowledge, and then agreed with them a workflow that would achieve compliance, and it was much cheaper than the human alternative.
So I would describe the second category as very custom build, very complex workflows which are built on top of Contract Matrix. And then the third area is because we are keeping on with our push for AI deployment in the legal industry. And because, surprisingly, law is kind of a pioneer in this space ‘cause it’s so conducive to LLM disruption, we are getting asked by lots of clients to advise on how to deploy AI in their businesses outside the legal space. So what they’re saying is, talk to me about how you drove adoption. Talk to me about how you managed all the risks of hallucinations and IP infringements and regulatory compliance and all these things as a law firm, but now in my different sector.
And so it’s a bit closer to traditional law firm advisory to be honest. It’s strategic, it’s writing opinions, it’s forming views on like proper risk mitigation, but it’s informed by all that tech expertise I just described. So those are like the three interesting pillars. And obviously that spins up all sorts of work from those things.
And I’ll go back to what I said earlier, Richard. Our objective is to explore the future business model for a law firm. And so our mission is to make sure we kill the things which are never gonna make money. We’re not doing innovation theater here. We’re trying to really drive a good business that produces a good return for our clients and still is profitable for a law firm.
And obviously to double down on things where we can see this is a good way for a law firm to practice. I would say that middle bucket is incredibly interesting. The more custom the subject matter expertise, the more amazing things can be done, with enough elbow grease, enough work.
Richard Lichtenstein: It’s very inspiring to hear the ways that you’ve managed to combine your expertise with your services with technology to create new products that people are actually paying for.
Those are the stories that everybody is trying to find: people using AI to create new ways of monetizing their existing intellectual property. So very exciting.
How do you see the future of talent? Is the pyramid, obviously you have some sort of pyramid. Is that gonna be the same shape? Should every law firm have a team of developers?
How do you see the law firm of the future work from a talent perspective?
David Wakeling: Yeah, so I think it already is changing in that very big law firms who have the scale, ‘cause I think this is an economies of scale business. They’re already hiring developers to look at products. Some are doing it better than others and more seriously, but they can afford to and they are starting to. So I think that trend will continue. I like to draw parallels with banks. If I go back 20, 30 years, banks were much less software orientated than they are today.
Today they’re basically tech companies or they’re close to a tech company. And I think law firms could go that direction. And banking is an interesting comparison ‘cause you still have a lot of people that work in banks. There’s a huge number of bankers, but they are augmented everywhere by systems, data analytics, compliance tools, whatever it is.
Everything is a giant ecosystem of software and working practices for compliance. And I think law firms could start going that or they are going that direction, but we’re right at the beginning of this journey and that changes the model. I think also that some jobs will be completely displaced.
But there are two very difficult business challenges for a law firm, or someone investing in a law firm, when looking at this. The first one is it takes five years to get a very junior person into someone very productive as a lawyer; there’s a long training cycle. They need to know the market, and you’ve gotta be incredibly sure that there won’t be demand for those X number of lawyers in five years’ time. That is a very, very brutal mistake if you do not have enough lawyers to serve your clients in five years ‘cause you managed for AI. So I don’t think we would be crazily bold on this, because actually we’re very busy.
We’ve got good junior lawyers coming in. We’ve got enough demand. And the AI work, us building these systems, which is the second thing, requires quite a lot of time. I mentioned, Richard, how you have to invest a lot of elbow grease to get that subject matter expertise into the system.
Then you’ve gotta test the system to make sure it sings and delivers the results for the clients. That takes a lot. And our juniors in my group, I’m still hiring junior lawyers, and they’re very busy and that’s what they’re doing. And I’ve got a load of lawyers in my team who are brilliant at prompt engineering for legal tasks.
They understand how to curate data lakes. They can hold their own in a conversation with someone who is a data scientist, and they’ll have a good exchange, and then the lawyer will take something away and the data scientist will too. It’s a new expertise, right? So I think it’s gonna be not stark, it’s gonna be a changing skillset, which might not mean fewer people frankly, but different kinds of people.
There might be some rationalization in the legal sector though, because I wonder if it’s too expensive and hard to invest in AI for smaller firms. So I think that might be complex for people who are a bit small in the legal sector.
Richard Lichtenstein: There will still be lawyers, but the lawyers will be part lawyer, part engineer almost.
Do you feel that law schools should be totally rethinking how they teach everything? If that’s the skillset that’s required to be a successful lawyer and presumably five, 10 years from now, it’ll be more so, should law schools rethink completely how they’re teaching everything?
David Wakeling: Yes. I have seen this from Oxford and Cambridge in the UK. Some of the business schools, interestingly, and some law schools are teaming up in Europe to talk about it. And I’ve had conversations with some leading US law schools. Most of them like the big ones, the ones you’ve heard of, most of them are already doing this.
They are. And the best ones, the professors are openly saying to a room full of students: we have a legal problem here. I want you to tell me how you would use AI systems to resolve the issue, and I want you to tell me your prompts, why you’ve crafted them the way you have. And crucially, I want you to tell me how to validate those prompts and make sure they don’t have mistakes or hallucinations or make up case law or whatever the risks are of the system.
What they’re doing is they’re forcing those students to think, should I even use an AI system here? Because it’s a very good answer if you say there isn’t one fit for purpose. That’s an A star in your grading. And then if there is a good system, do you know how to prompt engineer?
You’ve gotta get a feel for how to get the best out of it. And the third thing is that you’re still applying critical thinking and you are validating, taking the best and fixing the worst of the AI output. That is a great law school in my opinion. And some of them are looking at it in those terms for sure.
But it’s early days. I’ve only been doing it a year or two, but I would think that’ll be par for the course over the next few years.
Richard Lichtenstein: That’s super interesting. Not only are legal processes being rethought pretty radically, but even rethinking how you teach the subject of law is necessary.
What lessons do you think there might be for other professional services firms around change management? What are your tips for how they could do that effectively?
David Wakeling: I think number one is incentives. Find the incentives for adoption, which tend to be hearts and minds, and they tend not to be hearts and minds in the soft sense of, doesn’t it feel nice to use AI? Isn’t it cool?
I mean hearts and minds along the lines of “Aren’t you sick of doing these repetitive tasks?” Isn’t there a better way to make a living, where you are thinking of it as a fixed fee and you are being paid for the architecture of the AI system and your brain work and getting it to really do the job?
These are things which are exciting when you talk to people. And the things where you’ve really found a very good business model or very good adoption then get those people to talk to their friends about it and really give them the platform to communicate with everyone else.
And I’d say the third one is do not fall into the trap of innovation theater, this whole, oh, we can do some AI, aren’t we a cool, progressive place? It has to be about exploring that commercially viable future business model. If it’s not about that, then you probably want to kill whatever the AI project is.
And I think that gets lost a lot. I talk to clients, and I talk to other law firms, and I really do get the sense of some innovation theater going on. And the problem with innovation theater is people don’t really put the work in. And if I was to do number four, sorry, Richard: number four is there’s no easy win here. In my experience, to get the AI systems to really perform, you have to invest enormous amounts of work.
To really bring something different and some real subject matter expertise into it, and you have to take a commercial risk doing that because it might fail. It might be there isn’t the client demand you thought there was, or lawyers actually work in a slightly different way and you misread.
Just on a portfolio basis, you need to win a lot more than you lose there. And then it’s a brilliant AI strategy. But you got to accept there’s a risk of failure and investment. And it is not just taking software out of the box that’s probably not gonna do what you want.
Richard Lichtenstein: The first couple of points you made about your journey bring that to life, right? The idea that, you tried Harvey out of the box and that’s worked for some things, but you just found a bunch of people saved a little bit of time, took an extra coffee break.
It didn’t really work. It was only when you built your own system, fully redesigned how you did the work that you were able to really take time out in a big way and actually get benefits. Theater suggests there’s almost a performative aspect to it, which is probably what some of it is, but I also think it’s just that most people, are saying how can I take what I’m doing already and do it slightly better. And they’re not thinking about how do I just do the whole thing differently? That’s what you did, right? That’s what’s working.
David Wakeling: Yeah. Agree.
Richard Lichtenstein: Any final thoughts, anything else you think is important for people to know as they think about their own AI journeys?
David Wakeling: I would say that classic adoption curve: we have innovators, early adopters, the early majority, the late majority, and then laggards. And there’s a nice bar chart which shows the proportions in your business. I was a bit skeptical, but I can say it is true. It is absolutely true. It got me reading literature from the tech sector. I really enjoyed Lean Startup or Pirates in the Navy and books like that, and I’ve really tried to learn, I’m not reinventing the wheel, to drive adoption and to change working practices.
People have done this stuff before in other sectors, so I liked your question. What can other professional services learn? I think there’s a lot of well trodden paths here and I have a much clearer understanding. I’ve done the early majority of my firm. I now need to crack the late majority, and I’m trying to take techniques from other people’s experiences and other sectors and see how I could deploy them in the legal sector. But the challenge is still there for late majority. And the laggards I haven’t even started thinking about yet. But you know that, that journey you can learn from others, and there is a genuine adoption curve and it takes time and it’s a hard job.
Richard Lichtenstein: Yeah. Hard job. Good place to end. It’s definitely a hard job. I applaud you and all your success. It’s super inspiring to hear about all the great work you’ve been doing, and thank you for coming on the show. I appreciate it.
David Wakeling: Thank you, Richard. Enjoyed it. Good to see you.
Note: The opinions expressed in this article are my own and do not represent the views of Bain & Company.
¹ RAG is Retrieval Augmented Generation. Basically, that means searching for the most relevant documents or parts of documents and putting them in the context window for the bot to use to generate content. It’s a common technique when you want to apply GenAI to a large corpus of content.
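For readers who want the mechanics, here’s a minimal sketch of the RAG pattern David describes: retrieve the most relevant documents for a question, then inject them into the prompt as grounding context. This is an illustrative toy, not anything from Contract Matrix; word-overlap scoring stands in for the embedding search a real system would use, and the `corpus` entries are invented clause summaries.

```python
# Toy RAG sketch: score documents against a query, pick the top ones,
# and prepend them to the prompt so the model's answer is grounded.

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Inject the retrieved documents into the prompt as context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Invented mini-corpus of clause summaries for the example.
corpus = [
    "A change of control clause lets a lender demand repayment on acquisition.",
    "Force majeure clauses excuse performance during extraordinary events.",
    "Negative pledge clauses restrict granting security to other creditors.",
]

prompt = build_prompt("Does this contract have a change of control clause?", corpus)
print(prompt)
```

In a production system the retrieval step would run over a curated, domain-specific data lake, and the assembled prompt (often pages long, as David notes) would be sent to the foundation model; the structure of “search, inject, ask” stays the same.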