
How Ontra transformed itself with GenAI

An interview about GenAI with Eric Hawkins, Ontra CTO

Today, I’m posting an interview I did with Eric Hawkins, CTO of Ontra. For those of you who don’t know Ontra, it is a legal services company serving investors that has transformed itself using AI. It began as a pure services company and has used AI to morph into a tech-enabled services company and, in some cases, a pure software provider.

We cover how GenAI transformed some customer actions from taking months to just minutes, how they got customers to use their GenAI features, how to price for GenAI when you are disrupting yourself, and the future of the legal profession.

It’s worth a listen (and if you are on 1.75X speed, you can be done in 15 minutes). Note that the interview has been lightly edited for clarity. Let me know if you find these interviews interesting and who you’d like to hear from next.


Full transcript is below:

Richard Lichtenstein: Hi everyone. I am Richard Lichtenstein. I am the host of Artificial Investment, a Substack and occasional podcast. I am here with Eric Hawkins, the CTO of Ontra. And we're gonna have, I think, a really interesting conversation today about AI and how it fits into tools. Eric, why don't we start with, tell us about Ontra.

What is Ontra? What does it do?

Eric Hawkins: Yeah. It's great to be here. Ontra is an AI powered platform for private market asset managers, so private equity firms and VC firms primarily. And we solve legal and compliance workflows through the full fund lifecycle.

The really time-consuming legal and compliance workflows in private markets, things like getting an NDA negotiated before you can get into the deal room, or due diligence on a potential acquisition target, or keeping track of what you've committed to your LPs and all those side letters, or keeping track of all your legal entities that are used by the fund.

We automate all that using AI and just completely take it off of our customers' plates. We're heavy adopters of GenAI. I know that's what we're gonna talk about today.

Richard Lichtenstein: Okay, great. I think there are a lot of asset managers hopefully listening to this, so everything you said will probably make sense, especially to them.

What are the GenAI features that Ontra has that you're particularly excited about?

Eric Hawkins: A few examples that I think are pretty interesting. Automatically marking up an NDA to accelerate access to the deal room. We use AI to help negotiate those NDAs and other routine contracts. Using AI, we can suggest language based on what you've done before, and we can suggest it based on your negotiation preferences. And we can automatically mark those contracts up and then hand them off to an attorney for really quick review.

We've really fine-tuned the UX so that they can accept suggestions really quickly and move it forward. Another area where we use AI is processing, summarization, and data capture from more complex legal agreements, like side letters and limited partner agreements.

You just upload 'em to our platform. We use AI to synthesize all that and find the tasks contained in those contracts, the obligations, commitments, exclusions, things like that. And then we can power downstream workflows directly from it.
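
To make that concrete for technical readers, here's a rough sketch of what one extracted record might look like. The schema, field names, and example obligation are my invention, not Ontra's:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Kind(Enum):
    OBLIGATION = "obligation"
    COMMITMENT = "commitment"
    EXCLUSION = "exclusion"

@dataclass
class ExtractedTerm:
    kind: Kind
    summary: str        # plain-English description of the term
    source_doc: str     # which agreement it came from
    source_clause: str  # citation back to the governing language
    due: date | None    # set when the term implies a recurring task

# One record the AI might produce from a side letter; a downstream
# workflow could turn the due date into a reporting reminder.
term = ExtractedTerm(
    kind=Kind.OBLIGATION,
    summary="Provide a quarterly ESG report to the LP",
    source_doc="Example LP side letter (2023)",
    source_clause="Section 7(c)",
    due=date(2026, 3, 31),
)
```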

Richard Lichtenstein: Just to help our listeners understand how you got here, what did Ontra do before you had GenAI? How did it work? Did it not do those things or did it do it with manual labor?

Eric Hawkins: Yeah, with manual labor. We have a network of legal professionals, around a thousand legal professionals that are distributed all over the globe. In the modern era, they backstop our AI and work in conjunction with the AI. Before that they would do all the work, right?

We had SaaS applications that our legal network would use to capture the state of a contract negotiation or capture the structured data, but it was humans doing all the work. Now, as we're building GenAI-powered tools, we both put 'em into the hands of our legal network, which accelerates their capabilities, and, more and more, we're able to offer the solutions directly to our customers.

Richard Lichtenstein: This is such a great example of a business that essentially was a services business, I would say a tech-enabled services business, that you've pivoted to become a software company, right?

Eric Hawkins: In a lot of ways it always was a software company, because what we built was the software for our service offering. But yes, a hundred percent. Now we're very much an AI company that delivers our products as a service or directly as products that are consumed by our customers.

Richard Lichtenstein: Great. Do you have any time estimates, like how long did it used to take back before AI when you had the human lawyers doing these reviews?

Eric Hawkins: When we would add a customer to our obligation management platform (we call it Insight, so if I refer to Insight, that's what I'm talking about), a large asset manager might have tens of thousands of side letters, and we would need to get those all digitized, all abstracted, and onto the platform before our automation around keeping them honest with what they've committed to their LPs could kick in. That would take months, and it would cost hundreds of thousands of dollars in labor. Now it's a matter of seconds to process all those legal contracts. We still have expert funds attorneys review the output of the AI, but it's largely complete within a matter of seconds, and then the review is really quick. We are talking months down to minutes, or potentially days including the review. Just a massive improvement.

Richard Lichtenstein: I think that's amazing.

I don't know how many AI skeptics there are reading my Substack. Maybe there are some. And so I think this is just a great example of a real use case of AI transforming a process, from months to minutes, right? Or days?

So are customers using the AI features? Maybe customers don't have a choice. How do you think about getting customers to use these features and measuring whether they use them and reporting back on their adoption and things like that?

Eric Hawkins: A lot of our AI functionality sits in the background, right? So processing a new legal contract that gets uploaded to the platform, things like that are relatively transparent to our users; they don't necessarily see it in action. But then there are things like AI-powered search, for example.

Take our side letter solution, Insight. Once you've digitized all those commitments and prohibitions and things like that that are contained in your limited partner agreements and side letters, then we have a bunch of workflow around reminding our customers that they have reporting obligations and things like that.

But then we layered AI-powered search, natural language search, on top of it. So people from the deal team, or maybe still the legal or compliance team at the fund, can come in and ask questions: Am I allowed to do this oil and gas deal in Saudi Arabia? Well, no, because of your CalPERS 2019 investment, that's prohibited.

So they interface with it directly, and in those experiences it's a comfortable AI feel for them, where, if you've used Perplexity or ChatGPT or whatever, you ask a freeform question and you give as much context as you can. It's gonna give you the answer, but then it's also gonna give you the attribution and the links to the governing language in the governing contracts and things like that.

And we see tons of adoption with those types of systems.

Richard Lichtenstein: Interesting. If I'm capturing what you said properly, there's a first piece, which is just that the workflow didn't change at all. It was completely opaque to the customer that GenAI is even happening, right?

They just know I upload my NDA and it gets reviewed and it just used to happen by human and now it's a robot. Now, I don't know, I just know I upload it and it happens. That's obviously the easiest thing 'cause then they don't have to change anything about how they do it.

They just know it's happening faster. The Insight platform you're describing with the search is interesting. What was the alternative that people were using before? So if I didn't have that, what would I have done?

Eric Hawkins: You would call your lawyer, right? Maybe it's in-house counsel, maybe it's outside counsel, but you would call them up.

And you would say, Hey, I'm evaluating this deal. Is it okay? Do we have any prohibitions on this? And they would say, I don't know. Let me get an associate or paralegal or whatever to go over all of your LPAs and all of your side letters, and we'll get back to you in a few weeks, and we'll send you a bill for hundreds of thousands of dollars to answer this question.

Richard Lichtenstein: I guess the answer is that talking to lawyers and dealing with lawyers is just so painful that people are running to use AI, rather than having to deal with that. I'm joking a little but I think what you're saying is that the alternative was so bad and so expensive and painful and terrible that people are willing to try an AI tool.

Whereas I think in some other domains, where the alternative maybe isn't quite as bad, people aren't as willing to adopt.

Eric Hawkins: Yeah, I think that's true. Obviously what people want out of their relationship with their attorney, whether it's outside funds counsel or in-house counsel, is a strategic advisor, right?

And there's just all this routine, tedious stuff that gets in the way of that, that really bogs down those legal teams. What we're trying to do is take that off their plate so they can spend more time designing a fund structure with the right LLCs and things like that. That's conceptually what we're after.

Richard Lichtenstein: It totally makes sense. Spending a lot of time and money to answer a fairly simple question doesn't seem like it's good for anybody. I think that makes sense.

Eric Hawkins: So then the other thing, just while we're on that topic of AI functionality: the other side is the markup tool.

Given some contract that comes in, we've digitized the negotiation preferences of our customer. The things that they generally accept or don't accept in legal contracts. We've digitized all that, plus we've digitized all their prior art, all of the precedent legal documents.

So when a new contract comes in, the system can mark it up automatically, and then we have a really nice UX that we put back in front of the customer to say, do these suggestions look good? And we see very, very strong adoption of these tools, where now, even if it's an attorney doing the work, they're sort of like accept, accept, touch up a little, reject, and they're just working in partnership with the AI tools.

Richard Lichtenstein: Makes sense. AI does not have to replace someone's job. If it just makes someone's job a lot faster and easier that's great too. That also has a lot of productivity benefits.

How do you think about hallucinations? When I talk to clients, hallucinations are still usually one of the first questions I get. And it's especially important for something like this, which is a use case where it's so life or death, where the precision is so important, right?

Like one comma in the wrong place or something like that could change the meaning. How do you get comfortable?

Eric Hawkins: It's something that we work really hard at, defending against hallucinations. The simple answer is: inasmuch as you can ground the model in the data that you are providing, rather than in its training set, that's, generically speaking, the path to avoiding hallucinations. And what I mean by that is, if I go into ChatGPT and I ask it some question, and I don't provide it a document or a bunch of context in which to ground its answer, it's gonna base its answer on its training set, right?

And when it's basing its answer from its training set, it's much more prone to hallucination. On the other hand, if you say to the model, here is the relevant context, here is the relevant information. You are only allowed to use that information in your response.

You've grounded the model in better facts and greatly reduced the likelihood of hallucinations. The other thing that we do, not to get too terribly technical, is when you tell the model that it has to answer everything in a very particular format or comply with a very specific schema, that helps as well, right?

For example, if we're asking the model a question of interpretation, we say, you can only answer with a Boolean, and you have to ground it in the data that I've given you, right? And so there are these sorts of guardrails and techniques with which you can defend against hallucinations.

Now the third thing is evals, right? So we have millions of legal contracts across the private markets domain. We have all that data labeled and annotated, and we have super sophisticated eval sets that we're just constantly running. So we have both offline evals and then online evals, where we can judge the efficacy and the correctness of these responses.

It's a multi-pronged approach to defend against hallucinations. The battle is real, but I'd like to think that we are prevailing.
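
To make the grounding-plus-schema technique concrete, here's a minimal sketch of what such a call might look like. This is my illustration, not Ontra's code; it assumes the OpenAI Python SDK, and the clause, the question, and the JSON shape are all made up for the example.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical clause; in production this context would come from
# retrieval over the customer's own agreements, not a literal string.
clause = (
    "The Fund shall not invest in oil and gas assets located in "
    "jurisdictions outside North America."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    response_format={"type": "json_object"},  # force schema-shaped JSON
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the contract language provided. "
                'Respond as JSON: {"allowed": boolean, "evidence": '
                "a string quoting the governing language}. If the "
                "context does not answer the question, say so in "
                "evidence and set allowed to false."
            ),
        },
        {
            "role": "user",
            "content": f"Context:\n{clause}\n\nQuestion: May the fund "
                       "do an oil and gas deal in Saudi Arabia?",
        },
    ],
)

answer = json.loads(resp.choices[0].message.content)
print(answer["allowed"], "|", answer["evidence"])
```

Constraining the output to a Boolean plus a quoted evidence string is also what makes attribution like the earlier CalPERS example possible: the reviewer can jump straight to the governing language.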

Richard Lichtenstein: It seems like you must be, 'cause I assume if it kept hallucinating, people wouldn't use the product. So I'm gonna assume that all the stuff you're describing is working.

Now here's another question. How do you think about pricing? You can decide how much detail you wanna get into about your pricing structure, but I think about that example you gave at the beginning, where we used to have a process that took a few months and cost a hundred thousand dollars.

Now it takes minutes, presumably it costs a lot less. How do you think about pricing that, where you're capturing a lot of the value that you're creating, but also your customers are sharing in it?

Eric Hawkins: Interestingly, because we started as this legal outsourcing play, we had this legal network with all these legal professionals around the world.

The pricing of that solution out of the gate was per-piece pricing, so you can think of it as consumption-based pricing. You send Ontra an NDA to negotiate, you don't look behind the curtain, we negotiate it on your behalf, and you're charged on a per-contract basis.

So that's where the company started. As we've layered in AI, we've stuck with that consumption-based pricing. And what it does, obviously, is that in a lot of places it allows us to lower the price, because the labor component is less. Alternatively, we've increased speed and quality in areas where we've held the price constant.

But pricing, I would say, has been less of a transition for us than for a lot of SaaS providers that had to go from a per-seat model to something that's more of a consumption model. In a lot of ways we were fortunate there. I do think the question behind the question is, how do I feel about pricing in the AI age?

And I think consumption-based, success-based, outcome-based pricing is the only way to go. People just aren't gonna stand for some massive license with unused seats and things like that. That's not getting a lot of volume.

Richard Lichtenstein: Exactly. I think consumption-based makes a ton of sense.

Now that you've made the process much faster and easier for the customers and potentially at least somewhat cheaper, right?

Do you see volumes go up? Now that we've made it much easier for you to redline a contract or whatever, does the customer say, "I'm gonna put more contracts through the system," so it actually ends up resulting in an increase for you?

Eric Hawkins: A hundred percent. A hundred percent.

And maybe just to put a fine point on it: with our obligation management solution, the Insight platform that I mentioned, in the prior era, when it would take months and we had to have funds lawyers annotating or abstracting those contracts, we had no choice but to pass a lot of that labor cost on to our customers.

Implementation might cost hundreds of thousands of dollars in the prior era. Now we offer it for free, right? You upload everything that you've got, and it's all ready to go within a matter of minutes, or potentially days including review. A massive cost savings.

But more importantly, the barrier to entry is so much lower that we see the volumes and the ramp-up speed on our products increase phenomenally.

Richard Lichtenstein: I've had a couple of articles I've written where I've lamented that seat-based pricing feels like a very dangerous model right now.

This is a great proof point, for those who are listening, that seat-based pricing is not gonna last for that much longer. Now, I agree the best is outcome-based, right? If you can really tie it to an outcome, that's even better. But even consumption, what you're describing, I think is great.

It sounds like a meter that's led to growth, which is exactly what we want.

Eric Hawkins: There's still value in providing some predictability in pricing. The risk with consumption-based or outcome-based pricing is that you can get wildly variable costs, and in general, CIOs or CTOs don't really like that, or CFOs.

One of the things that we do is we'll tranche out different levels of usage, but the important thing is that it's proportional to the outcomes and to the consumption. So even if it's not one-for-one, where each unit of consumption equals a different price, you're still aligned to that world, right?

So you can deliver a little predictability with the outcome-based pricing.
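
To illustrate the tranche idea with made-up numbers (these are mine, not Ontra's), here's a tiny sketch of usage tranches that keep the bill predictable while staying proportional to consumption:

```python
# Hypothetical usage tranches: (max contracts per year, flat annual fee).
# The fee steps up with consumption, so pricing stays roughly
# proportional to usage without a surprise bill every month.
TRANCHES = [
    (500, 50_000),      # up to 500 contracts/year
    (2_000, 150_000),   # up to 2,000 contracts/year
    (10_000, 500_000),  # up to 10,000 contracts/year
]

def annual_fee(expected_contracts: int) -> int:
    """Return the flat fee for the smallest tranche covering usage."""
    for cap, fee in TRANCHES:
        if expected_contracts <= cap:
            return fee
    # Above the largest tranche, fall back to true per-unit pricing.
    return expected_contracts * 60

print(annual_fee(1_200))  # 150000: predictable, but scales with volume
```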

Richard Lichtenstein: Got it. Having it be literally a straight line invites the danger you describe, the kind of variability that planning people don't like.

Switching gears, a totally different question. Well, it's all on AI topics, so not that different. Thinking about models: you're presumably using a bunch of different models from different providers. Maybe you could share which ones, but it's up to you. But what do you do when the models improve?

How does that change your product? And these models seem to improve almost without warning. So if you wake up in the morning and OpenAI has just announced 4o or o3 or whatever, what happens next?

Eric Hawkins: It is a challenging space to operate in, that's for sure. Because you're always at peril of all this engineering work that you just put in over the last six months being totally irrelevant tomorrow, and needing to write that off because now the model can just do it for you. That is definitely true.

When we started with GenAI, I told the team, we're just gonna go with OpenAI's models. They're the furthest ahead, they're the best funded. This was GPT-3.5 vintage. I said, we're just gonna put our blinders on and we're just gonna build, because chasing every model release can waste so much time.

We needed to get product out there. We needed to get some reps. We needed to build all the systems around the models, right? Everything from evals, like we were talking about, to RAG infrastructure, and on and on. There's a lot to build around the model when you're shipping product.

So that was our approach: put the blinders on, go with OpenAI. We were pretty fortunate that a lot of the model providers converged on OpenAI's API semantics. So in pretty short order we were able to start using Anthropic's models and other models, and their APIs were really very similar to OpenAI's as far as the information that you pass, the schemas in which they return responses, things like that.

So in some ways we got lucky that we chose OpenAI out of the gate. These days, because of all of our automated evals, and because we have an internal model router that can drive traffic to different models for different workflows or different customers, we're in a position to evaluate new models really quickly for specific workloads.

We can say, yeah, this is better by one or two percent, or worse by one or two percent, and we can pin traffic to particular models. It was not always like that. This is a very recent advancement, and it's very empowering.
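
Here's a toy sketch of what an eval-driven model router might look like; the model names, workflow keys, and scores are all hypothetical, and a production router would also handle cost, latency, and fallbacks.

```python
# Offline eval scores per (workflow, model), e.g. accuracy on a labeled
# test set of contracts. In a real system these would be refreshed
# every time a new model ships.
EVAL_SCORES = {
    ("nda_markup", "gpt-4o"): 0.93,
    ("nda_markup", "claude-3-5-sonnet"): 0.94,
    ("side_letter_extraction", "gpt-4o"): 0.91,
    ("side_letter_extraction", "claude-3-5-sonnet"): 0.89,
}

# Explicit pins override the eval winner, e.g. for a cautious customer.
PINS = {"side_letter_extraction": "gpt-4o"}

def route(workflow: str) -> str:
    """Pick the pinned model if any, else the best-scoring one."""
    if workflow in PINS:
        return PINS[workflow]
    candidates = {m: s for (w, m), s in EVAL_SCORES.items() if w == workflow}
    return max(candidates, key=candidates.get)

print(route("nda_markup"))              # claude-3-5-sonnet (best eval)
print(route("side_letter_extraction"))  # gpt-4o (pinned)
```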

There were areas where we built out really sophisticated chain-of-thought mechanisms a year and a half ago, on GPT-3.5 or even 4o vintage models. And then now come the reasoning models, and you really don't need all that sophisticated chain of thought and sequential prompting; you can just give it the high-level goal.

And so we've definitely had to refactor and rebuild some of those systems with reasoning models.

Richard Lichtenstein: Got it. And of course, nobody knows what's coming next. 'Cause it feels to me like in order for them to continue to get the kinds of productivity gains we've been getting, there's gonna need to be some sort of algorithmic improvement. 'Cause throwing more data at it doesn't seem to work. So I don't know what that'll be or when it'll happen, but if it does, then I'm sure it'll be a fun morning for you.

Eric Hawkins: Yeah, I agree.

I also hear a lot of my peers debate: is RAG dead now that, with long context, you can just give the model everything? Why do you even need to have a vector store and do the retrieval step? But it's just not practical to send that much data over the wire.

Latency matters when you're building for production. So I do think a lot of the RAG systems and the infrastructure that you build around the model are gonna stand the test of time. I think more in terms of the prompting: we have prompt libraries now of thousands of prompts, and we use prompt management tools that are connected to evals.

I think that's the area where you just see a lot of rework, based on more sophisticated models that you can give higher-level goals to. But a lot of the framework around the models is valuable engineering work no matter what, I believe.
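
For readers who haven't built one, here's the retrieval step of RAG stripped to its skeleton. It's my illustration: the embed() function is a random stand-in for a real embedding model (such as the custom legal-domain models Eric mentions below), so only the pipeline structure is meaningful.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Random stand-in for a real embedding model. Real embeddings
    place semantically similar text near each other; this one only
    demonstrates the shape of the pipeline."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)  # unit vector: dot product = cosine

# Chunk and embed documents once, offline; store vectors in an index.
chunks = [
    "Side letter: no oil and gas investments outside North America.",
    "LPA section 4.2: management fee is 2% of committed capital.",
    "Side letter: quarterly ESG reporting owed to one LP.",
]
index = np.stack([embed(c) for c in chunks])

def retrieve(question: str, k: int = 2) -> list[str]:
    """Send the model only the top-k chunks, not every document,
    which is the latency point made above."""
    scores = index @ embed(question)    # cosine similarity per chunk
    top = np.argsort(scores)[::-1][:k]  # indices of best-scoring chunks
    return [chunks[i] for i in top]

context = "\n".join(retrieve("Can we do a Saudi oil deal?"))
prompt = f"Answer only from this context:\n{context}"
```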

Richard Lichtenstein: I think there is some possibility, again, I have no knowledge of whether this will happen, but I think there's some belief that an LLM company might release its own sort of optimized RAG type of model, right? Like I could see OpenAI saying, “Hey, you wanna chat with a bunch of documents? We've got a RAG model that we think is really great, that we've spent a lot of time on.” Now, whether that will be better than your domain-specific one that you've spent years engineering, I don't know.

But I can see a world in which like there becomes a more standard RAG model that comes from the bottom that makes some of that go away.

Eric Hawkins: I think you're definitely correct, and I already see that happening, including with the enterprise search players, people like Glean and those who are, effectively, what you can think of as RAG as a service.

The difference, though, which you hinted at: we've invested so much in custom embedding models, in really understanding the nuance of how you even chunk up these legal documents, what things you index, how you index 'em. The results are wildly variable, and unless you're operating in a very specific vertical where you've optimized these things, I just don't think the horizontal players, the OpenAIs of the world, are gonna be able to compete at that level of depth.

But on the surface, you're right.

Richard Lichtenstein: Now, I think that we may have set a record of going 20 minutes[1] into a podcast about AI and not mentioning the word “agents” once, but now let's turn to agents for a second. What do you think happens as the world moves towards agents? You can imagine Ontra building its own agents to help facilitate workflows, but also your customers having agents on their side that they're starting to interact with. How do you see things changing in a world of agents?

Eric Hawkins: Mostly it's the user interactions that are gonna change, more than the types of agents that we're building behind the scenes. And to be clear, we don't have any agents in production today, and that's very purposeful, because of the types of workflows that we're dealing with. If you want to hand one off to some agentic system, it's really, really gotta get it right. And the checkpoints for what you reveal to the human, and how you bring the human in to approve and move the workflow forward, matter a lot.

So we're investing a lot in the UX around agents, where you can just hand it a legal contract and it'll go off, do the whole redlining, and basically come back to you. Now, it's that user interaction that matters a lot, right?

Because if it just comes back and says, it's all ready to go, do you want me to send it to the counterparty? You're like, no, I need to review it. So what's the experience around reviewing it and correcting it? That is largely unsolved with agents. It's easy to build an agent, it's easy to hand it some work.

It's easy to give it a high-level goal, but how you bring the expert human into the loop is the hard part. I think we're seeing the same thing with codegen, right? With a lot of the agentic codegen tools that my team uses, and we've got a team of a hundred engineers, it's the interaction: the expert human software engineer can tweak whatever the agentic AI solution did, override it, correct it, and things like that.

That's where it matters a lot.
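
The checkpoint pattern Eric describes can be made concrete with a toy state machine; this is entirely my sketch, not Ontra's design. The point is structural: the agent can draft, but nothing goes out without an explicit human approval step.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    DRAFTING = auto()         # agent is working
    AWAITING_REVIEW = auto()  # hard stop: human must act
    APPROVED = auto()
    SENT = auto()

@dataclass
class NdaWorkflow:
    state: State = State.DRAFTING
    redlines: list[str] = field(default_factory=list)

    def agent_redline(self, suggestions: list[str]) -> None:
        # The agent can draft, but it can only move the workflow
        # to AWAITING_REVIEW, never to APPROVED or SENT.
        self.redlines = suggestions
        self.state = State.AWAITING_REVIEW

    def human_review(self, accepted: list[str]) -> None:
        assert self.state is State.AWAITING_REVIEW
        self.redlines = accepted
        self.state = State.APPROVED

    def send_to_counterparty(self) -> None:
        # Sending is gated on an explicit human approval.
        if self.state is not State.APPROVED:
            raise PermissionError("A human must approve before sending.")
        self.state = State.SENT

wf = NdaWorkflow()
wf.agent_redline(["strike clause 4(b)", "cap liability at fees paid"])
wf.human_review(["strike clause 4(b)"])  # human edits the agent's work
wf.send_to_counterparty()
```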

Richard Lichtenstein: I still think that you're gonna need some sort of connectivity into master agents that the clients are using. For example, if I'm a fund and I'm now going to hire someone, and so I'm gonna go interview 10 people and I need them all to sign NDAs.

And I have some hiring agent, right? And the agent is doing a whole bunch of stuff as part of that hiring process: it interacts with the person and schedules the interview, right? And it kicks off an NDA process.

When it kicks off that NDA process, you want it to then go call Ontra, right? And say, okay, Ontra, I need the NDA for the interview; I need you to send it to this person. If the person has some complaint about some clause, they can interact with you and you can figure it out, and obviously a human will review it before anybody signs anything, but you'll kick it off.

So how do you think about if someone has a recruiting agent on their end? How are you gonna build MCP connectors or something to be able to connect into Ontra for that process?

Eric Hawkins: We're working on MCP servers internally, and I think there's a lot of promise with MCP in terms of interoperability.

The point that I was trying to make is, I believe that in most of these mission-critical workflows, it's a long way out before a human is gonna trust one agent to call another agent without intervention and without oversight. So we're investing in the technology to be able to support MCP and have other agents talk to us, but I think it's much more interesting to talk about what the UX is when, say, the HR agent has done some stuff and needs to take action.

Sure, maybe it is advising sending something to Ontra, but you need a human that's brokering this, for a long while. There's gonna be this chain-of-trust issue with agents. That's certainly what we see as we're building. We have a lot of AI-powered stuff in production in the market, and humans want oversight.

I think MCP, agents calling agents, how you pass context, all of that, those are things we're working on technically. But even if we had that all solved tomorrow, people aren't ready for it.
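
For the curious, here's roughly what a minimal MCP server looks like using the official MCP Python SDK's FastMCP helper. The server name and the request_nda tool are hypothetical, and notice the tool only queues work for human review, which is the chain-of-trust point Eric makes.

```python
# A hypothetical MCP server exposing one tool that a customer's agent
# could call. The tool only *queues* an NDA request; nothing is sent
# until a human approves it, per the chain-of-trust point above.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ontra-nda-demo")  # hypothetical server name

@mcp.tool()
def request_nda(counterparty_email: str, deal_name: str) -> str:
    """Queue an NDA for the named counterparty and return a tracking
    ID. A human reviewer must approve before anything is sent."""
    # In a real system this would create a pending review task.
    ticket = f"NDA-{abs(hash((counterparty_email, deal_name))) % 10_000}"
    return f"{ticket}: queued, awaiting human approval"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```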

Richard Lichtenstein: Yeah, that's probably fair. I think it depends what the workflow is, but I agree with you, especially legal workflows, you have to assume that people want control over what's going on.

So I think that's a safe assumption.

Eric Hawkins: I think it's true in every domain. I have friends who are building in the go-to-market space, right? You don't want an agent to contact a high-value prospect unless you've reviewed the copy and know what it's gonna do.

Richard Lichtenstein: Okay, one last question for you and then I'll let you go. Stepping back: if you think about the legal profession in general, what do you think the impact of AI is gonna be? Because tools like the ones you're building, or have built, seem to suggest that the world might need fewer lawyers or fewer paralegals in the future. But it also suggests there's some elasticity, so maybe there are more people suing people or doing more legal affairs because it's cheaper. I don't know. What do you think is gonna happen?

Eric Hawkins: The legal profession is going to evolve profoundly here in the not-too-distant future, if it isn't already. A lot of what the legal profession has evolved to be over the last few decades... it's really moved away from the trusted strategic advisor role, which it probably was at one point, and moved into this world of just incredible amounts of trivial, repetitive, monotonous work: marking up contracts, reviewing contracts, that type of stuff.

I believe that stuff is all gonna get automated really, really, really fast. And lawyers, writ large, are gonna have to redefine what their place in the value chain is, and really focus on their customer relationships, really focus on being strategic consultants to their clients. A lot of people ask what's gonna happen to the hourly rates and things like that.

I feel like you could argue it either way. You could argue it goes down because you're more efficient, and you can do more. You could argue that it goes up because you're doing higher value work that's more meaningful to the client. I don't know what the right answer is.

I think it's gonna be very difficult for people coming out of law school who generally would work their way up the chain doing manual work. A lot of that repetitive manual work that was a training tool in the past is gonna be automated away.

And so I don't know how you go from associate to partner in the new world. Maybe it is by mastering the AI tools and really being the most prolific person at the firm in terms of using the tools and using the automation. Very speculative, but definitely a lot of disruption out there.

Richard Lichtenstein: Yeah, it's one of the professions that for sure isn't gonna go away. For sure, we need lawyers. They serve a vital role in society and in transactions, but what they're doing is gonna be dramatically different 10 years from now, I think there's no question. And so how that evolves is gonna be fascinating.

Very interesting. I'm sure you and Ontra are gonna be a big part of that. So thank you so much for your time; really appreciate it. Hope you enjoyed this, and thanks for coming on.


[1] I edited our conversation a little, taking out ums and excessive verbiage on my part, so when I originally made this comment, I said 26 minutes, but then when I edited it, it was actually 20. I was able to use AI to simulate my voice and say “20” in the video. I think it’s pretty seamless. We live in a wild time.
