2 Comments
Ryan Leibowitz

Great post, Richard! Agree agents will unlock significant incremental value. One interesting outstanding question in my mind is to what extent we'll keep "humans in the loop" on agentic behavior (at least in the early days) -- e.g., where can you build in human review or approvals to gain confidence on higher-criticality tasks? This may be natural in some places (e.g., "I've drafted this email, do you want me to send it?"), but it might also limit the value in other areas (e.g., a chatbot going back and forth with a customer can't stop to ask for approval on each incremental message!). In areas where pre-approval isn't feasible, it could also take the form of post-action flags for review, in the spirit of improving agentic capabilities for the next time.

Richard Lichtenstein

Yes, I think humans will be in the loop for a while, and as you say, that may not be practical in some situations. One version of that I've seen is having the bot at least double-check its plan before it starts, especially for a time- and/or resource-intensive task. So it will say, "I'm going to go read these 20 websites, look for these pieces of info, and then put them in a table." Then the user can say whether they want that before the bot gets going.
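That confirm-the-plan-first pattern can be sketched in a few lines. This is a minimal illustration, not anyone's production implementation; the function and parameter names are hypothetical:

```python
# Hypothetical sketch: an agent states its plan and only executes a costly
# task once the user approves. "ask" is injectable so it can be tested.

def run_with_confirmation(plan_description, task, ask=input):
    """Show the agent's plan; run the task only if the user says yes."""
    answer = ask(f"Plan: {plan_description}\nProceed? [y/n] ")
    if answer.strip().lower().startswith("y"):
        return task()
    return None  # user declined; nothing was executed

# Example: the agent proposes the 20-website research task before starting.
result = run_with_confirmation(
    "Read these 20 websites, extract the requested fields, build a table.",
    task=lambda: "table of results",   # stand-in for the expensive work
    ask=lambda prompt: "y",            # simulate the user approving
)
```

The point of the injectable `ask` is just that the approval step sits in front of the expensive work, rather than the bot barreling ahead.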

In terms of post-action review, in customer service I imagine it would kick in when the bot wants to give a discount or refund: a human would take a quick look to approve.
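A toy version of that routing logic might look like the following. The threshold, action names, and queue are all illustrative assumptions, not part of any real system described above:

```python
# Hypothetical sketch: routine bot actions go out directly, but refunds and
# discounts above a cutoff are queued for a human to approve.

REVIEW_THRESHOLD = 25.0  # illustrative dollar cutoff for human review

def route_action(action, amount=0.0, review_queue=None):
    """Return 'executed' for routine actions, 'queued' for ones held for review."""
    if review_queue is None:
        review_queue = []
    if action in {"refund", "discount"} and amount > REVIEW_THRESHOLD:
        review_queue.append((action, amount))  # hold for a quick human look
        return "queued"
    return "executed"  # e.g., an ordinary chat reply goes out immediately
```

The design choice here is that the human touchpoint is scoped to the small fraction of actions with real cost, so the bot keeps its speed on everything else.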
