For this post, I did a little gonzo journalism: I wrote it by speaking to a bot. It asked questions. I answered. Then it helped me shape my thoughts. Then it created a draft (which I’ve heavily edited).
Talking to a machine may seem like a poor substitute for talking to a human. But GenAI interviewers are becoming increasingly useful in real-life applications such as primary research, requirements gathering, and even helping you get your thoughts in order.
Bots are not at the point where they can substitute for a human interviewer, but they can still augment existing approaches in exciting ways.
Robocalls
Primary research is critical for making important decisions about product development, marketing, or even whether to buy a company. Typically, investors will do interviews or surveys with customers. Interviews can obviously be quite insightful. You need to make sure the person you talk to is qualified, but if they are, you can learn an enormous amount in a short time.
But making sure they are qualified is tricky. People often misrepresent their level of expertise in order to get compensated. These bots open up a new possibility: before a person can talk to a human, they first have to pass a lightning round with a bot. The purpose of the 10-minute conversation with the robot interviewer is not to get insights but to verify that the person knows what they are talking about. The bot might ask about the meaning of certain acronyms or the difference between similar-sounding concepts. This rapid-fire approach makes it hard for the person to fake their knowledge with ChatGPT.
Another way interview bots can support primary research is by supplementing surveys. Surveys are a useful way to get input from a large number of customers, but it can be hard to get qualitative findings from them. Interview bots can supplement a consumer or SMB survey with 5-10 minute interviews that capture customers’ sentiment on a topic. Hearing someone’s tone of voice as they answer a question provides color that’s missing from multiple-choice questions. But to be clear, at the moment that interpretation needs to be done by human analysts. These tools offer some AI features to pull out the most relevant parts of interviews or track themes, but humans still need to sift through the data to find the insights and the best, most representative clips.
This concept isn’t purely hypothetical. There are tools such as Outset or Strella that already work quite well for these use cases.1
Better Input for Big Projects
But talking to customers is not the only time people do interviews. Another important use case is discovery and scoping for a big project. I’ve spoken to several system integrators2 (SIs) recently who are thinking about how to incorporate bot interviews into this process. Today, an SI might interview 20-30 stakeholders for a big project. With bots, they could interview hundreds of people at the company.
This could allow them to see key patterns earlier. Maybe there are some people who are skeptical of the project. Maybe there are some junior people who have conflicting ideas about what the project should achieve. These issues get surfaced upfront instead of at the end of the project, when the SI is trying to get people to use the new system.
Obviously, you can’t only do bot calls. Key stakeholders will want and need to talk to people, but bots can create a more inclusive process. And the SI will probably want to follow up with the skeptics in person. This is particularly important because other parts of the SI value prop, such as actually programming the software, will get easier and faster, putting pressure on pricing. Finding a way to deliver more value earlier in the process could help offset that pressure.
Even Writing This Post Was A Good Test Case
Here’s where we get a bit meta: I used an interview bot3 to write this post. The experience was smoother than I expected. Normally, my process is to write a detailed outline and feed it to ChatGPT to get a draft. This time I called a phone number and got a cheerful voicebot that asked me questions while I spewed out my thoughts. The questions were good (e.g., “What are the risks and challenges to using bots?”), and it clearly understood what I was talking about, judging by its follow-up questions and the little summaries it gave when I finished speaking. The only downside was that the actual written copy felt less fluid than ChatGPT’s output, but the company told me they are optimizing for longer posts and that improvement is coming soon.
Overall, though, it was a fun way to write a draft and a bit faster than my usual process of writing an outline. I’ll try it again once they improve the output.
Bottom line
Bot interviewers are a useful new tool that will soon have an important place in primary research and other types of interviewing. They are not a substitute for human interviewers: they will miss facial cues, might not follow up on a comment, and will not create the human bond that an actual interview with a client stakeholder will. But they are great for reaching, at scale, populations you could never interview one-on-one. They are a great complement to, and in some cases a substitute for, surveys.
Have any of you tried one of these? Let me know in the comments or drop me a note. I’m curious to hear about your experiences, good or bad.
In the third section, I talk more about the experience of being interviewed by a bot (not from either of the two companies referenced above) to write this post. The bot’s output at the moment is a bit staccato. Here’s what it said for this section:
“Most companies do research in two ways. They send out surveys. Or they talk to people one-on-one. Surveys are easy but shallow. Interviews are rich but slow.
Now there’s a third way. You can use a bot to run short interviews. People talk to it on video or audio. The bot asks questions. They answer like they would in a normal conversation.
This gives you more than checkboxes. You hear how people really speak. You can see their faces. You can tell they’re real. That’s not always true with online surveys.
These interviews only take about ten minutes. But they give better insight. The answers feel alive. You get stories, not just scores.” Let me know in the comments if you prefer the bot to my writing!
These are companies that help install big software systems at large corporations.
The company asked that I not mention them by name because they’re still working on the product, but I do think it seems promising.
I wonder if people might also be more honest with a bot, especially with interviews of this type. That said, other trends in responses might be less helpful, such as interviewees being less thoughtful or more taciturn; the staccato quality might run both ways.