Chatbots are dead, long live chatbots!
Rumors of chatbots' demise have been greatly exaggerated
At this point in the AI hype cycle, you probably cringe at the idea of chatbots. We do too. Of course, there are good chat interfaces backed by powerful models, like ChatGPT, Claude, or RunLLM (we’re biased 🙂), but there are also tons of low-quality chat-with-your-docs RAG implementations that cost $12/month and return useless results. No one wants to use those bots — they usually respond with bad answers, they’re easily confused, and they frankly give AI a bad name.
As Joey talked about in his expectations for 2025 on DeepLearning.AI, 2025 is going to be the year when we stop chatting and start doing. We actually had a slightly less developed version of this idea all the way back in February of last year in a post called *There’s more to LLMs than chat.* All of this is to say that we are big proponents of the idea that language models can and should be used for more than a question-in-answer-out mode of interaction.
This means that calling a product “just a chatbot” is something of an insult today. We might even be tempted to say that chatbots are dead. You might be surprised to hear that we disagree. Chat, when it’s done poorly, is of course a bad experience. But well-designed AI systems have the opportunity to break out of a rigid, question-in-answer-out mode and become proactive agents — all while maintaining the familiar chat veneer.
It’s worth starting with where the criticisms of chat interfaces hold water. This will help inform where we think it makes sense to move past chat.
We’re not going to repeat everything from those posts, but there are two key points behind both that are worth teasing apart. First, we’re saying that poorly done chat interfaces harm our trust in AI technologies and should be avoided: Don’t build a chatbot for the sake of “doing AI.” Second, we’re saying that chat as an interaction modality is powerful but has critical limitations that stifle possible product innovation.
The way chat limits interaction modes is fleshed out in more detail in both of the posts we linked above, and we’d recommend reading those. As a quick summary, the biggest issue is that chat encourages a respond-only mode of operation. Chat interfaces only activate when a message is sent to them, but as we’ve seen in a variety of powerful products (e.g., AI security analysts), much of the innovation in LLMs can come when work is being done in the background.
Identifying where chat interfaces fall short or are overly limiting doesn’t mean that we should throw them out altogether.
Language models are ultimately about text, and it was the productization of GPT-3.5 into ChatGPT that really set off the LLM hype cycle. What we’ve been refining over the last 2 years at RunLLM has been, of course, a chat-based support engineer that builds on all the incredible innovation in language models. Again, we’re biased, but we think we’ve done a pretty good job of building the best support engineer on the market.
As you can imagine, this is a criticism we’ve heard about RunLLM many times: “It’s just a chat-with-your-docs bot — how hard can it be to build?” We disagree on the merits because there’s a lot more that goes into RunLLM than GPT-4 and a vector database, and we also think it’s an unhelpful way to describe products.
We believe AI products shouldn’t be limited to chat interfaces, because that leaves far too much on the table. At the same time, given the current state of LLM technology and the familiarity of chat as an interaction mode, chat is still a very reasonable place for most AI products to start, and perhaps even the best one.
Chat interfaces are here to stay. Here’s why:
It’s familiar. This may sound silly, but chat interfaces are familiar to anyone who’s been on the internet in the last 25 years. With how quickly AI is changing, there’s a sense of familiarity that comes from sending a message and getting a response. That will help any product with adoption. Again, this isn’t where your product should end, but it’s worth having a chat interface as a starting point. We’ve even heard examples of customers asking for a chat interface when an AI product started without one.
It’s a great way to build trust. We haven’t talked very much about what it means to move past chat in this post. That’s described in the posts we linked at the top, but a one-sentence summary is, “Doing work in the background.” When an AI product does work in the background, your users have to trust that it’s going to do the right work with sufficient quality. Trust is often scarce when adopting new AI products, so giving your users the opportunity to chat with your product and see high-quality responses is a quick way to build trust in your product.
Chat can be more than question-in-answer-out. When we’re talking about chat in this post, we’re actually talking about a very specific interaction mode that ChatGPT has made us all familiar with: question-in-answer-out. When you send a message, there has to be a response. But as we all know (from being regular human beings), this isn’t how humans chat at all. Conversations can have multiple messages in a row from one person and can be started by either party. Once you remove the transactional constraint, there’s a whole lot more you can do (we sketch a simple example of this below). We’ve been working on many of these features in RunLLM — sharing best practices, checking on success, executing code, etc., even when we aren’t prompted — and we’re excited about how much it improves the UX. We’ll share more about these features soon!
Chat gathers data for… everything else. One of the biggest benefits of chat is that it’s a relatively easy (and high-volume) way to gather data that allows you to improve quality and build other features. Based on what we’ve learned from doing chat-based support, we’ve been able to build features that allow users to teach RunLLM new information and that enable RunLLM to improve the quality of customers’ documentation automatically. The data that powers these new features comes directly from end users chatting with RunLLM.
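To make the “more than question-in-answer-out” point concrete, here’s a minimal sketch of what a non-transactional chat session can look like. This is not RunLLM’s implementation — every name here is hypothetical — it just shows a session that replies to a user message and then, later, posts a follow-up without being prompted:

```python
import queue
import threading
import time


class ChatSession:
    """Hypothetical chat session that can send messages without being asked."""

    def __init__(self) -> None:
        # Messages waiting to be delivered to the user, prompted or not.
        self.outbox: "queue.Queue[str]" = queue.Queue()

    def handle_user_message(self, text: str) -> None:
        # The familiar question-in-answer-out path: every message gets a reply.
        self.outbox.put(f"Answer to {text!r}")
        # But the session keeps working after it responds, so it can check
        # back in later without waiting for another user message.
        threading.Thread(target=self._follow_up, daemon=True).start()

    def _follow_up(self) -> None:
        time.sleep(1)  # stand-in for background work (checking on success, etc.)
        self.outbox.put(
            "Unprompted follow-up: did that resolve things? "
            "Here's a related best practice you might want to look at."
        )


if __name__ == "__main__":
    session = ChatSession()
    session.handle_user_message("How do I rotate my API key?")
    # The second message arrives without the user sending anything new.
    for _ in range(2):
        print(session.outbox.get(timeout=5))
```

The point is the shape of the interaction, not the code: the outbox can receive messages the user never asked for, which is exactly what a strictly request/response chatbot rules out.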
As we’ve discussed many times around here, the UX of AI applications is far from being set in stone. In fact, innovating on UX is one of our key focuses at RunLLM in 2025. Despite all the shortcomings and bad implementations of chat out on the market, we still think that chat is here to stay. If done well, it can be a great starting point for any product.