Over the last couple of months, we’ve been hearing customers discuss how to prevent AI tool proliferation — the idea that you have too many tools in a single organization that do somewhat similar or even completely redundant things. This feels like a natural next step in the AI hype cycle. We started off with generic excitement that wasn’t fully actionable because of the lack of applications. We then saw an explosion of applications and underlying infrastructure. And now we’re seeing concern about having too many applications. We’ve effectively speedrun the SaaS hype cycle of the 2010s in the last 3 years.
So how do you know if you have too many AI applications? As you can imagine, we have some thoughts!
We’ve organized our thinking around how buyers should tackle this concern and around how application builders can avoid getting caught in this trap. At its core, there’s absolutely a valid question here, but the framing is actually mixing up two different concerns.
The first concern — the valid one — is whether you’re buying tools that can serve the same purpose with a slightly different framing. For example, do you really need an enterprise search tool and a separate question-answer engine for HR policies in your Slack workspace? Probably not. Sure, these might traditionally come out of different budgets, but for sanity’s sake, you likely can and should unify these tools. Having data spread across multiple tools is confusing at best and might lead to inconsistency down the line.
The second concern, which we find to be a little misplaced, is whether tools use similar implementations under the hood. We often get asked whether RunLLM is built on a “generic RAG architecture.” We’re never fully sure what that means (there’s obviously more than a vector DB + a GPT API call), but more importantly, we’re not sure why it matters. The value proposition of RunLLM is that it integrates into your support workflow and provides high-quality support case resolution. That’s not something, for example, a generic enterprise search product does, so it shouldn’t matter whether we have a secret magical formula for support, use a RAG architecture, or depend on a hamster in a wheel. The slightly silly analogy we always give is that you wouldn’t ask your CRM and your engineering task tracker whether they use the same database under the hood — they might, but the value proposition is clearly different. If every SaaS product was “just a database” in 2017, well… now every SaaS product is “just a database + LLMs!”
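For the curious, here’s roughly the skeleton people seem to have in mind when they say “generic RAG”: embed a query, pull the nearest chunks from a store, and hand them to a model. This is a minimal sketch, assuming an OpenAI-style client; the `docs` corpus, the model names, and the `answer` helper are all illustrative, and none of this is a description of how RunLLM actually works. Notice how little the skeleton tells you about the product built around it:

```python
# A minimal sketch of the "generic RAG" loop: embed, retrieve, generate.
# Illustrative only: assumes the `openai` client; a real system would use
# a vector DB rather than re-embedding an in-memory corpus per query.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(question: str, docs: list[str], k: int = 3) -> str:
    doc_vecs = embed(docs)      # in practice: precomputed and stored in a vector DB
    q_vec = embed([question])[0]
    sims = doc_vecs @ q_vec     # dot product ~ cosine similarity for unit-length embeddings
    context = "\n\n".join(docs[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

Two products could share every line of this skeleton and still solve entirely different problems; the differentiation lives in everything around it.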
With that context, let’s dive into how we all should be thinking about managing AI tool proliferation.
Lessons for buyers
Avoid buying redundant tools while focusing on actual impact.
Have clear goals. The biggest thing that we think will help buyers make clearer decisions about AI tool proliferation is knowing what they’re trying to accomplish. The goal has to be one tick more specific than “Use AI” — that’s naturally going to lead to a bunch of random tools of varying quality. You have to know what business problem you’re solving with AI and what impact that’s going to have. If you can outline those things, you can determine whether a general-purpose, specific, or in-house solution is best suited to your problem.
Don’t focus on implementation. As we outlined above, the focus on how something is implemented should matter much less than it does today. It’s true that if two products have exactly the same underlying architecture, that might point to potential redundancy. However, a marketecture diagram is never going to fully explain how a product works, and more importantly, different products might use similar AI techniques to achieve broadly different goals — just like the same SQL database might be used to build both a task tracker and a CRM (a toy version of which is sketched below). It’s not that you should never ask how something works under the hood, but whether something is a RAG application or not shouldn’t be the key determinant in your decision.
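To make that analogy concrete, here’s a toy sketch. The schemas are hypothetical and this is an illustration rather than how any real CRM or task tracker is built, but the point stands: the same engine happily backs both products, so knowing what’s under the hood tells you almost nothing about redundancy.

```python
# Toy illustration: one storage engine, two very different products.
# The schemas below are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")  # the same engine for both "products"

# The core table of an engineering task tracker...
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, assignee TEXT, status TEXT)"
)

# ...and the core table of a CRM, in the very same database.
conn.execute(
    "CREATE TABLE deals (id INTEGER PRIMARY KEY, account TEXT, stage TEXT, value_usd REAL)"
)
```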
Do focus on applications. The nuance in the above point is that once you know how a product works, you can be thoughtful about where and how it’s used. Some of the best customer conversations we’ve had over the last year have pushed to extend our functionality in directions that we hadn’t previously considered. That’s because our customers understood what was possible and asked thoughtful questions about where else that functionality could be applied. This naturally falls out of the fact that — where LLMs stand today — AI applications are likely to be good at the basics in many adjacent fields without being experts in any of them. That kind of engagement will help you prevent proliferation of duplicate tools by being thoughtful about where you can best leverage what you already have.
An important corollary to this is that you should prioritize working with teams who will iterate with you. Nothing is set in stone with AI — anyone who presents a product as a fait accompli is lying to you or themselves (or both). A team that’s open to feedback will listen to you and others and iterate quickly. Our best customers often ask us how other customers are using our product.
Accept that AI is all software today. To some extent, saying that you have too many AI tools is effectively saying that you have too much software today. That doesn’t mean there isn’t proliferation — as we all know, many enterprises ended up with a few too many SaaS products for their liking. Regardless, with a few key exceptions, everyone is thinking about how to add AI into their product, so classifying a product as an AI product (as opposed to sales, support, marketing, etc.) is an unhelpful distinction. Again, you don’t want to buy products that do exactly the same thing, but depth of support for any one of those areas of specialization is valuable, even if there are shared implementation ideas. It isn’t unlikely that you’ll have a key AI product for many of the job functions in your company.
Lessons for builders
Marketing and positioning matter. As technical founders, we might sometimes be tempted to say that the product should speak for itself. To an extent, that’s true — we do tell customers that seeing is believing: Since it’s your first time buying a new product in this category, you’re never going to fully understand what’s possible till you use it. Nonetheless, how you position your product still matters. What problems are you solving, how do you explain the value prop, and what are the key differentiators of your product? Customers have to understand that. For example, we recently closed a customer at RunLLM who was pleased with the quality of our answers but didn’t realize the depth of integration into the support workflow we enabled. It was nice to get the win, of course, but it also felt like a failure of the product positioning and sales process because we didn’t truly enable them to understand the power of the product up front.
Know your value proposition. This might be the silliest-sounding bullet point in this blog post, but it’s the natural counterpart to buyers needing to know their goals. We’ve seen a lot of products on the internet that seem to be doing a little bit of everything — like the old SNL floor wax skit. In a market this crowded and noisy, specificity matters. Specificity doesn’t mean that you have to build vertical applications (though we’re biased towards that approach), but it does mean that you have to know what you’re enabling. Is it a generic application platform or a specific vertical? Is it optimized for a more or less technical audience? Clarity matters! You might be tempted at the early stage to say “yes” to everything — that’s an okay strategy within reason. If you stray too far afield, you might find floor wax on your dessert.
Don’t dismiss customers’ concerns. You might lean towards thinking that customers are being silly and that they’ll naturally grow out of thinking that your product is replaceable with something generic. Don’t give in to that temptation. If your customers are confused about how you’re different from other options, that’s a clear sign that you have work to do — it might be product work, positioning work, or sales work, but there’s room for improvement. If you still disagree with us, remove yourself from your immediate area and try to honestly evaluate a tool in a different AI application area. You will quickly see how overwhelming it can be and why it’s so important to continually use that feedback to refine your positioning. Again, nothing is set in stone, so what works today might be outdated next month. Keep listening to what customers are saying!
Here’s a bonus lesson that applies to everyone: Boring stuff matters. The AI components of everything we’re building matter, of course. If you don’t get good results out of the product you’re using… why would you use it? But the boring stuff matters too. What is the boring stuff? Everything from integrations to workflow management to tone & style — and probably a million other things depending on your application area. With the breadth of different approaches to solving enterprise problems with AI on the market, doing the stuff that allows an enterprise to confidently use your product in their workflow will make a huge difference. And from a buyer’s perspective, knowing that a product can work with all the existing software you already use is critical: That’s how you’re going to drive behavior change in your team.
This is the natural next step in the hype cycle, so we shouldn’t be surprised — but it’s still frustrating when a customer doesn’t immediately get it. The good news is — unlike keeping up with whatever the latest crazy model release is — this is something that’s in everyone’s control. With the explosion of available products, we have to make sure that we’re approaching our problem areas in a sane way. We can’t run out and buy every product on the market, and we also can’t expect to shine without putting some effort into explaining what we’re good at. A likely next step in a couple years’ time is consolidation, but the prerequisite to that is winners who explain their value clearly.