You’ve probably heard customers, investors, and even your grandparents observe that at this point most AI products are starting to sound the same. This is a valid criticism — we won’t name the product, but just today, we were looking at the website for a product that made itself sound like a development platform for AI applications until we noticed further down the page that it’s actually an enterprise search tool. Later on, we saw the same tool pitching itself elsewhere as a “chat with your data” application. In other words, many products are not clear about what they do, and everything is starting to sound the same.
This is a huge product positioning challenge for every AI startup. Even if your product is better, if all your content sounds exactly the same as the next company’s content, why would a customer spend time with you rather than someone else? Even worse, a bad experience with a similar-sounding product may scare customers away from yours. In that world, you’re playing a game of roulette at best or racing to the bottom to win customers on cost at worst.
As we’ve discussed in the past, the “AI [job title]” framing has been very helpful. Calling RunLLM an AI Support Engineer makes it immediately obvious to customers why we’re in a different category from a generic chat-with-your-docs application. (Of course, you have to walk the walk too — you can’t put a lofty title on a generic chat application and expect to win.) Luckily for us, there’s not much going on in the AI Support Engineer space. As a customer told us recently, most of the Google results for the term are for job postings rather than products, though we’re sure that’ll change soon.
What’s so valuable about this kind of framing is that it gives customers something clear to anchor to — if a Support Engineer is responsible for a certain set of tasks, the immediate association is that an AI Support Engineer should be able to do many, if not all, of the same things that a person can do. This is why AI Sales Development Representatives (SDRs) have had so much success in the last year: the scope of an SDR is very well-defined, and just as importantly, the success of an SDR is easily measured.
What we’re increasingly realizing, however, is that this is just a starting point. Last summer, we talked about how AI applications will be able to easily move into adjacencies because there will be overlap in the breadth of skills AI systems have (even if they are lacking 20-50% of the possible depth). We’re starting to see this in our own day-to-day as customers “discover” new use cases for RunLLM the more they use it.
Let’s start with a couple of examples of what this looks like.
AI SDRs — as the name obviously implies — have started off by focusing on sales development. In theory, they’re able to understand customer profiles and generate specific outreach content at a scale that no person could possibly match. However, once you’ve set up an AI SDR that’s working well for you, you can start to use that system in other ways. For example, if an AI SDR can write compelling messaging that books meetings, it can likely also generate customized follow-up emails after customer calls and make sure those follow-ups happen on time. Similarly, if the AI SDR starts gathering data about which messaging gets the most positive feedback, marketing teams will be very interested in which messaging lands with which personas.
AI Support Engineers are the other natural example, and the one we spend most of our time thinking about. At RunLLM, we typically start working with Support Engineering teams who are looking to scale more effectively and improve customer experience. However, we find that once customers start using us, the product quickly spreads among different teams. Sales engineering and solutions architecture teams start using RunLLM to help in the customer onboarding process, and sales teams use RunLLM to answer customer questions while on live calls. And a variety of other teams — product, engineering, customer success — have started to crawl through RunLLM’s conversations to find customer insights.
What does this mean for us as product builders? We have two key lessons that we keep coming back to.
Land and expand — but especially land. No business wants to sell a product for less than it’s worth, and you probably believe your product is worth a lot. Unfortunately, customers don’t — yet. It’s easy to do a lot of hypothetical math about the cost savings and revenue gains your product will enable, but customers won’t believe it until they see the first use case succeed and see all the other growth in usage your product enables. (This will change gradually over time as you publish case studies and customers establish clear feature requirements.) Maximizing revenue on the first contract is possible but probably painful; you’re likely better served by getting customers on board first and then expanding once they’re excited about the value. In our experience, this happens well before the first contract is even up.
Collaborate with your customers. Consultative sales is a buzzword that gets thrown around pretty often, but working closely with your customers really is critical. Because it’s their first time thinking about what AI can do for their job function, they probably haven’t thought of all the use cases you have. Customers will very often ask us what other customers are doing because they simply don’t know how to think about the problem. Different use cases will resonate most with different customers, so you want to make sure they’re aware of the full range of possibilities, and you want to make sure your product is flexible enough to support all of those use cases. For example, we have customers that started off using RunLLM for open-source community support, for enterprise support, and as a support copilot — and all of them have expanded to other use cases from their initial entry point.
What all this really boils down to is that the market is incredibly early (maybe we should call this blog the Early AI Market?). The AI [job title] framework is as good a place to start as any for the reasons we described above, but it’s too restrictive in the long term — and maybe even sooner than that. AI systems will get better at a variety of things faster than people will, but they will also likely be unable to do the hardest parts of any job function in the foreseeable future. That means we’re going to see more and more horizontal expansion as these applications get better. You need to be open to that kind of expansion and support your customers as they figure out everything your product can do.