The most common framing around where AI fits into the enterprise today is as a member of your team. There are AI SDRs, AI software engineers, AI SREs, AI security analysts, and of course, AI support engineers. As startups like ours build out these products, we’re thinking about how we can best integrate into the tooling that’s required for each of the jobs above. The sale revolves around the product’s ability to improve key metrics like the number of leads generated or the number of alerts processed.
All of this is important, but what doesn’t receive as much attention is what happens when AI systems aren’t a part of your company but a part of your customer’s organization. What happens when your customer is an AI? This may sound like a silly question today — AI systems can barely solve some simple problems — but the truth is that this future is rapidly approaching (and already here in some cases). Being AI-friendly in your customer-facing roles (and maybe partner-facing roles in the near future?) is going to become increasingly critical as every enterprise starts to work AI into its own processes.
Let’s consider a couple of examples to flesh out the point:
Product Research. As we’ve all likely experienced, o3’s high-quality web search has made it a very good research tool. We already use o3 to research everything from injury rehab to cookware and even B2B SaaS products. If we focus on product research and sales for a minute, you simply need to drop in a description of what you’re looking for, let o3 ask you any questions, and come back 5-10 minutes later to find a nicely formatted list of products for you to review. Today, that list is generated by searching the web and reading through relevant results (i.e., SEO is far from dead!).
Of course, as a startup, our documentation and our website aren’t always fully up to date (the horror! 😱), which means that we might be at a disadvantage. If a customer is looking for a must-have feature that we haven’t yet documented, we may be prematurely cut from the list. Imagine, however, if you could give o3 a hook to ask questions about your product (perhaps via a protocol like A2A), so that if there’s ambiguity about what features exist, the end-user can get the most accurate answer possible. Even a year ago, this might have sounded crazy, but o3 connecting to a sales agent via A2A isn’t that far away.
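To make the idea concrete, here’s a minimal sketch of what a product-side “ask about features” hook might look like. Everything here is an illustrative assumption — the function names, the registry, and the message shape are invented for this example and are not the actual A2A protocol:

```python
# Hypothetical sketch: a product-side hook that a research agent (e.g., o3,
# via an A2A-style protocol) could query instead of relying on stale docs.
# All names and the response shape are illustrative assumptions.

FEATURE_REGISTRY = {
    "sso": {"supported": True, "notes": "SAML and OIDC"},
    "audit-logs": {"supported": True, "notes": "exportable via API"},
    "on-prem": {"supported": False, "notes": "cloud-only today"},
}

def answer_feature_question(feature_slug: str) -> dict:
    """Return a grounded answer about a feature, or admit uncertainty.

    The key design choice: never guess. If the feature isn't in the
    registry, say so explicitly, so the querying agent doesn't cut the
    product from its shortlist just because the docs are incomplete.
    """
    entry = FEATURE_REGISTRY.get(feature_slug)
    if entry is None:
        return {"status": "unknown", "follow_up": "ask the sales agent"}
    return {
        "status": "supported" if entry["supported"] else "unsupported",
        "notes": entry["notes"],
    }
```

The interesting design question isn’t the lookup itself — it’s that the hook returns an explicit “unknown” rather than letting the querying agent infer absence from silence.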
Product Support. Most of you probably use Cursor regularly. Our team’s experience has been that as Cursor Agent has gotten better, we’re delegating more and more tasks to execute in the background with Cursor. Of course, Claude and GPT don’t have perfect knowledge about many of the products we use (and many of the products we support!), which means that we sometimes get incorrect or hallucinated solutions that an engineer then has to unwind and replace with the right fix before they can proceed. If Cursor could talk directly to an AI support engineer like RunLLM (see an early demo here), that would save both the useless tokens spent generating the hallucinated answer and the person’s time spent analyzing what went wrong and how to fix it. Furthermore, in some cases, the AI support agent may be needed to fix backend issues that would block even the most advanced models from completing their tasks.
These are just two of the most obvious examples of customer-facing roles, and ones that we’re of course intimately familiar with. The same could likely be said of customer success (“Am I getting all the value I can out of this product?”), marketing (“Is this content optimized for an AI?”), and a variety of other roles.
Once we accept this future, there are a few obvious implications.
Human-in-the-loop processes are unacceptably slow. Imagine if getting a question answered in either of the above scenarios required an agent to contact sales or open a support ticket. A person would of course be able to answer the question once they got around to it, but it would waste hours (or days) of time during which the customer’s agent was simply sitting around and waiting for a response. Yes, you could of course architect a system that saved the current context, waited for a response, and restarted when a result came back, but that somewhat defeats the purpose of improving efficiency with AI, especially in the context of something like Cursor Agent writing code. You need an AI system on your end to be able to match the speed and efficiency of the customer’s agent; otherwise, you’ll lose out to someone who’s able to be more directly responsive to the customer’s needs.
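The “park the agent and wait for a human” workaround described above can be sketched in a few lines. The function names and state shape here are purely illustrative assumptions; the point is that the agent’s context has to be checkpointed and the task then sits idle until a person responds:

```python
import json
import time

def checkpoint(task_id: str, context: dict) -> str:
    """Serialize the agent's working context so the task can resume later.

    In practice this blob would go to a queue or database while the
    human works through their ticket backlog.
    """
    return json.dumps(
        {"task_id": task_id, "context": context, "parked_at": time.time()}
    )

def resume(blob: str, human_answer: str) -> dict:
    """Re-hydrate the parked context once the human finally answers.

    `idle_seconds` is the dead time the agent spent waiting -- the cost
    this pattern imposes, which an AI-to-AI handoff would avoid.
    """
    state = json.loads(blob)
    state["context"]["answer"] = human_answer
    state["idle_seconds"] = time.time() - state["parked_at"]
    return state
```

The mechanics are simple; the problem is that `idle_seconds` is measured in hours or days when a human is in the loop, versus seconds when another agent answers.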
You’ll operate at a volume that was previously impossible. Even if you had a super-employee who never slept and was immediately on top of every request, they wouldn’t be able to handle all the requests that are coming in. By their nature, AI systems are going to generate a higher volume of requests — some of which might not work out. This isn’t a particularly interesting point in the sense that scale is always a challenge for any company as it gets more attention. But it’s a critical one from a planning perspective: If you’re getting more requests and implicitly dropping them because there’s a human in the loop, you’re very quickly going to fall behind someone who’s using AI to at least triage each new request.
Human-to-AI and AI-to-AI interactions are different. While answering customer questions in whatever setting might seem like it should be the same for humans and for AI systems, there’s more nuance than you might initially imagine. This is where thoughtful UX design will carry the day. We don’t have a comprehensive list of what the differences will be, but here are a few examples of why human-to-AI and AI-to-AI interactions are different:
Level of detail: Humans almost never want to read a wall of text, but as we all know anecdotally, the more detail you give an AI, the better.
Latency: Agents can process text much faster than humans can, so giving a quick response that can then be speedily refined is valuable. In contrast, humans read much more slowly, so the premium should be placed on getting the answer right the first time.
Parallelism: While we don’t have the right protocols for this yet, the querying agent should be able to evaluate multiple options simultaneously, making alternate solutions — at least in the context of support — extremely valuable.
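The parallelism point can be sketched with standard concurrency primitives. `try_solution` below is a stand-in for whatever validation a querying agent could actually run (a test suite, a sandboxed repro); the names and the validation rule are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def try_solution(candidate: str) -> tuple[str, bool]:
    """Pretend-validate a candidate fix.

    A real agent would run tests or reproduce the issue here; this toy
    version just checks a naming convention so the sketch is runnable.
    """
    return candidate, candidate.endswith("-works")

def evaluate_in_parallel(candidates: list[str]) -> list[str]:
    """Try every candidate fix concurrently and return the ones that pass.

    This is the AI-to-AI difference: a human support engineer walks
    through alternatives one at a time, while a querying agent can fan
    out and evaluate several suggested solutions simultaneously.
    """
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(try_solution, candidates)
    return [name for name, ok in results if ok]
```

This is exactly why offering alternate solutions is cheap and valuable for an agent in a way it never was for a human reader.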
This is of course a small set of examples that’s glossing over much of the fine print, but it should illustrate that designing thoughtful interfaces for AI-to-AI interactions will look significantly different from the best human-to-AI interactions.
We generally try to avoid hype and focus on what’s realistic and likely. But it’s clear that not enough attention is being paid to what happens when an AI is your customer. It’s very likely that in the next 12 months, the companies that optimize for AI-native sales & support will have a huge leg up over companies that are reluctant to make a change. Our hunch is that this is only the tip of the iceberg — once you start adding agents to each job function, the same principles will likely apply throughout a business.