This is part four in what (we’ve realized) is a series on AI-powered work. Our previous posts in this series covered an introduction to the concept of AI work, the interface between AI and human workers, and the extreme importance of integrations for AI workers. As you’ll see, our thinking is evolving quickly. We don’t have a particular goal for this series, and we’ll keep adding new ideas as we learn.
A couple months ago, we wrote about our framework for thinking about AI-powered products. We won’t repeat the whole post here, but briefly, we believe that AI-powered products should think of themselves as fulfilling job functions (e.g., SDR, software engineer, support agent), and as AI-product builders we should be thoughtful about which job functions we can fulfill and how we check each box. This is what will transform AI from being a novelty technology with great demos to something that changes our economy.
In that post, we highlighted an emerging spectrum between products going broad (building frameworks to develop multiple AI workers on a single knowledge base) and products going deep (narrowly focused on owning one job function). We discussed the tradeoffs between the two approaches in depth there, but concluded that we weren’t sure which was best at the time.
Two months later, we’re starting to become more confident that domain-specific AI products that are tuned for a particular job function are going to be more productive than general-purpose AI workers. To be clear, this is not something that we’re extremely confident about — we’re probably at a 60/40 split in favor of deep products over broad ones — but it reflects a bet that we’re making with RunLLM, so we thought it was worth sharing.
Why depth might win over breadth
Early in the current AI hype cycle, enterprises were willing to spend money on the potential of AI. They paid for tools that would help them explore what was possible, a line item sometimes dismissively called “science project” budget (but this is also the corporate money that often funds Joey’s research, so we love it!). As we get further into the hype cycle, however, customers are increasingly focused on the specific value AI is going to deliver: budget saved, adoption accelerated, revenue generated. As a broader trend, we believe this will put the focus squarely on adopting AI applications.
This gives narrow, deep AI applications a clear advantage: it significantly reduces time-to-value (because the application already has a built-out suite of features), and it makes the case for adoption much easier (because the product slots naturally into an existing function). This case is bolstered by the fact that customers buying AI products today often don’t know exactly what they’re looking for. Unlike with a CRM, there’s no standardized feature matrix that every vendor has to fill out. As a result, a narrowly tailored but deep set of features optimized for one job function will be more impressive than a general-purpose set of features. In other words, you’re much more likely to search for “AI-powered SDR” than for “AI platform to build my own SDR.” Correspondingly, there’s also an opportunity for first movers to shape expectations for what each AI product category should be doing.
From a budget and procurement perspective, it will make your sales cycle dramatically simpler if you’re selling to a single org (e.g., sales for an SDR, engineering for a developer tool) in an enterprise, rather than selling a general-purpose tool that then has to be adopted and customized by multiple organizations. Once the sale is closed, adoption and retention will also be higher in the short run.
We also believe that this is a good long-term bet because customization will get easier as the technology evolves. It’s of course difficult to predict the specific timeline on which the underlying technology will advance (although it’s probably sooner than we expect!), but as models get smaller and more efficient, the cost of customizing a model for a specific skill will come down. Even for large models, our understanding of RLHF and post-training customization is evolving quickly, and the level of control we’ll have over fine-tuning will only increase. This is perhaps especially true if chain-of-thought reasoning becomes an API-level commodity, as o1 indicates it might.
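To make this concrete, here’s a minimal sketch of what commodity customization could look like with today’s APIs, using the OpenAI Python SDK. The dataset, model names, and prompt are illustrative assumptions on our part, not a recommendation:

```python
# A minimal sketch of customization as an API-level commodity, using the
# OpenAI Python SDK. The file name, model names, and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Post-training customization: upload domain-specific examples and kick
# off a fine-tuning job on a small, efficient model.
training_file = client.files.create(
    file=open("support_conversations.jsonl", "rb"),  # hypothetical dataset
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(f"Started fine-tuning job: {job.id}")

# Chain-of-thought reasoning behind the same commodity chat interface:
# o1-style models do the multi-step reasoning server-side.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "Why might a support ticket deflection rate drop after a docs migration?",
        }
    ],
)
print(response.choices[0].message.content)
```

Nothing here is exotic; the point is that the machinery for customization is already converging on a few commodity API calls.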
In that future world, a company that has thousands of data points centered on a particular job function will be able to provide dramatically better customization and higher quality results (which creates a familiar data flywheel) than a product that’s trying to go broad. This is a potential future where AI products have a level of specialization that reflects how people work today — so we’re probably going to end up shipping our org charts again.
Won’t simplicity win?
There are, of course, pitfalls to this approach. We covered the advantages of a general-purpose product in our previous post, but because they’re closely related to the pitfalls here, they bear repeating. More importantly, we also want to explain why we think these challenges are surmountable.
The main challenge comes from the fact that organizations will now have to integrate their knowledge sources into n different systems (one for sales, one for engineering, one for support, etc.) rather than one. Every vendor is, of course, going to have to build a million different integrations, and that’s no one’s idea of fun. While this sounds like a setup nightmare, it’s generally already how most companies work. The budget for GitHub comes from the engineering team, the budget for Zendesk comes from the support team, and so on. Each of those systems has its own set of internal connections and data syncs, and the team making the purchasing decision is the same one doing the setup.
A few mega-platforms like Salesforce sell across multiple functions, but they are the exception. Given that it will likely be individual teams looking for the AI product that best serves their job function, this setup process will be isolated to each team anyway. The alternative is one round of integration followed by a significant amount of per-team customization, which is less predictable and delays time-to-value.
That said, the holy grail for a vendor here is convincing a CIO and/or CISO that their lives will be significantly easier if all of the company’s AI workers are built on a single platform. This argument will likely revolve less around time-to-value and productivity and more around governance, visibility, and security. For the reasons we outlined above, we think focusing on delivering value is the better bet.
Conclusion
The world is changing so fast around us that it’s hard to take any kind of stance with confidence — the next release might pull the rug out from underneath you. That said, this is a bet that we feel reasonably good about for the short and medium term. The dynamics might change again in a few years as the market matures, and the advantages that picks-and-shovels tools had in the late SaaS era might very well return.
For that to happen, however, we’ll first need enterprises to see that they can deliver real value (top or bottom line) with AI, and we’ll need AI companies to show that they can achieve revenue at scale. If this plays out the way we expect, the picks-and-shovels tools most likely to succeed will be the ones that lend themselves best to customization.
Of course, we’ve been wrong before, so we’ll let you know if we change our minds. 😉