Tech startups are often described by analogy: “[successful thing] for [industry].” In the 2010s, the successful thing was usually Uber or Airbnb. Today, of course, it’s AI — for developers, for customer support, for product feedback, and for a million other things. It’s an easy way to explain what you’re doing without getting into a 30-minute lecture on the nuances of your technology, and one thing every founder learns quickly is that simplicity is better than accuracy.
What we’re starting to realize is that AI is going to break a lot of these analogies. Because of the breadth of their training data, LLMs and other foundation models excel — out of the box — at a variety of tasks that span traditional job functions. Simultaneously, they lack some of the depth and nuance that any professional accumulates over years of experience.
This might seem unintuitive at first, but you’ve certainly seen AI behave this way. Many writers have observed that LLMs are good at brainstorming ideas but lack the finesse to polish writing. Developers find that ChatGPT and Copilot are great at writing code snippets for common tasks but can’t track the context required to fully customize them to your codebase. In both cases, though, it’s the same LLM getting the basics done. This is what we mean when we say AI is breadth-first: a jack of all trades.
In practice, that means that AI for X isn’t really the right analogy. Whatever X is (software engineering, customer support, product management, etc.), LLMs will automate some of the lower-hanging fruit but likely won’t be fully autonomous.
A quick disclaimer: We believe that these analogies are still useful — we regularly use the phrase “AI for technical support” when discussing RunLLM — but we’ve realized that the analogy is incomplete and potentially misleading once you get into the nuances. More importantly, while “AI for X” is a great introduction, we believe it’s too limiting a framework for product builders.
So what are the implications of AI being breadth-first?
AI applications can’t and shouldn’t conform to traditional job boundaries. Today, LLMs can’t achieve the depth and nuance that a person can in any single job. If you’re focused on building AI for X and limit yourself to what an LLM can do for function X, you’ll be leaving opportunities on the table. There’s tons of room for innovation when applications blur the boundaries between traditional jobs by sharing context and data. You probably don’t want the same AI application working on sales and software engineering, but adjacent functions are fair game. As a simple example, RunLLM focuses on technical support to help developers unblock themselves, but customer pull has led us to start working on technical writing (suggesting documentation improvements) and product feedback (based on user questions). We aren’t autonomous in any one of these areas but can clearly add value in all of them.
Product builders must integrate many perspectives. The consequence of the above is that, as a product builder, you can’t isolate your expertise to a single area. Your customers will pull you to add features that cross functional boundaries. You’ll need to put yourself in the shoes of someone working in each of those areas and design features that meet them where they are. Your advantage is that you have data from the adjacent function to help drive those decisions. The context that’s helpful in software engineering, for example, will naturally contribute to writing effective product requirements.
You should design for experts. Because LLMs are breadth-first, the use cases you’ll be able to go after are the lowest-hanging fruit. In turn, that means the people consuming what you generate will expect the basics to be covered well and will look for the next level of insight. In the same way that a junior employee learns to report the main takeaways to their mentor or manager, your application should be prepared to surface what an expert is looking for.
Data is absolutely still a moat. As a meta-point, competition is likely to become fiercer than ever. If every AI application gets pulled into 1-2 adjacent fields, everyone ends up with 2-3x as many competitors as before. That makes stakeholders harder to engage and deals harder to win. You’ll need a theory of the case: Where is the data powering your application coming from? Why does your source of data make more sense or provide more value than someone else’s? There are some obvious sources (e.g., customer engagement, high-volume usage data) and some obvious technical directions (e.g., custom data engineering) to go after, but those will also be the most crowded spaces. The sooner you can build a data moat, the more credibly you’ll be able to make the case for expanding into adjacent functions.
All of this is to say that we should be expanding our horizons. The new paradigm will force us to constantly update our understanding of the technology and its implications for our product vision. We’ll repeat what we’ve said many times before: With AI, it’s impossible to predict what the next round of innovation will bring. We may very well have LLMs in 6 months’ time that are capable of achieving human-level nuance, but we think that’s unlikely, and the reaction to OpenAI’s Sky demo suggests it might be harder than some thought.
In the meantime, the pressure is on us to push the boundaries of what our products can do in order to establish market position.