Anyone building an AI product should be acutely aware of how the dynamics of work are changing — after all, most of what we’re doing with AI products is changing how teams work. Surprisingly, however, we’ve found that many companies that are themselves building AI products are resistant to adopting AI in other areas. If it’s not obvious, we think this is the wrong mindset.
For the record, we’re not saying that every tool is worth adopting. The market is new and there are a lot of immature and perhaps poorly designed tools. These aren’t worth your time.
When the tools are good, it’s more difficult to understand why teams might be skeptical of adopting new technology. The most generous argument might be that someone working on AI is acutely aware of the limitations of the technology, so they are skeptical that AI for X will actually work. Some of this skepticism may be warranted — and we’d put that in the “it doesn’t exist yet” bucket — but much of it seems to be misplaced. The less generous argument is that people are happy to disrupt others’ modes of work but aren’t willing to disrupt their own. Whatever the case may be, we’re not necessarily here to psychologize. Instead, we want to make the affirmative argument for why specifically teams building AI-first products should adopt other AI tools.
The obvious answer is productivity, but that’s true for every company in the world — AI or otherwise. When it comes to AI builders using AI tools, the real answer is empathy.
To be very frank, we don’t think we use AI enough right now. We’ll talk below about some cases where we’re exploring new tools, but this post is as much for ourselves and our team as it is for you all. We want to be exploring and using more AI tools, and if you have things you think we should be trying, please let us know!
Building product instincts. As we’ve been saying a lot recently, AI is very early — that means we haven’t yet figured out what the best product design motifs will be, especially in comparison with web and mobile app development, which have extremely strong defaults built on years of UX research.
What that means is there is higher variance in what you’ll see in other products (on both sides of the distribution). You will see things you like, things you dislike, and things that make you think, “That’s interesting.” In all of these cases, you will develop better instincts when it comes to making important decisions for your own products.
A simple example is one we came across recently at RunLLM. We’ve been evaluating AI-powered SDRs, many of which have made a strong effort to anthropomorphize themselves. Each one of these AI SDRs has a name and an AI-generated picture of a person, presumably with the goal of making you feel like you’re interacting with a real coworker. This isn’t something we’ve done at RunLLM thus far — we’re not sure yet whether it’s a good or bad idea, but it’s been good food for thought. Another similar feature we’ve seen is breaking an LLM’s response into multiple chat messages — again, to make the interaction feel more human. (There’s a whole other blog post to be written about whether AIs should try to emulate humans.)
Product design is ultimately about taste. In all of these cases, looking at what others have done and understanding what you like and dislike helps develop your own personal taste, which will help you make better decisions for yourself.
Understanding limitations. Building an AI product will naturally make you (frustratingly) familiar with exactly where the limitations of LLMs are — but there are only so many hours in a day, and none of us can possibly have tried every LLM use case ourselves. As a result, looking at what others are doing will help us understand where we can or can’t go with our own product development.
Something we’ve heard time and time again from friends working on agentic workflows is that LLM-based agents very quickly diverge today; the only way to build an agentic workflow is to have a very narrowly defined state space of actions that can be taken at any point. That might change as LLMs mature, but it’s also given us strong guidelines for how we’re planning to build some of our own agentic workflows at RunLLM — and thankfully, we didn’t have to learn that lesson the hard way.
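To make that lesson concrete, here’s a minimal sketch of what a narrowly defined action space can look like in practice. This is our illustration, not a description of how RunLLM (or anyone else) actually implements agents: the `call_llm` function is a placeholder for whatever model API you use, and the action names are made up. The point is simply that the model can only choose from an explicitly enumerated set of actions, and anything outside that set is rejected rather than executed.

```python
# Sketch: an agent step constrained to an explicit allowlist of actions.
# `call_llm` is a stand-in for a real model call that returns a structured
# decision; the actions and their handlers here are purely illustrative.

ALLOWED_ACTIONS = {
    "search_docs": lambda query: f"(search results for {query!r})",
    "open_ticket": lambda summary: f"(ticket created: {summary!r})",
    "answer_user": lambda text: text,
}

def call_llm(prompt: str) -> dict:
    """Placeholder for an LLM call that returns {'action': ..., 'argument': ...}."""
    return {"action": "answer_user", "argument": "Here's what I found..."}

def run_agent_step(user_message: str) -> str:
    prompt = (
        f"Choose exactly one action from {sorted(ALLOWED_ACTIONS)}.\n"
        f"User: {user_message}"
    )
    decision = call_llm(prompt)
    action = decision.get("action")
    if action not in ALLOWED_ACTIONS:
        # Keep the agent inside the defined state space instead of letting
        # an unexpected model output trigger arbitrary behavior.
        return "Sorry, I can't take that action."
    return ALLOWED_ACTIONS[action](decision.get("argument", ""))

print(run_agent_step("How do I rotate my API key?"))
```

The restrictive allowlist is the whole trick: the smaller the set of actions the model can take at each step, the less room it has to diverge.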
Thinking outside the box. As a meta-point that encompasses the previous two thoughts, we are all anchored to some extent by our expectations of what products should be like. Most product-builders in AI are already pushing the boundaries in one way or another, but again — there aren’t well-trodden paths here. The results of thinking outside the box are going to be high variance.
Seeing how other people approach the problem is great inspiration, not necessarily because you’re going to copy everyone else’s ideas but because it will encourage you to think about how you build your product in new ways. It’s often the combination of multiple other ideas that leads to your own best ideas, and seeing new (and maybe sometimes weird) things will help get the creative juices flowing.
All of this is to say that you should be going out of your way to try new things. As technologists, most of us probably grew up excited to try out new software, and AI has unleashed waves of new tech to try in every domain. That doesn’t mean, however, that you should treat these tools as gospel. To beat a dead horse, it’s early, and things are going to change. What’s new and shiny today might look old and stale after the next OpenAI announcement. The goal is to stay on your toes.
Whatever you choose to try, and however you choose to evaluate it, we firmly believe that it’ll make you better at what you’re doing. Being resistant to new technology means you’re likely to keep doing things the way they’ve been done before. We shouldn’t throw out past lessons for the sake of throwing them out, but we also have to be open to the fact that AI is going to require us to change large parts of “how things are done” — and the best way to learn is going to be by being on the frontlines yourself, both as a builder and as a user.