It’s been almost a year since we started this blog. Back then, Joey’s research group at Cal had just made its first forays into the LLM space with projects like Vicuña and Gorilla, and our team at RunLLM hadn’t yet figured out what to do with generative AI. We both had a lot of opinions, but we didn’t really know what we were going to focus on. More than anything, we figured it’d be good for us to start sharing our thinking and to see how things would evolve over time — so we started this blog and an interview series. We weren’t sure what the goal was — which is maybe reflected in the slightly punny, extremely forgettable name we chose. We didn’t end up having time to keep the interview series going, but the blog stuck.
We’ve touched on a lot of different subjects in the last year, and the AI world’s been so crazy that it’s given us plenty to write about. We were right about some things (e.g., OpenAI’s economics) and wrong about a whole bunch of others (our originally bearish stance on open LLMs most of all). Generally, we’ve written about trending topics in AI that have caught our attention and about some of the lessons we’ve learned while building RunLLM.
Writing broadly is fun (who doesn’t like tossing off takes on the latest news?), and it made it easy for us to get started. But it can also be distracting: chasing the latest headlines keeps you from going deep on specific subjects, and we’ve realized that it’s depth that helps us refine and sharpen our thinking.
Despite our relative lack of discipline, we’ve grown an audience of almost 2k subscribers (thank you!). What we’ve noticed is that the posts that got the best reception expressed clear (if controversial) points of view that came directly from our boots-on-the-ground experience with RunLLM.
At the 1-year mark, we thought it was time to shed the silly name and focus on the topics we’re most excited about.
Introducing the AI Frontier
By virtue of our jobs, we get to be pretty close to the frontiers of innovation in AI, which is an exciting (if sometimes stressful) place to be. We’re also Star Trek fans, and while AI isn’t the final frontier, it’s certainly the next big one. It might feel like the technology’s already arrived, but realistically we’re in the early innings of AI — both consumers and enterprises are barely scratching the surface with how AI can be used, even assuming the technology stays static. That means there’s a lot to learn and share from pushing the boundaries on how AI is used.
As we’ve thought about maturing the blog, we realized it’s time to bring a little more focus to how we approach it and to share more from our hands-on experience. Why narrow our focus? We’re starting to understand the Venn diagram overlap between what you all are interested in and the topics we have something unique to say about. Selfishly, that focus will help us refine our thinking about the market and make better product decisions. Perhaps more importantly, we’ll be able to give you more nuance and depth in those areas — and fewer generic takes on AI that you could find anywhere on the internet.
What will we focus on? Many of the same topics we’ve written about in the past: what we’re learning from building a product and a business in AI, how technology changes affect AI startups, and which technologies you should be paying the most attention to.
All that is to say that the AI Frontier isn’t going to be dramatically different — the goal is just to be more refined. You might have noticed that we’ve (somewhat unintentionally) been trending in this direction with our recent posts. You’ll see more of what we’re learning from our experience at RunLLM — like “How to build your first LLM evaluation” — and less in the way of general observations about AI, like “LLMs are becoming commodities” and “AI must be more than a checkbox.”
We’re also planning some new things over the next few months. Our engineering team has been thinking about which pieces of received wisdom in AI hold up and which we’ve found don’t apply quite so well, and we’ll share more of that here. We’re going to start highlighting projects and products that we’ve found particularly useful or exciting (full disclosure: we’ll probably start with RunLLM). Finally, we’re thinking about fleshing out some of the lessons we’ve learned about less-than-cool topics (e.g., data cleaning) while building RunLLM.
We’re grateful for all the positive feedback we’ve gotten over the last year, and we’re hopeful that we can turn this into something that’s even more interesting and valuable for all of you as we enter year 2. If you have thoughts or feedback, we’d love to hear from you!