Maybe AI is a regular platform shift
There will be lots of change, but the fabric of society might just remain
We’re shocked that it’s already the end of 2025, and we’re sure many of you are feeling the same way. This is our penultimate post of the year, before we look back on 2025 next week and then take a couple weeks off for the holidays. Thinking back on this year, what’s maybe most shocking to us is that nothing really shocked us in 2025. Our hot take is that maybe AI is settling into being a regular, old technology trend.
We’ll do a much more thorough review of the year next week and revisit our predictions, but looking back on 2025, some of the most noteworthy things that happened were the releases of DeepSeek R1, GPT-5, and Gemini 3 + next-gen TPUs. Beyond that, we had some incremental-but-not-crazy model releases (GPT-4.1, Claude 4.5), the continued hypergrowth of AI applications (Sierra, Cursor), and lots and lots of hand-wringing about regulatory requirements (the EU AI Act). You’re welcome – you can skip the rest of the year-in-review posts!
If you rewind 12 months and ask yourself whether any of these things are shocking revelations, it’s hard to give a resounding yes:
Given the support from the Chinese government, it’s not unreasonable that a Chinese company was able to release a state-of-the-art model. The fact that it was open-source was perhaps unexpected, and the most surprising aspect was the claim around cost – but that didn’t fundamentally change what was possible with AI.
Model releases from OpenAI and Google that were significant steps forward but didn’t fundamentally change how we’re using LLMs were definitely on everyone’s bingo card.
A set of model releases that were either disappointing or didn’t make waves are pretty much table stakes now – as we’ve discussed before, it’s almost impossible to keep track of new models at this point.
Finally, while the revenue growth from applications like Sierra and Cursor is new and unbelievably impressive, these are trends that were solidly established in 2024 and held through 2025.
All of this points to the fact that AI might be settling into being a regular old generational platform shift – like cloud, mobile, and web before it – rather than foretelling the end of society as we know it.
This is not a bear case
First, let us be very clear: This is not a bear case argument for AI. We’re not arguing that modern AI as a technology will not create significant value for enterprises and customers, and we’re not even arguing that AI companies are overvalued, despite some eye-popping numbers. What we’re saying is that the evidence points towards this trend being a (large, history-making) platform shift in technology, the likes of which we absolutely have seen before.
The most valuable companies in the world today – Amazon, Google, Apple, etc. – were created by riding one (or often many) of these platform shifts, and most of these companies are very active in AI as well. We will likely see a whole other set of companies enter that conversation over the next 5-10 years, and we might even see some of today’s giants lose their dominant position as they fail to keep up with the latest trends. (Although, given their focus on and investment in AI, this is maybe less likely than before.)
It’s not particularly controversial to say that some of those generational companies have already been formed (e.g., OpenAI, very likely) and some may still be nascent or unformed, given how early the adoption of AI applications is. We fully believe that these companies will be as valuable as (or likely more valuable than) today’s technology giants, and that we will see significant value being generated for consumers and businesses alike. Again, this is not a bear case – we’re just arguing that AI won’t be orders of magnitude bigger than previous shifts.
Middle of the road
If you look closely at this argument, it’s a middle-ground position. We’re arguing that modern AI is valuable enough to produce companies that compete with Google, Amazon, etc., but that it’s not going to change the fabric of society as we know it. There are two sides to that argument – what’s pushing us away from the bear case and what’s pushing us away from the change-everything-about-society case.
The first argument is simple: If you stopped all model-level innovation today, either because scaling laws failed to hold up in 2026 or because of government regulation, we believe there’s 10+ years of innovation left to be done with the models as they exist today. There are tons of untouched applications, whole new UI/UX paradigms that are yet to be discovered, and even just regular old adoption and integration left. We firmly believe that all of us could be using tools like ChatGPT more every day. Simply put, you don’t need another step function change in intelligence to create generational companies. We have everything we need today.
We were never AI doomers, and the trends in AI over the last 24 months have only pushed us further from that camp. In the interest of intellectual humility, you never know what technological shifts will happen, so there very well might be some crazy new thing that we haven’t yet heard of, but the current generation of LLMs doesn’t make us think that our kidneys are going to be harvested by robots anytime soon. The rate of change has very clearly slowed down, and until we find a new architecture, we’re more likely than not to see incremental improvement.
What good looks like
Implicit in this whole argument is the idea that we can differentiate what makes a model better. That’s increasingly not the case. As the diversity of AI applications increases, we are already seeing that different models are better suited for different preferences – whether that’s doing a certain task or embodying a certain personality (or both). That trend will only strengthen as we see the requirements for consumer and enterprise applications diverge.
Consider the GPT-5 release from this past summer. OpenAI obviously released it because it performed well in their internal benchmarks and on LMArena testing, but there was a massive outcry about the deprecation of GPT-4o (and o3, if you’re us!). What caused this discrepancy? It’s hard to say, but in a couple of words, it boils down to human variability. We all have different preferences, and it’s hard to predict how things like personality will impact those preferences.
While the calculus is different for enterprise applications, the same trend holds true. What makes Claude Code so much more widely used than OpenAI’s Codex? It’s some combination of training focus, a virtuous cycle from increased usage, and user preference for the planning style that Claude has prioritized. In a similar vein, we’ve discussed before the idea that increasingly abstracted reasoning models like GPT-5 are difficult to adopt for us at RunLLM because of the variability in their behavior and token consumption.
This was true in the last generation of technology too: AWS became the ease-of-use + startup cloud, Azure was the enterprise cloud, and Google was the pre-LLM AI cloud. It’s also worth saying that this dynamic is not all that different from how humans operate. Different people are naturally better at or more excited about different things, and we see the best results when people lean into what they like. If you’re biasing LLMs towards human preferences with RL, why wouldn’t you see similar patterns emerge?
More of the same?
As we remind ourselves regularly, predictions are dangerous. Will we see some totally new model in January that makes this blog post look like utter nonsense? Possibly! Will AI actually lead to mass unemployment and the demise of society as we know it? It’s hard to say that the answer is unequivocally no. The chance of any of these things happening – especially given the sudden change in technological capabilities in 2022 – is non-zero.
That said, the trends point in a very clear direction: AI is undeniably incredibly useful and will generate tons of value, but we aren’t rapidly accelerating towards the singularity. Philosophy aside, what’s much more interesting to us is how the application of the technology changes the way we operate on a daily basis. What stays in scope for people, and what is fully put in the purview of AI?
We’re starting to explore what this looks like for building, deploying, and operating software in the cloud by interviewing leaders in the space. After the holidays, we’ll share a new series with these interviews, and we’re excited to see what we learn. For now, we’re looking forward to wrapping up 2025!