Untangling concerns about consolidation in AI
As you’ve probably seen, Microsoft hired (or acquired) much of the Inflection AI team this week. This news rekindled the debate around the biggest players like Microsoft, OpenAI, and Google gaining outsized power in AI. This is a fraught topic: There are valid concerns around concentration of power, the consequences for transparency, and a potential stifling of innovation. There is also ample fear-mongering. While some of the frameworks from software markets apply here, there are also a number of important differences with AI, which makes things all the more complicated.
As usual, our perspective here is on the technology and the business behind AI — we’re not policy experts and aren’t making policy recommendations. Unlike our usual posts, we don’t have a clear point of view in one direction or the other. As a result, you might find that there’s tension between some of the arguments below — that’s natural, as there are forces pulling in opposing directions. Nevertheless, we think it’s both timely and worth your time to begin disentangling the arguments here and to share reasons for both optimism and concern.
LLMs are core infrastructure
It’s exceedingly obvious now that LLMs are the type of infrastructure that very few companies in the world are going to operate in-house. Just like large-scale cloud storage or compute systems, the major providers’ primitives will be the default for most users. A small number of massive companies will still dabble in owning everything from the ground up. In between, there will be a set of smaller players (think DigitalOcean or Linode) and open-source tools (e.g., Minio). Starting in the late 2000s, AWS EC2 and S3 meant startups could build scalable products without hundreds of thousands of dollars in up-front server costs — and the cloud providers didn’t stop there.
From this perspective, large LLM providers who are improving models quickly are a huge benefit to the broader ecosystem, especially to non-technology companies and startups. On top of those core models, there will likely be a much larger ecosystem of applications that builds on them.
Whether LLMs are open or closed, it’s not clear that it’s economical for most companies to run their own. As we’ve seen with existing cloud software, the vendors behind open-source projects (e.g., Databricks + Spark, Confluent + Kafka) are able to simplify operations so significantly that their managed cloud offerings become the default for users.
Acquisitions are the lifeblood of the startup ecosystem
Disclosure: We are startup founders.
Generally, acquisitions enable the startup ecosystem to thrive. Most companies fail, and most of the successful ones will never see the public markets. Acquisitions allow startup teams to be rewarded for their time & effort, while giving larger companies an injection of innovation and diverse thinking. This type of small-scale consolidation is necessary and will remain an ongoing part of the ecosystem.
All that said, only time will tell whether this acquisition was a net positive or not. We may never know the details of the Inflection acquisition (though we’re hoping there’s a Barbarians at the Gate-style tell-all coming in 20 years!). There may be very good strategic or business reasons for an acquisition like this to have happened, or it may turn out to be a blip on a frenzied radar.
Not all acquisitions are successful: Think about how big of a deal Google’s acquisition of Motorola was in 2012 — only for them to sell Motorola for a quarter of the price they paid two years later. At the same time, not all acquisitions are harmless. In the same year, Facebook’s acquisition of Instagram seemed innocuous; today, many believe the acquisition should be undone because it accumulated too much market concentration in one company. It’s too early for there to be truly market-dominant players in AI, but it’s worth keeping an eye on concentration of market share, and more importantly…
Highly concentrated talent is bad for the ecosystem
Perhaps one of the most interesting bits of news from Inflection is that much (or most?) of the Inflection team is joining Microsoft, even though Inflection itself will remain standalone. The increasing concentration of talent at OpenAI, Microsoft, Meta, and Google is a genuine cause for concern. We’re all learning fast in AI, but the core skillsets around data preparation, model training, alignment, and large-scale inference are still exceedingly rare — to an extent that simply isn’t true in cloud infrastructure. In private conversations, we hear that getting the best results requires the time and resources to throw every optimization in the book at your models. Most often, this has been a lament from those without the resources or teams to do so.
Not only does this concentration of talent in a few major players reduce competition, it also reduces the cross-pollination of ideas. Engineers moving between organizations will naturally bring best practices and clever ideas with them. The more options there are, the more competition there will be for talent, and the more likely people are to move around and spread those ideas. This year, it’s felt like the viable set of options has gotten smaller, not bigger.
Nevertheless, it’s not all doom and gloom. If the AI hype cycle follows previous trends, we’ll likely see some of the best folks trickle out of the major organizations in the next few years with invaluable experience and good business ideas. This group could turn into the next wave of founders attacking the incumbents, but the complexity of the models and infrastructure could prove difficult to recreate.
It’s not clear how much consolidation affects openness
There’s been an ongoing Twitter debate for a few weeks now about the merits and risks around pushing for more openness in AI. Putting aside the discussions around whether AI poses an existential risk to humanity, we’re not sure yet whether this week’s news affects openness. At an extreme — if Microsoft controlled all the AI labs in the world, for example — it’s very likely that there would be little incentive for a single organization to share any insights with the rest of the world.
That said, it’s unlikely that Microsoft is going to buy Google or Meta anytime soon, which means that competition will foster a diverse set of approaches. Google and Microsoft have both made token gestures towards openness — Google with Gemma, and Microsoft with its investment in Mistral — but both have been overshadowed by much larger efforts on closed models. Meanwhile, Meta continues to push the state of the art on open-access models. Medium-sized players like Mistral have started to muddle their approach to openness, and it’s not clear whether they’ll remain fully open or not. From our perspective, all of this is likely good; without a horse in the race, the rest of us should want to see a diversity of open, closed, and mixed approaches to see which ones yield the most flexibility, the best progress, and the most control.
Consolidation brings government scrutiny
It’s safe to say that AI has not escaped government notice. Last fall’s executive order and more recent international laws (and reversals) have left many feeling like governments are interfering out of fear rather than out of purpose.
Whether or not you agree with that take, larger players consolidating talent and power will undoubtedly bring more government scrutiny onto the space. If that scrutiny leads to stricter regulation, the downstream consequences could very well stifle innovation at the smaller end of the spectrum: Larger barriers to entry like reporting and compliance increase costs and reduce efficiency, which in turn reduces the disruptive advantage startups have. It’s possible these regulations are necessary, but it’s also possible they simultaneously entrench the advantage the biggest players have.
Governments and larger companies will have to strike a balance — continuing to promote innovation in the ecosystem around them without pushing either side into a losing position. The game theory here is likely fascinating, but it’s well above our pay grade.
Reaching a singular future
No armchair philosophizing about the future of AI would be complete without a take on AGI. We have no idea if or when we’ll reach AGI. We also have no idea whether it’s possible, or whether most of us even know what AGI means. So… that’s it.
We kid; independent of whether we reach this hypothetical future, we might reach a point (maybe a $10B model, but likely further out) where one approach or one model is simply too advanced for anyone else to recreate — especially if it is self-improving. All bets are off in this case. This kind of change alters the laws of physics for AI, and it’s impossible to prognosticate about. As of now, it doesn’t look like any model provider is building an insurmountable advantage, but this is the black swan to watch for.
Consolidation of companies and its effect on the future of AI is thorny. There are good arguments for letting the market function as-is, and there are good arguments for actively fostering more competition. We don’t have all the answers, but none of these challenges are going away. The OpenAIs, Microsofts, and Googles of the world are going to continue to innovate, and as a community, we’ll have to grapple with the extent to which we support or push back against the concentration of market share and power. We think there will continue to be room for both startups and big companies to thrive, but we’ll have to work our way there.