The SaaS Extinction Test
Why your vibe-coded weekend project won't kill Salesforce
Projections about the demise of the SaaS industry have reached a fever pitch over the last few weeks. The increasingly common belief is that there's no moat in software anymore. Consequently, the stock prices of massive, entrenched incumbents have taken a significant beating.
The bear case for software-as-a-service goes something like this: As coding agents improve, the build-versus-buy calculus shifts permanently. An engineer at any company can now pick up a coding agent, build a prototype that meets the company’s specific needs, iterate on feedback, and deploy a bespoke tool that provides more value than a generic SaaS platform at a fraction of the cost.
The source of this concern is our collective recent experience with the improving quality of coding agents. We have all had Claude Code or Cursor scaffold an entire application from scratch for a few dollars in tokens. When you see something genuinely useful built in minutes, it is easy to assume the multi-billion-dollar incumbents are doomed. And if an individual can build a prototype quickly, surely a startup can build a strong offering with a few months and a few engineers?
As with most hype cycles, however, the truth lands in the middle. While there are massive opportunities for disruption, the SaaS business model is not going to evaporate overnight. In our view, the distinction comes down to a quick survival checklist. If you meet one of these criteria, you are likely safe. If you miss on all of them, you should be worried:
Are you a system of record?
Do you do more than help humans automate a single workflow?
Are you mission-critical?
The Physics of Data Gravity
Data moats have long been the holy grail of enterprise software. Consumer products like Google or Instagram win on user behavior data, and enterprise titans like Snowflake or Datadog are powerful because they are systems of record. These companies do not just power workflows; they house years of historical data that companies have imported and structured.
Despite all the massive technological shifts in the last few years, the physics of data have not changed. Moving massive amounts of data is expensive, risky, and slow. This means that, as has been the case for fifteen years, the major cloud providers will continue to make their margins on networking and egress. The cost of moving data in and out of a third-party service — or your own cloud — remains incredibly high.
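To make the egress point concrete, here is a back-of-the-envelope sketch. The ~$0.09/GB figure is an assumption for illustration, roughly in line with published internet egress rates at major clouds; actual pricing varies by provider, region, and volume tier:

```python
# Rough egress cost estimate. RATE_PER_GB is an illustrative assumption
# (~$0.09/GB is a commonly quoted internet egress rate); real rates vary
# by provider, region, and committed-volume discounts.
RATE_PER_GB = 0.09
PETABYTE_IN_GB = 1_000_000  # 1 PB, decimal

cost = round(PETABYTE_IN_GB * RATE_PER_GB, 2)
print(f"${cost:,.0f}")  # prints $90,000 -- to move one petabyte out once
```

A single migration of a modest data warehouse can run well into five or six figures in transfer fees alone, before accounting for engineering time and downtime risk.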
Beyond the operational cost, there is the issue of operational stickiness. If a product is collecting data for a core operational purpose, it becomes a load-bearing wall in the company’s architecture. You do not pay Datadog or Snowflake eight figures a year because storing logs is a nice-to-have feature. You pay them because when a mission-critical issue happens at 3:00 AM, you need a proven way to investigate, pinpoint, and fix the issue.
But realistically, your business will continue to exist if Datadog goes down for a little while. Things get much more difficult when we’re talking about truly mission-critical software.
The Mission-Critical Tautology
Even when companies don’t have the most interesting data, they’re likely safe if they are critical to the operations of their customers – the kinds of products that your business literally wouldn’t exist without. The obvious examples are Workday and Salesforce.
There is a certain recursive logic here: These companies are safe because they are big, and they are big because they are safe. No matter how shiny a rapidly scaffolded payroll system looks, a VP of HR is not going to risk payroll failing on a Friday morning. A CRO does not care how many new automations a startup offers if it means moving away from a Salesforce instance they have spent a decade customizing to their exact sales and revops motions.
When enterprises make these decisions, they are not just buying software – they are also offloading risk. A tech-forward company like Google or Meta certainly has the technical talent to build an internal payroll system. The question is not whether they can build it, but whether they want to own the risk of running it. Once they do the calculus, dedicating hundreds of engineers to a non-core area of expertise rarely makes sense.
By contrast, non-mission-critical software is in the danger zone. Analytics is the prime example. Traditionally, writing dashboard code and plumbing database queries required enough dedicated engineering effort that the work could never be democratized.
Just last week at RunLLM, we built an internal dashboarding system following a conversation about product metrics. We used Python and Plotly to connect to internal systems and pull customer data via our CRM’s API. If this dashboard goes down for a day, it is annoying, but it does not halt our operations. More importantly, because the cost of building it with a coding agent was so low, and the upside of allowing our VP of Sales to modify it himself was so high, the option to buy a traditional BI tool never even entered the conversation.
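As a rough illustration of how small that build is, here is a minimal sketch of the pattern. The record fields and function names are hypothetical stand-ins for whatever a real CRM API returns, not our actual schema; the Plotly import is deferred so the aggregation logic stands alone:

```python
from collections import defaultdict
from datetime import date

# Hypothetical CRM records -- in a real dashboard these would be fetched
# from the CRM's REST API; the field names here are illustrative.
DEALS = [
    {"closed": date(2024, 1, 15), "amount": 12000},
    {"closed": date(2024, 1, 28), "amount": 8000},
    {"closed": date(2024, 2, 3), "amount": 20000},
]

def monthly_revenue(deals):
    """Aggregate closed-deal amounts by month for the dashboard."""
    totals = defaultdict(int)
    for deal in deals:
        totals[deal["closed"].strftime("%Y-%m")] += deal["amount"]
    return dict(sorted(totals.items()))

def render_chart(totals):
    """Render the aggregate as a Plotly bar chart. The import is deferred
    so the aggregation above can run without Plotly installed."""
    import plotly.express as px
    return px.bar(x=list(totals), y=list(totals.values()),
                  labels={"x": "month", "y": "revenue"})

print(monthly_revenue(DEALS))  # {'2024-01': 20000, '2024-02': 20000}
```

The entire surface area is one aggregation function and one chart call, which is exactly why a coding agent can iterate on it in minutes and a non-engineer can safely tweak it.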
The Workflow Moat Narrows
Pure workflow software is where we’re currently the most bearish. By workflow software, we mean anything that’s focused on connecting the dots between existing tools that either have data gravity or mission-criticality. The poster child on X for this is PagerDuty. At its core, PagerDuty processes data stored in another product’s telemetry store, determines if a threshold was met, and alerts an on-call engineer. This is connecting the dots between Datadog and Slack. While PagerDuty does much more in practice, that primary workflow is what most customers buy.
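That core loop (read telemetry, check a rule, notify a human) fits in a few lines of code. A minimal sketch, where the function name and parameters are illustrative and not PagerDuty's actual API or logic:

```python
def should_page(metric_values, threshold, consecutive=3):
    """Decide whether to page the on-call engineer.

    Pages only when the metric breaches the threshold for `consecutive`
    readings in a row, to avoid alerting on a single noisy spike. This is
    the 'connect the dots' workflow in miniature: ingest telemetry from one
    system, apply a rule, and hand off to a human via another system.
    """
    streak = 0
    for value in metric_values:
        streak = streak + 1 if value > threshold else 0
        if streak >= consecutive:
            return True
    return False

# Three consecutive breaches of an 80% error-rate threshold -> page.
print(should_page([0.1, 0.9, 0.95, 0.99], threshold=0.8))  # True
# Breaches that never persist -> stay quiet.
print(should_page([0.9, 0.1, 0.9, 0.1], threshold=0.8))    # False
```

Real incident-management products layer on escalation policies, schedules, and deduplication, but the point stands: the essential workflow is small enough that an agent can regenerate and customize it per team.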
These are exactly the kinds of integration workflows that agents are replacing today. The threat is not just that agents will automate incident management or sales outbound; it is that new players or internal teams can build superior user experiences that blend human and agent capabilities from the ground up. Most interestingly, these solutions can (and should) be easily customized to match each team's workflow. Agent-first systems will be dramatically more valuable because they save human hours, easily displacing legacy systems that are human-first and currently scrambling to bolt on AI features as an afterthought.
The Ops Gap
The undiscussed concern underlying all "build it with Claude Code" software is operations. Operating software reliably is, unfortunately, still incredibly complex, and the effort often dwarfs the time it takes to build something useful. The gap between building and operating is actually growing: as writing code becomes a commodity, the relative cost of infrastructure and security grows.
The internal dashboard we built recently is a perfect example of this. It took about an hour to iterate on the core dashboards themselves with Claude Code. It then took about four hours to deploy it in Google Cloud so that it was properly authenticated behind our company’s SSO, set up to auto-update with new commits, and connected with the right credentials.
This gap will likely persist, for a simple reason: Python looks the same no matter where you work, but infrastructure is dramatically different at every company. There are not yet great ways to deploy cloud software without getting into the weeds of Kubernetes clusters, networking permissions, and security and compliance requirements. None of those things matter when you are prototyping locally, but they are the only things that matter when you are operating production software at scale.
Wrapping Up
The rumors of the demise of modern software are greatly exaggerated, but the nature of what makes software valuable is quickly changing. For the last decade, SaaS companies could survive by being a slightly better UI for a human workflow. That’s no longer defensible.
The defensibility remains exactly where it has always been: at the intersection of data and risk. If you own the system of record, the physics of networking and the high cost of data egress will protect you. If you own a mission-critical operational process, the corporate aversion to risk will protect you.
The real danger is for the middle layer of the stack — the products that have built businesses around the friction of human work. As the cost of generating code and automating workflows drops toward zero, the value of that software must move elsewhere.
This does not mean incumbents are invincible. It just means that the disruption will not come from a simple internal tool or a weekend project built with a coding agent. To win in this new environment, startups must figure out how to best manage the data, reduce the operational risk, and close the gap between a prototype and production-grade reliability.