My colleague Frank recently posted a provocative idea in our internal Slack channel: “AI killed the SaaS model” (a play on the title of the 1979 song “Video Killed the Radio Star”, in case you weren’t born yet). The context was partly tongue-in-cheek, but it captured something real that many of us are sensing. There’s a growing feeling that AI tools are fundamentally shifting the build versus buy equation that has governed technology decisions for decades.
The hypothesis is that if AI makes building software significantly faster and cheaper, shouldn’t more companies build internally rather than paying ongoing SaaS subscription fees? It’s a compelling argument. More teams are prototyping internally because AI makes it faster. The question is whether this feeling of shift is actually bearing out economically – are SaaS subscriptions going down because more in-house building is happening, or is this still just speculation?
After working through vendor selection processes with dozens of companies over the years, I can tell you the answer is more nuanced than some of the pundits suggest. AI has changed something important, but it’s not what most CEOs think. And understanding the difference could save your organization from an expensive mistake.
The traditional framework still matters
Before we talk about how AI changes things, we need to be clear about the fundamentals that haven’t changed. The build versus buy decision has always centered on a core trade-off that remains relevant today.
When you buy a SaaS solution, you get predictable operating expenses. You pay ongoing subscription fees, and the vendor handles updates, security patches, and feature evolution. The 80/20 rule typically applies: does the vendor solution meet 80% of your requirements? If yes, you can often figure out workarounds for the remaining 20%, or negotiate some customization to extend the platform.
When you build internally, you get complete control over prioritization. A new requirement can go into the next sprint if it’s important enough. You can build exactly to your requirements without compromising. But that control comes with burdens. You need permanent staffing to maintain what you build, and you own all the operational responsibility.
Traditionally, organizations picked a path early and marched down it. You made the decision to buy something, and vendors competed in “bake-off” demos and pilots. Or you made the decision to build it internally, gave your team requirements and a timeline, and they delivered. What was rare was doing a side-by-side comparison – actually prototyping something internally while simultaneously evaluating vendor options to see if the internally built thing was good enough for what you needed.
The factors beyond initial cost that people overlook
In my experience working through these decisions with clients, there are several critical factors that often get missed in the initial analysis.
Exit strategy is frequently overlooked. If you want to switch vendors in five years, how easy is it to export your data out of that service? Can you actually move to another provider or build internally later? AWS is probably a good example: the ecosystem is very rich, and a lot of people are able to build applications quickly on top of that platform. But the more you use AWS-specific services, the harder it is to move away from it. The same thing happens with databases. If you’re moving off SQL Server and you have a lot of Microsoft T-SQL embedded in your logic, it’s much harder to migrate to something like PostgreSQL. Vendors offer tools to help with that migration, but it’s still a lot of work.
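One way to make the T-SQL lock-in point concrete: you can mechanically scan your query layer for dialect-specific constructs before committing to a migration estimate. This is a minimal sketch, assuming a small, illustrative (and far from exhaustive) pattern list:

```python
import re

# Hypothetical helper: flag T-SQL-specific constructs that would need
# rewriting before a move to PostgreSQL. The pattern list below is
# illustrative only, not a complete dialect inventory.
TSQL_PATTERNS = {
    "TOP n": r"\bSELECT\s+TOP\b",
    "GETDATE()": r"\bGETDATE\s*\(",
    "NOLOCK hint": r"\bWITH\s*\(\s*NOLOCK\s*\)",
    "ISNULL()": r"\bISNULL\s*\(",
}

def portability_flags(sql):
    """Return the names of T-SQL-specific constructs found in a query."""
    return [name for name, pattern in TSQL_PATTERNS.items()
            if re.search(pattern, sql, re.IGNORECASE)]

query = "SELECT TOP 10 id, ISNULL(name, 'n/a') FROM customers WITH (NOLOCK)"
print(portability_flags(query))  # → ['TOP n', 'NOLOCK hint', 'ISNULL()']
```

Running a scan like this over a codebase won’t do the migration for you, but it turns “how locked in are we?” from a gut feeling into a count you can put in the exit-strategy analysis.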
Vendor viability matters more than people think. You might pick a company that’s very young, maybe a startup. They’ve got a ton of great capabilities, but they don’t really have a long history. Not to say that a company that’s been around for 50 years is guaranteed to be around next year, but there’s a little more assurance.
Vendor reputation is another consideration. You’re typically looking to see either through reviews or other feedback – maybe it’s a market analysis from Gartner or some other source – trying to get an understanding of how that vendor is doing from a support perspective.
These factors matter just as much today as they did before AI entered the picture.
What AI is actually changing about builds
So where does AI actually shift the equation? The most tangible change is in the prototyping phase.
When teams run these build-versus-buy evaluations now, they’re far more apt to prototype and to include that prototype in the bake-off. Where before you marched down one path – you’d decided to buy something, or you’d decided to build it internally – now you can do a genuine side-by-side comparison.
Prototyping now requires fewer resources and less upfront capital, which opens that capability up to smaller companies. Organizations that previously couldn’t spare developers for proof-of-concept work can now realistically explore internal builds alongside vendor options.
But this isn’t just about AI. The initial capital isn’t just developer hours. The hosting cost and the use of public cloud infrastructure really changes a lot of the cost there. In this day and age, at least with the clients we’ve worked with, most people don’t consider having anything hosted internally or in a co-lo facility. Almost everybody is looking at using some sort of public cloud service, whether it’s AWS or Azure.
So we’re seeing a combined effect: AI-accelerated development plus cloud infrastructure equals significantly lower barriers to building. Having time to do a prototype internally now becomes a lot more attractive because you’re able to build something that is a little bit more feature-rich than you might have been able to build before.
But the question remains: does this actually change the fundamental economics, or just the timeline for initial prototypes?
The maintenance burden didn’t disappear
Here’s where the math gets more complicated than most CEOs realize. AI changes one variable in the equation – initial development time and cost. But it doesn’t eliminate several others that often dominate the long-term total cost of ownership.
The 30% rule still applies. As I’ve written about before, you need to dedicate somewhere between 20-30% of your development capacity to maintenance and operational activities. This means keeping your applications current, managing dependencies, handling security updates. AWS is going to stop supporting Python version 3.8 after November this year. If you have applications built on that version of Python, you need to update them before AWS not only ends support but eventually blocks you from running it.
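This kind of runtime-deprecation tracking is exactly the sort of invisible maintenance work that can be automated in CI. Here’s a minimal sketch of a fail-fast check; the end-of-life dates in the table are assumptions drawn from the published CPython release schedule, so verify them for the versions you actually run:

```python
import sys
import datetime

# Assumed end-of-life dates per Python minor version (verify against the
# official CPython release schedule before relying on these).
EOL = {
    (3, 8): datetime.date(2024, 10, 7),
    (3, 9): datetime.date(2025, 10, 31),
}

def runtime_is_supported(version=sys.version_info[:2], today=None):
    """Return False if the given interpreter version is past its EOL date."""
    today = today or datetime.date.today()
    eol = EOL.get(tuple(version))
    return eol is None or today < eol

# Example: a build running Python 3.8 after its EOL date should be flagged.
print(runtime_is_supported((3, 8), datetime.date(2025, 1, 1)))  # → False
```

Dropping a check like this into a CI pipeline turns “we forgot we were on 3.8” into a loud build failure well before the cloud provider forces the issue.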
This kind of maintenance work doesn’t add any visible features, but it’s absolutely critical for keeping your systems running. The same thing happens with commercial software and open source libraries – they essentially have a shelf life. A vendor might say they’re going to stop making changes to anything older than version two. If you have an issue, they’re not going to give you any help. If somebody finds a security vulnerability in that version, they’re not going to fix it.
AI hasn’t changed any of this. You still need permanent engineers to maintain what you build. Whether you outsource or build it internally, you still own supporting it. That ownership burden is organizational, not just technical.
The code assist dependency you’re not thinking about
There’s a new risk in this equation that didn’t exist in the traditional build versus buy framework, and it’s one we’re still figuring out how to evaluate properly.
We’ve all seen the headline “Google CEO Sundar Pichai: AI systems are now responsible for generating over 25% of all new code for Google’s products.” We see people who are cautious but don’t want to get left behind, and those who are all-in. There’s the “this is just auto-complete” crowd and then there’s a whole “vibe-code everything” crowd. There’s definitely a shift in how people are thinking about coding in general and what it means to generate code versus be an engineer.
But the ownership problem isn’t just an engineering issue, it’s an organizational issue. Imagine your team deploys something into production, and let’s say a quarter of it was written using a code assist. You’ve done some review of it, but you didn’t read every line of code that it generated. You deploy it, it’s passed validation. But some bugs came out. Now you have to research where these bugs are coming from.
If you wrote the code, you kind of know where it is and what the root causes could be. But if you were using a code assist to do your development, how much code assist would you use to help you troubleshoot? “Hey, I ran into this bug. Help me troubleshoot this thing.”
This increasing reliance has been called “automation complacency”. At first, we tend to be critical of new tools and look carefully for faults, but over time, as we see more evidence of their capability, our trust increases and we become less vigilant. Eventually, we become over-reliant on the new tool. In other words, we move from a state of trust-but-verify-everything to being overly trusting.
So there’s a question of how much code assist you want to use for production systems or how much reliance you want to have on it. That might be an older mentality – obviously companies like Google are moving in this direction aggressively – but it’s worth considering whether we’re trading one form of dependency (vendor lock-in) for another (code assist dependency).
Product management discipline still determines success
Regardless of how fast you can prototype with AI, if you don’t have strong product management around the thing you’re building, you’re going to run into issues. We’ve seen organizations fail at internal builds because they don’t really have a good sense of product management. They consider their technology internally as part of a service that they’re providing, and they don’t really have that product mentality.
It’s easy if you’re a product company – if you’re Microsoft or Google, you’re building a product for a consumer or a business, and that’s easy to get your head around. But if you’re providing a service like royalty administration, or you run websites to communicate about your work (like the museum work we’ve done for a large non-profit that maintains a bunch of different websites to communicate what’s going on), you probably don’t consider those to be products. They’re just vehicles for communicating.
When this happens, a couple of things go wrong. The decision-making process is usually poor. Teams often lack a strong definition of what counts as a minimum viable product, and they struggle to get something shipped on time. Setting requirements and product definition is probably the biggest challenge in most organizations.
AI accelerates your ability to build things. But if you have weak product management, AI just helps you create technical debt faster.
The prototype-to-production gap is dangerous
The new speed-to-prototype leads to a dangerous temptation: to deploy AI-generated prototypes directly to production. The logic goes: “It works in the demo, we need it now, and it meets enough of the functional requirements – let’s just deploy it.”
This is where the non-functional requirements gap becomes critical. Prototypes are proofs of concept. They show you that something can be done, but that doesn’t mean they’re ready for showtime.
From an operational or maintainability perspective, a prototype might not be easy to update. From a logic perspective, the code may not be flexible enough to meet all of the current requirements, let alone future ones. You may have hardcoded things that should have been configurable so you could add options later.
This is a big risk. AI makes it deceptively easy to build things that look production-ready but aren’t. Performance, security, maintainability, scalability – these still require deliberate engineering that can’t be shortcut.
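The hardcoding problem is easiest to see in code. Below is a hedged, illustrative sketch (the function names, fields, and defaults are invented for this example, not drawn from any specific system) of the same feature written prototype-style versus production-leaning style:

```python
from dataclasses import dataclass

# Prototype style: the export format and page size are baked into the code.
# A new requirement ("we need JSON now") means editing and redeploying logic.
def export_report_prototype(rows):
    return {"format": "csv", "page_size": 100, "rows": rows[:100]}

# Production-leaning style: the same decisions are surfaced as configuration,
# so a new requirement becomes a parameter change rather than a rewrite.
@dataclass
class ExportConfig:
    fmt: str = "csv"
    page_size: int = 100

def export_report(rows, config=None):
    config = config or ExportConfig()
    return {
        "format": config.fmt,
        "page_size": config.page_size,
        "rows": rows[:config.page_size],
    }

print(export_report(list(range(5)), ExportConfig(fmt="json", page_size=3)))
# → {'format': 'json', 'page_size': 3, 'rows': [0, 1, 2]}
```

Neither version is more “correct” for a demo, which is exactly the trap: the prototype looks identical in the bake-off, and the inflexibility only surfaces when the first new requirement arrives.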
How AI changes the vendor side of the equation
While we’re talking about how AI affects internal builds, it’s worth noting that AI is also changing the buying side of the equation in ways that complicate the decision.
The new evaluation criteria
When you’re evaluating buy-side vendors, you now have to really look at the underlying AI technology and the language models they’re using. Every vendor wants to slap AI on their homepage.
One of our clients has been very cautious about enabling AI features, even on products that they already use. They want to be sure that the data that they are enabling AI for stays within a private instance or private setting and can’t be used in public models or for public training.
On the one hand, having AI integrated seems like a good feature to have – it enables some capability that you might not have had before. But from an adoption perspective, that’s one additional thing to add to the checklist when you’re evaluating potential vendors.
Data privacy concerns intensify
One of the other things that comes up in these discussions is data locality. When you use a SaaS service, you’re moving data out of your local environment to an external party. Security, data confidentiality, or privacy has always been a big question around using SaaS products. That could be another factor in considering building, especially if the cost of building is much smaller.
You’re weighing the cost of a subscription model versus the cost of doing maintenance internally. If building becomes significantly cheaper with AI, the data sovereignty consideration carries more weight in the decision.
The extensibility question that determines long-term value
Something that often gets lost in the initial analysis but becomes critical over time is how extensible is the solution. Can it be adapted to meet your future requirements?
If you’re building it, you have much more control over how things get prioritized. You can extend capabilities as needs arise – something might not be part of the core functionality today, but it’s relatively easy to add.
With a SaaS solution, you would basically have to submit that request to the product team and have it become part of their roadmap. You’re working within the constraints of that relationship. If you’re a large enough company and you’re spending enough money, they might prioritize it very quickly. But if you’re a smaller company, it might be something that gets added to the roadmap, or it might not – or it gets added and then bumped.
For future changes, the lifecycle is very different. This is part of why you want to make sure maintenance is part of overall product management – it’s not something that you can create and then just have it up and running without having to do some care and feeding on it. You want to make sure it’s maturing in the right way and that you’re able to enhance capabilities as the needs arise.
What we still don’t know
I want to be honest about something: we’re in the middle of this shift, and there’s a lot we don’t actually know yet.
A lot of people are sensing that there is a big shift. The feeling is real. Organizations are more interested in internal prototyping, they’re more willing to include it in vendor evaluations, they’re seeing the value of being able to build feature-rich prototypes faster. Is that interest translating into actual changes in buying behavior at scale? Are people seeing actual SaaS usage going down because there’s more in-house build happening? I don’t have data on that. It’s a question worth watching, but right now, the economics are unclear.
This uncertainty should inform how you approach the decision. Don’t make strategic choices based on theoretical shifts in the market. Make them based on your organization’s actual capabilities and constraints.
How to think through this decision more carefully
If you’re facing a build versus buy decision right now, here’s the framework I’d recommend:
Five questions that actually matter
Can you maintain this? Do you have permanent staffing for ongoing maintenance, or are you fooling yourself because the prototype was fast? Are you ignoring non-functional requirements and long-term extensibility needs? Remember that maintenance isn’t just fixing bugs – it’s keeping versions current, managing dependencies, handling security updates. That’s 20-30% of your capacity, whether AI helped you build it or not.
Do you have product management discipline? Can you define minimum viable product, prioritize effectively, and ship on time? Or do you view this as infrastructure or a service rather than a product? If you lack product management maturity, faster prototyping doesn’t help, it just accelerates your path to technical debt.
What’s your exit strategy? If you buy, can you export data and switch vendors in five years? Is vendor lock-in acceptable for this capability? If you build, can you sustain it long-term, or will you be trapped maintaining something that becomes obsolete?
Is this differentiating? Does this capability drive competitive advantage, or is it commodity functionality where vendor solutions work fine? The extensibility question matters more for differentiated capabilities where your requirements will evolve in unique directions.
What’s the five-year cost? Does subscription cost over time exceed build plus maintenance, accounting for the new risks we’ve discussed – code assist dependency, potential technical debt from rapid prototyping, the ongoing staffing burden that AI didn’t eliminate?
The side-by-side evaluation approach
AI enables a viable option that wasn’t realistic before: prototyping internally while evaluating vendors in parallel. This gives you a real comparison point. Is the internally built thing good enough for what you need, compared to the vendor offerings?
But maintain discipline. Use the prototype to inform the decision, not as the production solution. Don’t deploy it just because it works and you need it now. If you decide to build, plan for proper production hardening that addresses non-functional requirements.
The reality behind the hype
It’s true that barriers to prototyping have dropped. More teams can realistically include internal prototypes in evaluation. Infrastructure costs dropped (though that predated AI). Initial build time is reduced as AI code assist accelerates development, and teams feel the difference.
But the maintenance burden hasn’t changed. You still need significant capacity for keeping systems current. Product management requirements didn’t disappear – you still need strong MVP definition and prioritization discipline. Non-functional requirements still need deliberate engineering for security, performance, scalability. Extensibility trade-offs remain: you’re still choosing between vendor roadmap constraints and maintenance staffing burden.
The decision-making process has become more complex. You now have code assist dependency alongside vendor lock-in concerns. Data privacy considerations carry more weight when vendors integrate AI. The prototype temptation is stronger – it’s easier to build things that look production-ready but aren’t. And fast prototyping without product discipline creates technical debt at an accelerated pace.
Frank’s joke about AI killing the SaaS model captures something real that many of us are sensing. But whether the economics actually support this shift at scale is still an open question. What we know for certain is that the decision framework got more nuanced, not simpler.
Don’t let faster prototyping seduce you into builds your organization can’t maintain. The math did change, but maintenance burden, product management requirements, and non-functional requirements engineering didn’t disappear. Make the decision based on your organization’s actual capabilities, not on the theoretical promise of AI-accelerated development.
Because the most expensive mistake isn’t choosing build over buy, or buy over build. It’s making the choice for the wrong reasons and discovering the hidden costs after you’re already committed.
At Ten Mile Square, we help companies navigate build-versus-buy decisions by conducting technology assessments that reveal your actual product management maturity, maintenance capacity, and technical requirements. We can help you structure side-by-side evaluations that use AI prototyping to inform decisions without rushing into technical debt. Contact us to learn more about our assessment process.
