In 2026, Software > SaaS
Angela is a quality engineer at a defense prime. She works in a SCIF, a Sensitive Compartmented Information Facility. That means: no internet, no cloud, no outbound packets of any kind, and definitely no nonsense. Her job is quality management for a program that, if she told you about it, would bore you to tears and also technically be a felony.
She needs a requirements management tool. Jama Connect? Cloud SaaS. Polarion? Cloud SaaS. IBM DOORS Next? Cloud SaaS. (DOORS Classic still runs locally, but IBM has been trying to kill it for a decade, and every time they announce end-of-life, a thousand defense engineers weep quietly into their CAC readers.)
So she uses Excel. Or she uses a homegrown on-prem SharePoint horror. Or she uses DOORS Classic and prays nightly that IBM doesn’t pull the plug. Or she uses nothing, and the traceability exists on paper. (Shudder.)
This is not some rare edge case. This is life in aerospace, defense, and medtech under FDA design controls. This is anyone working in a classified, regulated, or air-gapped environment. So, like, millions of engineers and tens (hundreds?) of billions in contract value. Yet the entire “modern” software industry has decided their workflow doesn’t matter, because it doesn’t generate monthly recurring revenue. Bow to the malign gods of Venture Capital, where MRR is exalted.
I built a tool called RTMify. It does regulated requirements traceability, building a Requirements Traceability Matrix from your existing spreadsheets. It scans your Git repos, parses your source artifacts, builds traceability matrices, finds gaps, generates RTMs and design history records, and talks to your AI agents over MCP. It runs on your machine. The whole thing is a single binary, about 15 megabytes, with the web UI compiled into the executable. You double-click it, a tray icon appears, and your browser opens to localhost.
The architecture looks like this:
Native shell (tray icon, startup, lifecycle)
↓
Local application server/API (SQLite, sync, API, reports)
↓
Embedded web UI (HTML/JS, served from the binary)
The ol’ three-layer cake. Native shell handles OS presence: tray icon, startup integration, child process supervision, license entry. The local server is the real application runtime: persistence, business logic, route handlers, report rendering, background sync. The browser UI drives all operator-facing workflows through localhost fetch() calls to the API.
The frontend is web technology, but it’s local, zero-dependency, same-origin, and ships with the product. No CDN. No remote API. No cross-origin anything. UI and API version together, deploy together, and run against the same local SQLite-backed state.
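The whole pattern fits on a page. RTMify’s actual implementation isn’t shown here (and its language isn’t stated), so this is a minimal sketch in Python: one process serving both a JSON API and an “embedded” UI from the same loopback origin, with a string constant standing in for the asset bundle that a real build compiles into the binary.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# In the real product the built web UI is baked into the executable;
# here a byte string stands in for the embedded asset bundle.
EMBEDDED_UI = b"<html><body><h1>RTMify (sketch)</h1></body></html>"

class LocalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/health":
            # API routes and static UI share one localhost origin.
            body = json.dumps({"status": "ok"}).encode()
            ctype = "application/json"
        else:
            body, ctype = EMBEDDED_UI, "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

def serve_local(port=0):
    """Bind to loopback only: no packets leave the machine.
    Port 0 lets the OS pick a free port."""
    server = HTTPServer(("127.0.0.1", port), LocalHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_address[1] is the chosen port

if __name__ == "__main__":
    srv = serve_local()
    print(f"UI at http://127.0.0.1:{srv.server_address[1]}/")
```

Binding to 127.0.0.1 is what makes the security story boring, in the good sense: nothing is listening beyond the machine.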
When I describe this to web developers, they look at me like I’m showing them a butter churn.
This architecture, or variations on it, used to be called “software.” Before roughly 2010, if you bought a program, it ran on your computer. (Just so weird, I know.) Your data lived on your disk. The vendor shipped you an installer, you ran it, and the thing worked whether or not you had internet. This was so normal that nobody had a name for it.
Then SaaS happened, and for reasons (some legitimate, some driven solely by the VC industry, some by lemming behavior), the entire software world pivoted to remote web apps with centralized data. Why? Well, monthly and annual subscriptions are predictable revenue. Customer data on your servers means churn is painful because switching vendors is hard. And true, cloud deployment eliminates installer hell. Some advantages were real, to be clear. I’m old enough to have lived through desktop distribution nightmares. (Remember “DLL hell”?) When I built Room Key’s architecture back in 2011, we were cloud-native before that phrase existed. (I have the blog posts to prove it.)
So I’m not here to tell you “cloud bad”. I’m telling you, just slightly ahead of the curve: Amigos, the tradeoffs have shifted, and what’s opening up on the other side is bigger than a deployment model.
In 2019 (too early, really), Kleppmann, Wiggins, van Hardenberg, and McGranaghan published “Local-First Software: You Own Your Data, in spite of the Cloud” out of Ink & Switch. They laid out seven ideals for local-first software: fast, multi-device, offline-capable, collaborative, long-lived, private, and user-controlled. They scored everything from Google Docs to Git and found that nothing satisfied all seven. The paper was right about a bunch. It was also published into a market that could not have cared less, because SaaS multiples were still going up and to the right. Plus in quasi-academic fashion, they were really excited about their pet: Conflict-free Replicated Data Types (CRDTs).
Some seven years later, the conditions the paper anticipated have arrived. But I think Kleppmann’s framing, as good as it was, was still too defensive. “You own your data, in spite of the cloud.” In spite of. As if local-first were a rearguard action against the inevitable march of SaaS. As if the best you could hope for was to claw back some ownership from the cloud gods.
That’s not what’s happening. What’s happening is that local software is about to eat SaaS alive in every market where the data is sensitive, the environment is constrained, or the user gives a damn about control. And it’s going to do it because local software enables things that SaaS structurally cannot.
Start with freedom.
When your data lives on your machine, ya own it. You can back it up. You can move it. You can delete it. You can inspect it with any tool you want. You are not a tenant in someone else’s database basement apartment, subject to their terms of service, their pricing changes, their decision to sunset the product, or their automated system that flags your document as abusive and locks you out. (Google Docs did this. In 2017. To paying users.)
Maybe in the past this was somewhat ideological. But now it’s law. GDPR, ITAR, HIPAA, CMMC: the regulatory world is moving toward local data custody as the price of admission, not a preference. Every year there are more jurisdictions where “your data on our servers in Virginia” is a legal liability. The SaaS vendors treat this as an inconvenient obstacle to their business model. The local-first world treats it as the default.
Now add moddability.
A local application that exposes APIs and supports plugin loading is a platform, not just a product. Think about what happened to games. Skyrim shipped with a mod API and players built ten thousand mods. Minecraft shipped with an open architecture and an entire generation learned to code by extending it. The Doom modding community is older than most SaaS companies and still shipping new content.
That culture never reached professional tools because SaaS cut it off at the pass. You can’t mod a web app. You can’t write a plugin for someone else’s server. The vendor controls the extension points, the marketplace, the review process, the API rate limits… everything. Your customization lives on their infrastructure. When they change their mind, it disappears. This has happened more times than can be counted.
Local software with a localhost API and a plugin architecture gives that power back. An engineer who needs RTMify to parse a proprietary artifact format can write a parser and drop it in. A team that wants a custom report can build one against the same API the browser UI uses. The tool becomes a workbench, not a locked cabinet.
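To make that concrete, here is one way a parser-plugin contract could look. This is hypothetical: RTMify’s real plugin API, the `Parser` protocol, and the `.reqx` format below are all invented for illustration.

```python
import re
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Requirement:
    req_id: str    # e.g. "SRS-104"
    text: str

class Parser(Protocol):
    """A plugin: claims file extensions, yields requirements."""
    extensions: tuple[str, ...]
    def parse(self, source: str) -> list[Requirement]: ...

class ReqxParser:
    """Toy parser for a made-up proprietary line format:
    'SRS-104 :: The system shall ...'."""
    extensions = (".reqx",)
    _line = re.compile(r"^(?P<id>[A-Z]+-\d+)\s*::\s*(?P<text>.+)$")

    def parse(self, source: str) -> list[Requirement]:
        out = []
        for line in source.splitlines():
            if m := self._line.match(line.strip()):
                out.append(Requirement(m["id"], m["text"]))
        return out

# Dropping a parser "in" is just registering it against its extensions.
REGISTRY: dict[str, Parser] = {}

def register(parser: Parser) -> None:
    for ext in parser.extensions:
        REGISTRY[ext] = parser

register(ReqxParser())
```

The point isn’t this particular shape; it’s that a local tool can expose the seam at all. The same registry that loads the vendor’s parsers loads yours.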
Now add composability and AI. This one is new (sorta) and nifty (definitely).
Local AI agents can orchestrate local tools. Your Claude or your Copilot or your whatever-comes-next can talk to RTMify over MCP, pull traceability data, cross-reference it against your test results, and generate a gap analysis. Localhost to localhost. No round-trip to MAE-East. No API keys. No credential brokering across organizational trust boundaries. No vendor-to-vendor integration contract.
SaaS tools don’t really “compose”. If they did, you wouldn’t have n8n. If they can be said to compose, it’s through APIs that cross organizational boundaries, authentication domains, and network hops. Every seam is a cyberattack vector, a latency penalty, and a business relationship. Conversely, local tools compose through… localhost. The integration cost is approximately zero, and the security boundary is your own machine. IT’s got that covered already, thanks.
Mark me: when every engineer has a local AI agent that can wire together local tools the way Unix pipes wire together command-line programs, the SaaS integration tax will be unbearable. Why would you pay Jama $50,000 a year so your AI agent can make authenticated HTTPS calls to their servers, when the same agent could just talk to a local tool over localhost for free?
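The composition step itself is almost embarrassingly small. Here is a sketch of what an agent might actually run, assuming two hypothetical localhost JSON endpoints (the paths and payload shapes are invented): pull requirement IDs from the traceability tool, test outcomes from the test runner, and report the gaps.

```python
import json
from urllib.request import urlopen

def fetch_json(url: str):
    """Localhost-to-localhost: no API keys, no cross-origin anything."""
    with urlopen(url) as resp:
        return json.load(resp)

def find_gaps(requirements: list[str], results: dict[str, str]) -> list[str]:
    """A requirement with no passing test is a traceability gap."""
    return [r for r in requirements if results.get(r) != "pass"]

def gap_report(rtm_url: str, tests_url: str) -> list[str]:
    reqs = fetch_json(rtm_url)        # e.g. ["SRS-101", "SRS-102", ...]
    results = fetch_json(tests_url)   # e.g. {"SRS-101": "pass", ...}
    return find_gaps(reqs, results)
```

Two GETs and a list comprehension. That is the entire “integration”: no OAuth dance, no vendor contract, no webhook retry queue.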
Plus, local software is orders of magnitude cheaper to build and operate. Not a little cheaper. Orders, plural. SaaS bros, take note.
A SaaS company runs infrastructure for every customer simultaneously. AWS bills. Multi-tenant architecture. Ops teams. On-call rotations. SOC 2 compliance for your servers. Security surface area that scales with your user count, because you’re holding everyone’s data in one big juicy target. Uptime SLAs. Database migrations that have to work live, in production, across a million tenants at 2 AM. The whole bloody circus, with extra monkeys.
RTMify’s cloud infrastructure cost is zero, cuz there is no cloud infrastructure. The compute runs on the customer’s machine. The data lives on the customer’s disk. I don’t run servers. I don’t have an ops team. I don’t wake up at 3 AM because a database in us-east-1 fell over. My “deployment pipeline” is compiling a binary and uploading it to a download page. The customer’s IT department doesn’t have to approve a new SaaS vendor, negotiate a BAA, or audit my SOC 2 report. They download a file and run it.
The SaaS model imposes enormous operational costs on the vendor in exchange for revenue model swagger: recurring revenue, switching costs, telemetry, and on-demand upgrades. If you’re VC-funded and optimizing for MRR multiples at exit, that tradeoff has previously looked good. If you’re trying to build a sustainable business that makes money by selling tools (i.e., software that does useful shizzle), it’s backwards. You’re spending millions to operate infrastructure your customers never asked for, so you can charge them monthly for the privilege of depending on your uptime.
I know how this reads. Guy builds local app, writes essay about why local apps are better. Tale as old as time.
Fair. But I’ve also built the other thing. Cloud-native architectures that scaled to 600,000 daily users on four engineers and zero capex. I know what those architectures buy you. For a hotel search engine, the SaaS tradeoff makes perfect sense. For a requirements traceability tool that operates on Git repos, CAD metadata, and design control artifacts in an environment where the data cannot leave the building? The tradeoff is insane.
The venture capital model and SaaS economics selected against local software for fifteen years. A local tool with a one-time license doesn’t generate MRR multiples, doesn’t create data gravity, and doesn’t produce the telemetry that lets you optimize engagement. The industry optimized for fundability and called it engineering.
But the forces pushing the other direction are accelerating and inexorable. Regulated markets are growing. Data sovereignty has legal teeth. Single-binary deployment is a solved problem. And AI agents are about to make local tool composition so frictionless that the SaaS integration tax will look like what it always was: a cost the customer was paying so the vendor could run a server farm in AWS.
The category is already visible if you know where to look. Syncthing, Prometheus, MinIO, Plex, Home Assistant, every Kubernetes dashboard. Local server, web UI, browser dashboard. They’re infrastructure tools today. They’ll be the architecture of commercial software tomorrow.
You can’t run Jama in a SCIF. You can’t run Polarion without internet. You can’t mod DOORS Next. You can’t compose Confluence with a local AI agent.
But you can double-click a 15-megabyte binary, have your traceability matrix in three seconds, expose it to your AI over MCP, and write a plugin for the weird artifact format nobody else supports. On a plane. In a SCIF. On a submarine.
Software is coming home. Leave the lights on.