Good Enough Software
Ending the Cargo Cult of Exhaustive SDLC
“There are no solutions, only trade-offs” — Thomas Sowell
This quote perfectly summarizes the challenge every software engineer faces. Problems are never solved. As an engineer, you engage in a series of trade-offs until an acceptable solution is reached. As soon as you deploy that solution, its value begins to decay, and you repeat the process. The decline is so predictable you can model it as exponential decay. Below is a graph of the decay function of software, assuming a 20% annual decline in value.
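The arithmetic behind the graph is simple compound decay: value(t) = initial_value * (1 - rate)^t. A minimal sketch in TypeScript, assuming a 20% annual rate and an arbitrary starting value of 100%:

```typescript
// Compound decay: value(t) = initialValue * (1 - rate)^t
// The 20% annual rate is the assumption used throughout this article.
const initialValue = 100; // percent of original value
const annualDecayRate = 0.2;

for (let year = 0; year <= 3; year++) {
  const remaining = initialValue * Math.pow(1 - annualDecayRate, year);
  console.log(`Year ${year}: ${remaining.toFixed(1)}% of original value remaining`);
}
// Year 3 prints 51.2, i.e. roughly 49% of the value is gone.
```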
As you can see from the graph, this model predicts the system will lose nearly half (~49%) of its value in three years. That’s a lot of value! What else depreciates at this rate?
- Consumer Electronics — High-end consumer electronics lose value quickly as new models with better features are released yearly. For example, a flagship smartphone may lose 20–30% of its value annually, driven by rapid innovation and consumer demand for newer models.
- New Cars — New vehicles experience a significant drop in value the moment they are purchased (often 10–20% in the first year) and continue to depreciate annually. On average, cars lose about 15–25% of their value annually, depending on the make, model, and market conditions. After three years, many cars are worth about half their original value.
- SaaS Products — Some SaaS tools lose perceived or actual value at roughly 20% per year when they fail to keep up with evolving business needs, industry standards, or stronger competitors. For instance, outdated software that lacks critical updates or new features may be quickly replaced by alternatives, leading to a sharp depreciation in its value for businesses.
So why on earth do companies continue to pour money into the cargo cult of exhaustive software development? To answer this question, let’s start by defining the term. In this context, “exhaustive software development” refers to the engineering around the software development lifecycle (SDLC). This includes things like:
- The software development workflow, for example, GitHub Flow, GitFlow, Trunk-based development, etc.
- Separate environments such as dev, test, and prod
- Testing methodologies
- Coding standards
- Static code analysis
- Software development methodologies such as agile
All these engineering exercises have one thing in common: they correlate negatively with velocity and are therefore trade-offs you make to support scale. They are also investments the business makes in an ever-depreciating asset, without which large-scale software systems cannot evolve and meet ever-changing market demands. These investments should be made at the minimum level needed for the business to meet its goals. However, today’s norm is a cargo cult of developers who make the maximum investment possible and create what we will term Exhaustive Software Development (ESD).
At first glance, ESD appears prudent. Why wouldn’t we want well-defined workflows, rigorous testing, and industry-standard methodologies? The problem lies in the unintended consequences of these so-called ‘best practices.’ They create an illusion of progress while systematically slowing teams down, bloating budgets, and reducing flexibility. Worse, they often prioritize process over outcomes, turning software engineering into an endless bureaucratic exercise rather than a means to deliver value.
Most engineering exercises marketed as best practices — whether it’s GitFlow or multi-stage CI/CD pipelines — are not designed to optimize for velocity but rather to manage the chaos of scale. Ironically, the processes meant to support growth often become barriers to innovation. A small team with a lightweight workflow and minimal overhead will consistently outperform a larger team drowning in exhaustive testing environments, rigid coding standards, and multi-layered approvals. Yet, the cult of ‘engineering excellence’ persists, often at the cost of the agility these practices were meant to preserve.
It’s time for a paradigm shift in how we think about engineering best practices. Instead of defaulting to the maximum investment in processes, we should focus on the minimum viable process — the smallest set of practices necessary to deliver business outcomes. Teams should regularly evaluate whether a process adds value or merely adds complexity. If it doesn’t serve the business goals or improve the end-user experience, it’s time to question whether it’s truly necessary.
Examples
Eliminate dev and test environments. Deploy to production.
Traditional dev and test environments often introduce unnecessary complexity, slow development, and create a false sense of security. By deploying directly to production, teams can streamline their workflow, shorten feedback loops, and focus on building robust, production-grade systems from the outset. To make this approach viable:
- Implement feature toggles or canary releases to safely test new features in production without impacting all users (a minimal sketch follows this list).
- Adopt blue-green deployments or similar techniques to minimize downtime and risk during deployments.
- Rely on real-time monitoring and observability tools (e.g., Datadog, New Relic, or Prometheus) to detect and resolve issues quickly.
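To make the feature-toggle idea concrete, here is a minimal sketch. The flag names, rollout percentages, and bucketing scheme are hypothetical; in practice the flags would usually come from a managed flag service or a shared config store:

```typescript
// A minimal feature-toggle sketch. The flag names, percentages, and bucketing
// scheme are hypothetical; real projects typically use a managed flag service.
type FlagName = "new-checkout-flow" | "beta-search";

const flags: Record<FlagName, { enabled: boolean; rolloutPercent: number }> = {
  "new-checkout-flow": { enabled: true, rolloutPercent: 5 }, // canary: 5% of users
  "beta-search": { enabled: false, rolloutPercent: 0 },
};

// Bucket a user deterministically so they always get the same experience.
function isEnabled(flag: FlagName, userId: string): boolean {
  const { enabled, rolloutPercent } = flags[flag];
  if (!enabled) return false;
  let bucket = 0;
  for (const ch of userId) bucket = (bucket + ch.charCodeAt(0)) % 100;
  return bucket < rolloutPercent;
}

// The new code ships to production, but only the canary cohort sees it.
if (isEnabled("new-checkout-flow", "user-42")) {
  console.log("render the new checkout flow");
} else {
  console.log("render the existing checkout flow");
}
```

The important property is that the code ships to production behind the flag, so the “test environment” is production itself, scoped to a small cohort of real users.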
Shift as far left as possible. Automate formatting, static code analysis, and tests using pre-commit utilities like Husky.
Shifting left means moving quality assurance processes earlier in the development lifecycle so issues are caught before they propagate. Automating as much as possible ensures that developers can focus on writing code without being bogged down by manual processes.
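As an illustration, a pre-commit hook can run the formatter, linter, and fast tests on every commit. A minimal sketch of a Husky hook, assuming Husky and lint-staged are already installed and configured:

```
# .husky/pre-commit
npx lint-staged   # format and lint only the staged files
npm test          # block the commit if the fast test suite fails
```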
If it isn’t automated at the developer level, DO NOT DO IT.
Any process that relies on manual intervention is a bottleneck and a liability. Automation saves time, reduces human error, ensures consistency, and enforces standards across the team.
Adopt a Minimalist Approach to Development Processes.
Avoid adding unnecessary processes or tools that do not directly contribute to delivering value. Examples include:
- Eliminating redundant approval workflows for pull requests if teams are small and trusted.
- Reducing branching complexity by adopting simpler workflows, such as trunk-based development.
- Focusing only on tests that catch meaningful issues rather than striving for arbitrary metrics like 100% test coverage.
Invest in Monitoring and Incident Response Instead of Redundant Testing.
In a world where software is deployed continuously, detecting and responding to issues in real time is more valuable than trying to prevent every possible problem in advance. Focus on:
- Building robust monitoring, logging, and alerting systems to ensure issues are caught early.
- Creating clear, well-practiced incident response playbooks to minimize downtime when issues occur.
- Measuring Mean Time to Recovery (MTTR) rather than trying to eliminate all bugs upfront (a minimal sketch follows this list).
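MTTR itself is just the average time from detection to resolution. A minimal sketch, with hypothetical incident records that would in practice come from your alerting or incident-tracking system:

```typescript
// A minimal MTTR sketch. The incident records below are hypothetical.
interface Incident {
  detectedAt: Date;
  resolvedAt: Date;
}

function meanTimeToRecoveryMinutes(incidents: Incident[]): number {
  if (incidents.length === 0) return 0;
  const totalMs = incidents.reduce(
    (sum, i) => sum + (i.resolvedAt.getTime() - i.detectedAt.getTime()),
    0,
  );
  return totalMs / incidents.length / 60_000;
}

// Example: two incidents lasting 30 and 90 minutes yield an MTTR of 60 minutes.
const mttr = meanTimeToRecoveryMinutes([
  { detectedAt: new Date("2024-01-05T10:00:00Z"), resolvedAt: new Date("2024-01-05T10:30:00Z") },
  { detectedAt: new Date("2024-02-12T14:00:00Z"), resolvedAt: new Date("2024-02-12T15:30:00Z") },
]);
console.log(`MTTR: ${mttr} minutes`); // 60
```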
Embrace Small, Incremental Changes Over Large Features.
Large features introduce complexity and increase the risk of failure. Instead, prioritize delivering small, incremental changes that are easier to test, deploy, and roll back if necessary.
Empower Developers with Tools and Autonomy.
The most effective teams are those where developers can make decisions and act without being hampered by excessive processes. To enable this:
- Provide developers with access to powerful, easy-to-use automation tools.
- Reduce approval chains, allowing engineers to take ownership of their code from development to production.
- Create a culture of trust and accountability, where developers are encouraged to take risks and learn from mistakes.
By eliminating unnecessary environments, shifting processes left, and focusing on automation and simplicity, teams can achieve higher velocity, improved quality, and reduced operational overhead. Do not use these examples to form yet another cargo cult. What works for you will be the trade-offs between velocity and scale that deliver the most business value for your organization. Develop your own thesis and continue to refine it.