Why Software Process Never Works

Nothing induces more ambivalent feelings in working software developers than the idea of process. On the one hand, we crave the sanity, insight, and opportunity for self-improvement that processes promise. On the other, we actively fear trying to introduce them into our organizations. Stories of botched or half-assed attempts to apply some new-fangled methodology are as common as dirt. Often the end result is worse than if nothing had been done at all. One popular explanation for this ubiquitous failure is that the process was misapplied: it failed to live up to its promise not because the process is flawed, but because there wasn’t enough fidelity to it. Another popular explanation is that the process itself is the problem, and what is needed is a different or better process, or a fix for the existing one.

The way I see it, both of these explanations share a fatal flaw, and together they form a convincingly hermetic false dichotomy. To me, the entire notion of process skips a huge preliminary step, and skipping this step is the real reason processes fail. Let’s explore.

Process Not Required

We’ll start with a simple observation – plenty of great software has been written with little to no formally defined process. Projects that do formally define their processes usually do so later, and those processes usually appear to be ad hoc[1]. In fact, it looks like you can get pretty far with hack-and-fix, static/dynamic analysis, and some peer review. Since we can’t really speak about privately developed, closed-source software, we’ll have to take our examples from the open source world.

As a first example, let’s look at WorldWideWeb – the first web browser, created by Tim Berners-Lee in 1991. Nowhere in the source do we find formal requirements or TDD. In fact, there are no real process artifacts at all, and yet 24 years later it’s shockingly well written and easy to read[2]. No Scrum or XP required, no use cases.

An even more impressive example is qmail, a security-conscious sendmail alternative developed by Daniel J. Bernstein (djb). The latest version was released in 1998, and since then only four bugs have been discovered in the software, none of which were security holes. This program is much larger than WorldWideWeb, and the source code isn’t always abundantly clear or well commented. Yet, despite these “code smells”, it appears to have been meticulously constructed and documented by its creator. Again, there are no automated tests or process artifacts – simply two files, TEST.send and TEST.receive, with step-by-step instructions for verifying that sending and receiving mail work.

What about software developed by a team of people? Let’s take the Linux kernel as an example. This piece of software is massive, and it was largely developed through the accrual of patches from developers all over the world. The Linux Foundation website has a document detailing how developers can participate in the development process. Again, the prescriptions are mostly ad hoc, with little to no advice on how to develop the program or what artifacts to produce. The Linux kernel is a truly enormous piece of software, successful by any standard, and yet its process is largely non-existent. While there is some automated testing, testing appears to be largely community-driven.

From these examples we can conclude that a formally defined, artifact-producing software process is neither a necessary nor a sufficient condition for developing good software at any scale or with any degree of quality. So, what is process for, again?

Processes and Improvement

We can start to get at what processes are for by asking not what software processes are, but rather what it is that software processes do. Three things immediately leap out at us.

First, processes attempt to standardize routine work. Any time new work is demanded of us, the question immediately arises of where to begin. Processes attempt to regiment this by having us write up a requirements document, produce an estimate, or code a failing unit test. The goal is to improve efficiency by removing unnecessary analysis, or analysis paralysis.

Second, processes attempt to modify individual behaviors. While individuals are often explicitly glossed over in favor of “the team”, any team is really an assemblage of individuals. Therefore, any prescriptions made on team behavior are really requests for coordinated changes in individual behaviors.

Third, processes attempt to establish measurements that can be used for improvement. Many processes measure speed, quality, and estimation accuracy as a means toward process and individual improvement. The goal is to introduce the feedback necessary to help individuals, teams, and organizations grow and improve as they work.
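The third point can be made concrete with a small sketch. The log format and numbers below are hypothetical, but they illustrate the kind of estimation-accuracy feedback a process might collect:

```python
# Hypothetical personal log of (estimated hours, actual hours) per task.
history = [(4.0, 6.5), (2.0, 2.5), (8.0, 13.0), (3.0, 3.5)]

# Mean magnitude of relative error (MMRE), a common estimation-accuracy metric.
errors = [abs(actual - estimate) / actual for estimate, actual in history]
mmre = sum(errors) / len(errors)

print(f"Mean estimation error: {mmre:.0%}")  # here, roughly 28%
```

A number like this, tracked over time, is exactly the feedback loop such measurements are meant to provide.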

Covertly contained in all of these are a host of assumptions about the individuals and organizations involved.

In regards to individuals: First, it is assumed that these individuals have mastered basic professionalism: they can work unsupervised, are organized, and can manage their time. Without these elementary skills it will be difficult for them to correctly follow any set of prescriptions, no matter how simple. Second, it is assumed that they have a desire to improve. People who are content where they are, or don’t see the need for improvement, are unlikely to exert additional effort, and can find a way to satisfice any prescription given to them. People who are simply unhappy are likely to do the same. Third, it is assumed that they can adopt and follow a discipline over the long term. Without this, it’s unlikely that any process can be followed long enough to see results. Finally, it is assumed that they can come to understand the spirit of the rules rather than the letter, and so learn when not to follow them.

In regards to organizations: First, it is assumed that the work environment or company culture is not dysfunctional. An environment that is too dysfunctional doesn’t allow for the concentration and concerted effort that disciplined practices require. Second, it is assumed that the business is stable and well run. If employees don’t know whether they’ll be getting their next paycheck, it’s going to be hard for them to focus on improving the development process. Third, it is assumed that all of the above assumptions about individuals hold for everyone in the organization involved in successfully enacting the process.

Therefore, processes are really intended for motivated individual professionals and improvement-oriented teams in reasonably well-functioning organizations. They cannot be universal prescriptions that work across all organizations; in fact, they probably won’t work in most. They are almost exclusively suited to highly functional groups. A great team that produces great products is simply the emergent property of a great group of individuals working in line with their own tendencies and capacities. A dysfunctional team that produces a dysfunctional product is likewise the emergent property of dysfunctional individuals and a dysfunctional organization. The fact that a great team happens to adopt (or not adopt) a particular process or practice is merely incidental, and has nothing to do with the quality of the end result. That quality has everything to do with the individuals and organizations involved.


From the above we can conclude that highly functional groups and individuals will tend to produce high quality products, regardless of the process, or lack of process, used. Processes, then, merely help to standardize routine elements of the work, to codify effective beliefs and behaviors, and to provide measurements that can be used for self-improvement. They do not produce highly functional groups, but instead help streamline the way highly functional groups work.

From this perspective, the old chestnuts “you didn’t apply the process well enough” and “the process sucks” are both wrong. Rather, the root of the problem likely lies with the team or organization within which the process was implemented. Individuals looking to improve should evaluate how well developed their underlying skills are, and organizations looking to improve should do the same. We say, along with Kant, that the man who undermined his own house might have known it would fall. But he could only know this by recourse to experience, and by sober evaluation of that experience.

Most organizations and individuals skip the step of ensuring they possess the basic skills necessary for successful process adoption. This puts the cart before the horse: they attempt to improve capacities they have yet to develop. Therefore, part of any successful process adoption should be mentoring developers in these basic skills, and ensuring that both they and the organization they are a part of possess the necessary foundation that all processes attempt to build on.

Book Recommendations

  • The 7 Habits of Highly Effective People – It almost feels cliché to mention this book, but it is a staple for advice on developing basic professional habits.
  • The War of Art – This book might not be for everyone, but it’s a lot of no-frills, meat-and-potatoes advice on not letting your internal resistance to self-improvement stop you from improving.
  • PSP: A Self-Improvement Process for Software Engineers – This book introduces you to a high-discipline personal software process based on the Capability Maturity Model (CMM). It will teach you many of the basic skills developers almost never learn in school – how to collect your own performance data, how to use statistical analysis to analyze that data, how to estimate using historical data, and how to track and improve software quality.
  • Extreme Programming Explained – This book attempts to reject most of what the PSP above is about, but despite this superficial incompatibility it introduces another high-discipline process that can, in reality, easily be integrated with ideas from the PSP. The book also explains a lot of the basics of what is now called “Agile”, and it’s hard not to get a little excited to go out and apply it after reading it.
  • The Pragmatic Programmer: From Journeyman to Master – A masterpiece and the quintessential book on software professionalism. If you only read one book on this list, this should probably be it.
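On the PSP point above – estimating from your own historical data – here is a minimal sketch. The numbers are hypothetical, but the ordinary least-squares fit is the core of the regression step in PSP’s PROBE estimation method:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (b0, b1) for y = b0 + b1 * x."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
    b0 = y_bar - b1 * x_bar
    return b0, b1

# Hypothetical history: estimated size (LOC) vs. actual effort (hours).
sizes = [120, 250, 80, 400, 300]
hours = [10, 22, 7, 35, 26]

b0, b1 = fit_line(sizes, hours)

# Predict effort for a new 200-LOC task from the fitted line.
print(f"Predicted effort: {b0 + b1 * 200:.1f} hours")
```

Given a new task’s estimated size, the fitted line turns your own history into an effort estimate, rather than a gut feeling – which is precisely the kind of basic skill the book teaches.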


[1] For example, the Linux Foundation has a wonderful document on the kernel development process. There are some very loose process prescriptions, but nothing nearly as rigid as TDD et al.: http://www.linuxfoundation.org/content/how-participate-linux-community

[2] It’s instructive simply to look through the source code. You’ll notice the program is logically decomposed, well commented, and overall pretty well written. It definitely looks like it has been “hack and fixed” together, but even 24 years later it’s pretty easy to read: http://www.w3.org/History/1991-WWW-NeXT/Implementation/