Frequent Releases Change Software Engineering

Geeky Ventures Founder Greg Linden

Software release cycles are usually long, measured in months, sometimes years. Each of the stages (requirements, design, development, testing) takes time.

Recently, some of the constraints on software deployment have changed. In web software, deployment is to your own servers, so it is nearly immediate and highly reliable. On the desktop, many applications now routinely check for updates on each use and patch themselves. It is no longer the case that getting new software out to people is slow and inconsistent. The likelihood of a reliable, fast internet connection on most machines has made it possible to deploy software frequently.

But, just because a thing is possible does not mean it is desirable.  Why would we want to deploy software more frequently?  Is it not better to be careful, slow, and deliberate about change?

The main reason to consider frequent deployments is not the direct impact of getting software out to customers more quickly, but the indirect impact internally.  Frequent releases force changes in how an organization develops software.  These changes ultimately reduce risk, speed development, and improve the product.

For example, consider what is required to deploy software multiple times per day. First, you need to build new deployment tools that can rapidly push out new software, handle thousands of potential versions while enforcing consistency, and rapidly roll back in case of problems.
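As a sketch of what such a tool must track, the following toy deployer (all names hypothetical, not any real system's API) records per-host version history, checks that every host converged to the same version, and supports instant rollback:

```python
class Deployer:
    """Toy deployment tool: push a version to all hosts, verify
    consistency, and roll back rapidly. Illustration only."""

    def __init__(self, hosts):
        self.hosts = hosts
        # Per-host version history, newest last.
        self.history = {h: ["v1"] for h in hosts}

    def push(self, version):
        # Push the new version to every host.
        for host in self.hosts:
            self.history[host].append(version)

    def consistent(self):
        # Consistency check: every host runs the same version.
        return len({self.history[h][-1] for h in self.hosts}) == 1

    def rollback(self):
        # Revert each host to its previous version.
        for host in self.hosts:
            if len(self.history[host]) > 1:
                self.history[host].pop()
```

A real tool would push binaries over the network and verify health checks; the point is that version history, a consistency invariant, and a cheap rollback path are first-class concerns.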

Software development has to change. With multiple near-simultaneous rollouts, no guarantee of synchronous deployment, and no coordination possible with other changes, every software change has to be independent and backward compatible. The software must always evolve.
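One common way to keep changes backward compatible is to make readers tolerate both the old and the new data shape, so old and new code can be deployed in any order. A minimal sketch (field names are hypothetical):

```python
def parse_user(record):
    """Accept both the old record shape (a single "name" field)
    and the new shape ("first_name"/"last_name"), so readers and
    writers can be rolled out independently."""
    if "first_name" in record:
        # New-format record written by updated code.
        name = f"{record['first_name']} {record['last_name']}"
    else:
        # Old-format record still in flight or on disk.
        name = record["name"]
    # Ignore unknown extra fields; default missing ones.
    return {"name": name, "email": record.get("email")}
```

The same discipline applies to APIs and database schemas: add before you remove, and never require every caller to upgrade at once.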

Requirements, design, and testing can be shortened and replaced with online experimentation.  To learn more about customer requirements and design preferences, deploy to a small set of customers, test against a larger control group, and get real data on what people want.  Bugs are expected and managed as a risk through small deployments, partial deployments, and rapid rollbacks.
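Deploying to a small set of customers while holding out a control group is typically done with deterministic bucketing: hash each user into a stable slice so the same user always gets the same answer. A minimal sketch (function name and salt scheme are illustrative assumptions):

```python
import hashlib

def in_experiment(user_id, experiment, percent):
    """Deterministically assign `percent`% of users to an
    experiment. Same user and experiment always yield the
    same answer, so exposure is stable across requests."""
    key = f"{experiment}:{user_id}".encode()
    # Hash into one of 100 buckets, roughly uniformly.
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return bucket < percent
```

Salting the hash with the experiment name keeps different experiments' populations independent, and raising `percent` gradually turns the same mechanism into a partial deployment that can be dialed back to zero if bugs appear.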

Compare this to a more traditional development process. Requirements gathering and design are based on small user studies and little data. Software is developed without regard for backward compatibility and must be rolled out synchronously with many other changes. Testing has the goal of eliminating bugs, not merely managing risk, and is lengthy and expensive. When the software does roll out, we inevitably find errors in requirements, design, and testing, but the organization has no inherent capacity to respond by rapidly rolling back the problems or rolling out fixes.

Frequent releases are desirable because of the changes they force in software engineering. They discourage risky, expensive, large projects. They encourage experimentation, innovation, and rapid iteration. They reduce the cost of failure while also minimizing the risk of failure. It is a better way to build software.

The constraints on software deployment have changed.  Our old assumptions on the cost, consistency, and speed of software deployments no longer hold.  It is time to rethink how we do software engineering.
