“Plans are nothing; planning is everything”—Dwight D. Eisenhower, quoting 19th century Prussian General Helmuth von Moltke
One of the most visible planning “failures” of modern times began at approximately 6:30 A.M. on June 6, 1944, on the beach between the towns of Vierville and Colleville in northern France. If we measured the value of a plan by the number of things it predicts and controls, that particular part of the Normandy Invasion during World War II would not be considered a success.
Plans and the Future
When we estimate a project and produce a plan for its completion, we are attempting to foretell the future. This is not easy, as physicist and Nobel Laureate Niels Bohr famously remarked: “Prediction is difficult, especially if it involves the future.” It is certainly true that many projects come to grief due to poor planning or failure to plan. But even good project planning does not guarantee success simply because of the intrinsic uncertainty of software development.
Software is a medium in which we store knowledge, and in this medium the knowledge executes (rather than simply being described, as it is in a book). Software development is the activity of acquiring this knowledge and populating the executable medium. Some of this populating of knowledge is predictable and deterministic and some is not. The difference is largely determined by whether the knowledge is readily available or has to be discovered; and, if it has to be discovered, how much of it is needed and how difficult it is to obtain.
Projects that consist of the application of what we already know have little to discover and are intrinsically more predictable. Whenever unknown information must be uncovered, variance is introduced. If the knowledge is significantly different in type and structure from our existing knowledge base, there will be more variance still. Another word for variance is “unpredictability,” and unpredictability has the effect of invalidating plans.
Why Plan?
Why plan at all, if this is true? If there really is a lot of unpredictability that we cannot, well, predict, why not just bootstrap the whole thing and figure it out as we go? As anyone who has had the misfortune to work on a poorly planned project knows, this is not the answer. Though planning has its limits, even on the most variable of projects it is essential and serves several critical functions:
- Publish project goals. This helps ensure the people on the project share an understanding of what the project is trying to achieve.
- Prepare for likely events. The plan attempts to anticipate what is most likely to happen and provide some measure of preparation to deal with it. This provides a framework for the predictable “expected” activity.
- Anticipate unlikely events. The plan identifies some of the possible things that might happen and sets up mechanisms that will deal with them or, at the very least, recognize they are occurring. These are the “expected unexpected” events.
- Provide resources. The plan provides a resource basis that supports the expected work, and a reasonable subset of the “expected unexpected” events, based upon their perceived risk. It might also allocate some resources that could be used to deal with wholly unforeseen events, should that ever become necessary.
What planning cannot do, of course, is fully factor the “unexpected unexpected” events that might (or might not) occur on the project. All plans have these limitations, and they affect the “accuracy” of the plan and its ability to foretell, and hence control, the future.
But perhaps we should judge a plan, not by its ability to control, but by its value.
Control Versus Value
In many organizations, particularly large ones, a primary criterion for the usefulness of any procedure is its predictability and the degree to which it allows us to apply control. This is highly valued by most management. However, the appearance of a high degree of control in situations where control is not possible is deceiving at best and may be dangerous. The current movement toward agile development methodologies such as Extreme Programming (XP) and Scrum (for examples, see [3]) is an appropriate attempt to operate within the limits of predictability and to acknowledge that sometimes, to figure out how to do something in software development, you just have to go do it.
This means that the worth of a plan should not necessarily be measured by the degree to which a project adheres to it, but by the value it brings in other ways.
Dealing with Variance
A critical issue with any plan is variance: what is it that we don’t know, and how will it affect what we expect to have happen? In software projects there are many sources of variance:
- Scope—often we don’t know how “big” the system will be, what functions will be required, and what it will take to create those functions.
- Performance—we may not know how effective we will be at acquiring and factoring the required (but variable) knowledge. There are an enormous number of potential factors that can affect our performance, and we have full control over only a few of them.
- Technology—affecting both scope and performance, real-time changes in technology can have a profound effect on a project’s behavior.
- Market—though it often translates into a scope issue, the market for a system may vary, resulting in changes to the project’s functionality, performance, platform, and technology.
Each of these factors introduces a certain amount of unpredictability into a plan. This volatility exists, and it cannot be wished away. There are several strategies we can adopt. We can simply pretend the unpredictability doesn’t exist. We can hope the unpredictable bad things that might slow a project down simply won’t happen. We can hope that whatever bad things do happen are canceled out by an equivalent number of equally unpredictable good things that speed the project up. Wishing for such a happy outcome is hoping for what could be called a lucky (as opposed to an accurate) estimate [2]. Or we can acknowledge that the uncertainty exists and deal with it.
Contingency
The concept of contingency has a bad reputation. Many managers dislike it. Some organizations forbid it. The feeling is that Parkinson’s Law will apply: if extra allowance is made, in staffing, in time, or in cost, people will use it up simply because it is there. In such cases, additional resources are obtainable only if the scope changes, and sometimes not even then. But the reality is that, on many projects, the effort turns out to be bigger than we thought, not because the scope changes, but because we didn’t know what it was to begin with. Invariably, we come to understand the scope of a project better as we work on it. Contingency should reflect the allocation of resources necessary to efficiently manage the risk on a project. Of course, we can choose not to allocate any reserve on a project, but this means we are also choosing to operate at a higher risk level.
Who Pays for Risk?
If an organization chooses not to intentionally manage the essential risk due to uncertainty on a project, or sometimes even to acknowledge that it might be there, the risk does not go away. As in the stock market, risk must be paid for. We pay to reduce risk in investments by accepting lower returns. But who pays for the inherent risk if no reserve is allocated in a project plan?
Risk may manifest itself in a number of dimensions: the project may overrun its schedule, or it may cost more than expected. In many organizations, especially ones that do not actively identify, quantify, and manage risk, the cost of the risk is shared by two constituencies and surfaces in two ways:
- Quality → Customer—when deadlines are held firm and costs are contained, the risk will manifest itself in the product quality. When covert risk surfaces as defects in a delivered system, we effectively deliver a smaller-scope product. If the defects impact the customer’s business, the customer picks up the tab. If the defects are bad enough for long enough, the company may also pay the price through lost business.
- Effort → Team—a common resort of the risk-averse organization is to increase effort by requiring overtime. If costs are contained, this overtime is unpaid and the risk is covered entirely by the people on the project. It comes out of their evenings and weekends; it comes out of their anxiety levels. It comes out of their lives.
To Plan, Two Plans
A more rational approach would be to create two plans, one to reflect the work as we currently understand it and the other to allow for the risk that has yet to be realized:
- Work Plan—this plan reflects what resources we think the project will take, based on our (necessarily incomplete) knowledge of the project at the time we create the plan. This is the traditional project plan and the one the project team works to.
- Commitment Plan—this is the plan we tell the customer. It incorporates the work plan plus a calculated allowance for the degree of uncertainty that appears to be present in the system at the time of planning.
Note that this is not a “scope creep” allowance. It is based upon a calculation of the uncertainty of the data on which the plan is based. It is this uncertainty that drives much of the project risk. Of course, there may be additional uncertainty related to the possibility of scope creep or other factors, which may need to be addressed too. Quoting one result to the customer while working to a different result, with a margin between the two, is not dissimilar to any business in which the customer is quoted a price that is greater than the cost to produce.
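To make the arithmetic concrete, here is a minimal sketch, in Python, of how a commitment estimate might be built from a work estimate plus an uncertainty-based reserve. The function, the uncertainty levels, and the reserve factors are illustrative assumptions for this column’s idea, not a published calibration.

```python
# A hypothetical illustration of the two-plan idea: the work plan is what
# the team works to; the commitment plan adds a management risk reserve
# sized by how much of the project's knowledge still has to be discovered.

def commitment_estimate(work_estimate_days: float, uncertainty: str) -> dict:
    """Return the work plan, the risk reserve, and the commitment plan."""
    # Illustrative reserve factors: the less we know, the larger the reserve.
    reserve_factors = {"low": 0.10, "medium": 0.25, "high": 0.50}
    reserve = work_estimate_days * reserve_factors[uncertainty]
    return {
        "work_plan_days": work_estimate_days,                   # what the team works to
        "risk_reserve_days": reserve,                            # held against uncertainty
        "commitment_plan_days": work_estimate_days + reserve,    # what we tell the customer
    }

# Example: a 100-day work plan with high uncertainty becomes a
# 150-day commitment to the customer.
print(commitment_estimate(100, "high"))
```

The point is not the particular percentages, which would have to come from an analysis of the actual uncertainty in the plan’s data, but that the reserve is calculated rather than guessed.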
Two Sets of Books, One Contract
We have what appears to be a problem here. We are telling the customer one date, while working to another. We are keeping two sets of books: one for the customer and one for the project team. Should we hide the “real” delivery date from the project team and pretend the shorter work-to date is the real one? What if they find out? Won’t they feel cheated? Won’t they feel the shorter work time is simply a management ruse to get them to work harder? Well, probably. Unless we explain the contract.
We have seen that when we tell the customer we will finish on the date we plan our work to end, the risk is absorbed either by the customer or, more likely, by the project team. What the two sets of books show is that the organization is choosing to share the cost of the risk with the team. The contract is this: if the company elects to openly share a percentage of the risk (as determined by the amount of “Management Risk Reserve”; see Figure 1) by allocating additional resources before they are needed, then the team agrees to work diligently to the work plan. The work plan will not work if the reserve is used up in other ways; its purpose is to take care of the uncertainty, not to provide a reason to coast.
If we have calculated correctly, resolving the uncertainty will cause the work to “grow” (more correctly, we will come to fully understand what it is), but this growth should not exceed the allocated reserve.
Buying Back the Risk
Figure 2 shows what usually happens. An initial estimate is almost always low. The company elects to allocate a reserve based on a calculation of the risk. As we get into more detailed planning and better understand what we are trying to do, we start to eat into the reserve. This is not scope creep, or even effort growth. It is a quite legitimate buying back of the risk. In the initial estimate, we understood that we didn’t understand enough; hence the risk. When we have the opportunity to analyze the situation further, our understanding grows. When our understanding grows, the risk shrinks, and it is entirely appropriate that we pay to reduce it. This is the nature of risk in the stock market, in life, and in software projects.
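As a rough illustration of this buy-back, the following sketch (again with made-up numbers, not data from the column) shows how successive, better-informed estimates draw down the reserve while the commitment to the customer stays fixed.

```python
# Hypothetical drawdown of the management risk reserve as planning
# improves our understanding of the work. Growth in the work plan is
# paid for out of the reserve; it is not scope creep.

commitment_days = 150.0   # what was promised to the customer
work_plan_days = 100.0    # the initial (almost always low) estimate
reserve_days = commitment_days - work_plan_days

# Illustrative re-estimates as planning uncovers what the work really is.
for revised in (110.0, 125.0, 140.0):
    growth = revised - work_plan_days   # better understanding, not new scope
    reserve_days -= growth              # the reserve pays to reduce the risk
    work_plan_days = revised
    status = "within reserve" if reserve_days >= 0 else "reserve exhausted"
    print(f"work plan {work_plan_days:5.1f} days, "
          f"reserve remaining {reserve_days:5.1f} days ({status})")
```

If the re-estimates exhaust the reserve, that is a signal that the original uncertainty was larger than calculated, not that the team is coasting.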
The contract with the development staff might also include an understanding that the risk is shared. The company cannot be expected to resource all possible risk, and it would probably not be profitable if it did. Therefore, it shoulders some of the cost, and asks the project team, as a condition of employment and demonstration of commitment, to shoulder some of the risk too. Given that most developers know full well that they usually pick up the whole tab, the reception should be quite positive.
There is a critical expectation that underlies the concept of two plans: we do not actually expect the plan to work out exactly. To a highly controlling and deterministic management structure this is heresy. But the nature of the business of software, with its intrinsic uncertainty, means we must realistically expect variation. Sure, we can bury it, pretend it’s not there, or require someone else to pick up the cost, but it won’t go away. But if we diligently create two plans, all the while not really expecting either to be exact, what is the point in even creating one?
Omaha Beach
The enormously elaborate plan for the landing of the U.S. 1st and 29th Infantry Divisions at Omaha Beach on June 6, 1944, neither predicted nor controlled the results of the landing. In fact, almost nothing worked the way it was expected to. However, as author Stephen Ambrose pointed out, the net result of the Omaha landing was that Hitler’s much-vaunted “Atlantic Wall,” which had taken four years to build, actually delayed the Allied invasion by less than one day [1]. Measured by adherence to the plan, the landing at Omaha Beach was a failure. Measured by the goals of the operation, it was a success.
Helmuth von Moltke, the 19th century Prussian general quoted at the start of this column, also remarked, “No battle plan survives contact with the enemy.” Had General Eisenhower been measured against adherence to plan, he would have been fired. Instead, he led the liberation of Western Europe, became the head of NATO, and was later elected president of the United States.
Software projects are not usually as variable as warfare, but there are lessons we can learn from these battles. To some extent, no software project plan completely survives contact with the project. But this is not as bad as it might seem. The primary purpose of a software project plan is to provide the direction and resources that allow a legitimate, risk-based probability of achieving the goals of the project.
Success is then measured by delivering value to the customer within the reasonable constraints imposed on the business, not by mindless adherence to a plan.