
A Little Queue Theory

When more work means less done.

So, how full is your in-box? How many tasks on your to-do list? Take a look at that calendar—any open spaces? No? That’s good, isn’t it? The more work that is in the hopper, the more must be getting done, right? Empty spaces in the plan mean less is being achieved, right? Well, maybe not.

It seems to be an axiom of modern projects that, for every project team member, every minute of every day should be filled with assigned tasks. I know project managers who boast of loading themselves and their subordinates with well over 100% of available time. It is not uncommon for people to be juggling 10 or 20 tasks concurrently. As one who admits to poor multitasking capability, I have always thought there is a downside to having too much on one’s plate at one time. But it is difficult to argue against this overloading of effort when the accepted equation is that more work on the task list means more gets done. But is that true?


If You Do This, You Don’t Do That

One of the simplest ways to look at this is: if I am working on this task, I am not working on that task. That task is waiting. Of course, I am working on this task because it is the most important, it is the most critical. To me. But what are the consequences of the pending work? Who and what is waiting on that? And what is happening while they wait?

In any project, tasks and products have dependencies; some work cannot be properly done until prior work is under way or completed. Attempting the dependent work before its upstream dependency is resolved can result in rework ranging from a small percentage of the original task effort to more than 100% when the erroneous work has to be undone and the correct work redone.


The Random Plan

In software development, project planning is often viewed as a definitive prediction that charts the future of the project, rather like laying down track for a railroad. If the plan is "good" the project will follow it unerringly like a train on rails. Project failures are often laid at the door of planning—project failure must be planning failure. This philosophy holds that, if planning is good, more planning is better and more planning will make your project succeed. This might be true for highly deterministic activities, tasks that are predictable and do not vary from their expected course, and tasks that are not subject to volatile internal and external forces. Unfortunately, that does not describe most software projects.

While a plan might provide an overall roadmap for a project, and is certainly necessary for project staffing and resource allocation, to some extent all projects deviate from their initial plan. Elements of the plan and the dependencies between tasks often operate somewhat unpredictably and even randomly.


Driving to Work

Rather than a train on tracks, a better metaphor is driving to work. While we might have a chosen route, we may also encounter road construction, heavy traffic, or an accident. When this happens we do not just stick to our original plan, we adjust our route based on the latest information. Construction and traffic might be predictable but accidents are not. So it is with projects: some things we can know in advance if we do sufficient research, for other issues we might make an educated probabilistic guess, while other stuff just happens and must be dealt with as best we can if and when it happens.


Items in Queue

When some percentage of our workload is driven by random events, we can suggest certain characteristics of the way our tasks are done and not done. Operating at high levels of committed time (what queuing theory calls "capacity utilization") causes negative effects. One of these is the number of tasks that will typically be in queues waiting to be worked.




If the capacity utilization is given by ρ (the Greek letter rho), then the fraction of time that arriving work can be started right away is simply (1 – ρ). So if we have 95% utilization and something urgent comes along, there is only one chance in 20 that it can be worked immediately. It will have to go into the work queue and wait its turn.

A simple work queue can be characterized by four assumptions:

  • random work arrival (as unexpected work tends to be);
  • variable time to complete the newly arrived work;
  • single-tasking, since a person can really only work on one thing at a time; and
  • an arbitrarily large in-box, so any number of things can be waiting to be done.

These are all approximations of real project situations and I will deal with the differences in a later column. But even this simple queue model gives us insight into how queues affect our productivity and throughput. For instance, the expected number of items in the queue is given by ρ²/(1 – ρ). At 80% utilization, we might have one thing being worked and around three things not being worked. Push this utilization to 95% and there will be 18 things not being worked. At 98%, unworked items are up around 48. That is a lot of things not being done. When tasks in a project are dependent and work queues are large, each of these things not being done may have downstream effects: there is work in the project that is simply waiting on work that is currently stuck in someone’s email in-box.
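The figures above come from the standard single-server queue formula for expected queue length, ρ²/(1 – ρ). A minimal sketch (the function name is mine, not from the column) reproduces them:

```python
def waiting_tasks(rho: float) -> float:
    """Expected number of tasks waiting in a simple single-server
    queue (M/M/1 model) at capacity utilization rho: rho^2 / (1 - rho)."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return rho ** 2 / (1 - rho)

# The column's examples: roughly 3, 18, and 48 tasks waiting.
for rho in (0.80, 0.95, 0.98):
    print(f"utilization {rho:.0%}: about {waiting_tasks(rho):.0f} tasks waiting")
```

Note how nonlinear the growth is: the jump from 80% to 95% utilization costs far more queued work than the jump from, say, 50% to 65%.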


Less Important Tasks

When important tasks with downstream dependencies are deferred, people work on less-important tasks. This means their more-important tasks are also delayed and there is a snowball effect on the project. Bit by bit, these more-important tasks are sidelined, critical paths lengthened, and the project slips. Unfortunately, the classic project management response to a project that is slipping is to pile on the work under the assumption that more work means more is getting done. But when piling on causes the work queues to get bigger, the opposite may be true. This is a factor that contributes to Brooks’ Law: Adding manpower to a late software project makes it later.1

The very high utilization levels to which project managers load their people can only work in fully deterministic systems—systems where the work that needs to be done, when it will be done, by whom, and how long it will take to complete is completely predefined and invariant.2 This does not describe the business of software at all.

The solutions that projects try may actually compound the problem. Queuing theory shows that loading up on tasks makes the work queues worse. Throwing more people at a project is ineffective or even detrimental. Replanning can work, but in volatile situations the constant replanning effort only adds to the workload, and the planning churn slows down the project. There are other things we can try.


Measure Queues, Relax Loading

Most mature projects measure when tasks are done and what they cost to do, but these are trailing indicators. If we expect a task to take 40 hours, we usually learn that it actually took 80 hours only after the task has overrun and the downstream damage has already occurred. Using two complementary approaches, we should be able to better manage our throughput on projects:

  • Intentionally plan for less-than-100% task loading. Effectively managed, the unallocated time becomes a workload buffer that can absorb variation in task effort and task priority when unpredicted events occur.
  • Measure our work queues; keep track not only of what is being done, but also of what is not being done and how long the uncompleted task lists are becoming.
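The effect of relaxing task loading can be illustrated with a toy discrete-time simulation of a single worker (a sketch of my own, not from the column; the parameter values are illustrative): tasks arrive at random, the task in progress takes a variable number of ticks to finish, and we track the backlog of waiting tasks.

```python
import random

def average_backlog(arrival_prob: float, finish_prob: float,
                    steps: int = 200_000, seed: int = 42) -> float:
    """Toy single-worker queue: each tick, a new task arrives with
    probability arrival_prob, and the task in progress (if any)
    completes with probability finish_prob. Utilization is roughly
    arrival_prob / finish_prob. Returns the average number of tasks
    waiting (excluding the one being worked)."""
    rng = random.Random(seed)
    waiting = 0        # tasks queued behind the one in progress
    busy = False       # is the worker currently on a task?
    total_waiting = 0
    for _ in range(steps):
        if rng.random() < arrival_prob:   # unpredicted work shows up
            if busy:
                waiting += 1
            else:
                busy = True
        if busy and rng.random() < finish_prob:  # current task completes
            if waiting:
                waiting -= 1              # pull the next task off the queue
            else:
                busy = False
        total_waiting += waiting
    return total_waiting / steps

# Same worker, same kind of work: ~80% loading vs. ~95% loading.
print("~80% loaded:", average_backlog(0.08, 0.10))
print("~95% loaded:", average_backlog(0.095, 0.10))
```

Running the comparison shows the backlog at ~95% loading is several times larger than at ~80%, which is exactly the buffer argument in the first bullet: the "idle" 20% is what keeps the queue short.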

The measurement and control of our nondeterministic work queues might be one aspect of project management that traditional approaches have neglected, and one that would help balance an unbalanced work flow.

