BLOG@CACM
Anna Karenina On Development Methodologies

I'm old enough to have lived through multiple technical cycles. When I started professionally, CASE tools were all the rage, and OS/2 was still a thing. I even worked on a mainframe for a bit. Then came Client-Server development (popularized by Windows applications, although strictly speaking the client-server pattern wasn't new). Then Java, the first generation of web applications with application servers and a myriad of Model View Controller frameworks. Then second-generation web development, with more responsive JavaScript-based web applications that replicated many of the things people liked about thick-client applications, and then some. Then big data broke onto the scene, and then cloud architectures.

And concurrent with these technical advancements, new development methodologies kept appearing to better organize software efforts: Lean, SAFe, Scrum, Agile, Extreme Programming, RAD, JAD, Spiral, and so forth. As much as the new methodologies heaped scorn upon one another, they all reserved their greatest contempt for Waterfall, the original methodology.

Planning, Analysis, Design, Construction, and Maintenance were for Luddites, so the chorus went. Waterfall was the Hive 0.10 of its day: everybody loved hating it, and everybody loved measuring themselves against it. I not only remember Waterfall, but I also remember E&Y's Navigator Methodology, a particular expression of Waterfall that contained even more liquid and fell from even greater heights. Navigator was delivered with yards of binders. Even as a career newbie, I thought that was a bit heavyweight.

But the fundamental weakness was that each new methodology claimed it was possible to extract 10 pounds of output from a 5-pound bag of requirements, assuming that anyone had remembered to fill the bag in the first place, and that the contents hadn't shifted or expired since packing. And when the methodology du jour eventually lost its luster, it invariably triggered a search for the next one. The secret to unlimited productivity had to be somewhere, right?

Happy Development Teams & Practices

"Happy families are all alike, every unhappy family is unhappy each in its own way."

—Leo Tolstoy (1828–1910)

Even those not intimately familiar with Russian literature have probably heard that opening line from Anna Karenina. One can replace "families" with "development teams" and the quote still applies. For example, imagine a development team with the following attributes:

  • Stakeholders identified
  • Priorities understood and documented
  • Effective development team
  • Effective development tooling
  • Effective end-to-end testing patterns
  • High code quality
  • Effective code deployment patterns
  • High deployment velocity
  • Effective operational tooling
  • Happy users

It's a good list, right? So, do it. Seriously, just do that.

The hardest part of development is prioritization, and that is because humans are irrational and unpredictable. No development methodology will solve that. It's even more complex with multiple stakeholders, and especially with multiple groups of stakeholders. The only way through is documenting needs with a forced-ranked priority, and continually reminding people what those priorities are and what their corresponding statuses are. Tools are helpful, but it's more about the communication and understanding around the tooling. The fanciest tool doesn't help if nobody believes what it's telling them.

Someone will inevitably cry, "but we need to be flexible!" And that's fine. It's OK to change priorities from time to time. But the relative priorities have to change too, and likewise the expectations that were previously set. That is the hard part. It's not fair to expect to keep both the previous commitments and the revised ones; that's crazy talk. And again, that's a human problem. Don't forget that arbitrarily pulling the emergency brake on a software release train in the middle of a trip can land a development effort in a frozen, desolate location. Metaphorically speaking, of course.

Next, design software for the immediate prioritized use cases and the targeted timelines. There are a lot of important qualifiers in that sentence, such as "immediate," "prioritized," and "targeted timelines." Software designs always have context because software engineering is the art of tradeoffs. There are always tradeoffs. A tank isn't a sedan, which isn't a racecar, which isn't a truck, but they are all valid and effective forms of vehicles. We know from the classics such as Design Patterns (Gamma et al.) and Refactoring (Fowler) that clean software design, configurability, and extensibility are important, but we also frequently forget that utilizing every single design pattern isn't the point. Guessing at use cases, both functional and technical, almost always makes the outcome worse. Remember Knuth's famous quote: "premature optimization is the root of all evil." He figured that out a long time ago, and the software world has only gotten more complicated since then. Temptations and distractions are everywhere.
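To make that concrete, here is a small, hypothetical Python sketch. The discount example and all of the names in it are invented for illustration, not taken from any real codebase; it simply contrasts a speculatively "extensible" design with the plain version that today's prioritized use case actually needs.

```python
# Speculative design: abstractions added for use cases nobody has asked for yet.
from abc import ABC, abstractmethod


class DiscountStrategy(ABC):
    """Abstract strategy, 'just in case' other discount types appear someday."""

    @abstractmethod
    def apply(self, amount: float) -> float: ...


class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, amount: float) -> float:
        return amount * (1 - self.percent / 100)


class DiscountStrategyFactory:
    """A factory for the single strategy that actually exists."""

    @staticmethod
    def create(kind: str, **kwargs) -> DiscountStrategy:
        if kind == "percentage":
            return PercentageDiscount(**kwargs)
        raise ValueError(f"unknown discount kind: {kind}")


# The prioritized use case today: a 10% discount. This is all that is needed.
def discounted_total(amount: float) -> float:
    return amount * 0.9


if __name__ == "__main__":
    # Both paths compute the same number; one of them took three classes to get there.
    print(DiscountStrategyFactory.create("percentage", percent=10).apply(100.0))
    print(discounted_total(100.0))
```

If a second discount type actually lands on the prioritized list, that is the moment to introduce the abstraction, and not before.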

The Pentagon Wars is an underrated 1998 dramedy depicting a fictionalized, satirized development of the Bradley Fighting Vehicle, a vehicle that was trying to be everything to everybody. It is loosely based on the very real and serious book of the same name by James Burton about weapons design and procurement. It's completely reasonable and expected to make design tradeoffs within what I would call "targeted use cases." Just be aware of when the tradeoffs become so big that they compromise the integrity and efficacy of those use cases. Engineering is both science and art, and no development methodology can solve that.

I was once at a company where somebody was a huge proponent of the Inversion of Control pattern. This person was so in love with the pattern that the resulting code was difficult to understand and impossible to test. In fact, the unit tests were all configured to mock implementations to the point where nothing but fake code was being tested. This seems too ridiculous to be true, but it actually happened, and once the actual code was deployed to real customers, it was a disaster. I got pulled in to clean this up because each release required scads of emergency hotfixes. Don't become so enamored with a technical pattern or framework that it obscures your view of the real problems that were supposed to be addressed in the first place.
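As an illustration of what that anecdote looks like in practice, here is a minimal, hypothetical Python sketch (the service, its collaborators, and the test are invented, not the actual code in question): every dependency is injected and then mocked, so the test passes while exercising essentially no real logic.

```python
import unittest
from unittest.mock import Mock


class OrderService:
    """Collaborators are injected ('inverted') rather than constructed internally."""

    def __init__(self, repository, payment_gateway):
        self.repository = repository
        self.payment_gateway = payment_gateway

    def place_order(self, order):
        self.payment_gateway.charge(order.total)
        return self.repository.save(order)


class OverMockedOrderServiceTest(unittest.TestCase):
    def test_place_order(self):
        # Every dependency, and even the input, is a mock.
        repository = Mock()
        payment_gateway = Mock()
        order = Mock(total=42)

        service = OrderService(repository, payment_gateway)
        result = service.place_order(order)

        # These assertions only confirm that the mocks were called;
        # no pricing, validation, or persistence logic has actually been tested.
        payment_gateway.charge.assert_called_once_with(42)
        self.assertIs(result, repository.save.return_value)


if __name__ == "__main__":
    unittest.main()
```

The cure isn't to abandon dependency injection; it's to make sure at least some tests exercise real implementations of the logic that actually matters.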

Then automate, automate, automate. Everything from the lowly build process, to unit tests, to environment setup, to deployment, to all forms of monitoring and metric collection. This is as obvious as the mantra that the code should always work and be in a deployable state. Obvious, but not necessarily easy, because it requires discipline to dedicate resources to automation; again, a human factor. The good news is that there have been many improvements and advances in frameworks and techniques in this area in the past few decades that can help. Pick up the Google SRE books for starters.
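As one minimal sketch of what "script everything" can look like, here is a hypothetical Python pipeline driver. The step names and commands are placeholders, to be swapped for whatever build, test, provisioning, and deployment tooling a project actually uses; the point is that every step is runnable without heroics or tribal memory.

```python
#!/usr/bin/env python3
"""Minimal pipeline driver: every step is scripted and stops the pipeline on failure."""
import subprocess
import sys

# Hypothetical commands; substitute the project's real build/test/deploy tooling.
STEPS = [
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
    ("environment setup", ["./scripts/provision_env.sh", "staging"]),
    ("deploy", ["./scripts/deploy.sh", "staging"]),
    ("smoke checks", ["./scripts/smoke_check.sh", "staging"]),
]


def main() -> int:
    for name, cmd in STEPS:
        print(f"==> {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"!! {name} failed; stopping the pipeline")
            return result.returncode
    print("Pipeline complete: the build is in a deployable (and deployed) state.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```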

Figure out the most frequent release cycle that works with the user-base. This can vary by industry, and often technical aspects aren't the gating factor. Be able to release as quickly and easily as possible, with high quality.

Lastly, iterate. Keep going. Delivery velocity is life. The worst thing that can happen to any software product is a feast and famine cycle, and no development methodology can solve that. Maintaining momentum is arguably the hardest aspect, as continuous forward progress is utterly dependent on human factors such as portfolio priorities, resources, budgets, and other things such as how well your software is doing in the market. The difficulties of managing humans and money are as old as time. But it's not hopeless, just hard. So get on it.

I heard this at the Strata Data Conference years ago: "if your software is successful, you are never done." Sage advice. Conversely, erratic delivery is a near-certain way to make sure a software solution won't be successful.

I've seen too many people effectively throw themselves in front of metaphorical trains for the love of a particular development methodology at the cost of overall project success. But a development methodology is just a "how" – it's not the goal. While no software success can ever be guaranteed, the best chances for development team happiness are to focus on common human factors such as stakeholder alignment, priorities, communication, and the discipline to keep driving forward. And the humility to accept that mistakes will inevitably be made and addressed in a future release, as soon as humanly possible.

As Tolstoy might say, Ни пу́ха, ни пера́ (roughly, "good luck"), comrade.

Postscript

Apache Hive was actually pretty good at what it did. SQL is just so darn useful, and to be able to use it over extremely large datasets was a breakthrough of its time. Hive also kept improving, but as I said earlier, most folks unfortunately only seem to want to talk about Hive 0.10, which was the version that most people were using when Hadoop went mainstream.

 

Doug Meil is a software architect at Ontada. He also founded the Cleveland Big Data Meetup in 2010.
