Why would anyone undertake a multi-year software project today? Or upgrade an in-house-hosted legacy application? Or build—or use—anything that behaved like a monolithic software application? Big software project failure data is legendary.11 There are myriad horror stories with titles like "9 VERY Scary ERP and ERP System Implementation Statistics."12 The Standish Group actually labels their annual technology project analyses as "Chaos Reports."14 They reported that 66% of all technology projects completely or partially failed in 2015.
So assume that management is reasonably well informed: it knows that big software projects are likely to fail. Yet in the 1990s and early 21st century there were still companies willing to try their hand at big software and prove they were unlike the others who had failed so spectacularly. In spite of this unjustifiable optimism, many of these companies also failed. Even the U.S. Defense Department failed spectacularly.6
So that no one thinks that failure only plagues ERP applications, the data suggests all kinds of big software projects fail.13 Big customer relationship management (CRM) projects fail. Big database management systems (DBMS) projects fail. Big infrastructure projects fail. Big communications projects fail. In fact, most software projects designed to address enterprisewide problems with single, integrated platforms fail. Failure crosses vertical and functional areas as well, including retail, government, financial services, and even science.10
The high rate of failure helped kill big software. But there were other causes of death.
Causes of Death
Big software is dead. There were lots of assassins. Some were all business, some were hiding in the governance trenches, some were up in the clouds, and some were architectural. Let’s look at the assassins in a little detail.
One of the business assassins was control. When a company embarks on a multiyear journey with a big software vendor it cedes significant—if not total—control to that vendor and the business processes embedded in the code. For example, ERP modules were originally designed to eliminate process chaos. Remember when there were no intra- or intercompany (or industry) standardized processes? Remember when software applications never integrated? Remember when 1970s and 1980s "legacy" software was a barrier to scalability, not to mention how expensive it was to customize and maintain? ERP vendors came to the rescue by controlling the mess that homegrown applications created. But one of the side effects was the loss of process control to the vendors who defined supply chain management, financial reporting, and other business processes for the companies (and industries) they de facto managed.
While tightly bundled standardized software made some sense back in the day, it makes little or no sense in the era of digital transformation, where disruptive business processes and business models are seen as necessary paths to competitiveness: disruption and standardized big software are not birds of a feather. Of course, in 1995 this would have seemed heretical. Companies were desperate to end the chaos of uncoordinated business processes and rules. Standardized processes incarnated in software were the vitamin pills everyone needed. But in retrospect it is not clear that everyone understood exactly what they were consuming. When business models moved slowly in the 20th century, slow-and-steady worked, but when whole new "disruptive" business models began to appear in the 21st century (fueled by new and more powerful digital technologies), slow-and-steady became a clear threat to competitiveness.
Governance also killed big software. Big software projects that are "standardized"—that is, required—by corporate technology groups also usually failed, not because they did not work as advertised (which they often did not) but because of the governance that forced a one-size-fits-all approach to technology use. Huge off-the-shelf software packages—like ERP, CRM and DBMS packages—or even large custom in-house developed applications mandated by corporate IT—usually failed under the weight of their own governance which, to make matters worse, often resulted in increased "Shadow IT" spending.1,2
The cloud also killed big software. Years ago, companies would implement huge software systems in their own data centers. Armies of programmers would work with armies and navies of (happy) consultants to bring big systems to life. Some years later the software might "launch" with a "switch" that—according to the data—usually failed (at least the first time). So the armies and navies would go back to work to get it right (until they got it right). Implementation cost was also a killer. $10M often turned into $50M, which often turned into $250M and sometimes into billions: the Standish Group reports that big technology projects run anywhere from 40%–50% over budget—and deliver less than 50%–60% of the promised ROI.14 Cloud delivery changed all that: it is now possible to access an enterprise application directly from the cloud from any number of providers.
While implementation pain was avoided through cloud delivery, process control was still ceded to the big software vendors who owned the business processes embedded in the cloud-delivered software (while some of the control went to the cloud provider who deployed the systems on behalf of their clients). While it was almost always cheaper (by total cost of ownership [TCO] metrics) to move from on-premise big software applications to cloud-hosted applications, companies were still denied access to the transformational and disruptive playing fields.4,18,a
Software architectures must be blank canvases capable of yielding tiny pictures or large masterpieces.
Finally, some of the assassins were (sometimes unknowingly) architects. The overwhelming technical complexity and inflexibility of huge, standardized software systems also explain the death of big software. Enormous whole-company projects were often beyond the capabilities of even the most experienced project and program managers, especially when there was never 100% consensus about the need for a total enterprise project in the first place. High-level functional and non-functional requirements were nearly impossible to comprehensively define and validate; detailed requirements were even more elusive.
But perhaps the real architectural assassin was monolithic software design. Many of the big software architectures of the 20th century were conceived as integrated functional wholes versus decoupled services. Over time, monolithic architectures became impossible to cost-effectively modify or maintain and, much more importantly, became obstacles to business process change. The trend toward microservice-based architectures represents an exciting replacement for monolithic architectures (see below).
The Rise of Small, Cloudy Software
There are also small, cloud-based software alternatives that scale, integrate, and share process control through customization tools deliberately built into smaller, more manageable platforms. Companies can find many incredibly inexpensive options from vendors like Zoho and Zendesk, among many others.b
While "small" software packages also embed business rules and processes, they are built in smaller, more easily integrated pieces, which gives much more flexibility to clients who want to mix and match existing and new functionality.
The major driver of software change is continuous digital transformation. Big standardized software systems conceived in the 20th century were not designed to adapt or self-destruct the moment a company or industry pivots.
Another way of thinking about all this is the relationship between micro and macro (or monolithic) services. Big software begins with macroservices in monolithic architectures.3,5 Or we could just think about all this as small versus large programming.8
Architectural assassins argue that monolithic architectures are stiff, inflexible, and unyielding. They are also difficult and expensive to maintain primarily because functionality is so interconnected and interdependent. They also argue that monolithic architectures should be replaced by microservice-based architectures.16,17 According to Annenko,3 "the concept is rather easy, it’s about building an application consisting of many small services that can be independently deployed and maintained, don’t have any dependencies but rather communicate with each other through lightweight mechanisms and lack a centralized infrastructure. It is even possible to write these small (micro-) services each in its own language." Why microservice-based architectures? Annenko continues: "their benefits are undoubted, too: they easily allow for continuous deployment, and certain parts of an application can be changed, debugged or even replaced quickly and without affecting the rest. With microservices, you absolutely cannot break an application: if something goes wrong, it will go wrong only within its own microspace, while the rest of the application will continue working as before."
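The quoted description can be sketched concretely. Below is a minimal, hypothetical illustration using only the Python standard library: two tiny services (an invented "inventory" service and an invented "pricing" service), each listening on its own port, that a client composes over HTTP rather than through in-process calls inside one monolith. The service names and payloads are made up for illustration; real microservices would add service discovery, resilience, and deployment tooling.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_service(payload):
    """Return a handler class for a tiny service that answers GET with JSON."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep request logging quiet
            pass
    return Handler

def start(handler_cls):
    """Run a service on its own ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), handler_cls)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Two independently deployable services: either one could be redeployed,
# debugged, or rewritten in another language without touching the other.
inventory = start(make_service({"sku": "A-1", "in_stock": 12}))
pricing = start(make_service({"sku": "A-1", "price_usd": 9.99}))

def fetch(server):
    """Call a service over its lightweight (HTTP + JSON) interface."""
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
        return json.load(resp)

# A client composes the two services over the network rather than
# through in-process calls inside one monolithic application.
stock, price = fetch(inventory), fetch(pricing)
print(stock["in_stock"], price["price_usd"])

inventory.shutdown()
pricing.shutdown()
```

If the hypothetical pricing service crashes or is being replaced, the inventory service keeps answering; the failure stays within its own "microspace," which is exactly the containment property described above.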
Was there any doubt that these architectural assassins would hit their target?
All of that said, service-oriented architecture (SOA) dreams continue to develop.9 The big data world, for example, has already defined an open source architecture that is fast, flexible, cost-effective, and always changing.15 Tools like Spark and Flink, among other open source projects, enable low-latency and real-time processing. The details are specified in designs like the Lambda and Kappa architectures and libraries like Summingbird. MapReduce moved us toward parallel processing, and file systems have evolved from the Google File System to the Hadoop Distributed File System. Building on Hadoop, Spark and Flink provide real-time runtime environments. Even data streaming has been addressed with tools like Storm and Spark Streaming. But while SOA complements microservice-based architecture, they are different.7 SOA is not the threat to monolithic big software that microservice-based architecture is; in fact, SOA often behaves like a big software vitamin supplement. Said differently, SOA is not a replacement for monolithic big software and is therefore not a big software assassin.c But candidly, SOA-based integration and interoperability have proved elusive in spite of continued promises and a growing library of open source application programming interfaces (APIs) and Web services. SOA is still more of a dream than an answer for continuous digital transformation. It might, in fact, be the wrong answer.
In addition, cloud delivery is becoming increasingly flexible. Container technology from companies like Docker gives companies the freedom to pivot away from one cloud provider to another for any number of reasons. Containers enable clients to retain control over their applications just as emerging application architectures enable them to retain control over their software-enabled business processes.19 This means that dependencies are shrinking. So the combination of microservice-based architectures and container technology may be the answer to monolithic applications.
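As a rough sketch of what this portability looks like in practice (assuming standard Docker tooling; `service.py` here is an invented stand-in for any small service), the entire packaging contract fits in a few lines, and the resulting image runs unchanged on any provider that supports standard OCI containers:

```dockerfile
# Hypothetical container recipe; "service.py" is a made-up stand-in
# for any small service. Once built, the image runs the same way on
# any OCI-compatible cloud, which is what loosens the dependency on
# a single provider.
FROM python:3.12-slim
WORKDIR /app
COPY service.py .
EXPOSE 8000
CMD ["python", "service.py"]
```

A client builds the image once (`docker build -t service .`) and can then run that identical artifact on any cloud, which is precisely how containers shrink the dependency on any one provider.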
The entire world of traditional big software design, development, deployment, and support is dead.
Will the big software vendors respond? Yes.
They will milk the current big enterprise revenue streams for as long as they can and then systematically make their offerings look more and more like those of their small software competitors. Many of them, like SAP and Oracle, have already by necessity begun this process through small-business and mid-market cloud offerings that are much cheaper than the gold-plated goliaths they sold for years. They began to cannibalize their own products because they too know that the days of big software are numbered. But they have not fundamentally rearchitected their applications. They have shrunk them.
The Death and Resurrection of Software
The entire world of big software design, development, deployment and support is dead. Customers know it, big software vendors know it and next generation software architects know it. The implications are far-reaching and likely permanent. Business requirements, governance, cloud delivery and architecture are the assassins of old "big" software and the liberators of new "small" software. In 20 years very few of us will recognize the software architectures of the 20th century or how software in the cloud enables ever-changing business requirements.