Centralization Momentum: the Pendulum Swings Back Again

Which organizations have the most to gain from recentralizing (and decentralizing) their IT hardware architectures?
  1. Introduction
  2. Drivers of Change
  3. Why Recentralization?
  4. Implications
  5. References
  6. Authors
  7. Figures

Discussing whether IT architecture should be centralized or decentralized is as old as the technology itself. We propose that, notwithstanding the various perspectives brought to the debate, what has been missing thus far is recognition of the seesaw between centralization and decentralization. The reasons for this important phenomenon relate not only to IT-centric issues but also to current events pervading other areas of the organization. One key issue is how management has changed its IT focus from a “must have/keeping up with the Joneses” investment to a value-based “where’s the beef?” approach pragmatically emphasizing the relevance of data and its uses. A second consideration is the business requirement for a reliable, available, fault-tolerant, backed-up, secure IT environment, even in the face of catastrophic events, including the 9/11 terrorist attacks. Executives cannot afford not to know why the renewed focus on IT architecture centralization has happened and how it might affect their organizations.

In 1983, an authoritative review [3] of the subject suggested there were three aspects of centralization vs. decentralization, all ultimately related to control: the physical location of the IS facilities; function (an activity or responsibility within the structure of the organization); and decision making. Subsequent authors concurred that control was the key determining issue in centralization vs. decentralization decisions. Here, we take a fairly narrow view of IT architecture, concerned only with the actual hardware architecture, that is, the topology employed to connect the various hardware components in a given organization.

Drivers of Change

Many factors affecting IT architecture decisions have been present since IT was first used in organizations. However, their relative importance vis-à-vis other more time-relevant concerns has changed, producing the double seesaw trend outlined in the figure.

One important observation is that there has been more than one reversal of trends, something not generally acknowledged in the literature. Here, we focus on events that have triggered trends or, alternatively, triggered trend reversals (inflection points). Moreover, we discuss the relative importance of ever-present issues vs. time-localized events, pointing out their roles in the two large reversals. We define an inflection point as the moment when either a factor important enough to start a trend kicks in or a trend-reversing idea is published in an influential publication that reaches managers.

In the early 1960s, hardware architectures were mainly centralized. Cost was the determining factor; the high cost of both telecommunications and hardware, coupled with relatively limited computing capacity, made centralization imperative. The state of the art did not allow for alternative architectures. Architecture was constrained to large mainframe systems running batch processes. Due to sheer economics, such systems were well protected and provided only limited accessibility.

The need to improve responsiveness and flexibility, coupled with technological advances, enabled the appearance of increasingly less costly and more sophisticated decentralized equipment. By the mid-1970s, some amount of decentralization was commonplace, typically with dumb terminals connected via leased or dial-up lines. This was a giant leap from the days when one had to physically walk to the IT equipment to run a job or program. Members of a department, team, or local organization shared dumb terminals and sent jobs to the mainframe electronically rather than through physical punch-card input.

The introduction of the microcomputer, epitomized by the 1981 IBM PC, created the end-user phenomenon. Now, processing could be handled both locally and in a distributed fashion. By the mid-1980s, software had evolved to a point where it was accessible to nonprogrammers, and end users were expected to be able to develop and run a set of relatively simple applications.

Servers were the next technological and cost breakthrough. As a direct result, the IT function began to be decentralized, with departments managing their own IT hardware and software. Due to the lack of enterprisewide standards and quality control, this trend generated a profusion of different, not easily connected hardware platforms, as well as nonintegrated and disparate software applications. The variety of hardware and software led in turn to data redundancy and consequent data inconsistency; the A-B line in the figure characterizes this first era.

The surging need for data integration and simplification of the IT architecture landscape created the first inflection point (B) in the previously consistent trend toward decentralization. The year was 1987, and the event symbolizing this date was the publication of [4], which discussed the virtues of centralization, or “recentralization.” The authors of [4] advocated centralizing management of various functions: telecommunications (“to ensure cost effectiveness and business coordination”); standard hardware and operating software architecture methodologies (“systems compatibility across business units”); information policy (“manage corporate ownership and sharing of data”); risk management and shared services (for economies of scale); shared utilities; shared human resources (including career paths for IT staff); and the centralization of hardware and software.

Another factor in the move toward centralization might have been the desire to embark on a reengineering effort, which is easier when systems are compatible—or when compatibility is created during “obliteration” [2]. This factor began to play a role in the centralization trend in the early 1990s, almost at the end of the cycle (BC). Other reasons cited for centralization were the high cost of separate data processing centers (and duplicate software licensing), the changing demographics of the IS profession, and, most important, the emphasis on enterprisewide IS for integrating business functions and supporting new business opportunities. Such enterprisewide systems enabled a central group with wide-ranging knowledge across an entire organization to champion integration much more effectively [10]. At least one author advocated exceptions for certain functions (such as handling application development in a decentralized fashion), producing a hybrid structure. This trend toward centralization was also recognized in the popular media [11]; reasons included restoring order from chaos, budget pressure, the growth of networks, and the need for greater security.

By 1992, the beginning of the next shift toward decentralization (point C) was under way, driven not only by a renewed perception of the increased service quality provided by decentralization but also by limitations in communication among the multitude of operating systems available at the time and by an early push toward Web solutions. Though limited in the mid-1990s, the dot-com effort was in full swing by 1997. Another powerful factor influencing the trend toward decentralization was large organizations (such as Siemens and General Motors) beginning to use e-commerce with their upstream suppliers. At the same time, business-to-consumer companies (such as Amazon.com) were dispersing their servers worldwide to improve response time, often through content-delivery solutions (such as those from Akamai). As late as 1997, [5] argued for decentralization of decision making, centering on decision-making structures and how they affect three factors in the centralization decision: availability of information; trust; and motivation. Although [5] concluded that future organizational control was likely to be even more decentralized, it stopped short of saying how the underlying physical IT architecture supporting this trend would be deployed.

By the late 1990s, the next inflection point (D) heading toward the recentralization of hardware became apparent. Note that the trend toward recentralization of hardware does not invalidate the Internet push toward IT architecture decentralization, making it clear that different factors may concurrently influence the centralization-decentralization decision in opposite directions.

Why Recentralization?

The recentralization trend that started in the late 1990s involved several motivations beyond the traditional culprits (such as cost). The key was the need for instantaneous data access across a multitude of geographically distributed decision-making environments, along with the need for reliability and security in such exchanges.

The cost of computing capacity and telecommunications has decreased immensely over the last 30 years, making cost a much less important factor than it was previously. One may be tempted to argue that cost is no longer a variable in the equation. Yet unless IT purchasing and operating costs become negligible, the economics of new IT installations will continue to matter. Moreover, each decline in costs is followed by yet newer needs.

Corporations are now increasingly likely to buy less expensive, scalable solutions with an open architecture, especially since the collapse of the dot-com bubble and the spiraling of IT expenditures. The Economist [6] highlighted a 16% increase in IT spending in 2000, followed by a 6% decline in 2001. In summary, cost is still part of the reason for recentralization, but in a different way from its role in the earlier phases.

Another difference between points B and D is the radically different managerial perspective at the two points. Although both signal an increase in hardware centralization, in the first case the variety of both software and hardware was the key motivator; in the second, the problem was more one of reliable data access and security in a distributed environment. In both cases cost was relevant, but in subtly different ways. Cost considerations in the first case involved directly cutting (external) costs (such as enabling reengineering and likely decreasing head count), whereas in the second the goal was improving data access and security, thereby improving performance and, to some extent, lowering (internal) costs as well. For instance, it costs much more to create backup and alternative (crisis) centers for each piece of a distributed environment than to create a set of crisis centers for one centralized IT architecture. Moreover, the ability to quickly and reliably access data on geographically distant servers in a decentralized environment may be compromised. As the number of servers increases, so does the likelihood that at least one will not be working or that connection difficulties will arise. One consequence is that preparing financial or operational statements may not be possible within the timeframe intended by the significant IT investments made in the recent past.
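
The server-count argument can be made concrete with a small back-of-the-envelope calculation; the following sketch is ours, not from the original analysis, and the 99% per-server availability figure is purely illustrative.

    # Sketch: chance that every server in a distributed architecture is
    # reachable at once, assuming independent failures and an illustrative
    # 99% availability per server (both assumptions are ours).
    def all_servers_up(p_single: float, n_servers: int) -> float:
        """Probability that all n_servers are simultaneously available."""
        return p_single ** n_servers

    for n in (1, 5, 20, 100):
        print(f"{n:>3} servers: {all_servers_up(0.99, n):.1%} chance a "
              "consolidated report can be built on the first attempt")

At 99% availability per server, 20 servers yield only about an 82% chance that all respond at once, and 100 servers only about 37%, which is the intuition behind the reporting-timeframe concern.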


A problem in the 1980s may have been how to ensure that email systems could communicate in other than purely text format. Today, however, there is no question about the ability to connect, though such features as address books may not communicate seamlessly between, say, Microsoft Outlook and Lotus Notes. The question in this latest push for recentralization is therefore more one of fine-tuning.

In addition, not only is standards enforcement more difficult in the decentralized server scenario, but the security of the data is affected. It is much easier to enforce strict security access controls when there are fewer doors or when the entry points are centralized. Centralization, according to [1], cuts down “on the complexity of environments with multiple servers and numerous network change requests.”

Although a decentralized IT architecture has been promoted as a way to make an IT system more fault tolerant in case of the elimination of a node (for example, in a devastating terrorist attack), we would argue exactly the opposite. The elimination of one node could cripple or compromise the part of the operations or content that was the responsibility of that node, unless the node was completely replicated elsewhere. The cost of replicating every conceivable node limits how far such replication can go. Therefore, one centralized facility with multiple backup and mirroring sites is the IT architecture most likely not only to survive disasters but to do so at reasonable cost. This is compounded by the inherent difficulty of managing multiple software applications performing backups in different locations. Furthermore, centralization tends to reduce data redundancy and inconsistent updates.
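
The same back-of-the-envelope style can illustrate the cost claim; all unit costs and counts below are hypothetical assumptions of ours, chosen only to show the shape of the comparison.

    # Hypothetical comparison: fully replicating each of N decentralized
    # nodes vs. mirroring a single centralized facility at a few crisis
    # sites. All figures are illustrative, not drawn from the article.
    def decentralized_cost(n_nodes: int, node_cost: float,
                           replicas_per_node: int) -> float:
        # Every node needs its own full replicas elsewhere.
        return n_nodes * node_cost * (1 + replicas_per_node)

    def centralized_cost(facility_cost: float, mirror_sites: int) -> float:
        # One primary facility plus a fixed set of mirror/backup sites.
        return facility_cost * (1 + mirror_sites)

    # 20 nodes at 1 unit each with one replica apiece, vs. one facility
    # costing 10 units mirrored at two crisis sites.
    print("decentralized:", decentralized_cost(20, 1.0, 1))  # 40.0 units
    print("centralized:  ", centralized_cost(10.0, 2))       # 30.0 units

Under these assumed numbers, the decentralized design pays for replicas node by node, while the centralized design pays the mirroring premium only once.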

Other reasons go beyond technical issues. For example, Valeo, a French car-parts manufacturer with a complex and traditionally highly decentralized structure of 180 production sites around the world and more than 100 separate operating divisions, found that, due to separate profit-and-loss accounts, divisional managers had little incentive to work together to optimize benefits globally. The company’s solution was to use the Internet to consolidate its divisions into a smaller number of broad groups that are more interconnected and can benefit from one another, a strong move toward centralization. Among other solutions, the company also created Web catalogues, improving purchasing across units by creating economies of scale, as well as by tackling quality issues [7].

Cemex, a large Mexican cement manufacturer, found itself competing successfully against established worldwide manufacturers. One key reason was production automation through varied and skillful IT applications. In addition to reducing head count, the resulting data integration improved quality control and sales management. Although actual IT hardware was still needed in a distributed fashion, data was transmitted to a centralized location for analysis and real-time management. One way Cemex has grown is through mergers and acquisitions. Interestingly, immediately after one particular acquisition, the company pushed to harmonize the acquired firm’s technical and management systems with its own. Plans now call for using just one personalized portal, with operations centralized through the Internet [8].

Another example is Siemens, a conglomerate with more than two dozen business units operating in 190 countries and employing 470,000 people. Until recently, its units were unlikely to exchange knowledge, due to the company’s highly decentralized management style and IT systems. However, the CEO created a plan with four tenets, three of which point toward strong IT centralization, even as they include a number of decentralized pieces: knowledge management, online purchasing, and customer relationships. From more than 20 countries, a customer can click on “buy from Siemens” and place an order. The fourth tenet is to change the company’s value chain, including job applications. To achieve these goals, the fragmented array of systems needs to be unified, creating a standardized corporate approach [9].

Finally, it is much easier to engage in any facet of outsourcing when starting from a centralized IT architecture; hence, such an architecture gives management a wider range of alternatives.

Implications

What are the consequences of our argument? The answer varies by industry, type of organization (small or large), linkage type (business to business or business to consumer), bandwidth, requirements for real-time connections, and amount of data to be exchanged when creating reports or generating milestones that require decisions. Therefore, our conclusions might have major implications for an organization at one extreme while also serving as a reminder for an organization at the other extreme.

An organization must consider two issues to understand its position vis-à-vis recentralizing IT hardware architecture. First, what is its decision-making structure? Although a detailed discussion of this topic is beyond our scope here, its implications are critical to the development of the best possible IT hardware architecture solutions. In other words, what tools do key decision makers need, especially those whose responsibilities span the globe? And how much centralization of their IT hardware architecture would best support their needs?

The second issue is the status of key business drivers for IT architecture centralization. The decision regarding the degree of centralization probably varies by organization size and links with other organizations. The more an organization (especially a large one) is connected, dispersed (particularly if parts are in areas with less-capable infrastructure or are less politically stable), dependent on real-time information to make decisions, interested in aggregating data in real time, or vulnerable to a catastrophe in part of its network, the greater its need for IT hardware centralization.
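
Purely as an illustration of how these drivers might be weighed together (the factor names and weights below are our own assumptions, not a validated instrument), the assessment could be sketched as a simple weighted checklist:

    # Hypothetical checklist: rate each driver from 0 (absent) to 5
    # (dominant). Factors and weights are illustrative assumptions only.
    DRIVERS = {
        "interconnection with other organizations": 1.0,
        "geographic dispersion / weak local infrastructure": 1.0,
        "dependence on real-time information for decisions": 1.5,
        "need to aggregate data in real time": 1.5,
        "vulnerability to losing part of the network": 2.0,
    }

    def centralization_score(ratings):
        """Weighted sum; a higher score suggests a stronger case for
        recentralizing IT hardware."""
        return sum(w * ratings.get(name, 0) for name, w in DRIVERS.items())

    example = {name: 3 for name in DRIVERS}  # a middling organization
    print(f"score: {centralization_score(example):.1f} "
          f"of {5 * sum(DRIVERS.values()):.0f} possible")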

Figures

Figure. Trends in the corporate IT centralization vs. decentralization debate.

References

    1. Cappuccio, D. Gartner view: Think local, control global. CIO Mag. (Nov. 1, 1995).

    2. Hammer, M. and Champy, J. Reengineering the Corporation: A Manifesto for Business Revolution. Harper Business, New York, 1993.

    3. King, J. Centralized vs. decentralized computing: Organizational considerations and management options. ACM Comput. Surv. 15, 4 (1983), 319–349.

    4. LaBelle, A. and Nyce, H. Whither the IT organization? Sloan Manage. Rev. 28, 4 (1987), 75–79.

    5. Malone, T. Is empowerment just a fad? Sloan Manage. Rev. 38, 2 (1997).

    6. The Economist. IT grows up (Aug. 22, 2002).

    7. The Economist. Less than the sum of its parts (June 23, 2001).

    8. The Economist. The Cemex way (June 16, 2001).

    9. The Economist. Electronic glue (May 31, 2001).

    10. Von Simson, E. The ‘centrally decentralized’ IS organization. Harvard Bus. Rev. 68, 4 (July-Aug. 1990), 158–162.

    11. Von Simson, E. The recentralization of IT. Computerworld 29, 51 (1995), 1–5.
