
A Systematic Approach in Managing Post-Deployment System Changes

In current IT practices, the task of managing post-deployment system changes often falls in no-man's land.
  1. Introduction
  2. Managing System Enhancements
  3. Managing Volatility Associated With External Changes
  4. Lessons Learned
  5. Conclusion
  6. References
  7. Authors
  8. Figures
  9. Tables

An information system (IS) will change during both system development and use. But while IS change is systematically managed during system development, it is handled haphazardly at best by IT management during system use, largely because of a static, unproblematic view of IT [4]. Chaotic management of post-deployment system changes can lead to real work disruptions and user dissatisfaction that increase the risk of IT project failure. Through an in-depth examination of one such case, we found that IT is a dynamic artifact that evolves even during production use, as the result of mutual adaptation between IT and its users’ work practices. Our findings suggest that an organization should adopt a systematic, multifaceted approach to managing post-deployment system changes.

Today’s information systems are increasingly deployed to support and transform entire business processes. The transformation role played by IT implies that its deployment will enable users to work differently. Because it is difficult to foresee how users will work differently with a new IS, mismatches between IT design visions and their real-world appropriation by users are unavoidable. IT deployments will likely be followed by mutual adaptations between the systems and the business processes they support [3]. The consequences of post-deployment system changes necessitated by such mutual adaptations are far more serious than those induced by changes during system development. System changes during production use affect more than system developers and IT management. They will likely cause real work disruption, user retraining, and the reevaluation of policies and procedures. For these reasons, it is not sensible to manage such changes haphazardly.

Compared to the intense interest in system development that has sprouted more than 1,000 system development methods [2], relatively little attention has been paid to the management of post-deployment system change. A few researchers who have contemplated the issue of post-deployment system changes suggested flexible and adaptive system design as the solution [1, 5], but no one has systematically investigated this problem from a project management perspective. This perspective is important because even changes made to a flexible and adaptive system must be carefully managed to minimize disruptions to users’ work. To investigate and understand the challenges associated with managing system changes in use, we conducted a longitudinal field study to track the project management practices Bank X employed in managing post-deployment system changes to a commercial banking workflow application. Based on our findings, we articulate a systematic approach to managing post-deployment system changes on multiple fronts.

Bank X is a large West Coast commercial bank that has more than $30 billion in assets and about 3,000 commercial lending clients. The bankers involved in its commercial lending business included about 90 line representatives and 70 staff analysts. The line representatives were stationed in 12 regional sales offices to generate loan prospects and maintain client relationships. The staff analysts worked at three central sites to provide credit analysis support and loan approval. Prior to the use of the new workflow application, the collaboration among these geographically dispersed parties relied on an antiquated document management infrastructure consisting of filing cabinets, interoffice mail, fax, express mail, and file servers. As a result, document turnaround time was long, and management perceived this delay as a serious handicap in a highly competitive market.

Under the sponsorship of the vice chairman in charge of commercial lending, Bank X chartered a project team to develop a new workflow application for its commercial lending business process shown in Figure 1. The project team consisted of an IT manager, a business process manager, and several system analysts. One of the authors served as a special member on the team to be in charge of post-deployment system support. The project team chose Lotus Notes as the application development platform and contracted out system development to a Lotus business partner. The project team secured extensive user participation during the application development process by conducting joint application design (JAD) sessions and adopting a prototyping method. The first version of the application was put through a three-month pilot test by one small commercial banking team before the bankwide deployment.

At the center of the workflow application was a Notes database that contained 27 standard types of commercial lending documents and numerous other document types embedded in them as OLE objects. These electronic documents were managed by a complex scheme that dynamically determined what could be done to a particular document based on the combination of user access privileges, the type of the document, the status of the document, and type of loan packages that the document belonged to. The new workflow application allowed users to create, store, and route major types of credit documents online. Any user, if authorized, could open, print, and edit any document at any time in any of the bank’s offices. The application also enabled co-authoring and electronically “stapled” together documents of a loan package.
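The combinatorial nature of this document management scheme can be illustrated with a brief sketch. The names, fields, and rules below are hypothetical, purely illustrative stand-ins for the bank’s actual logic, which is not documented here; the sketch only shows how permitted actions can be derived dynamically from the combination of factors described above.

```python
from dataclasses import dataclass

# Hypothetical sketch of a rule-based access scheme like the one described
# above; all names, statuses, and rules are illustrative assumptions.

@dataclass
class Document:
    doc_type: str      # one of the 27 standard commercial lending types
    status: str        # e.g. "draft", "in-review", "approved"
    package_type: str  # type of loan package the document belongs to

def permitted_actions(user_privileges: set, doc: Document) -> set:
    """Derive what a user may do to a document from the combination of
    user privileges, document type, status, and loan package type."""
    actions = set()
    if "read" in user_privileges:
        actions |= {"open", "print"}
    # Editing is gated on both a privilege and the document's status.
    if "edit" in user_privileges and doc.status != "approved":
        actions.add("edit")
    # Routing might be restricted by the loan package's type.
    if "route" in user_privileges and doc.package_type != "archived":
        actions.add("route")
    return actions
```

For instance, a user with only the read privilege would see an approved credit memo as open-and-print only, while an analyst with edit rights could also modify it so long as it had not yet been approved.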

As the primary IT platform for the commercial lending business process, the new workflow application was technically complex. It had six Lotus Notes servers distributed across the state of California. Multiple groups of users were assigned to each of the servers; many of them worked at different sites on different days of the week and therefore had dynamic access privileges on different servers. Another source of the system’s complexity was its connections with other banking applications. The workflow application was connected to the bank’s mainframe-based loan payment system. Existing customers’ loan payment records were periodically downloaded into the workflow application, which also needed to interact with Lotus Office Suite, WordPerfect, and special banking applications. These connections made the workflow application a multi-vendor and multi-product IT platform.

After the bankwide deployment of the new workflow application, the project team was inundated with requests for post-deployment design changes. For example, a drop-down list might lack a value that had been invented to handle a special loan package, or an approved loan amount might be changed with no extra field to keep both the original and the revised amount. These needs for design changes did not necessarily stem from poor original design. The new workflow application supported a business process that was simply too complex and fluid for a few system designers to map out every possible use case during design. Most of these problems did not even surface until special situations emerged in production use. Only production use by the full user population could serve as a true, robust, and comprehensive test of a system design.

The project team anticipated the need for post-deployment design changes. But because it could not predict what and how changes would emerge, a commonsense approach shown as Iteration A in Figure 2 was initially adopted to manage emergent design revisions. When a problem was reported to a project team member, the individual would pass it on to the project manager. He would then hold a discussion among related stakeholders. Following that, a decision would be made and communicated to the contractor’s development team. The developers needed to make and test the necessary design changes on the development (Alpha) version of the application hosted at the contractor site. When it was done, the design changes would be replicated to the production (Beta) version of the application, typically overnight.

This approach was challenged from day one because most of the early design changes were technical ones that demanded quick action rather than elaborate discussion. The long information delays and discussion times frustrated users whose normal work activities were disrupted. Adapting to the situation, the project team quickly adopted a contingent discussion approach shown as Iteration B in Figure 2. In this new approach, a help desk staff member immediately documented all reported problems in a help event log maintained online as a Notes database. A new task called “problem characterization” was created. This task was performed by a senior system analyst who constantly monitored the help event log. If a desired design change was of a purely technical nature, the senior analyst would immediately instruct the developers to make the change. Otherwise, he would defer the decision to the project manager.
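The Iteration B triage flow can be sketched as follows. This is a minimal illustration under our own assumptions: the function names, log fields, and routing labels are hypothetical, not taken from the bank’s actual system; the sketch only captures the two steps described above (logging every problem, then routing by its nature).

```python
# Illustrative sketch of the Iteration B flow: every reported problem is
# logged immediately, then a "problem characterization" step routes purely
# technical changes straight to the developers and defers everything else
# to the project manager. All names here are hypothetical.

help_event_log = []  # stands in for the online Notes database

def report_problem(description: str, technical: bool) -> dict:
    """Help desk staff: immediately document every reported problem."""
    event = {"description": description, "technical": technical,
             "routed_to": None}
    help_event_log.append(event)
    return event

def characterize(event: dict) -> str:
    """Senior analyst: route technical changes directly to developers;
    defer all other decisions to the project manager."""
    event["routed_to"] = "developers" if event["technical"] else "project manager"
    return event["routed_to"]
```

The key design choice this models is the contingency: only non-technical changes pay the cost of stakeholder discussion, while technical fixes proceed immediately.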

While the handling of technical design changes was being mended, an even bigger problem emerged. Testing design changes on the Alpha (development) platform proved problematic because its test data and operating environment differed from those of the production system. A change made to one form could trigger unintended effects on other forms, and these effects frequently surfaced only after the design changes were implemented on the production system, which held much richer real data. Users were extremely frustrated when work stoppages caused by these “solutions” to their previous problems created significant new productivity drags. To solve the problem, the project team adopted the realistic testing approach shown as Iteration C in Figure 2.

The most salient feature of this new strategy was the Gamma platform. It ran application code replicated from the Alpha platform and real documents replicated from the Beta platform. Any new programming change developed on the Alpha platform would be replicated first to the Gamma platform so that it could be tested against production data before implementation on the production platform. Since the Gamma platform ran on a server that was part of the bank’s computer network, it also duplicated the operating environment of the production system. Furthermore, the project team members could do the testing themselves, which made it more realistic since they were much more familiar with the application’s “weak spots.” The project team nicknamed this a “leapfrog approach” because Gamma was the would-be future production platform. Iteration C solved most of the technical problems associated with programming changes.
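The leapfrog pipeline can be summarized in a few lines of code. This is a sketch under stated assumptions: the platforms are modeled as simple dictionaries with "code" and "data" facets, and the function names are our own illustrative inventions, not the bank’s or Lotus Notes’ actual replication API.

```python
# Minimal sketch of the "leapfrog" pipeline: new changes developed on
# Alpha are replicated to Gamma together with production data from Beta,
# tested there against realistic conditions, and only then promoted to
# the production (Beta) platform. All names are hypothetical.

def replicate(source: dict, target: dict, key: str) -> None:
    """Copy one facet ("code" or "data") from one platform to another."""
    target[key] = list(source[key])

def leapfrog_release(alpha: dict, beta: dict, gamma: dict,
                     tests_pass) -> bool:
    replicate(alpha, gamma, "code")      # new changes from development
    replicate(beta, gamma, "data")       # real documents from production
    if tests_pass(gamma):                # realistic test on production data
        replicate(gamma, beta, "code")   # promote the tested changes
        return True
    return False                         # failed changes never reach users
```

The point the sketch makes explicit is that production data and the production operating environment meet the new code *before* users do, which is what Iterations A and B lacked.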


Managing System Enhancements

Learning happens in the process of working. Users suggested many system enhancements once they had moved beyond the basic functionality of the new application. For example, users found that the workflow application could also serve as a knowledge repository for sharing expertise. These enhancement suggestions pushed the application in directions different from its initial design. The project team created a public database to register these suggestions. On a rolling basis, this database usually contained between 200 and 250 open suggestions. Because of resource constraints, prioritizing these suggestions became a serious challenge. Naturally, most users assigned high priorities to their own suggestions. After a month of struggling, the IT project manager realized he could not make the prioritizing decisions because the criteria, other than the resource concern, were essentially business oriented. At his request, the bank created a user steering committee to preside over the task. This user-centered change prioritization approach is shown as Iteration D in Figure 2.

This shift of decision power from the IT project to the users and general management focused the debate on the merits of the suggestions rather than on resource allocation. Later, the project manager realized the additional value of letting the user steering committee members use the Gamma platform to test future system releases. Packaging system changes into new releases had always been a major challenge. Fewer but bigger releases of system changes would incur less overhead in weekend overtime, planning and drafting of user communications, user retraining, data reconciliation, procedural changes, and system documentation. But more frequent, smaller releases would deliver benefits to users faster and reduce risk. It turned out the user steering committee could better assess alternative release packages on their benefits, urgency, potential for work disruption, and retraining needs. The committee would first test different packaging plans and then make recommendations to IT project management. Later implementations of system changes became smoother under this management strategy.
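The committee’s trade-off among release packages can be made concrete with a toy scoring sketch. The dimensions are the ones named above (benefit, urgency, work disruption, retraining need), but the equal weighting and all package names are purely illustrative assumptions; the bank’s committee deliberated rather than computed.

```python
# Toy sketch of scoring alternative release packages on the four
# dimensions named in the text. Equal weights are an assumption made
# for illustration only.

def score_package(benefit: int, urgency: int,
                  disruption: int, retraining: int) -> int:
    """Higher is better: benefit and urgency count for a package,
    work disruption and retraining need count against it."""
    return benefit + urgency - disruption - retraining

def recommend(packages: dict) -> str:
    """Return the name of the highest-scoring release package.
    Each value is a (benefit, urgency, disruption, retraining) tuple."""
    return max(packages, key=lambda name: score_package(*packages[name]))
```

Even this crude model reflects the text’s observation: a large quarterly release may bundle many benefits yet lose out to smaller releases once its disruption and retraining costs are counted.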


Managing Volatility Associated With External Changes

Some major system changes were dictated by external events. For instance, after the purchase of a smaller bank, the workflow application had to be adapted to accommodate that bank’s credit documents. The formation of an alliance with several large banks also necessitated changes to document sharing and control mechanisms. These external events often came with rigid deadlines that forced wholesale changes to the project schedule. The lack of mutual understanding between IT and general management made it difficult for the project manager to foresee these problems. According to the bank’s vice chairman who oversaw the alliance-building effort, general management did not understand enough about IT to foresee the alliance-related IT problems and therefore gave no early warning to the IT project manager. He also said the project team could have done a better job educating general management about what types of organizational changes might affect the IT project and why the IT people needed to be informed. After several initial mishaps, the bank’s CIO pushed for the inclusion of key IT project managers in the bank’s strategic planning process.


Lessons Learned

The table summarizes what motivated the bank’s first three iterations of system change management approaches before it settled on the last one. Each iteration exhibited problems during execution, thus motivating the next iteration as a solution. As shown in the table, the issues that Bank X encountered in managing post-deployment system change encompassed design changes, testing, and system enhancements. Success in managing these issues hinged upon the establishment of a systematic approach that coordinates the activities of multiple stakeholders. The experience of Bank X suggests that managing system changes is not an activity confined to system design and development. The goal of system design is to map out an IT solution based on both the current and the future model of a business process. It is important to recognize that this design is frequently built on potentially unstable ground. While the current business model is subject to change, the future business model is only a vision that has not been tested in real-world use. Because of this, system design is really a continuous activity, driven by visions for the future during development and by feedback from real-world use during production.

In developing a systematic approach to managing post-deployment system design changes, IT management must realize the battle will be fought on two fronts. On one front, system changes must be made for technical reasons. The management of this type of system change should emphasize speed, because such problems can immediately disrupt users’ work. The principles for managing this type of system change include efficient reporting and communication, proper delegation of the decision locus for design changes, and a robust testing environment that uses production data. Bank X, through a rather costly learning process, eventually established a systematic approach that embraced these principles. On the other front, the effective management of system changes caused by learning and organizational change requires participation from users and general management. Design decisions of this type cannot be made by project teams as if they were purely technical changes to a system. The user steering committee established by Bank X demonstrates one mechanism to accommodate such user and management involvement.


Conclusion

In current IT practices, the task of managing post-deployment system changes often falls in no-man’s land. IT professionals typically focus on system development and implementation, while management and users typically focus on the use of IT. The post-deployment involvement of IT professionals has typically been restricted to providing technical support. As the trend toward intertwining IT with business logic accelerates, the current practice of managing system development and use as separate activities will be increasingly challenged. It is time for both IS researchers and practitioners to reexamine this traditional divide. This study described a systematic approach to managing IT changes as one possible solution to this problem. But even in situations where this specific approach does not fit, the principles of contingent discussion of technical changes, realistic testing of system changes, and user-centered change prioritization are still of value for crafting different solutions.


Figures

Figure 1. The commercial lending business process.

Figure 2. The iterations of the system change management.


Tables

Table. The evolution of the system change management approach.

References

    1. Alter, S.A. Decision Support Systems—Current Practices and Continuing Challenges. Addison-Wesley, Reading, MA, 1980.

    2. Iivari, J., Hirschheim, R. and Klein, H.K. A dynamic framework for classifying information systems development methodologies and approaches. Journal of Management Information Systems 17, 3 (Winter 2000–2001), 179–218.

    3. Leonard-Barton, D. Implementation as mutual adaptation of technology and organization. Research Policy 17 (1988), 251–267.

    4. Orlikowski, W.J. and Iacono, C.S. Research commentary: Desperately seeking the "IT" in IT research—A call to theorizing the IT artifact. Information Systems Research 12, 2 (June 2001), 121–134.

    5. Sprague, R.H. and Carlson, E.D. Building Effective Decision Support Systems. Prentice-Hall, Englewood Cliffs, NJ, 1982.
