
Communications of the ACM


A New View of IS Personnel Performance Evaluation



To help manage their internal IS personnel functions, many organizations scrutinize IS personnel performance. Considerable evidence in the human resources management and IS literature substantiates our view that specific, challenging goals yield improved productivity by IS professionals [1, 5, 9]. Tying the evaluation of an organization's IS personnel to the goals of their systems-development projects, as well as to strategic organizational goals, provides opportunities for enhancing IS personnel performance and productivity. Linking this evaluation to these goals necessitates a goal-setting-and-evaluation loop. Therefore, at the start of any system-development project, a set of specific, challenging, and meaningful performance measures should be defined based on the goals of both the organization and the systems being developed (see Figure 1).

Understanding how IS personnel performance supports corporate performance is not limited to disseminating a list of corporate and system goals. Management has to be certain all parties are aware of the relative importance and priorities of the goals and their attainment measures. Users and developers must recognize and share each other's goals, as well as those associated with the organization's strategic posture. The organization must structure its evaluation and reward system to encourage attainment of the desired strategic goals. Motivation theory argues that people tend to engage in purposeful behavior to the degree they believe their efforts will result in valued outcomes [9]. In sum, performance criteria, evaluation techniques, and reward packages should be tied closely to organizational goals. Therefore, we highlight here a comprehensive feedback process, derived from a detailed survey we conducted in 1998, that enables the integration of these activities (see the sidebar "How the Survey Was Done").

Likewise, it seems logical to us that different groups of people might assign different weights to the same criterion due to their personal views as to what is important. Productivity and quality measures usually exist to reflect the view of developers and managers. Yet the customer, or user, of the system function is a primary source for determining any system's success. Thus, user views on goal attainment must be part of any comprehensive goal-setting and performance evaluation [4]. Differences in attitudes about which outcomes are desirable appear as gaps in an evaluation setting [11]. Despite being proposed as a way to measure system success, these gaps have not been integrated into any of the various comprehensive evaluation structures available today [8].

Popular evaluation techniques are unable to completely integrate goals and the different views of how to attain them [1]. Clearly needed is a more complete framework for setting goals and measuring performance against them. Figure 2 shows a feedback loop that includes goal setting and gaps; in such a full evaluation loop, the following performance-expectation disconnects may occur:

Expectation gap. At the start of any system-development project, users and IS personnel may not share the same view of the importance of the goal criteria, thus creating a gap in expectations. Users and IS personnel should meet to resolve any such differences to ensure they share the same expectations. Research indicates that sharing is essential for the most important goals. Disagreement on less-critical goals, such as experience gained and system efficiency, may actually be healthy for a system's ultimate performance and strategic benefit, allowing the freedom to achieve the top goals, such as system reach and functional range. Regardless of the procedures and the number of goals selected, agreement before a system's development begins is critical; it achieves "consonance," or harmony, among system, user, and corporate goals and provides legitimate goals for future evaluations, as in Figure 2.

Mismatched impressions. Differences may also exist as to whether a system meets expectations. The most vexing case is IS personnel holding a false impression of user desires, a communication gap that leads service providers to aim at the wrong targets.

Performance gap. Once a system is operational (or at the intermediate delivery phases of large projects), performance is often measured differently by users and IS personnel. A performance gap between the user evaluation and corporate goals represents a variance from the contract negotiated at the start of the process. Little or no gap indicates that users are satisfied and the contract is met. A gap might indicate that IS personnel exceeded expectations and the IS function delivered unexpected quality. A gap might also indicate that performance was not up to expectations and prompt an analysis to determine the cause of the variance.

Secondary performance gap. A performance gap may crop up between the joint IS-personnel/user expectations and the IS personnel post-implementation evaluation. This gap represents a variation in production, possibly implying that IS personnel recognize shortfalls, perceive their performance to be as expected, or feel that expectations were exceeded. This gap is best used in conjunction with the satisfaction gap.

Satisfaction gap. A satisfaction gap crops up when IS personnel and users do not evaluate performance in the same way. It is actually important in the feedback cycle, and time should be scheduled to identify the reasons for the perceived differences. Differences can serve as points of discussion in the next evaluation round of setting targets. Discrepancies should also be used in the evaluation of IS personnel to consider multiple viewpoints and ensure that personnel evaluation measures are tied to the same goals and performance criteria as the system evaluation.

Similar ratings at any gap location indicate that users and IS personnel are in consonance. The measures may then be used in both the evaluation of IS personnel and the planning of subsequent projects.
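The gaps described above all reduce to the same arithmetic: the difference between two groups' mean ratings on each criterion. As an illustrative sketch only (the measure names follow the article, but every number and threshold below is hypothetical, not taken from the survey), the computation might look like this:

```python
# Hypothetical ratings on a normalized five-point scale (5 = best).
# The seven measure names follow the article; all data are illustrative.
MEASURES = ["quality", "project work", "general tasks", "personal qualities",
            "dependability", "teamwork/leadership", "training"]

def mean(xs):
    return sum(xs) / len(xs)

def gap(user_ratings, is_ratings):
    """Per-measure gap: user mean minus IS-personnel mean.
    Negative values mean IS personnel rate the item higher than users do."""
    return {m: round(mean(user_ratings[m]) - mean(is_ratings[m]), 2)
            for m in MEASURES}

# Illustrative data: three user responses and three IS responses per measure.
users = {m: [3, 4, 3] for m in MEASURES}
is_staff = {m: [4, 5, 4] for m in MEASURES}

for measure, g in gap(users, is_staff).items():
    # The 1.0-point threshold for flagging a gap is an arbitrary choice here.
    flag = "investigate" if abs(g) >= 1.0 else "in consonance"
    print(f"{measure:22s} gap={g:+.2f}  ({flag})")
```

The same function applies at any gap location in the loop: expectation gaps compare importance ratings set before development, while performance and satisfaction gaps compare post-implementation evaluations.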


Consonance Across Criteria

Seeking consonance across groups is not a novel idea to IS practitioners and researchers. General congruency models suggest that behavior is a function of personal and environmental characteristics. Congruency means that resolving discrepancies from multiple sources is deemed an effective technique for personnel evaluation and motivation; the IS function is more successful when its objectives are consistent with those of the overall organization; technology should be blended and balanced with the structure of an organization to meet both its business and its system performance objectives [7]. Variations in perception or interpretation of the goals, perhaps resulting from a lack of communication, would have the same effect as different goals or measures across the various stakeholder groups.

In strategic management, approaches for creating change in organizational behavior, such as defining the evaluation criteria for assessing employee job activities, are critical. An organization can design appropriate administrative structures and actions to emphasize and motivate performance on the criteria [9]. But it is not sufficient to consider only the views of a single interest group or functional area when determining the criteria and providing feedback [5]. In systems work, the organizational goals driving development should be used to define and integrate evaluation criteria.

Most systems involve a combination of software packages, hardware components, training, and constant improvement and support activities. Therefore, system goals should reflect a realistic approach to measuring IS professional performance. To define such a set, we approached one of the world's largest furniture companies. This U.S. company's management developed the set of seven measures discussed here for evaluating its computer-support personnel. Users are accountable for their evaluations of system performance, while IS professionals are accountable for developing and maintaining the system. Without a common set of measures understandable by developers and users alike, the judgments derived from the measures would be faulty, upsetting the goal-setting process, as well as the related performance evaluation. Moreover, as all concerned parties are involved in setting goals for these seven measures, clarification and definition can occur at the start of the process, so any deficiencies in understanding can be remedied.

Quality. Many quality problems undermining system performance stem from IS personnel glossing over design and delivery details. Therefore, IS employees should pay careful attention to these details and feel morally obligated to meet their responsibilities. The evaluation items associated with this measure are related to the overall quality of the work performed. Areas in the spotlight include the following of standards, employing procedures and tools, implementing the new system, and pursuing effectiveness.

Project work. Project management by IS personnel is critical to the success of any project. Adhering to users' schedules, budgets, and system specifications is an important aspect of project success and user satisfaction. The measures that help evaluate performance cover planning, control, and communications.

General tasks. Adherence to task is related to dealing with user concerns. IS professionals are often limited in the resources they need to deliver system specifications. A customer-focused IS professional should be prepared to inform users about information technologies, understand problems related to users' jobs, and anticipate users' needs. User relations drive the details that affect communications, persistence, and understanding of user functions.

Personal qualities. The way IS staff members relate to users is an indicator of user satisfaction. IS professionals need a customer-oriented attitude and must be adept at such "soft" interpersonal communication skills as negotiating, managing change, being politically astute, and understanding user desires.

Dependability. Users should be able to rely on the IS staff. Successful IS professionals tackle assignments without having to be prodded by project leaders or users. IS professionals need to maintain a high standard of work performance and dedication to quality. IS professionals are expected to have a strong work ethic in order to meet commitments, seek appropriate solutions to problems, and complete tasks.

Teamwork and leadership. System development is often a team activity; team members should take pride in their work and enjoy working cooperatively. A team player is concerned more about achieving team goals than about individual accomplishment. How well an IS staff member secures cooperation and progresses toward user and organizational goals defines this measure.

Career-related training. The kind of IS worker needed to achieve system success should be able to handle multiple tasks, interruptions, and diverse assignments. Interest and willingness alone are not sufficient; knowledge and ability are also needed.


The Satisfaction Gap

The users and IS personnel we surveyed reflect different levels of satisfaction with performance on the seven evaluation measures; Figure 3 (derived from the survey) includes the responses for the criteria categories on a normalized five-point scale, five being best. The IS personnel we surveyed were generally more satisfied with their own performance than the users were with IS's performance. These overall results are not unexpected; personal bias can lead to high self-ratings in any discipline.

The surveyed IS personnel also expressed satisfaction with their own personal dependability, while users ranked IS staff dependability much lower, the greatest difference in ratings between the two surveyed groups. There was also a large difference in the rating of teamwork and leadership. The survey identified a special need for IS personnel to improve their dependability, teamwork, and leadership to enable users and IS personnel to move toward agreement on evaluations. The results of the survey indicate that IS customers and their IS providers disagree on the quality of the delivered goods, as in Figure 3.

The evaluation feedback derived from self-appraisals and consumer appraisals gives rated personnel information not readily available otherwise. Such multiple-source feedback allows the IS professional staff to understand how they are viewed by others and develop a better sense of goal accomplishment. Gaps between IS personnel and user evaluations suggest areas for IS personnel skill development and performance improvement. Thus, IS employees need to develop proficiency in evaluating their own behavior in a way that recognizes other sources of input to the evaluation. This knowledge should help them tailor their performance to organizational needs.


The Expectation Gap

A look at the data generated by the survey provides insight into how IS professionals and users differ as to goal setting. Figure 4 lists the importance ratings of the seven measures, as perceived by both groups. On all items but "general task," views are statistically different at the .05 level. General tasks include responding in a timely fashion, giving high priority to user needs, sticking to a problem until arriving at a solution, finding permanent solutions, and keeping users informed. These highly visible tasks involve the user directly, perhaps yielding consonant responses.

The rankings of the criteria by users and IS personnel (in brackets in Figure 4) are similar both visually and statistically; the rank order correlation coefficient r of .83 is statistically significant. Personal qualities, or interpersonal skills, ranked as the most important issue, and dependability was ranked second by both groups. Although both groups agreed on the rankings of the importance of the criteria, IS professionals assigned much greater importance to each criterion (except general tasks) than the users did. It is not unusual for IS personnel to rate their own tasks as more important than other groups would rate them. Moreover, these criteria are often stressed in IS professional training.
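The rank order correlation used throughout this analysis is Spearman's coefficient. As a minimal sketch (the importance scores below are invented for illustration and are not the survey data), it can be computed directly from the no-ties formula rho = 1 - 6*sum(d^2) / (n(n^2 - 1)), where d is the difference in the two groups' ranks for each criterion:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for two score lists without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(vals):
        # Rank 1 = highest score, matching "most important" first.
        order = sorted(range(len(vals)), key=lambda i: vals[i], reverse=True)
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical importance scores for the seven measures from two groups;
# the two rankings differ only in which group puts general tasks last.
user_scores = [4.1, 3.9, 3.5, 4.6, 4.4, 3.2, 3.0]
is_scores   = [4.5, 4.3, 3.4, 4.8, 4.7, 3.6, 3.1]

print(round(spearman_rho(user_scores, is_scores), 2))  # → 0.96
```

A rho near 1 indicates the two groups order the criteria almost identically even when, as the survey found, one group's absolute scores run uniformly higher.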


The Communication Gap

To satisfy users, IS personnel have to understand user needs. A significant gap between user ratings and IS personnel perceptions of those ratings indicates a misunderstanding of certain user needs. In the survey, the mean IS personnel perceptions significantly missed the actual user ratings on three of the seven measures: quality, teamwork and leadership, and career-related training. Users view quality as more important than IS personnel think they do. Similarly, users view teamwork and leadership as more important than IS personnel think users believe them to be. This gap between perception and reality indicates a need for IS personnel to emphasize quality, teamwork, and leadership if they want to meet user expectations. Still, the rankings of the importance of the seven measures, between what users said in the survey and what IS personnel believed users would say, are similar (rank order correlation of .79, significant at the .01 level), indicating both groups have similar attitudes. There is general agreement as to the priority of the measures, though not necessarily their magnitude.

Figure 4 also highlights the difference between how users think IS personnel rate the seven measures and how IS personnel actually rate them. Even though only two measures differ significantly, the rank order correlation is only .68, which is not significant. The first difference involves project work; IS personnel rate this measure higher than the users think they do. Similarly, IS personnel rate dependability as more important than users believe they do. This gap between perception and reality might indicate a need for IS personnel to demonstrate more effectively how they set up projects and concentrate on the tasks associated with dependability. A multiple-source evaluation technique, such as the one used in our survey, can help communicate these important differences and assist in closing the gaps.

Another gap measure useful in improving IS postures is derived by comparing how IS personnel rate the seven measures with how IS personnel think users rate them. The difference represents a form of willingness to meet expectations. A gap here is a service-error gap: the IS staff believes there is a difference but cannot close it. According to the survey results, there are many differences between IS professional and user perceptions, including a surprising number of disconsonant priorities (r = .54). These numbers indicate that IS personnel should be doing a better job of eliciting users' needs and incorporating them into development activities.

Comparing how users rate the criteria with how users perceive IS staff to rate them finds these ratings closer than in the other gaps examined, with a significant rank order correlation of r = .74. The only real exception to this generalization involves training. A possible explanation is that users generally perceive that IS personnel view the training criterion in a way consonant with users, even though the comparison of the expectation gap casts doubt on this conclusion.

The larger patterns in the perception gaps indicate significant differences in the treatment of goal measures between users and IS personnel. The use of a gap instrument helps highlight the gaps on a project or organizational level, as well as on the aggregate level used in analyzing the survey results. Once areas of concern are identified, corrective action can be taken. In the case of large gaps in perception, an effective mechanism to close them, such as improved communication practices, might be implemented. Specific techniques might include focus groups or Delphi methods in which the subject is the targeting of goal levels for the metrics. Using such techniques would provide a set of expectations that can be used in performance evaluation. Numerous resources are available for conducting formal gap analysis in order to determine service satisfaction [11].


Conclusions

We defined and used a seven-measure system to explore IS personnel performance, user satisfaction, and importance ratings. Overall, we found many significant differences between user and IS personnel expectations and evaluations, indicating that consonance is a genuine concern in real-world industrial settings. Wherever there are significant differences, there are also problems with consonance (for example, where users and IS personnel are on different wavelengths regarding the importance of performance measures). This disconsonance could lead to user (and IS personnel) dissatisfaction, as well as to poor system performance. Users and IS personnel alike have to develop proficiency in observing and evaluating their own performance in a manner consistent with how others perceive and evaluate it [10].

Our 1998 survey found that IS personnel rate satisfaction significantly higher on a set of seven measures than users rate it on the same seven measures, a disparity that could be a cause of problems when developing systems intended to support strategic corporate goals.

Inflated self-evaluation is related to career derailment [6]. Managers whose self-evaluations are consistent with both coworker and customer assessments of them are more likely to be promoted. On the other hand, IS personnel rate all seven performance measures as more important than users rate them, which could show IS personnel to be demanding of excellence in their own work. These comparisons show more consonance than the ratings of performance satisfaction and the importance of criteria.

Organizations can use the evaluation measures we identified to create a custom instrument to measure the gap in performance perceptions. The measurement process should be incorporated into a formal process that allows for goal setting, evaluation, and feedback, as in Figure 2. Shortcomings show up when analyzing the results of the evaluation technique. They point up the need for more communication and attention to the measures to increase overall satisfaction. This approach should lead to more consonance in user and IS work evaluations. Consonance promotes user satisfaction while improving the work of IS personnel.

Multiway communication and participation in the process of selecting targets are essential for the common pursuit of common goals. Users may be reluctant to participate in an activity that sets staff goals for the IS function, possibly due to lack of knowledge in the referent discipline. It is critical that IS staff work with users to overcome this reluctance and help define criteria for assessing the performance of both the IS staff and the overall IS function. A process whereby users and IS personnel participate in selecting and ordering goals would promote understanding. Participation is key, even to the point of involving users in training sessions for the IS staff. Training in the evaluative measures and procedures may be necessary to implement an effective process of goal setting and evaluation.

A top-down leadership style (with economic rewards) further promotes achievement of measurement goals. Like managers in other corporate areas, those in IS should be effective communicators while selecting and implementing a set of goals. They should consider adopting a number of communication techniques; for example, group methods, including brainstorming, Delphi methods, focus groups, and the nominal group technique are effective means of producing consensus [3]. Dissemination and implementation devices include employee handbooks that define the criteria, policy manuals that state the practices to be followed, organizational newsletters, training videos, committees, suggestion programs, and email.

Other issues arising in more complex performance evaluations are those associated with the management of IS personnel. Implementation of any feedback process requires an effective management structure. Selecting an appropriate managerial structure is critical to the success of system development, more so than technology or software development tools alone [2]. Once the structure is in place, an evaluation process for both users and IS personnel can be designed by the organization's human resources department. The key is to be certain that there is agreement on the measures, the timing of the measurements, and the use of the results. The goal is consonance.

Regardless of the techniques individual organizations use to implement the communication and goal-setting aspects of achieving consonance, all participants should realize it is a comprehensive technique. The planning phase provides targets and measures that are to be evaluated in later stages. Organizations must then look at differences between user and IS personnel perceptions and investigate whether any of them arises from lack of adequate communication or from genuine differences in needs. This user-IS communication process provides an important feedback and control structure missing today from many post-implementation evaluations that rely on measuring a surrogate for success, rather than helping achieve success directly.


References

1. Church, A. and Bracken, D. Advancing the state of the art of 360-degree feedback. Group & Org. Mgmt. 22, 2 (June 1997), 149–161.

2. Constantine, L. Work organization: Paradigms for project management and organization. Commun. ACM 36, 10 (Oct. 1993), 35–43.

3. Delbecq, A., Van De Ven, A., and Gustafson, D. Group Techniques for Program Planning. Scott, Foresman & Co., Glenview, IL, 1975.

4. Holtzblatt, K. and Beyer, H. Making customer-centered design work for teams. Commun. ACM 36, 10 (Oct. 1993), 92–103.

5. London, M. and Smither, J. Can multi-source feedback change perceptions of goal accomplishment, self-evaluation, and performance-related outcomes? Theory-based applications and directions for research. Pers. Psych. 48, 4 (winter 1995), 803–839.

6. McCall, M. and Lombardo, M. Off the Track: Why and How Successful Executives Get Derailed, tech. rep. 21. Center for Creative Leadership, Greensboro, NC, 1983.

7. Parker, M. Strategic Transformation and Information Technology. Prentice-Hall, Upper Saddle River, NJ, 1996.

8. Pitt, L., Watson, R., and Kavan, C. Service quality: A measure of information systems effectiveness. MIS Quart. 19, 2 (June 1995), 173–187.

9. Steers, R., Porter, L., and Bigley, G. Motivation and Leadership at Work, 6th Ed. McGraw-Hill, New York, 1996.

10. Yammarino, F. and Atwater, L. Understanding self-perception accuracy: Implications for human resources management. Human Res. Mgmt. 2, 3 (summer/fall 1993), 231–247.

11. Zeithaml, V., Parasuraman, A., and Berry, L. Delivering Quality Service: Balancing Customer Perceptions and Expectations. Free Press, New York, 1990.


Authors

Gary Klein (gklein@mail.uccs.edu) is the Couger Professor of Information Systems in the College of Business and Administration, the University of Colorado, Colorado Springs.

James J. Jiang (jiang@bus.ucf.edu) is a professor of management information systems in the College of Business Administration at the University of Central Florida, Orlando.

Marion G. Sobol (msobol@mail.cox.smu.edu) is a professor of management information sciences in the E.L. Cox School of Business, Southern Methodist University, Dallas, TX.


Figures

Figure 1. System performance feedback loop.

Figure 2. Gaps in consonance between user and IS professional perception.

Figure 3. User vs. IS personnel on performance satisfaction.

Figure 4. Users vs. IS personnel on the importance and perception of performance measures.



©2001 ACM  0002-0782/01/0600  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.


