Evaluating End-User Training Programs

With some planning and coordination, organizations can implement an evaluation scheme to get the most out of their computer skills training programs.
  1. Introduction
  2. Five Levels
  3. Lessons Learned
  4. Conclusion
  5. References
  6. Authors
  7. Figures
  8. Tables

The pervasive use of IT makes knowledge of, and the ability to use, IT an essential requirement, no matter what kind of work is being done. Whether in a government agency or a multinational corporation, inadequate IT skills among employees are sure to undermine the day-to-day functioning of any organization. End-user (EU) training [7], which helps employees acquire and hone their IT skills, plays a key role in ensuring the smooth operation of organizations in the information economy. How the lack of IT skills affects organizational performance is illustrated by two anecdotes:

  • Virus attack. While the “I Love You” virus affected millions of email users in 2000 and reportedly caused billions of dollars in lost business due to computer downtime and lost data, many companies found they could minimize their exposure to the attack through effective EU training and other preventive measures [2]; and
  • Application processing. When a new security protocol went into effect at the U.S. Immigration and Naturalization Service following the terrorist attacks of 9/11, the processing of thousands of applications was significantly delayed in its New York office due in part to a shortage of computer-trained personnel [3].

The critical role of EU training is regularly noted by corporate managers, as evidenced by the fact that U.S. companies planned to spend approximately $57 billion on employee training in 2001 and that more than one-third (37%) of such programs were targeted at improving the computer skills of employees [4]. While organizations invest heavily in EU training, little effort is made to systematically evaluate the outcomes of the related programs. Training evaluation is often limited to administering a test following a course. The effect of training on a trainee’s job performance—what training researchers call “transfer of skill to workplace”—is rarely measured. Almost 10 years ago, Nelson et al. bemoaned the lack of effective measurement of EU training outcomes, labeling EU training programs “random-in-random-out” processes [7]. While investment in EU training has grown significantly since then, little has changed with regard to how managers measure training effectiveness.

Our recent review of the research literature on EU training turned up several studies that investigated how to design a better training program but little on training evaluation, especially in an organizational setting. Here, we try to fill the void in EU training practice and research by presenting a comprehensive framework for evaluating EU training programs.

Five Levels

In 1959, in a classic analysis, Donald Kirkpatrick, a pioneer training and education researcher, proposed that training programs be evaluated at four levels: reaction, learning, behavior, and results [5]. The first measures the satisfaction of the trainee with the training material, instructor, instruction, and environment. The second measures the skill and/or knowledge learned. The third measures the effect of the training program on the trainee’s job performance. And the fourth measures the effect of the training program on overall organizational performance. While this model has been used extensively for almost 50 years in evaluating management training programs, its application to EU training evaluation is almost nonexistent.

The figure outlines the two dimensions of our proposed framework: the evaluation dimension, which suggests what to evaluate, and the evaluator dimension, which identifies the person or group responsible for doing the evaluation. The evaluation dimension has five levels: (1) technology; (2) reaction; (3) skill acquisition; (4) skill transfer; and (5) organizational effect. Levels 2 through 5 are compatible with the four levels in the model in [5]. We added a new level, technology (level 1), to assess the technology component of EU training programs. Training in general, and EU training in particular, is increasingly dependent on IT. Communications technology fosters the growth of virtual learning environments [8]. Computer-aided instruction is an increasingly popular low-cost, anytime-anyplace alternative to face-to-face, classroom-oriented training. Multimedia technology and Web-enabled workstations are revolutionizing training design and delivery strategies. Thus, IT today has the potential to facilitate many aspects of EU training programs. A level-1 evaluation measures how well IT is used in the design and delivery of EU training programs.

The evaluator dimension is motivated by the multiple-constituency approach, which posits that because an organization serves the interests of multiple constituencies, or stakeholders, organizational effectiveness is best measured by assessing how well the expectations of each of them are being met [9]. Our analysis of the EU training process identified three major constituencies: training providers, trainees, and business managers. The table summarizes the factors each must evaluate at the various levels of the framework.
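
To make the two dimensions concrete, the framework can be pictured as a matrix keyed by (evaluator, level). The following Python sketch is purely illustrative; the enum names and sample factors are paraphrased from the framework and the accompanying table, and an organization applying the framework would substitute its own evaluation items.

from enum import Enum

class Level(Enum):
    """The five levels of the evaluation dimension."""
    TECHNOLOGY = 1
    REACTION = 2
    SKILL_ACQUISITION = 3
    SKILL_TRANSFER = 4
    ORGANIZATIONAL_EFFECT = 5

class Evaluator(Enum):
    """The three constituencies of the evaluator dimension."""
    TRAINING_PROVIDER = "training provider"
    TRAINEE = "trainee"
    BUSINESS_MANAGER = "business manager"

# Illustrative (evaluator, level) -> factors mapping; the authoritative
# list of factors is given in the table accompanying the article.
EVALUATION_MATRIX = {
    (Evaluator.TRAINING_PROVIDER, Level.TECHNOLOGY):
        ["IT support for training tasks", "ease of use of authoring tools"],
    (Evaluator.TRAINEE, Level.TECHNOLOGY):
        ["effectiveness of IT-based presentation", "usefulness of communication tools"],
    (Evaluator.TRAINEE, Level.REACTION):
        ["relevance of content", "instructor effectiveness", "training facility"],
    (Evaluator.TRAINING_PROVIDER, Level.SKILL_ACQUISITION):
        ["written or oral concept tests", "hands-on software tests"],
    (Evaluator.TRAINEE, Level.SKILL_TRANSFER):
        ["self-reported software use", "frequency of help requests"],
    (Evaluator.BUSINESS_MANAGER, Level.SKILL_TRANSFER):
        ["observed software use at work"],
    (Evaluator.BUSINESS_MANAGER, Level.ORGANIZATIONAL_EFFECT):
        ["progress against departmental and organizational goals"],
}

def factors(evaluator, level):
    """Return the factors a given constituency evaluates at a given level."""
    return EVALUATION_MATRIX.get((evaluator, level), [])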

Training providers include training designers, content developers, trainers, and anyone else who might be responsible for developing, deploying, and managing training programs. IT influences the training provider’s job in many ways, offering an array of training-design-and-delivery strategies to match the needs of the trainees [6]. IT-based tools (such as those for authoring training programs and for handling training administration) simplify training development and administration. The training provider must evaluate two things: how well IT supports training-related tasks, and the ease of use and usefulness of IT-based tools. The relevant data may be collected through a questionnaire survey or through structured interviews.

The trainee is the direct beneficiary of these programs. IT aids the presentation of training material to training participants while also supporting communication among them [8]. The trainee’s own evaluation of technology must include an assessment of the effectiveness of IT-based presentation, as well as the ease of use and usefulness of communication tools (such as email, chat rooms, and desktop videoconferencing). The technology-evaluation data is helpful both for designing better training programs and for improving IT-based training tools.

The trainee’s perception of a training program (measured at level 2 of the evaluation dimension) provides valuable information for designing future training programs. Trainees must evaluate whether the course they’ve just been through covered concepts and skills that are meaningful to their jobs, whether its content was well designed, and whether its coverage of useful skills and concepts was adequate. The trainee also evaluates the effectiveness of the instructor and the quality of the training facility. A trainee’s evaluation of technology and reaction to the program are best measured through a questionnaire survey given immediately following the program.

Evaluation of the knowledge and skills acquired by the trainee must be made by training managers in light of the program’s learning goals. Concepts learned may be assessed through written or oral tests, while mastery of the application software may be evaluated through hands-on tests. Skills acquisition is best measured immediately following the training.

Skills transfer is best evaluated soon after the trainee gets a chance to use the software at work, ideally no more than one to three weeks following the start of on-the-job use of the software. Skills transfer can be evaluated by surveying trainees about how well they are able to use the software and how often they seek help when using it. We do not recommend measuring skills transfer through a test; such tests are difficult and time-consuming to create, and trainees often lack the motivation to take them.

Department managers help determine training needs by identifying gaps between the desired and the actual software skills of their subordinates. They also allocate resources for training. Past research indicates that trainees often fail to transfer the knowledge and skills learned in training programs to their work environments [1]. Our framework provides for measurement of this key indicator of training effectiveness, from both the trainee and the manager perspectives. The manager fills out a questionnaire assessing how well the trainee uses the software at work. This evaluation must coincide with the trainee’s evaluation of skills transfer, facilitating comparison of the two.
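
As a rough illustration of how the two skills-transfer questionnaires might be compared, the sketch below averages Likert-style responses from the trainee and the manager and flags large disagreements for follow-up. The item scores and the follow-up threshold are hypothetical, not part of the framework itself.

from statistics import mean

def transfer_gap(trainee_scores, manager_scores, threshold=1.0):
    """Compare a trainee's self-assessment of skills transfer with the
    manager's assessment of the same items (e.g., 1-5 Likert responses).
    The threshold for flagging disagreement is illustrative."""
    trainee_avg = mean(trainee_scores)
    manager_avg = mean(manager_scores)
    gap = trainee_avg - manager_avg
    return {
        "trainee_avg": trainee_avg,
        "manager_avg": manager_avg,
        "gap": gap,
        "needs_follow_up": abs(gap) >= threshold,
    }

# Example: the trainee rates transfer higher than the manager does.
print(transfer_gap([4, 5, 4, 4], [3, 3, 4, 2]))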

Finally, a training program’s organizational effect (level 5 of the evaluation dimension) must be measured by department managers. While this is the most difficult of the five levels to measure, it provides the greatest value to business managers by relating investment in training to organizational goals. It is greatly facilitated by setting explicit training goals during the planning phase of a training program and linking them to departmental and organizational goals.

Successful implementation of the training evaluation scheme requires planning and coordination. The training manager coordinates the activities related to training evaluation. The evaluation plan and the instruments to be used for data collection must be developed and put in place as part of the process of designing a training program. Evaluation measures must be based on the training goals set by training managers in consultation with their organizations’ business managers. The primary users of the training evaluation data are the training manager and the business managers whose departments are most likely to benefit from the related programs. This feedback helps training designers improve subsequent training efforts. It also helps business managers evaluate the effectiveness of their organizations’ investment in EU training.
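
Because the framework ties each evaluation to a point in the training timeline (reaction and skill acquisition immediately after training, skills transfer within one to three weeks of on-the-job use), the evaluation plan can be reduced to a simple schedule. The sketch below uses hypothetical dates and a two-week offset, one choice within the one-to-three-week window, to show how a training manager might derive due dates.

from datetime import date, timedelta

def evaluation_schedule(training_end, on_job_start):
    """Illustrative due dates for the framework's evaluations; the
    offsets follow the timing guidance given in the text."""
    return {
        "technology_and_reaction_surveys": training_end,
        "skill_acquisition_tests": training_end,
        "skill_transfer_surveys": on_job_start + timedelta(weeks=2),
    }

# Hypothetical dates for a single training session.
print(evaluation_schedule(date(2004, 3, 1), date(2004, 3, 8)))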

Lessons Learned

We used the case research method [10] to evaluate our framework. As a case study site, we chose a U.S.-based manufacturer of telecom products with worldwide operations and sales revenue of approximately $400 million in 2003. It used a training program (designed, developed, and delivered by its own training department) to teach end users to use enterprise resource planning software. Training sessions were conducted in computer-equipped classrooms, each with 11 multimedia PCs for the trainees and one for the facilitator. The training material was stored on a company-owned Web server and delivered to the training computers through the company’s intranet. A typical training session took approximately four hours, beginning with an introduction to important concepts by the facilitator. The trainees then reviewed the training material at their own pace; hands-on exercises followed, guided by the facilitator.

We used structured interviews to collect the responses of the training providers and managers and questionnaire surveys to collect responses from the trainees. The framework was found to be helpful in guiding training evaluation while providing a comprehensive assessment of the training program. Evaluation of the technology component (level 1 in the evaluation dimension) revealed strengths and weaknesses of the tools and technologies used to develop and deliver the training. For example, the company’s training developers felt the authoring software was easy to use and enhanced their personal productivity. The trainees reported they liked the software demonstration modules these developers had created using Lotus Screencam, a popular screen-recording application for PCs.

Contrary to the designers’ expectations, the trainees only rarely accessed the training material stored on the Web site as a resource for post-training support. Subsequent investigation revealed that the limited availability of multimedia workstations in user departments was partly to blame. The trainees felt that hands-on exercises emphasizing real-world applications were helpful in holding their attention during training, further facilitating skills acquisition. These results reflect the importance of crafting training material to match the background and experience of each user group. We found cross-validation of skills transfer from the trainee and manager viewpoints to be a useful feature of the framework.

Our experience suggests that, with proper planning, the evaluation framework can be integrated into any organization’s EU training program. It also revealed the importance of goal setting. The training manager (in consultation with business managers) must define training goals early in the program, then track effectiveness against this benchmark. Defining training goals and linking them to organizational goals is especially important in measuring the organizational effect (level 5 of the evaluation dimension) of EU training.

Conclusion

Although evaluation is critical for ensuring that EU training programs help create a computer-literate work force, it remains a weak link in the training process. We’ve addressed this issue by designing, testing, and now presenting a comprehensive framework for evaluating EU training programs. Our proposed framework is readily integrated into all kinds of EU training programs, especially for teaching the basic skills involved in using mainstream business applications. Business managers and training managers alike can use it to design their own EU training-evaluation process as a feedback system for monitoring training effectiveness and for generating the information they need to improve their EU training programs.

Figures

UF1 Figure. Framework for evaluating EU training.

Tables

UT1 Table. When implementing the framework, these factors need to be evaluated.

References

    1. Baldwin, T. and Ford, K. Transfer of training: A review and directions for future research. Personnel Psych. 41, 1 (Spring 1988), 63–105.

    2. Copeland, L. Attacks breed disaster plans. Computerworld 34, 20 (May 15, 2000), 4.

    3. Fox, R. INS high-tech holdup. Commun. ACM 45, 8 (Aug. 2002), 9–10.

    4. Galvin, T. Industry 2001 report. Training 41, 10 (Oct. 2001).

    5. Kirkpatrick, D. Evaluating Training Programs. Berrett-Koehler Publishers, San Francisco, 1998.

    6. Leidner, D. and Jarvenpaa, S. The use of information technology to enhance management school education: A theoretical view. MIS Quart. 19, 3 (Sept. 1995), 265–291.

    7. Nelson, R., Whitener, E., and Philcox, H. The assessment of end-user training needs. Commun. ACM 38, 7 (July 1995), 27–39.

    8. Piccoli, G., Ahmad, R., and Ives, B. Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Quart. 25, 4 (Dec. 2001), 401–426.

    9. Tsui, A. A multiple-constituency model of effectiveness: An empirical examination at the human resource subunit level. Admin. Sci. Quart. 35, 3 (Sept. 1990), 458–483.

    10. Yin, R. Case Study Research. Sage Publications, Newbury Park, CA, 1984.
