
Closing the User and Provider Service Quality Gap

A method for measuring service quality that includes both the user and IS service provider perspectives.
  1. Introduction
  2. Measuring IS Service Quality
  3. Interpreting the Gaps
  4. Negotiating the Gaps
  5. Implementing and Managing Service Quality
  6. Conclusion
  7. References
  8. Authors
  9. Footnotes
  10. Figures
  11. Tables

The service component related to the information systems function in most organizations is growing in significance. According to the technology research firm Extraprise Group, high-tech companies and IT organizations spent $95 billion supporting end users in 1999. As the volume of requests for support continues to increase, the end user is experiencing the replacement of face-to-face meetings and phone calls with email service requests and Web-based service systems. However, even as service delivery mechanisms are enhanced, the focus on product remains undiminished, and common product metrics fail to capture the IS service component [6]. Whether provided in-house or through outsourcing arrangements, the assessment of IS effectiveness must include a measure of IS service quality. Without appropriate feedback, IS departments may misunderstand their customers’ service requests and be unable to meet customer expectations.

Efforts to measure IS service quality raise a host of problems, chief among them: What indicators yield an appropriate value for measuring the quality of a service? Which stakeholders should provide the analysis? Moreover, measurement of the quality of service may require affective judgment. A combination of measurements of expectations for service and perceptions of the service provided allows for examination of a gap in service delivery. Such a gap measure is a function of the differences in expectation and performance reported by stakeholders. When perceptions exceed expectations, the stakeholders got more than they bargained for. When expectations exceed perceptions, a valuable measure would allow examination of the dimensions that led to the shortfall. Negotiations can then proceed toward a level of consonance that narrows any gap, where consonance is the collective understanding and agreement on the metrics used to evaluate an information system [4].

Measuring IS Service Quality

Service quality is one of the most researched areas of service marketing; the common assumption being that high-quality service leads to satisfied customers. Prescriptions for measuring service quality range from perception strategies that ignore customer expectations to equally insufficient proposals that suggest understanding expectations and meeting customer needs is the single most critical determinant of service quality [7]. In other words, to be complete, a measure of service quality must be founded on a comparison between what the customer feels should be offered and what is ultimately delivered.

One measure of service quality that some IS researchers support is SERVQUAL, an instrument designed to assess both service expectations and perceptions of deliverables [3]. The SERVQUAL instrument consists of two parts. The first part consists of 22 statements for measuring expectations (see the accompanying table). These statements are framed to describe the performance of an excellent provider of the service being studied. The second part consists of the same items, but phrased to measure perceptions of the actual service delivered. Underlying the 22 items are five dimensions used by customers to evaluate most types of service [7], which include:

  • Tangibles: Physical facilities, equipment, and appearance of IS service providers.
  • Reliability: The ability of IS service providers to perform the promised service dependably, accurately, and on time.
  • Responsiveness: IS service providers’ willingness to help customers (users) and provide prompt service.
  • Assurance: Knowledge and courtesy of IS service providers and their ability to inspire trust and confidence.
  • Empathy: Caring, individualized attention the IS service providers give their customers.

Service quality for each dimension is captured by a gap score (G) indicating perceived quality for a given item, where G=P–E and P and E represent the average ratings of a dimension’s corresponding perception (P) and expectation (E) statements. The size and direction of G offer a measure of service quality as perceived by customers. G is a result of other gaps within the IS department that offer a framework for understanding barriers to IS quality service. One potential gap is between what IS managers perceive user expectations to be and users’ actual expectations. A negative difference indicates managers don’t adequately understand the needs and desires of the users. Another gap is between what IS managers perceive user expectations to be and the managers’ ability to translate these perceptions into IS service quality standards. A performance gap may occur when standards are set on delivered IS service such that the department is unable to deliver based on resources or capabilities. The gap model can be employed as a diagnostic tool to manage the quality of service that IS departments provide to users.
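The gap-score computation G = P − E can be sketched in a few lines of Python. The item-to-dimension grouping and the ratings below are hypothetical illustrations, not the survey's data; in SERVQUAL, each dimension groups several of the 22 paired expectation/perception items, and G is the mean perception rating minus the mean expectation rating.

```python
from statistics import mean

# Hypothetical ratings on a 1-5 scale; in SERVQUAL each dimension
# groups several of the 22 paired expectation/perception items.
expectations = {
    "reliability":    [5, 5, 4, 5, 4],
    "responsiveness": [4, 5, 4, 4],
}
perceptions = {
    "reliability":    [4, 4, 3, 4, 4],
    "responsiveness": [4, 4, 3, 4],
}

def gap_scores(perceptions, expectations):
    """G = P - E per dimension: negative means service fell short."""
    return {
        dim: round(mean(perceptions[dim]) - mean(expectations[dim]), 2)
        for dim in expectations
    }

print(gap_scores(perceptions, expectations))
# {'reliability': -0.8, 'responsiveness': -0.5}
```

Both gaps here are negative, the situation the gap model is designed to diagnose: delivered service fell short of expectations on both dimensions.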

Service quality research in the 1990s involved efforts to refine measurement methods of gap models in an effort to provide a dynamic model of service quality [3]. Employing a diagnostic tool to measure service quality from the perspectives of both stakeholders allows investigators to examine discrepancies between user expectations and IS staff interpretation of these expectations as well as user and IS staff evaluation of system performance. This framework would help IS managers identify satisfaction gaps between users and IS personnel, and recommend directions that will result in greater customer satisfaction.

Interpreting the Gaps

Using data from a survey evaluating IS service quality, we are able to illustrate some management issues affecting the quality of IS service. The user community and the IS staff often have different interpretations of the importance of various system features and of IS success [2]. A multisource evaluation provides opportunities for IS service providers and users to recognize these differences and more closely examine customers’ needs and expectations. To acquire the multisource data, IS managers in 612 organizations in the Midwest U.S. were contacted to determine their willingness to participate in the study. Each willing manager also identified an IS user in their organization to complete a matched instrument. Participants were asked to rate their expectations and perceptions of IS department service using the SERVQUAL instrument. The total response was 386 users and IS professionals.1 The analysis examines the questions listed here.

The expectation gap: What are the customers’ service expectations? What are the service providers’ expectations? Do IS service providers understand their users’ expectations? If not, where are the gaps? This expectation gap measure embodies user actual expectations, managers’ perception of these expectations, and managers’ ability to deliver based on IS service quality standards and resources.

The performance gap: What are the customers’ perceptions of the delivered service? What are the service providers’ perceptions of the service performance? Do they agree? If not, are customers over- or under-satisfied with service providers’ performance? The performance gap represents an inability to deliver according to expectations.

IS and user service quality: What are the users’ perceptions of service quality? Are they satisfied with IS services? If not, which areas need to be improved? What are the service providers’ perceptions of service quality? Are they satisfied with their performance? If not, which areas need to be improved? These measures represent the gap (G) as described in [6] from both the users’ and service providers’ perspectives.

The satisfaction gap: Is there a gap between users’ and service providers’ service quality measures? If yes, which underlying SERVQUAL dimensions contribute to this gap? These are independent for the two groups and each group’s expectations and perceptions are uniquely identified.

We believe the answers to these questions pave the way to a richer diagnosis of service quality in the IS sector and provide a measure for promoting better management of IS service quality. The mean responses are calculated for each dimension within each of the criteria categories on a normalized five-point scale, five being the best. Figure 1 illustrates our measures using a star diagram, with each point of the star representing a different SERVQUAL dimension. The origin of each point is at the center of the star; thus, as the shaded area in the center of the star grows, the gaps grow across multiple dimensions. Large gaps indicate IS professionals do not understand the level of their customers’ desires, a misunderstanding that can lead to improper goals or specifications for a system under development.

The Expectation Gap. This look at the data provides insight into how the two groups differ in terms of their service expectations. Expectation gap data reveals both groups agree the reliability dimension is the most important from an expectations perspective. A quick glance at the group means across dimensions (reliability, 4.46; response, 4.18; assurance, 4.41; empathy, 4.15) indicates the IS staff seems able to understand their users’ needs and satisfy user expectations accordingly. However, when we take the absolute expectation differences from each matching pair (one user and one IS staff member from each organization), all item differences become significant. In other words, significant service expectation differences exist between users and IS staff in each organization. If the expectations of IS staff are higher than those of users, the result might be inefficient IS resource allocation. IS staff expectations lower than user expectations lead to dissatisfied customers.

The Performance Gap—User and IS Personnel Perceptions of Delivered IS Service. Users and IS personnel indicate different levels of satisfaction with performance of the delivered service based on the dimension measures considered (see the table). The dimensions rated highest by both IS staff and users are assurance and empathy; the responsiveness dimension of delivered service is rated lowest by both groups. The scores, by dimension, are a measure of how users and IS personnel perceive (believe) the service was provided and are not indicators of satisfaction.

In general, IS personnel were more satisfied with their own performance than the users were. These overall results are not unexpected; personal bias tends to favor one’s own performance. An examination of the performance gaps between user and IS staff perceptions reveals the largest gap in the reliability dimension (0.33); the gaps in the empathy (0.26), response (0.24), and assurance (0.24) dimensions are smaller. All four gaps are significantly different from zero at the 0.05 level using t-tests. In addition, the results of the pair differences and the overall differences are very similar (in terms of sign and magnitude), confirming that IS staff rate their performance higher than users do.
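The t-tests mentioned above can be sketched in plain Python: a one-sample t statistic tests whether a mean gap differs from zero. The per-organization gap values below are made-up illustrative data, not the survey's.

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(diffs, mu=0.0):
    """t statistic for H0: mean(diffs) == mu (e.g., mean gap == 0)."""
    n = len(diffs)
    return (mean(diffs) - mu) / (stdev(diffs) / sqrt(n))

# Hypothetical per-organization performance gaps (user score minus
# IS staff score) on the reliability dimension.
pair_gaps = [0.4, 0.2, 0.5, 0.3, 0.1, 0.6, 0.3]
t = one_sample_t(pair_gaps)
print(round(t, 2))
# With n - 1 = 6 degrees of freedom, |t| above roughly 2.45 is
# significant at the 0.05 level (two-tailed).
```

A statistics package would normally be used for matched-pair tests, but the computation itself is just the sample mean scaled by its standard error.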

Service Quality From Users’ and IS Staff’s Perspectives. The service quality gap from the users’ and the IS service providers’ perspectives is calculated as the difference between the perception (P) and expectation (E) scores for each group. A positive result indicates more than satisfactory service; a negative result indicates unsatisfactory performance. Users perceived the service they received as falling short of what they expected. The largest differences were in reliability (-0.82) and responsiveness (-0.73), followed by assurance (-0.52) and empathy (-0.46). This gap information provides IS managers with a focus for efficiently improving customer service. Starting with the largest difference, providers can analyze the items associated with that dimension and open communication with customers on how best to improve.
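Prioritizing dimensions for improvement is then a matter of sorting the P − E gaps from most to least negative. A minimal sketch, using the user-side survey means quoted above:

```python
# User-side service quality gaps (P - E) reported in the survey.
user_gaps = {
    "reliability": -0.82,
    "responsiveness": -0.73,
    "assurance": -0.52,
    "empathy": -0.46,
}

# Most negative gap first: where delivered service falls furthest
# short of expectations, and where improvement effort should start.
priorities = sorted(user_gaps, key=user_gaps.get)
print(priorities)
# ['reliability', 'responsiveness', 'assurance', 'empathy']
```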

Measures of service quality from the providers’ perspective yield similar results. IS staff were least satisfied with their reliability (-0.56), followed by responsiveness (-0.47), assurance (-0.37), and empathy (-0.18). This data indicates the perceived need for improvement from the IS staff. Again, a positive result indicates more than satisfactory service provision; a negative result is unsatisfactory. Managers can work with service providers to examine gap items in an effort to determine why the performance levels did not meet with expected service standards and help overcome identifiable barriers.

The Satisfaction Gap. Figure 1 illustrates the satisfaction gap between users’ service satisfaction and IS staff’s satisfaction. In every dimension, the user quality gaps (reliability, -0.82; response, -0.73; assurance, -0.52; empathy, -0.46) are larger in magnitude than the provider quality gaps (reliability, -0.56; response, -0.47; assurance, -0.37; empathy, -0.18), indicating that discrepancies between expected and provided service are more significant for users. The significance of this gap indicates a need for improved communication between users and IS staff and among the IS staff. The satisfaction gap is most significant in the reliability and responsiveness dimensions, providing a place for management to start in improving user-provider communication.

Negotiating the Gaps

The size of the gap in each area indicates a point of entry for negotiation toward narrowing the gap. Recent studies support the notion that greater satisfaction can be achieved when stakeholders align goals and expectations prior to initiating any project and continue to monitor progress as development proceeds [4]. The survey results described here indicate end users and IS staff report reliability as the most important dimension, followed by assurance, responsiveness, and empathy. In all but one case (empathy), service provider ratings exceeded those of the end user. In all cases, user expectations exceeded user perceptions of the service received. The gap in prior expectations between users and providers is directly related to the final satisfaction of the user.

Resolution of discrepancies prior to project inception can provide an effective evaluation and control technique. Reduction of gaps in prior expectations can be accomplished by building a foundation for collaboration between stakeholders before disputes and problems arise [5]. Prior to system development, metrics should be determined and procedures for evaluation established. All stakeholders must at this point agree on goals, measures, and deliverables that will provide for continuous project improvement.

Implementing and Managing Service Quality

Figure 2 shows an updated feedback loop including expectations, perceptions, and gaps for the IS service provider and the IS user. Within this framework, a thorough diagnosis of service quality includes the satisfaction gap. This modified 360-degree feedback framework, implemented with the SERVQUAL instrument, provides opportunities for IS service providers and users to recognize where differences exist. The proposed thorough diagnosis of the quality of a service product would include each of the elements shown in Figure 2.

At the start of any project, a difference (expectations gap) may exist in the expectations of the product (service) by users and IS service providers. This gap indicates a discrepancy between the users’ requirements and the IS service providers’ understanding of those requirements. Using SERVQUAL, measures of stakeholders’ expectations can be obtained prior to project inception. This implementation provides for negotiation of expectations prior to proceeding with the project. When common goals are negotiated, there is a much better chance that expectations are shared.

Once completed, perceived product (service) performance is measured separately by users and IS personnel, resulting in a performance gap. A gap might indicate IS personnel are ill-prepared to evaluate the impact of delivered service from the users’ perspective. Prior agreement on expectations may well serve to narrow or even eliminate the performance gap. Remaining deficiencies as described by the user could be resolved through project maintenance. Post-implementation review should consider how best to proceed in narrowing this gap in future projects.

The users’ view of service quality is measured as the difference between users’ expectations and performance perceptions of the product. A significant positive gap would indicate IS personnel exceeded expectations; more likely, a negative gap indicates the IS service provider has failed to meet agreed-upon expectations. At this point, given the discussions occurring in the early stages of the project, unrealistic expectations should not be the source of the gap.

The IS view of service quality is also measured as the difference (gap) between IS personnel’s service expectation and service performance perception. A significant gap indicates an acknowledgment by IS personnel of failure to provide the requested service. A satisfaction gap represents the difference between the users’ and IS personnel’s measure of service quality. This gap provides important feedback for refining expectations and improving the service quality of future projects. Similar gap sizes indicate similar perceptions in the achievement of service.

Given this thorough diagnosis of the service quality of a product, an organization could proceed to implement a service strategy for improving service quality through the following means:

  • Use the multisource evaluation instrument prior to proceeding with development as a mechanism for establishing agreement on service/product expectations.
  • Adopt a customer-oriented organization that focuses on the needs of users rather than the needs of systems.
  • Use the identified dimensions as a basis for training employees in both technical and functional (reliability, responsiveness, assurance, and empathy) skills.
  • Adopt a goal of providing excellent service to internal customers and reinforce that goal by making the most of user-provider interactions.
  • Use SERVQUAL to establish stakeholders’ service quality measures.
  • Where significant differences exist, identify the source of discrepancies and discuss how to bridge these gaps.

This service strategy implementation represents the best strategy for closing the satisfaction gap and achieving consonance [1].

Conclusion

Information systems service providers face an increasingly competitive market. Given the difficulty of managing what one cannot measure, we have offered here a method for quantifying service quality that includes both the user and the IS service provider perspectives. The tool provides an informative assessment that offers direction for improvement based on the sign of the gap measure. Given a more complete picture of service quality behavior within the firm, organizations can negotiate metrics using pre-project partnering activities to achieve consonance. The framework can be implemented through the service strategy described here, with the goal of closing the satisfaction gap.

Implementing a comprehensive service quality diagnostic as presented here requires commitment on the part of all stakeholders. As service delivery mechanisms are enhanced, the measurement of IS service quality is even more critical. An implementation of this nature may prove to be indispensable as a competitive weapon.

Figures

F1 Figure 1. Gap measures.

F2 Figure 2. Measuring service quality.

Tables

UT1 Table. SERVQUAL measurement: Expected service quality.

References

    1. Ferguson, J.M. and Zawacki, R.A. Service quality: A critical success factor for IS organizations. Information Strategy 9, 2 (Winter 1993).

    2. Jiang, J. and Klein, G. User evaluation of information systems: By system typology. IEEE Transactions on Systems, Man, and Cybernetics 29, 1 (1999), 111–116.

    3. Kettinger, W.J. and Lee, C.C. Replication of measures in information systems research: The case of IS SERVQUAL. Decision Sciences 30, 3 (Summer 1999), 893–899.

    4. Klein, G. and Jiang, J. Seeking consonance in information systems. Journal of Systems and Software 56 (2001), 195–202.

    5. Larson, E.W. Partnering on construction projects: A study of the relationship between partnering activities and project success. IEEE Transactions on Engineering Management 44, 2 (1997), 188–195.

    6. Pitt, L., Berthon, P., and Lane, N. Gaps within the IS department; barriers to service quality. Journal of Information Technology 13, 3 (Sept. 1998), 191–200.

    7. Zeithaml, V.A., Parasuraman, A., and Berry, L.L. Delivering Quality Service: Balancing Customer Perceptions and Expectations. The Free Press, NY, 1990.

    1Confirmatory Factor Analysis (CFA) results (available from the authors) support consideration for the reliability, responsiveness, assurance, and empathy dimensions.
