Computer-aided software engineering (CASE) technology has been suggested as a possible means to increase the productivity and quality of systems development [4, 6]. While some studies report productivity gains from CASE usage [2, 4, 6], others observe that expected productivity gains are difficult to achieve [7, 9]. The importance of examining these contradictory results can hardly be overemphasized, as the uneven success of CASE has led many information systems managers to delay implementation [8].
Many reasons have been put forth to account for these inconsistencies [6]. The most important of these appears to be the lack of common outcome measures [5]. The absence of carefully constructed outcome measures constrains IS managers in their benchmarking and technology assessment efforts. Iivari [6] makes a similar observation in the context of CASE technology. A meaningful assessment of technology outcomes requires that appropriate measures of technology usage within business processes first be defined [3]. We propose a measurement framework to assess CASE usage in the systems development process and its constituent subprocesses.
We conducted a national survey to collect the empirical data for our study. Using our CASE usage assessment framework, we compute two measures, adoption and infusion, to describe the spread of CASE in IS organizations. Our analysis provides insights into demographic differences and similarities between adopters and non-adopters of CASE technology. We identify the development tasks that are most and least supported by CASE. Our results have implications for organizations that use CASE and for those examining it as a possible planning and design support technology for systems development. The results also have significant implications for the vendor community.
A Framework for Assessing CASE Usage
We draw upon Henderson and Cooprider’s [5] model of IS planning and design support technology to inform our framework for assessing CASE usage in systems development. They identify three key processes that a planning and design support technology such as CASE should address, namely production, coordination, and organization of systems development [5]. Innovations in these technologies should be targeted at and deployed in each of these processes. While gains can be accrued from the improved efficiency and effectiveness of CASE usage within any one of these processes, greater gains are likely when all three processes are enhanced and the interdependence among them is appropriately managed. Here we discuss these three processes and their constituent subprocesses.
CASE support for production. CASE technology support for production should focus on enhancing an individual’s or group’s capacity to generate planning or design decisions and the resulting artifacts or products. The production process consists of three interrelated subprocesses, namely representation, analysis, and transformation. These subprocesses, in turn, consist of many different tasks. We briefly summarize these subprocesses here:
- Representation tasks enable the definition, description, or modification of an object, relationship or process. These tasks support the abstraction and conceptualization of a phenomenon. Process flow diagrams, functional charting, entity modeling, domain set specifications, and association or relation mapping are some techniques that are part of the representation repertoire of CASE technology.
- Analytical tasks enable users to explore, simulate, or evaluate alternate representations or models of objects, relationships, or processes. They focus on the problem-solving and decision-making aspects of planning and design.
- Transformation tasks automate a significant planning or design task, essentially substituting for a human designer or planner. From an economic perspective, these tasks substitute capital for labor and emphasize efficiency enhancement through technology investments.
CASE support for coordination. CASE technology support for coordination should enable the interaction of multiple agents who are participants in the development process. Control and cooperation are identified as the two subprocesses of coordination that CASE technology should support. Control tasks enable users to plan for and enforce rules, policies, or priorities that govern or restrict the activities of team members. These tasks encompass both resource management and access control. Cooperation tasks enable users to exchange information with other individuals for the purpose of influencing (or affecting) the concept, process, or product of the team. These tasks can serve as a communication channel and a facilitation aid.
CASE support for organization. CASE technology support for organization should enable the implementation of policies or procedures that determine the environment in which production and coordination tasks are applied. Support and infrastructure are the two facets of organization in the development process at which CASE technology should be targeted. Support tasks help individual users understand and use the technology effectively. These tasks can provide common online support. More importantly, they can provide a medium to accumulate and share the experiences and learning of other developers. Infrastructure tasks enable the implementation of standards which, in turn, enable portability of skills, knowledge, procedures, or methods across processes within a development project, and conceivably across projects. These tasks also address the enforcement and management of consistency in the data definition storage structure and standards for a central repository.
Survey Details
Table 1 presents the framework used to assess CASE usage: the three development processes, their subprocesses, and the 22 tasks on which we measure CASE usage. Usage of CASE for each task is measured on a scale of 0 to 4 (0 = not used at all; 1 = used on an experimental basis; 2 = used on a regular basis by a few people/projects; 3 = used on a regular basis by most people/projects; and 4 = used on a regular basis by all people/projects).
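For readers who wish to work with such data, the following is a minimal sketch of how one organization’s task-level responses on this scale might be encoded for analysis. The task names are hypothetical placeholders rather than the actual Table 1 items.

```python
# Minimal sketch: encoding one organization's survey responses on the 0-4 usage scale.
# Task and subprocess names are illustrative placeholders, not the actual Table 1 items.

USAGE_SCALE = {
    0: "not used at all",
    1: "used on an experimental basis",
    2: "used on a regular basis by a few people/projects",
    3: "used on a regular basis by most people/projects",
    4: "used on a regular basis by all people/projects",
}

# One respondent's scores, keyed by (subprocess, task).
responses = {
    ("representation_analysis", "entity modeling"): 4,
    ("representation_analysis", "process flow diagramming"): 3,
    ("design_construction", "procedural code generation"): 1,
    ("testing_validation", "test data generation"): 0,
}

for (subprocess, task), score in responses.items():
    print(f"{task:30s} -> {score} ({USAGE_SCALE[score]})")
```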
We used a survey-based approach to collect the data for our study. Twelve IS professionals, including senior IS executives, development personnel, and researchers, assessed whether the items used in our survey captured the essence of development tasks and whether the proposed measurement approach was appropriate for assessing CASE usage within an IS organization. They were also asked to comment on item wording and the clarity of the instructions provided to respondents. Their feedback was incorporated into the questionnaire.
We then conducted a large-scale national survey, as this enabled us to profile the nature of use (or lack of use) of CASE technology within IS organizations in the U.S.; restricted convenience samples are more likely to depict biased viewpoints. Our sample was selected from the Directory of Top Computer Executives, a listing that includes over 34,000 IS executives representing more than 15,000 organizations, compiled by Applied Computer Research Inc., Phoenix, AZ. The database is updated twice a year to maintain its currency. Our survey, accompanied by a cover letter explaining the purpose of the study, was sent to 1560 top IS executives. A follow-up mailing was sent three weeks after the initial mailing.
A total of 350 usable questionnaires were returned, representing a response rate of 23%. Of these respondents, 245 did not use CASE, 59 had considered using it at one point in time but did not use it, and 46 were using CASE.
Measuring CASE Usage
We performed a principal component factor analysis followed by a varimax rotation to empirically assess groupings of the tasks listed under production, coordination, and organization aspects of development. The results of the factor analysis are shown in Table 2.
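The article does not include analysis code. As a rough illustration only, the sketch below shows how a comparable principal-factor extraction with varimax rotation could be run today, assuming the third-party factor_analyzer package and a respondents-by-22-task score matrix. The data, sample size, and factor count are placeholders, not the study’s.

```python
# Illustrative sketch only -- not the authors' analysis procedure.
# Assumes the third-party `factor_analyzer` package (pip install factor_analyzer).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
# Placeholder data: 200 organizations x 22 development tasks scored 0-4.
scores = pd.DataFrame(rng.integers(0, 5, size=(200, 22)),
                      columns=[f"task_{i + 1}" for i in range(22)])

# Six factors is an assumption here, matching the six subprocess groupings
# the study reports (cf. Table 2).
fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(scores)

loadings = pd.DataFrame(fa.loadings_, index=scores.columns)
print(loadings.round(2))            # rotated factor loadings
print(fa.get_factor_variance())     # variance explained by each factor
```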
The production process has three subprocesses. Our data does not suggest two distinct groupings for representation and analysis, but rather indicates one aggregate grouping. This may be because of the close logical relationship between them. Interestingly, transformation splits into two groupings: transformation of development activities (automation of planning or design tasks, database code/schema generation, procedural code generation) and transformation of validation and enhancement activities (automatic restructuring of program code, analysis of program structure, test data generation). We have labeled these subprocesses “design and construction” and “testing and validation,” respectively. The control and cooperation subprocesses emerge as one process, suggesting a close relationship and a lack of discrimination between them. Both of these subprocesses require adherence to rules set out by the system development team and require that communication capabilities be used to achieve requisite control and enable desired cooperation. As expected, the organization process has two subprocesses, namely support and infrastructure.
CASE adoption. We define CASE adoption level as the proportion of development tasks for which CASE tools are used at or beyond the experimental level. A score of 0 means that CASE tools are not being used for any of the development tasks/projects, while a score of 1 implies that CASE tools are being used for all concerned tasks/projects. We compute measures of CASE adoption for each of the three processes and their subprocesses, and, in addition, we compute an overall measure of CASE adoption.
Table 1 shows an IS organization using both representation and analysis tasks at or beyond the experimental level. The CASE adoption level for the representation and analysis subprocess for this organization is 2/2 = 1.0. We similarly compute adoption levels for each subprocess suggested by our factor analysis procedure (Table 2). We then compute the adoption level for each process by averaging the adoption levels of its underlying subprocesses. For example, in the case of the production process, we first compute CASE adoption levels for representation and analysis, design and construction, and testing and validation. We then average these adoption measures to obtain the CASE adoption level for production. An aggregate measure of CASE adoption is computed by averaging adoption scores on the three processes. Our adoption measure is consistent with and extends past research that has focused on studying the adoption of complex information technologies, including CASE [8].
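The computation can be summarized in a few lines of code. The sketch below uses hypothetical task names and scores on the 0-4 scale; it illustrates the definitions above rather than reproducing the study’s instrument.

```python
# Minimal sketch of the adoption computation, using hypothetical task names
# and usage scores on the 0-4 scale.

usage = {
    "representation_analysis": {"entity modeling": 4, "process flow diagramming": 3},
    "design_construction":     {"procedural code generation": 1, "schema generation": 0},
    "testing_validation":      {"test data generation": 0, "code restructuring": 0},
}

def adoption_level(task_scores):
    """Proportion of tasks used at or beyond the experimental level (score >= 1)."""
    return sum(score >= 1 for score in task_scores.values()) / len(task_scores)

# Adoption per subprocess, then averaged up to the production-process level.
subprocess_adoption = {sp: adoption_level(tasks) for sp, tasks in usage.items()}
production_adoption = sum(subprocess_adoption.values()) / len(subprocess_adoption)

print(subprocess_adoption)   # {'representation_analysis': 1.0, 'design_construction': 0.5, 'testing_validation': 0.0}
print(production_adoption)   # 0.5
```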
CASE infusion. Adoption measures the proportion of development tasks for which CASE has been introduced into the IS organization. However, any two organizations that have adopted CASE to support a development task can differ in their usage of CASE. One organization may use CASE in a limited sense, such as in a pilot project, while the other may deploy CASE broadly in several or all development projects. We define CASE infusion as the extent to which CASE is used to support development tasks. The CASE infusion level is computed as the ratio of the total usage score on the development tasks for which CASE was adopted to the maximum possible usage score on these tasks.
Table 1 shows that the total CASE usage score on the two representation and analysis tasks is 4 + 3 = 7. The maximum possible usage score is 2 x 4 = 8. Hence, the CASE infusion level for representation and analysis for this organization is 7/8 = 0.88. As with CASE adoption levels, we compute CASE infusion levels for each of the production, coordination, and organization processes and their subprocesses. In addition, we compute an aggregate measure of CASE infusion.
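Under the same assumptions as the adoption sketch (hypothetical task names, 0-4 scale), the infusion level for a set of tasks can be computed as follows; the worked example reproduces the 7/8 = 0.88 figure above.

```python
# Minimal sketch of the infusion computation: among tasks where CASE was
# adopted (score >= 1), total usage relative to the maximum possible (4 per task).

def infusion_level(task_scores, max_score=4):
    adopted = [s for s in task_scores.values() if s >= 1]
    if not adopted:
        return 0.0
    return sum(adopted) / (max_score * len(adopted))

# Worked example from the text: two representation/analysis tasks scored 4 and 3.
rep_analysis = {"entity modeling": 4, "process flow diagramming": 3}
print(round(infusion_level(rep_analysis), 2))   # 0.88  (= 7 / 8)
```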
Findings and Implications
Figures 1, 2, and 3 profile the organizations in our sample and present a contrast between adopters (N = 46) and non-adopters (N = 304). Some interesting insights emerge from these figures:
- Both adopters and non-adopters have almost equal representation in the three industry sectors: manufacturing, government, and service. Overall, almost half of the adopter and non-adopter organizations come from the manufacturing sector, a third come from the service sector, and the remainder are government organizations. Industry sector membership is not a differentiating variable between adopters and non-adopters of CASE.
- Organizations with 11–50 employees in the IS department constitute the highest percentage (43.44%) of adopters, while those with 1–10 employees represent the highest percentage (43.42%) of non-adopters. As IS department size grows larger, the proportion of adopters is almost twice that of non-adopters, which is consistent with past findings. It appears that larger IS departments are more likely to have the requisite resources to support the costs of CASE innovation.
- Portfolio size, as assessed by the number of development and maintenance/ enhancement projects, does not differentiate between adopters and non-adopters. Further analysis indicated no difference between adopters and non-adopters in terms of their make-up of development and maintenance/enhancement projects.
Adoption and infusion of CASE. Table 3 and Figure 4 show the levels of CASE adoption and CASE infusion. The aggregate level of CASE adoption is moderate among adopter organizations. Adopters in our sample have an average CASE adoption level of 0.6 out of a maximum possible score of 1.0. This can be interpreted to mean that 60% of development tasks are being supported to some degree by CASE, while CASE is not being used at all for the remaining 40%. The aggregate infusion level is 0.5 out of a maximum of 1.0. This value has to be interpreted in the context of the measurement scale used (see Table 1; 0 = not used at all to 4 = used on a regular basis by all people/projects). On average, the IS organizations represented in our sample have few people/projects using CASE on a regular basis in the tasks where CASE has been adopted. Interpreting the aggregate CASE adoption and infusion levels collectively, our results suggest that CASE is being used for 60% of the development tasks, but that the degree of CASE usage in these tasks is limited to a small segment of projects.
Some interesting insights emerge by comparing the adoption and infusion levels for production, organization, and coordination. As shown in Table 4, the CASE adoption levels for production and organization are about the same. Significantly lower adoption levels are observed for coordination suggesting that organizations are using few CASE capabilities to enable collaboration among development personnel. Similarly, adopted tasks for production and organization have been infused to a greater extent in comparison to those for coordination.
In a relative sense, CASE capabilities are being targeted at, experimented with, and used for production and organization purposes to a greater extent than for coordination purposes. Although group work and team activities consume significant resources in the development process and the importance of technology support for teamwork has been emphasized [10], this is where we observe the least adoption and infusion of CASE tools.
Adoption and infusion of CASE for production. The CASE adoption level for representation/analysis is significantly higher than the observed level for design and construction which, in turn, is significantly higher than the observed adoption level for testing and validation (Table 4). No significant differences are observed in the CASE infusion levels for representation/analysis and design and construction, but the CASE infusion level for testing and validation is significantly lower than that for the other two subprocesses. Table 5 shows that CASE usage for testing and validation tasks ranks the lowest within production-related tasks and CASE usage for representation/analysis tasks ranks the highest not only within the production process, but across the tasks constituting the other two processes as well.
Adoption and infusion of CASE for coordination. Table 5 summarizes the strikingly low CASE usage levels for each of the coordination tasks. Organizations in our sample are using CASE in a few projects to enforce rules, policies, or priorities concerning systems development process activities. However, CASE usage for other coordination tasks is limited to experimentation. It is conceivable that organizations are leaning toward dedicated, user-friendly tools, such as Microsoft Project for resource management and Lotus Notes for communication. We conclude from our study that CASE is being used very minimally for coordination tasks.
Adoption and infusion of CASE for organization. The adoption and infusion levels of CASE for infrastructure are significantly higher than the observed levels for support (Table 4). As with CASE usage for representation/analysis, we find that CASE usage for infrastructure purposes has spread beyond experimentation to institutionalized use in a limited number of projects. However, CASE is being used minimally to provide support for developers (Table 5), although effective support may be critical in reducing the knowledge barriers associated with the assimilation of complex technologies such as CASE [1].
Conclusion
Past studies show that the lack of common outcome measures is one of the most important reasons behind the inconsistent results reported about CASE usage and impacts. This, in turn, may have discouraged organizations from using CASE. In this study, we address this issue by developing a framework and a set of measures to gauge CASE usage in IS organizations. We use this approach to collect data from a random sample of IS organizations and to assess the nature and degree of CASE usage in these organizations. Our findings have implications for IS organizations that use CASE and for those examining it as a possible core technology for systems development. The results also have significant implications for the vendor community.
The potential of any technology can be realized when key capabilities are not only adopted but also infused across the organization. Users should recognize that while gains can be made from improved efficiency and effectiveness within the individual production, coordination, and organization processes, greater gains are likely to be accrued when all three processes are improved and their interdependencies are well managed. Our results show that organizations have adopted CASE for only slightly more than half of the development tasks we considered here. Furthermore, they use CASE for these tasks in only a small subset of development projects. It is not surprising, then, that CASE technology has not shown much effect on systems development performance or that there is no broad agreement about the plausible impacts of the technology.
Specifically, CASE capabilities are being targeted at, experimented with, and used to a greater extent for production and organization purposes than for coordination purposes. Yet group work and team activities consume significant resources in the development process, and the importance of technology support for teamwork can hardly be overemphasized. This, however, is where we observe the least adoption and infusion of CASE tools. Both users and vendors need to recognize the very low adoption and infusion rates of this facet of CASE technology.
In general, CASE infusion is lower than adoption. This is true at the aggregate development level and for the constituent processes of production, coordination, and organization. This is not surprising, as initiating the use of a technology is an easier undertaking than broadly assimilating it into organizational work systems. Indeed, infusion of a complex technology such as CASE may require not only its modification to suit organizational needs, but also substantial modifications to organizational practices and procedures to foster a conducive environment for its increased use. Such modifications, although difficult, are not impossible to make. Organizations interested in spreading the use of CASE should carefully evaluate how their organizational systems, procedures, and management practices may be constraining the internal spread of the technology.
We would like to draw attention to two specific areas where the level of CASE adoption and infusion is especially low. Of the production subprocesses considered, the adoption and infusion levels of CASE for testing and validation purposes are the lowest. These tasks are tedious, yet CASE does not appear to have become the preferred technology for aiding developers in executing them. Furthermore, CASE is being used minimally to provide support for developers. Vendors should examine the degree to which the support packaged with CASE tools plays a role, if any, in reducing knowledge barriers and increasing CASE usage. Such barriers are critical obstacles that need to be overcome in order to achieve higher levels of adoption and infusion of complex technologies such as CASE.
Figures
Figure 1. Distribution of organizations by industry (numbers on bars represent the number of organizations in the sample in respective categories).
Figure 2. Distribution of organizations by ISD size (numbers on bars represent the number of organizations in the sample in respective categories).
Figure 3. Distribution of organizations by portfolio size (numbers on bars represent the number of organizations in the sample in respective categories).
Figure 4. Adoption/infusion levels of CASE dimensions (numbers on bars represent the CASE adoption/infusion levels in organizations for the respective processes).
Tables
Table 1. Measurement framework for CASE usage.
Table 2. Results of factor analysis.
Table 3. CASE adoption and infusion levels for development processes.
Table 4. Comparison of CASE adoption and infusion levels across development processes.
Table 5. Mean CASE usage scores and ranks for development tasks.