Using the DEA Methodology to Rank Software Technical Efficiency

The use of Data Envelopment Analysis to evaluate software efficiency offers a more formal and effective option.

An analogy can be drawn between software and companies: software transforms inputs into outputs in a more or less efficient way. For example, a file compressor can be seen as a black box that transforms an input (a given file) into an output (the compressed file) with a certain degree of efficiency (speed, compression ratio, and so on).

Several approaches can be found in the literature to measure the technical efficiency of firms, the most common being Stochastic Production Frontiers (SPF) and Data Envelopment Analysis (DEA). The SPF model has the advantage of taking random noise into account, but it is limited to a single output; where multiple outputs exist, they must be aggregated into one. Another key issue is that SPF models produce good estimators for panel data, but not for cross-sectional data [3]. DEA, on the contrary, is a nonparametric, deterministic method for estimating technical efficiency based on linear programming techniques. DEA frontiers are based on optimal units and represent the maximum level of output that could be obtained from a given level of inputs if the units considered were efficient. This is a relative measure in which a given unit is compared against the others in the sample. DEA can be applied in multi-output situations and requires no assumptions about the distribution of errors in the underlying production process. The major deficiency of the DEA approach, however, is its deterministic nature.

Several reasons lead us to consider DEA the most appropriate method for estimating software efficiency, among them the general absence of time-series data and the fact that software can be modeled as a multi-output process, which DEA handles directly.

For a better understanding, we present the method through its fractional model. Intuitively, if a company's activity can be synthesized by a single input and a single output, its efficiency could be defined as the ratio of the level of output produced over the level of input used. Companies, however, often present a more complex activity and several inputs and outputs must be taken into consideration. In that case, efficiency can be measured by aggregating the inputs and outputs using an appropriate set of weights: efficiency is then defined as the ratio of the weighted sum of outputs over the weighted sum of inputs.
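As a small illustration of this ratio definition (the figures are invented for exposition and do not come from the study), a unit with two outputs weighted by µ and two inputs weighted by v would have efficiency

$$\text{efficiency}=\frac{\mu_1 y_1+\mu_2 y_2}{v_1 x_1+v_2 x_2}=\frac{0.6\cdot 30+0.4\cdot 10}{0.5\cdot 20+0.5\cdot 24}=\frac{22}{22}=1.$$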

The main problem with this definition lies in the selection of the correct set of weights. The DEA methodology assumes that each firm may value its own inputs and outputs in different ways. Moreover, the Charnes et al. [2] model (or CCR model) is based on the idea that the unit being evaluated, or Decision Making Unit (DMU), may choose the set of weights most favorable to it, subject to the condition that, under that same set of weights, the efficiency rate of every unit must be less than or equal to one.

If Xj (Yj) denotes the s×1 (t×1) column vector of inputs (outputs) of the j-th DMU (with this sub-index taken equal to zero for the DMU being evaluated), the problem can be formulated as:

$$
\begin{aligned}
\max_{\mu,\,v}\quad & h_0 = \frac{\mu^{T}Y_0}{v^{T}X_0} \\
\text{s.t.}\quad & \frac{\mu^{T}Y_j}{v^{T}X_j} \le 1, \qquad j=1,\dots,n \\
& \mu \ge \varepsilon,\quad v \ge \varepsilon
\end{aligned}
\tag{1}
$$

and it must be computed for each of the DMUs. The optimal values of µ and v are called the virtual multipliers and they represent the weights associated with the outputs and inputs, respectively. The unit being evaluated is considered efficient if, using its most favorable set of weights, the ratio of weighted outputs over weighted inputs is equal to one. Note that the virtual multipliers µ and v are restricted to be greater than ε, a non-Archimedean element (a positive number as small as we desire). This restriction avoids null values for the virtual multipliers µ and v.

This fractional program can be transformed into a linear one by maximizing the numerator and fixing the denominator to a constant. In the DEA methodology this constant is taken equal to one, so that the efficiency rate is bounded above by one (equivalent to 100%). This yields the model:

$$
\begin{aligned}
\max_{\mu,\,v}\quad & \mu^{T}Y_0 \\
\text{s.t.}\quad & v^{T}X_0 = 1 \\
& \mu^{T}Y_j - v^{T}X_j \le 0, \qquad j=1,\dots,n \\
& \mu \ge \varepsilon,\quad v \ge \varepsilon
\end{aligned}
\tag{2}
$$

This problem has t+s variables with t+s+n+1 constraints.
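As a minimal sketch of how model (2) could be solved in practice, assuming a general-purpose linear programming library is available (here SciPy; the data matrices are invented placeholders, not the compressor measurements analyzed later):

import numpy as np
from scipy.optimize import linprog

def ccr_multiplier_efficiency(X, Y, j0, eps=1e-6):
    """Input-oriented CCR multiplier model. X: (s, n) inputs, Y: (t, n) outputs,
    j0: index of the DMU under evaluation. Returns its efficiency score."""
    s, n = X.shape
    t, _ = Y.shape
    # Decision variables: [mu (t output weights), v (s input weights)].
    c = np.concatenate([-Y[:, j0], np.zeros(s)])            # maximize mu' Y0
    A_eq = np.concatenate([np.zeros(t), X[:, j0]]).reshape(1, -1)
    b_eq = [1.0]                                             # v' X0 = 1
    A_ub = np.hstack([Y.T, -X.T])                            # mu' Yj - v' Xj <= 0, all j
    b_ub = np.zeros(n)
    bounds = [(eps, None)] * (t + s)                         # multipliers bounded away from zero
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return -res.fun

# Toy data: one constant input, three outputs, four DMUs (invented numbers).
X = np.ones((1, 4))
Y = np.array([[2.0, 3.0, 1.0, 2.5],
              [1.0, 0.5, 2.0, 1.5],
              [4.0, 4.0, 3.0, 5.0]])
print([round(ccr_multiplier_efficiency(X, Y, j), 3) for j in range(4)])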

Since n, the number of units, is typically much larger than t+s, it is more convenient to work with the dual rather than with the primal. The dual can be formulated as follows:

$$
\begin{aligned}
\min_{\theta,\,\lambda,\,s^{+},\,s^{-}}\quad & \theta - \varepsilon\left(\mathbf{1}^{T}s^{+} + \mathbf{1}^{T}s^{-}\right) \\
\text{s.t.}\quad & \sum_{j=1}^{n}\lambda_j X_j + s^{+} = \theta X_0 \\
& \sum_{j=1}^{n}\lambda_j Y_j - s^{-} = Y_0 \\
& \lambda_j \ge 0,\quad s^{+} \ge 0,\quad s^{-} \ge 0
\end{aligned}
\tag{3}
$$

where s+ is an s×1 column vector of slack variables for the first set of constraints (the inputs) and s− is a t×1 column vector of slack variables for the second set of constraints (the outputs).

If efficiency had been defined as the ratio of the weighted sum of inputs to the weighted sum of outputs (rather than the inverse), the output-oriented model (CCR-O) would have been obtained; this choice determines the orientation of the problem, either input or output. All developments and ideas carry over analogously to the output orientation. The resulting dual CCR-O (CCRD-O) formulation is given by:

$$
\begin{aligned}
\max_{\theta,\,\lambda,\,s^{+},\,s^{-}}\quad & \theta + \varepsilon\left(\mathbf{1}^{T}s^{+} + \mathbf{1}^{T}s^{-}\right) \\
\text{s.t.}\quad & \sum_{j=1}^{n}\lambda_j X_j + s^{+} = X_0 \\
& \sum_{j=1}^{n}\lambda_j Y_j - s^{-} = \theta\,Y_0 \\
& \lambda_j \ge 0,\quad s^{+} \ge 0,\quad s^{-} \ge 0
\end{aligned}
\tag{4}
$$

Model (4) imposes constant returns to scale. To relax this restriction, Banker et al. [1] showed it is sufficient to add the so-called convexity restriction on the lambdas (the sum of the lambdas equal to one). The resulting model is known as the BCC model and is the one we use in our analysis. The CCR model provides the overall technical efficiency, whereas the BCC model provides the pure technical efficiency; the quotient of the two values is the scale efficiency [1].

$$
\begin{aligned}
\max_{\theta,\,\lambda,\,s^{+},\,s^{-}}\quad & \theta + \varepsilon\left(\mathbf{1}^{T}s^{+} + \mathbf{1}^{T}s^{-}\right) \\
\text{s.t.}\quad & \sum_{j=1}^{n}\lambda_j X_j + s^{+} = X_0 \\
& \sum_{j=1}^{n}\lambda_j Y_j - s^{-} = \theta\,Y_0 \\
& \sum_{j=1}^{n}\lambda_j = 1 \\
& \lambda_j \ge 0,\quad s^{+} \ge 0,\quad s^{-} \ge 0
\end{aligned}
\tag{5}
$$

The BCC model seeks the maximum proportional expansion of the outputs of the firm under evaluation, j0, that can be attained by a weighted (convex) combination of the inputs and outputs of the firms that outperform it.

The optimal theta is greater than or equal to one, and its inverse, which lies between 0 and 1, is the technical efficiency score of the firm under evaluation. If the optimal theta for a given unit is equal to one, the unit is considered radially efficient; otherwise it is radially inefficient.

When the unit under evaluation, DMU0, is rated as inefficient, the solution to the dual problem provides a set of DMUs (the peer group or reference set) that are rated as efficient under DMU0's weights. These are the units whose corresponding values of lambda are non-zero in the solution for the unit under evaluation. Moreover, the optimal solution of the model provides a virtual unit on the frontier, constructed as a linear combination of the units in the reference set. To become efficient, the unit being evaluated should be transformed into that virtual DMU. This is done by a radial expansion of the outputs or reduction of the inputs (for an output or input orientation, respectively) by means of the optimal value of theta. Theta is therefore the ratio by which the outputs should be expanded to make the analyzed unit efficient: to render the unit under analysis efficient, we must multiply its output (or input, depending on the orientation) vector by theta and then add the slacks to the radial expansion (or contraction) of the outputs (inputs).
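To make the peer-group and target construction concrete, here is a minimal sketch of the radial part of the output-oriented BCC model (5); the slack terms are omitted for brevity, and the data are invented rather than the compressor measurements used in the study:

import numpy as np
from scipy.optimize import linprog

def bcc_output_oriented(X, Y, j0):
    """Radial output-oriented BCC model. X: (s, n) inputs, Y: (t, n) outputs.
    Returns (theta, lambdas) for DMU j0; the reported TE score is 1/theta."""
    s, n = X.shape
    t, _ = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n].
    c = np.zeros(1 + n)
    c[0] = -1.0                                    # maximize theta
    A_out = np.hstack([Y[:, [j0]], -Y])            # theta*Y0 - Y@lam <= 0
    A_in = np.hstack([np.zeros((s, 1)), X])        # X@lam <= X0
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate([np.zeros(t), X[:, j0]])
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n))])   # convexity: sum(lam) = 1
    b_eq = [1.0]
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0], res.x[1:]

# Toy data: one constant input, three outputs, four DMUs (invented numbers).
X = np.ones((1, 4))
Y = np.array([[2.0, 3.0, 1.0, 2.5],
              [1.0, 0.5, 2.0, 1.5],
              [4.0, 4.0, 3.0, 5.0]])
theta, lam = bcc_output_oriented(X, Y, 0)
print(round(1 / theta, 3), np.round(lam, 3))       # TE score and peer weights
print(np.round(theta * Y[:, 0], 3))                # radial output target for DMU 0

The DMUs with non-zero entries in lam form the reference set, and theta*Y0 is the virtual target on the frontier (before any slacks are added).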


Selection of Inputs and Outputs

Excellent examples and measurements for a wide variety of compressors appear in [4]. We used some of the examples provided on that Web page, which include measurements of several variables that can be used for an efficiency analysis.

To analyze the efficiency of different compressors, we should take into account a series of variables or characteristics they possess (speed, compression ratio, and so on). In our study we considered a single input, given by the size (in bytes) of the file to be compressed. This input has the same value for all units, as we used the same file in the analysis of all compressors. Hence, since the DEA methodology is unit invariant, including this input is effectively the same as taking an input equal to one for all units.

As we only have one input and as it is the same for all units, the CCR and the BCC models provide the same results; a scale efficiency analysis cannot be carried out in this case.

We also considered three different outputs: the speed (in bytes per second) at compressing the input file, the speed at extracting it, and the compression ratio. The first two represent a positive service or output (the benefit obtained from the speed of the compressor) that we want to maximize. The third output is also important, as it indicates how much the input file can be compressed. The compression ratio has been calculated as the percentage by which the input file has been reduced; for example, if a file of size 100 is reduced to a file of size 25, the compression ratio is 75%. Therefore, the higher the compression ratio the better, as the resulting compressed file is smaller.
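A brief sketch of how the three outputs could be derived from raw measurements (the sizes and timings below are hypothetical, and we assume, for illustration, that both speeds are expressed relative to the original file size):

# Hypothetical measurements for one compressor on one file.
original_size = 100_000      # bytes before compression
compressed_size = 25_000     # bytes after compression
compress_time = 4.0          # seconds to compress
extract_time = 1.0           # seconds to decompress

compress_speed = original_size / compress_time                    # bytes per second
extract_speed = original_size / extract_time                      # bytes per second
compression_ratio = 100 * (1 - compressed_size / original_size)   # percentage reduction

print(compress_speed, extract_speed, compression_ratio)  # 25000.0 100000.0 75.0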

A unit is considered efficient if it attains an efficiency rate equal to one for some set of output weights (its most favorable set). This implies the weight given to each output can vary freely among the different units and no restrictions are imposed between them (any output can be of high or low importance). If the importance of one output were known to be higher than another's, this could be included in the formulation through some extra constraints, as sketched below. Alternatively, if a preference structure on the outputs (provided by some multi-criteria decision technique, such as AHP) were known, other methodologies could be used that take this preference structure into account.
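For illustration only, such an extra constraint could be grafted onto the multiplier sketch shown earlier. Assuming the outputs are ordered [compression speed, extraction speed, compression ratio] and the variables are laid out as [mu (t), v (s)], requiring the compression-ratio weight to be at least as large as the extraction-speed weight amounts to one additional inequality row:

import numpy as np

t, s = 3, 1                      # three outputs, one input, as in our study
extra_row = np.zeros(t + s)      # hypothetical weight restriction: mu_extract - mu_ratio <= 0
extra_row[1] = 1.0               # +mu_extract (second output weight)
extra_row[2] = -1.0              # -mu_ratio (third output weight)
# The row would be appended to A_ub (with 0 appended to b_ub) before calling linprog.
print(extra_row)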


Results

Here we present an example of the use of the DEA methodology in the analysis of software. We carried out three different analyses: in the first we analyzed the technical efficiency (TE) of several compressors on text files (Table 1), in the second we evaluated their performance on exe files (Table 2), and in the third we evaluated the technical efficiency on both types of files together (Table 3).

In the case of the text files, the average TE score was 0.81 with a standard deviation of 0.14. The most efficient compressors for this kind of file are BIX 1.00b7, EXPv1, LZOP1.00w, PAR1.49b, PKZIP2.6.02, and XPA32.1.02, all of which have a TE score equal to one. Some of the most common compressors, like Winzip 7.0 and Winace 2.11, are close behind, with scores of 0.96 and 0.98, respectively. The TE scores associated with the compressors can be found in the accompanying tables. All models were run using DEA-specific software, Banxia Frontier Analyst, though they could have been solved with most mathematical software containing a linear programming solver (GAMS, SAS).

The overall average expansion of the outputs included in the analysis of text files was also studied. On average, most of the inefficiency is due to a lack of speed at compressing and extracting (43% and 48% of the total inefficiency, respectively), whereas the compression ratio does not appear to be a key source of inefficiency, accounting for only 9% of the total.

Table 2 shows the TE scores of the compressors for the exe files. Results show that some of the most common compressors, like WINACE or WINZIP, are very efficient for text files but not so effective at compressing exe files (Tables 1 and 2). The former has an efficiency rate of 0.98 for text files, whereas for executable (exe) files the score is 0.58. This means that WINACE should radially expand its outputs by 2% (the inverse of 0.98) to become weakly efficient [2]; that is, it should expand its compression ratio and its speeds at extracting and compressing files by 2%. Similarly, WINZIP (7.0) has a high efficiency score (0.97) for text files but a lower score for compressing exe files (0.73). Hence, depending on the type of file, it may be more convenient to use one compressor rather than another.
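For concreteness, the required radial output expansions follow directly from the inverses of the scores quoted above (the same reasoning applied to the exe-file score gives the second figure):

$$\frac{1}{0.98}\approx 1.02 \quad(\text{about a 2\% expansion for text files}),\qquad \frac{1}{0.58}\approx 1.72 \quad(\text{about a 72\% expansion for exe files}).$$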

When the different outputs were analyzed, the results showed that the speed at compressing was by far the main source of inefficiency, representing 85% of the total, while the speed at extracting accounted for 11.5%. The compression ratio, as was the case with the text files, is hardly a source of inefficiency, representing only 3.5% of the total. Most compressors should therefore focus on improving their speed at compressing exe files, which is the major deficiency in most of them; in the case of text files, improvements should concentrate on both the speed at compressing and the speed at extracting.


In the third analysis we considered all the characteristics as outputs, that is, those regarding exe files together with those regarding text files. The results were coherent with the two previous analyses. As the efficiencies related to the two types of files appear to be complementary, the average efficiency is very high when both sets of characteristics are included simultaneously in the analysis: it was equal to 0.95 with a standard deviation of 0.04. Among the most efficient compressors were ARC6.02, LZOP1.00w, EXPv1, PAR1.49b, PKZIP2.6.02, and XPA32.1.02; note that the first two were fully efficient for exe files and the last five were fully efficient for text files. PPMC3h+TAR and WINACE2.11, which were not fully efficient in either of the previous analyses, are fully efficient when both types of files are considered simultaneously.


Conclusion

We have proposed the use of the DEA methodology to analyze software efficiency. From a theoretical point of view, we find it a stronger and more reliable way of measuring software than the approaches that commonly appear in the literature. Moreover, it has the additional characteristic of being independent of the units of measure. The overall score provided in [4] is difficult to interpret, as it adds up very different outputs measured in very different units, such as (compression and extraction) time and the size of the compressed file; with that score, the bigger the value, the worse the compressor, whereas with our method the bigger the score, the better the compressor. The efficiency scores we provide are bounded below by zero (for inefficient units) and above by one (for fully efficient units).

The correlation between our index and the index provided in [4] is -0.75 for txt files. This figure is coherent, as the two indexes should be strongly negatively correlated. For exe files, however, the correlation is positive. The most reasonable explanation seems to be that the score provided in [4] is driven primarily by the size of the compressed file, as those figures are much larger than the time values; the effect of the extraction and compression times is therefore negligible compared with the effect of the size of the compressed file. Our analysis, in contrast, considered all outputs equally important, and our efficiency rate does not depend on the units of measure. Furthermore, our results indicate that the extraction and compression times were highly important in explaining the inefficiency rates. For these reasons, the two scores need not be similar.


Tables

Table 1. TE related to text files.

Table 2. TE related to exe files.

Table 3. TE related to exe and text files.

References

    1. Banker, R.D., Charnes A., and Cooper, W.W. Some models for estimating technical and scale inefficiencies in DEA. Management Science 30, 9 (1984), 1078–1092.

    2. Charnes, A., Cooper, W.W., and Rhodes, E. Measuring the efficiency of decision making units. European Journal of Operational Research 2 (1978), 429–444.

    3. Coelli, T.J., Rao, D.S.P., and Battese, G.E. An Introduction to Efficiency and Productivity Analysis. Kluwer, Dordrecht, The Netherlands, 1998.

    4. Gilchrist, J. Archive Comparison Test (2002); www.compression.ca.
