
Ensuring Transparency in Computational Modeling

  1. Introduction
  2. Ethical Implications of Values in Computational Modeling
  3. A Study of Values in Computational Modeling
  4. Why Computational Models Should be Transparent
  5. How Computational Models Can Be Transparent
  6. Conclusion
  7. References
  8. Authors
  9. Footnotes

Computational models are of great scientific and societal importance because they are used every day in a wide variety of products and policies. However, computational models are not pure abstractions; they are tools constructed and used by humans. As such, computational models are only as good as their inputs and assumptions, including the values of those who build and use them. The role of ethics and values in the process of computational modeling can have far-reaching consequences, but it remains a significantly understudied topic in need of further research.

This article focuses on one particular value, transparency, documenting both why models should and how models can be transparent. Transparency is the capacity of a model to be clearly understood by all stakeholders, especially users of the model. Transparent models require that modelers are aware of the assumptions built into their models and that they clearly communicate these assumptions to users. Computational modelers should recognize both the possibility and the importance of building computational models to be transparent.

This article builds on an earlier article that argues that computational models should be designed transparently to ensure parity and understanding among stakeholders, including modelers, clients, users, and those affected by the model.3 Data from an empirical study of computational modelers working in a corporate research laboratory are used to support this argument by demonstrating the importance of transparency from political, economic, and legal perspectives. This article also illustrates how transparency can be embedded in computational models throughout the stages of the modeling process.


Ethical Implications of Values in Computational Modeling

Friedman, Kahn, and Borning,4 Johnson,5 and Nissenbaum,8 building on the literatures of science and technology studies and the ethics of information technology, argue that technologies are not value-neutral, but rather are inescapably influenced and shaped by values. Similarly, Mason7 argues that while the development and use of information technology has created novel and complex ethical dilemmas, humans are fundamentally responsible for resolving ethical issues related to information technology development and use. Transparency can play a key role in resolving ethical dilemmas in the development and use of computational models.

Within the domain of computer decision systems, Johnson and Mulvey6 conclude that professional norms of behavior must be developed to ensure that designers of these systems are held accountable for the impacts that their systems have in the real world. Similarly, many authors in an edited volume, Ethics in Modeling,11 use hypothetical and anecdotal examples to grapple with ethical issues within the domain of computational modeling. While this work is a valuable starting point, the issue of why and how to make models transparent can best be addressed through the collection and analysis of empirical data.


A Study of Values in Computational Modeling

This article reports the findings related to transparency from a study of values in computational modeling. Participants in this study were computational modelers employed at an anonymous corporate research laboratory. Data collection included interviews, surveys, focus groups, and participant observation; the data for this paper come from fourteen semi-structured interviews. Due to the modest number of participants and the self-selection bias that affects any voluntary research project involving human participants, this paper is not meant to represent the views of all computational modelers but rather to provide an example of how some computational modelers incorporate transparency into their models. References to transparency within these interviews were spontaneously generated, since the interview questions focused on values in general and did not refer to any particular values, including transparency. Although not all of the participants explicitly brought up the issue of transparency, enough informants voluntarily raised it that it emerged from the data as an important value in computational modeling. This article includes relevant quoted material from the anonymous interviews that refers to the role of transparency in the development and use of computational models.


Why Computational Models Should Be Transparent

The reasons why computational models should be transparent fall into three perspectives that emerged directly from the interview data: political, economic, and legal. First, from a political perspective, transparent models can be used to empower users. For example, one modeler explains that when interacting with users, "I think that is always an issue when you try to explain what you’re up to but they want to know in a way that you could explain to their mother how it works." Users want to know what’s going on, and they want to be able to explain the outcome to those affected by the computational models. Making computational models transparent empowers users to take control of the situation by mastering the technology.

Since users are held responsible for the decisions that they make, opaque models disempower users while transparent models can empower them. Empirical support for this argument was provided by the data from this study. As another modeler explains, "my goal is to present to [users with] a clear understanding of [the] tradeoffs [to help a user] to make an informed decision." Thus, transparency can empower users by enabling them to make more informed decisions and be held accountable for the outcomes of these decisions.

Finally, making models transparent can allow users to maintain their current role and status, rather than taking away their authority. As another modeler explains, "In cases where I had control of the project from start to finish… I do think that part of the success has been based on one of my values, which is openness, that it be transparent. [For example,] one initial proposal had been that we take a previously manual process and turn it into a black box. I didn’t like doing that because I don’t like taking things away from people who know what they’re doing."

Here, the modeler explicitly aims to allow users to maintain their current role, ensuring that computational models serve as tools that aid users rather than as restraints that limit their options and understanding. Thus, transparency is an important consideration in computational modeling from a political perspective.

Making computational models transparent can also be beneficial from an economic perspective. Illustrating this point, one modeler describes a model whose visualization can be used to explain the corporation’s decisions to customers. The modeler comments, "we want to make sure that customers get an explanation…with the visualization, we can explain why [the decision is being made]." Transparent models can make it easier for customers to understand the decisions made by the corporation and can improve customer relationships over the long run, leading to a lasting financial relationship that is economically rewarding for the corporation.

Another modeler provides a detailed and compelling account of why models should be transparent from an economic perspective. In this example, transparency is viewed as bad for individuals who are trying to make as much money as possible for themselves, but good for the corporation as a whole, since it can serve as a check on the greed of individuals. This modeler explains, "As a dealmaker, you want a complete black box, that way you know the constraints and you make a lot of money. We want [our corporation] to make a lot of money. This guy wants more commission. We want more value for [our corporation]. If you underprice or overprice, it’s bad. So you really want to get it right."

Thus, transparency can be economically beneficial for the corporation because it allows individuals within the corporation to understand what others are doing, making it possible to detect when actions serve the interests of the individual rather than the corporation.

Based on the important role that transparency plays in limiting the potential for greed-driven behavior, this modeler states that the value of transparency is important to him. The modeler comments, "We hate black boxes. That’s my value. It’s from observing many years of people. They’re generally arrogant. They get paid a lot of money…these people really aren’t adding a lot of value. I have no tolerance for that. That’s my value. People don’t understand how they’re making decisions. A lot of that can be blown out when you get some science into it. You make it transparent."

This example demonstrates the role that transparency can play in the success of computational models, since the transparent solution was a financial success. Striving for transparency can be economically beneficial for corporations that develop and use computational models.

There can also be legal motivations for making computational models transparent. For example, one modeler recalls a situation where the need for making a computational model transparent arose from legal requirements, "There was one case where the users wanted to make sure that the decisions were interpretable. We couldn’t use a black box. That was very important for them, because for legal reasons, they needed to justify their decisions in a court of law." In this example, transparent models allow users to uphold their legal responsibility to make interpretable decisions. Overall, making models transparent can be beneficial from political, economic, and legal perspectives.


How Computational Models Can Be Transparent

Before examining how modelers make computational models transparent, it is useful to consider the stages of the modeling process. Lifecycle models are simplified explanations of how technologies are developed in practice. Typically, lifecycle models reduce the complex process of developing technologies into a finite number of stages with rules of progression among the different stages. This paper draws most directly on the stages of the modeling process outlined by Willemain:12 problem context, model structure, model realization, model assessment, and model implementation. These five stages match other lifecycle models for information technologies, including the waterfall lifecycle model for software engineering,10 the software reuse model,2 and the iterative interaction design model.9 The main difference among these various lifecycle models lies in the relationships between the stages, while the stages themselves remain relatively comparable. This paper breaks the modeling process down into these five stages and traces the role of transparency in the development of computational models during each one.

Computational models are designed to solve problems, and understanding the problem is a critical first step toward solving it. Willemain’s12 first stage of the modeling process, problem context, refers to how the problem is structured or defined. At this stage, the key point is that transparency should be considered early on, as a goal of the modeling process. The examples provided above of why computational modelers make models transparent clearly illustrate that transparency can be an explicit goal of the computational modeling process. While different perspectives may make transparency important, in all of these cases modelers strive for transparency and make their models transparent. These modelers adhere strongly to the value of transparency and take steps to ensure that it is considered from the beginning of the modeling process, in the problem context stage.

Once the problem has been defined and structured, the next step is to figure out how a model can be developed to solve it. Before the model can be built in software, it must first be created within the modeler’s head. Willemain’s12 second stage, model structure, refers to the process of building this conceptual model. Here, the modeling paradigm selected plays a crucial role. For example, one modeler expresses a preference for Bayesian models, noting that they are a "nice white box where I can include both data and expert knowledge." Another modeler explains a situation where the choice of modeling paradigm was driven directly by a goal of transparency, and only modeling paradigms that could be easily explained were considered for the primary modeling paradigm. The modeling paradigm selected during the model structure stage plays a large role in the feasibility of making the model transparent once the model is realized, assessed, and implemented.
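To make the "white box" character of the Bayesian paradigm concrete, the following sketch shows how expert knowledge and observed data can both enter a model in an explicit, inspectable way. The scenario (estimating a failure rate with a conjugate Beta-Binomial update) and all numbers are illustrative assumptions, not drawn from the study.

```python
# A minimal sketch of why one modeler calls Bayesian models a "nice white box":
# the expert's prior belief and the observed data are both explicit and auditable,
# as is the rule that combines them. Scenario and numbers are invented.

def beta_binomial_update(prior_alpha, prior_beta, failures, trials):
    """Conjugate Beta-Binomial update: return the posterior Beta parameters."""
    return prior_alpha + failures, prior_beta + (trials - failures)

# Expert knowledge: failures believed to be rare (about 5%), encoded as Beta(1, 19).
prior_alpha, prior_beta = 1.0, 19.0

# Data: 3 failures observed in 40 trials.
post_alpha, post_beta = beta_binomial_update(prior_alpha, prior_beta, failures=3, trials=40)

print(f"Prior mean failure rate:     {prior_alpha / (prior_alpha + prior_beta):.3f}")
print(f"Posterior mean failure rate: {post_alpha / (post_alpha + post_beta):.3f}")
```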

After conceptually creating the model, it is necessary to build the model computationally so that it can be tested and then used. Willemain’s12 third stage, model realization, involves fitting data to the model and computationally building it using statistical or other methods. Here, the modeler actually builds the model as defined in the problem context stage with the modeling paradigm selected during the model structure stage. The key issue is whether the user needs to understand the technical details of how a decision is reached. For example, one modeler explains, "Algorithms that are developed would be incorporated into the software. There’s no secret about how the algorithms are defined, but there would be no need for them to take the algorithms out of the software." In this case, the modeler argues that it is not the technical details that need to be transparent, but rather the logic followed by the program, which is more closely tied to model structure than to model realization. Similarly, another modeler argues that in the case of transparent models, users "would have an overview knowledge of it, not the details." These modelers agree that the details of how the model is actually realized are not necessarily critical for transparency. Thus, the user does not need to be able to see and understand every single line of code in order for a computational model to be transparent.
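The distinction these modelers draw, between exposing the logic and exposing the code, can be sketched as a model whose every output carries the reasons behind it. The decision rules, thresholds, and variable names below are hypothetical illustrations, not the models described by the participants.

```python
# A hedged sketch of realizing a model so that its logic, not its source code, is what
# users see: each decision is returned together with the reasons that produced it.
# The rules and thresholds are hypothetical.

def score_application(income, debt_ratio, years_employed):
    """Return (approved, reasons): a decision plus the logic behind it."""
    reasons = []
    if income < 30_000:
        reasons.append(f"income {income} is below the 30,000 threshold")
    if debt_ratio > 0.40:
        reasons.append(f"debt ratio {debt_ratio:.2f} exceeds the 0.40 limit")
    if years_employed < 1:
        reasons.append("employment history is under one year")
    approved = not reasons
    return approved, reasons or ["all criteria met"]

approved, reasons = score_application(income=28_000, debt_ratio=0.35, years_employed=3)
print("Approved:", approved)
for reason in reasons:
    print(" -", reason)
```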

Willemain’s12 fourth stage, model assessment, refers to evaluation of the model. Before the model is widely used, it is important that it undergo rigorous analysis to ensure its validity and usefulness. Transparency is also an important consideration at this stage. For example, one modeler explains that one approach is to use a modeling paradigm that lends itself to transparency as the primary paradigm, and then to evaluate the transparent model by building a non-transparent model and comparing the results of the two; in the modeler’s own words, the modeler "used the more black box stuff as a check on the primary." Transparency is also important when assessment is performed, at least in part, by individuals other than modelers, such as users. For example, one modeler explains, "A scenario might be where a user…doesn’t operate the equipment in the way that you intended…as a researcher, you feel that you always tend to know a lot about the limitations and specifications of a particular [technology], and a concern always is a user might not know as much about it or might not use it correctly. If more was known about a particular result…the information might have been better used." An understanding of the limitations and specifications of the model would clearly improve not only use of the model but also assessment of it. Thus, transparency plays a critical role in the model assessment stage of the modeling process.
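The "check on the primary" pattern might look like the following sketch: a transparent model serves as the primary, and a black-box model is fit only to confirm that little predictive power is being sacrificed. The dataset, the model choices, and the use of scikit-learn are assumptions for illustration; the study does not report which tools the modelers used.

```python
# A minimal sketch of assessing a transparent primary model against a black-box check,
# assuming scikit-learn is available. Dataset and model choices are illustrative only.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent primary: a logistic regression whose coefficients users can inspect.
primary = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black-box check: used only to verify that the transparent model is not giving up accuracy.
check = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print(f"Primary (transparent) accuracy: {accuracy_score(y_test, primary.predict(X_test)):.3f}")
print(f"Check (black box) accuracy:     {accuracy_score(y_test, check.predict(X_test)):.3f}")
print(f"Agreement between the models:   {(primary.predict(X_test) == check.predict(X_test)).mean():.3f}")
```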

Finally, once the model has gone through assessment, it is ready for broader release. Willemain’s12 fifth stage, model implementation, is the deployment of the model to solve specific problems and achieve particular aims. Here, the role of transparency is readily apparent. As one modeler explains, there is "no question that my desire to make the project open, and not closed up…as a black box, increased the favorability of the project for [users]." Another modeler provides an example of how transparency can be used to prevent fraud, and yet another describes using transparency to explain decisions to customers so that they understand what was decided and why. Transparency can thus lead to beneficial outcomes for all stakeholders in the modeling process, and it can be incorporated at any stage, from problem context to model implementation.
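As a recap of the stage-by-stage discussion above, the following sketch condenses the transparency consideration raised at each of Willemain's five stages into a simple checklist structure. The phrasing of each question paraphrases the discussion above rather than quoting the participants.

```python
# A hypothetical checklist condensing the transparency consideration discussed above
# for each of Willemain's five stages of the modeling process.

TRANSPARENCY_CHECKLIST = {
    "problem context":      "Is transparency stated as an explicit goal of the modeling effort?",
    "model structure":      "Does the chosen modeling paradigm lend itself to being explained?",
    "model realization":    "Can the model's logic be conveyed without exposing every technical detail?",
    "model assessment":     "Are results checked against other models, and are limitations communicated to users?",
    "model implementation": "Can decisions be explained to users, customers, and, where required, a court of law?",
}

for stage, question in TRANSPARENCY_CHECKLIST.items():
    print(f"{stage:>20}: {question}")
```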


Conclusion

The data presented in this paper demonstrate that it is possible for modelers to realize the significance of transparency and intentionally design computational models to be transparent. Transparency is important from a number of different perspectives, including political, economic, and legal. Models can be made transparent during each of the five stages of the modeling process.

This study has provided new insights into transparency by combining the responses of several different modelers, which together amount to more than the sum of their parts. This synthesis has resulted in an overall framework that explains why and how to make models transparent and thus goes beyond the individual perspectives of the modelers who participated in this study. These findings can be of use to computational modelers and others who design computer systems used to make important decisions requiring accountability, by providing them with a broad perspective on what transparency is, why it is important that computing systems are transparent, and how to make computing systems transparent.

To make the findings of this study the rule rather than the exception, modelers must be aware of why transparency is important and how to make computational models transparent. This goal can be achieved through a variety of measures, including incorporating transparency as a key theme in education and training in both academic and industry settings, and in organizational and professional codes such as the ACM Code of Ethics and Professional Conduct.1 This goal coincides with an increased emphasis on ethics and values in both organizational and professional cultures. Hopefully, this paper can help computing educators and professionals to realize the importance of the value of transparency in computational modeling.

This article demonstrates several specific rationales for making models transparent from the perspectives of computational modelers and illustrates how modelers can make models transparent. Perhaps this can best be summarized by a computational modeler who explains, "I just do what I think is the right thing, that’s probably what values is, doing what you think is the best thing." According to the computational modelers who participated in this study, making models transparent is not only possible, it is ‘doing the right thing.’

References

    1. Anderson, R.E., Johnson, D.G., Gotterbarn, D., and Perrolle, J. Using the new ACM Code of Ethics in decision making. Commun. ACM 36, 2 (Feb. 1993), 98–107.

    2. Bersoff, E.H. and Davis, A.M. Impacts of life cycle models on software. Commun. ACM 34, 8 (Aug. 1991), 104–118.

    3. Fleischmann, K.R. and Wallace, W.A. A covenant with transparency: Opening the black box of models. Commun. ACM 48, 5 (May 2005), 93–97.

    4. Friedman, B., Kahn, P.H., and Borning, A. Value sensitive design and information systems. In Human-Computer Interaction in Management Information Systems: Foundations, P. Zhang and D. Galletta, Eds. M.E. Sharpe, New York, 2006.

    5. Johnson, D.G. Is the global information infrastructure a democratic technology? Computers and Society 27, 3 (Sep. 1997), 20–26.

    6. Johnson, D.G. and Mulvey, J.M. Accountability and computer decision systems. Commun. ACM 38, 12 (Dec. 1995), 58–64.

    7. Mason, R.O. Applying ethics to information technology issues. Commun. ACM 38, 12 (Dec. 1995), 55–57.

    8. Nissenbaum, H. Values in the Design of Computer Systems. Computers and Society 28, 1 (Mar. 1998), 38–39.

    9. Preece, J., Rogers, Y., and Sharp, H. Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons, New York, 2002.

    10. Royce, W. Managing the development of large software systems: Concepts and techniques. WESCON, Aug. 1970; reprinted in Ninth International Conference on Software Engineering, Washington, D.C., IEEE Computer Society Press, 1987, pp. 328–338.

    11. Wallace, W.A., Ed. Ethics in Modeling. Elsevier, Tarrytown, NY, 1994.

    12. Willemain, T.R. Model formulation: What experts think about and when. Operations Research 43, 6 (Nov./Dec. 1995), 916–932.

    Research for this article was funded by the National Science Foundation Award Nos. SES-0521117 and SES-0521834.

    DOI: http://doi.acm.org/10.1145/1467247.1467278
