One important ethical aspect of the use of models for decision making is the relative power of the various actors involved in decision support: the modelers, the clients, the users, and those affected by the model. Each has a different stake in the design of decision support models, and the outcome of modeling depends on both the technical attributes of the model and the relationships among the relevant actors. Increasing the transparency of the model can significantly improve these relationships. Here, we explore the importance of transparency in the design and use of decision support models.
While other scholars have identified ethical standards that modelers should adopt, without transparency it may be impossible for users to determine whether modelers adhere to these standards. Mason identifies two essential obligations of modelers to users: the "covenant with reality" and the "covenant with values." Similarly, Johnson and colleagues [1, 4, 5] discuss the ACM Code of Ethics and its utility in the professionalization of decision support modeling. Since users are most directly accountable for any errors resulting from use or misuse of a model, steps must be taken to empower them so they are not overly dependent on modelers and can make fully informed decisions based on a clear understanding of the decision support model.
Transparency is an essential tool for preserving the autonomy of users and empowering them relative to model builders. Modelers should work with users, rather than expecting users to work for them as mere implementers of models. At the human-machine interface, models should help users, not the other way around. Transparency should help clients and users get the best possible outcome for those affected by the model. The following hypothetical scenario illustrates the importance of building transparency into decision support models.
Rosemary M. works for an investment company that manages several pension funds. She has been managing pension accounts for 10 years and is increasingly successful. She is now in charge of one of the company's largest accounts, the pension fund of a major automotive company.
A year ago, her company purchased an expert system for investment decisions. Each week a manager can input current figures on key economic indicators, and the system will produce a set of recommended actions. This week Rosemary is quite nervous about the stock market. All the indicators she has come to trust point in the direction of a serious and long-term downswing, so she believes it would be best to reduce the pension fund's holdings in stocks. However, the expert system recommends just the opposite. She rechecks the figures she has put into the system, but the output continues to recommend that the percentage of holdings in stocks be increased to 90% of the portfolio.
Rosemary does not understand the reasoning built into the expert system; hence, she does not know whether the system's reasoning is better or worse than her own. It is even possible the system is malfunctioning. If she follows the advice of the expert system, the account could lose heavily, so much so that income to retirees would have to be adjusted downward. On the other hand, if she goes against the expert system and she is wrong, the account could be jeopardized, and she would be in serious trouble for diverging from the recommendation of the expert system. What should she do?¹
To analyze this scenario, it is important to consider the potential outcomes depending on Rosemary's decision (follow the model and increase stock holdings or go against the model and reduce stock holdings) and on whether the expert system is right (stock market goes up) or wrong (stock market goes down). These four potential outcomes are listed in the table.

                          Model right (market rises)         Model wrong (market falls)
  Follow the model        Dismissed as a mere implementer    Seen as incompetent
  Go against the model    Blamed for missed gains            Labeled a lucky risk taker
Since Rosemary is given the responsibility of making the final decision without the freedom to make an informed one, which a more transparent model would provide, she incurs serious risk with little chance of benefit. If she follows the model and the model is right, she can be dismissed as a mere implementer of the model. On the other hand, if she follows the model and the model is wrong, she may be seen as incompetent for following a flawed model.
Going against the model is also highly problematic. If she goes against the model and the model is wrong, she may still be labeled as a risk taker who happened to be right this time. If she goes against the model and the model turns out to be right, she could be held responsible for the failure of the fund to benefit from the rise of the stock market, and her job could be in jeopardy, despite her strong track record. Each situation may result in an unfavorable evaluation of Rosemary.
Rosemary's responsibility stems from the relationships among the various actors. Although it might seem the model would be held responsible if she chooses to follow its advice, the pensioners do not have a direct relationship with the model or the modelers. Rather, they have a relationship with the company that controls their account, Rosemary's employer, which in turn has hired the modelers to build the model for Rosemary to use. Rosemary is caught between her employer, which commissioned the design of the model and wants her to use it, and the pensioners, who depend on her to manage their accounts and who will blame her if they lose money or lose an opportunity to make money. She has a responsibility to make her employer happy by making the best use of the model and a responsibility to make money for the pensioners. Yet she does not have the freedom to make decisions with a full understanding of the model.
Is there a way to ensure that Rosemary can use the model to make an informed decision? The reasonable person doctrine states, "information givers should provide enough information to takers for reasonable people to make decisions." Any less information than this established minimum is unacceptable. Although Rosemary does have some minimal decision-making ability, her responsibility does not come with the information necessary for her to have the freedom to make a reasonable decision. Instead of forcing Rosemary to implement the decisions of an opaque black box, why not give Rosemary a transparent decision support system and allow her to make the decision?
What should Rosemary do? There is no clear answer, since Rosemary does not have enough information to make an informed decision, even though she is considered responsible for the decision. However, the situation can be resolved by reconsidering some of the assumptions made here. Why should the model be so opaque that Rosemary cannot compare its reasoning to her own, or even tell if it is malfunctioning? Shouldn't she have the freedom to make an informed decision for the sake of the client (her employer), the user (herself), and those affected by the model (the pensioners of the automotive company)? Transparency is essential to ensure that Rosemary and the pensioners receive due process and due respect as citizens of an ethical information society.
According to Johnson and Mulvey, self-regulation is an essential part of the professionalization of modeling. Otherwise, they explain, ethical considerations will be left up to ad hoc accountability. That situation may disadvantage clients and users, who tend to be less technically skilled than modelers. It may also harm those affected by the model, who, due to their lack of a relationship with the modelers, are detached from both the client/modeler relationship and the model/user interface. Johnson and Mulvey further argue that professionalization will benefit modelers because it will "promote public trust in computer decision systems." Thus, professionalization of modeling through an agreement, such as the ACM Code of Ethics and Professional Conduct, benefits modelers, clients, users, and those affected by the model.
The ACM Code of Ethics and Professional Conduct is a strong starting point in this professionalization of modeling. Yet, while it clearly lays out the obligations of modelers and their duties toward clients, users, and the general public, it does not provide an adequate mechanism for non-modelers to determine whether modelers are living up to their obligations. Instead, it argues for the education of non-modelers, without consideration of the role the design of the model plays in users' understanding of the model. Transparency can accomplish this goal, and can thus increase public trust in modelers and in modeling as a profession.
The ACM Code of Ethics and Professional Conduct notes the need for non-modelers to understand how models work. It seeks to achieve this aim through education alone, without emphasizing transparency. The relevant elements of the code for this discussion are Imperative 2.7, "improve public understanding of computing and its consequences," and Imperative 3.6, "create opportunities for members of the organization to learn the principles and limitations of computer systems." Both focus on education as the means for creating understanding. By emphasizing the role of education alone, these imperatives put the responsibility to learn on non-modelers, who may have neither the inclination nor the ability to do the job of modelers. More to the point, why should these non-modelers learn everything that modelers learn? If this is necessary, what is the true need for modelers in the first place?
Instead, transparency can serve as a way to communicate to non-modelers how the model works. With a transparent model, even non-experts can understand the inner workings of the model. These non-experts may not be able to replicate the process of modeling, but they can verify not only the transparency of the model but also other principles that they would otherwise have to take on faith from the modeler. Transparency helps users trust modeling, modelers, and models. Chopra provides an in-depth empirical analysis of the implications of trust in electronic environments. Here, we argue that trust is essential in maintaining the professional relationship between the modeler and the client/user, following Mason. Transparency is important on both the micro and macro levels because it can build trust in both particular models and the profession of modeling. Finally, with transparency, users and other non-modelers are not required to learn the skills and knowledge of a modeler, so modeling retains its importance and sustainability as a profession. Thus, transparency has clear benefits for modelers, clients, users, and those affected by the model.
Mason argues that ethical standards should be the basis for the relationship between modelers and their clients. He advocates that modelers follow two covenants with clients and with society at large. His covenant with reality states that models must be faithful to reality. His covenant with values states that models must be faithful to the values of the client. These covenants serve an essential role in the modeler-client relationship by making explicit the obligations that modelers should uphold and that clients and users should expect from modelers.
We assert that in addition to these covenants, modelers should adhere to a covenant with transparency. The covenant with transparency not only allows the client to assess whether the model conforms to the first two covenants, it allows the client to assess when the model is misbehaving or malfunctioning. It also allows the user to avert circumstances where model errors might lead to negative consequences for those affected by the model, as in Mason's examples of Louis Marches and Gary Brown. Thus, the covenant with transparency reinforces the obligations of the modeler and reduces the ability of a faulty model to harm those affected by the model.
A covenant with transparency would help users verify that the other two covenants are being fulfilled. The user must be able to see the model's depiction of reality in order to determine whether the model lives up to the covenant with reality. Rosemary must be able to determine whether the model is using a valid depiction of reality. By comparing the model's depiction of reality to her own, she can determine whether it is taking into account a factor she has overlooked, or whether it is failing to use certain data or otherwise malfunctioning. Likewise, users must be able to see into the black box to evaluate the model's values and how they match their own. For example, the model may place a value on taking extreme risks, probably not a value that Rosemary shares. By making the representations and values of the model transparent, the modeler enables users to verify that the model lives up to the covenants with reality and values.
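To make this concrete, the following minimal sketch (entirely hypothetical; the class, figures, and thresholds are illustrative and do not come from the article) shows one way a decision support system could expose its depiction of reality and its values as inspectable data, so a user like Rosemary can compare them with her own rather than face a black box:

```python
from dataclasses import dataclass, field

@dataclass
class TransparentRecommender:
    """Toy portfolio recommender whose depiction of reality (assumptions)
    and values (objectives) are exposed for the user to inspect and challenge."""
    # Covenant with reality: the model's assumptions about the world.
    assumptions: dict = field(default_factory=lambda: {
        "expected_market_return": 0.07,   # hypothetical annual figure
        "recession_probability": 0.10,    # the user may disagree with this
    })
    # Covenant with values: what the model is optimizing for.
    values: dict = field(default_factory=lambda: {
        "risk_tolerance": "high",  # a value Rosemary may not share
    })

    def recommend_stock_fraction(self) -> float:
        """Recommended fraction of the portfolio to hold in stocks."""
        if self.assumptions["recession_probability"] > 0.5:
            return 0.3
        return 0.9 if self.values["risk_tolerance"] == "high" else 0.6

    def explain(self) -> str:
        """Surface the reasoning so the user can compare it to her own."""
        return (f"Assumptions: {self.assumptions}\n"
                f"Values: {self.values}\n"
                f"Recommendation: {self.recommend_stock_fraction():.0%} in stocks")

model = TransparentRecommender()
print(model.explain())
```

With this kind of interface, Rosemary could see at once that the aggressive recommendation rests on a low assumed recession probability and a high risk tolerance, and she could challenge either assumption instead of choosing blindly between obedience and defiance.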
As technology advances, it becomes increasingly powerful and, often, user friendly. For example, agent-based technologies are far more powerful and adaptable than yesterday's expert systems, and agent-based user support attempts to simplify the increasing complexity of computing systems from the perspective of the user. In some cases transparency may even be confused with user friendliness. They are not the same, and are often at cross-purposes. Leet and Wallace argue that user friendliness may interfere with transparency because it may lead an increasing number of users to know less about the internal workings of a computer or computer-based model. Turkle agrees, explaining that the view that user friendliness is complementary to transparency runs contrary to the original meaning of transparency in computing: being able to see inside the box and understand what is going on.
Additional research on how to make models more transparent is necessary. Yet, several means of increasing transparency seem apparent. First, a model should be thoroughly documented so individuals who did not directly participate in its construction can easily understand it. Documentation is already perceived as essential for projects that involve teams of modelers to build large models, and it can serve the dual purpose of informing other designers as well as users of the purpose of a segment of code. Documentation can help explain how a model works to the user in straightforward English.
Second, the model's assumptions about reality and values should be explicit and testable for validity. Experienced modelers know a model is only as good as its programming and inputs, as demonstrated by the phrase "garbage in, garbage out." Just as scientists should strive to make their assumptions explicit, modelers should explain the reality and values embedded in their models.
Finally, the individual elements of a model should be explicitly available to the user. Crapo utilizes the concept of the Semantic Web to demonstrate the potential for a networked representation of information that allows users to explore the meaning of content, giving them direct access to the model content. Explicitly linking the parts of a model can allow users to examine the reality and values embedded within the model. Documentation, explanation of assumptions, and inspection of components are all techniques that can be used to design for transparency.
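As an illustrative sketch of these three techniques (the component, formula, and registry below are hypothetical, not drawn from the article), a transparent model might pair plain-language documentation with explicit, testable assumptions and an inspectable registry of its parts:

```python
def expected_growth(indicators: dict) -> float:
    """Documentation: a plain-language account of what this component does.

    Estimates annual market growth from economic indicators.
    ASSUMPTION (explicit and testable): growth responds linearly to the
    interest rate; a user can check this claim against historical data.
    """
    # Guard the assumption's domain so violations surface loudly.
    assert 0.0 <= indicators["interest_rate"] <= 0.25, "assumption violated"
    return 0.05 - 0.5 * indicators["interest_rate"]

# Inspectable components: the model's parts are registered by name so a
# user can examine each piece rather than confronting one opaque whole.
MODEL_COMPONENTS = {
    "expected_growth": expected_growth,
}

for name, component in MODEL_COMPONENTS.items():
    print(name, "-", component.__doc__.splitlines()[0])
```

The point is not this particular formula but the design stance: each part of the model names its assumptions, documents itself for non-modelers, and can be inspected in isolation.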
As Leet and Wallace explain, the result of a lack of transparency in modeling is that "power remains...in the hands of those who design" models, a point echoed by Johnson. As part of the responsibility that comes with this power, Leet and Wallace argue that model builders must be held to ethical standards. Here, we've argued that, along with a covenant with reality and a covenant with values, model builders should also uphold a covenant with transparency. The emphasis on transparency not only allows modelers to live up to their ethical obligations, it also reduces the power imbalance, since an informed user is in a better position to evaluate a model. By allowing the user to see into the model, the modeler shares the control and responsibility of information technologies with the user, allowing the user to make informed decisions based on all available data, rather than placing blind faith in a black box.
Research for this article was funded by the National Science Foundation Award No. ITR/IM-0081219.
¹This scenario is taken from the interactive distance video workshop "Ethics in Modeling," organized by William A. Wallace, Deborah Johnson, Saul I. Gass, John Little, and Warren E. Walker, sponsored by the National Science Foundation, and held at the University of Maryland, Massachusetts Institute of Technology, the European-American Center for Policy Analysis, and Rensselaer Polytechnic Institute in 1994. Johnson and Mulvey discuss a similar scenario.
©2005 ACM 0001-0782/05/0500 $5.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.