Computing Profession Viewpoint

Data Science Meets Law

Learning Responsible AI together.

The legal counsel of a new social media platform asked the data science team to ensure the system strikes the right balance between the need to remove inciting content and freedom of speech. In a status meeting, the team happily reported that their algorithm managed to remove 90% of the inciting content, and that only 20% of the removed content was non-inciting. Yet, when examining a few dozen samples, the legal counsel was surprised to find that content which was clearly non-inciting had been removed. “The algorithm is not working!” she thought. “Anyone could see that the content removed has zero likelihood to be inciting! What kind of balance did they strike?” Trying to sort things out, the team leader asked whether the counsel wanted to decrease the percentage of non-inciting content being removed, to which the counsel replied affirmatively. Choosing another threshold for classification, the team proudly reported that only 5% rather than 20% of the removed content was non-inciting, at the expense of reducing the success rate of removing inciting content to 70%. Still confused, the legal counsel wondered what went wrong: the system was now not only removing clearly non-inciting content, but had also failed to remove evidently inciting materials. Following several frustrating rounds, new insights emerged: the legal counsel had learned the inherent precision-recall trade-off. In addition, the team leader realized that the definition of inciting content used in labeling the training data was too simplistic; the legal counsel could have helped clarify the complexities of this concept in alignment with the law. The team leader and the counsel regretted not working together on the project from day one. As it turns out, both were using the same words, but much of what they meant had been lost in translation.
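
The trade-off the counsel stumbled on can be made concrete in a few lines of code. The sketch below is purely illustrative, using made-up score distributions rather than any real moderation system; it shows how moving a single classification threshold trades the share of inciting content caught against the share of benign content mistakenly removed:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical classifier scores: inciting items tend to score higher,
# but the two distributions overlap, which is the source of the trade-off.
inciting_scores = rng.normal(loc=0.70, scale=0.15, size=1_000)  # truly inciting
benign_scores = rng.normal(loc=0.35, scale=0.15, size=4_000)    # truly non-inciting

for threshold in (0.45, 0.60):  # two candidate removal thresholds
    removed_inciting = int((inciting_scores >= threshold).sum())
    removed_benign = int((benign_scores >= threshold).sum())
    recall = removed_inciting / len(inciting_scores)
    benign_share = removed_benign / (removed_inciting + removed_benign)
    print(f"threshold={threshold:.2f}: "
          f"inciting content removed={recall:.0%}, "
          f"non-inciting among removed={benign_share:.0%}")
```

Raising the threshold makes removals more precise but lets more inciting content slip through; no threshold eliminates both error types at once, which is exactly the balance the counsel and the team had to negotiate together.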

While both data scientists and lawyers have been involved in the design of computer systems in the past, current AI systems warrant closer collaboration and a better understanding of each other’s fields.2 The growing prevalence of AI systems, as well as their growing impact on every aspect of our daily life, creates a great need to ensure that AI systems are “responsible” and incorporate important social values such as fairness, accountability, and privacy. It is our belief that to increase the likelihood that AI systems are “responsible,” an effective multidisciplinary dialogue between data scientists and lawyers is needed. First, it will assist in clearly determining what it means for an AI system to be responsible. Moreover, it would help both disciplines to spot relevant technical, ethical, and legal issues and to jointly reach better outcomes early in the design stage of the system.

We have designed a course on Responsible AI, Law, Ethics, and Society that helps develop such an effective multidisciplinary dialogue. Our novel approach seeks to build collaborative skills among joint teams of data sciencea and law students by engaging them in joint problem-solving tasks on real-world AI challenges, such as liability and responsibility, discrimination and equality, transparency and privacy. See the sidebar “Learning Objectives.”

The idea of introducing legal or ethical studies into computer (and data) science syllabi is not new. Recent years have seen an increase in the number of tech ethics offerings. Of these, some are offered as embedded units in existing courses rather than standalone courses,4,6,7,11,16 and several are offered by instructors whose background is in philosophy or information science rather than computer or data science. In addition, to the best of our knowledge, many of these courses are based on insights from law, ethics, and STS (Science, Technology, and Society), and lack technical activities. Finally, there are other courses directed at students from additional disciplines, such as philosophy.

We deliberately chose to design a course for an audience of data scientists and lawyers (a choice that might be revisited in the future). By doing so, we target two of the main professions that are most likely to meet and work together in “the real world.” Our approach is based on a standalone course design, yet we believe that integrated ethics modules in core computing courses could, and perhaps even should, co-exist. To achieve our goals, a standalone course seems a feasible and suitable design for the plurality of the multidisciplinary setting: students, staff, and pedagogy. Moreover, in our experience, such a heterogeneous setting requires building trust and a shared language among the participants, which is better achieved with the time, continuity, and intensity of a standalone course.


To increase the likelihood that AI systems are “responsible,” an effective multidisciplinary dialogue between data scientists and lawyers is needed.


The first iteration of the course consisted of six four-hour sessions and was taught in May 2020 in an online format to approximately 20 students from each discipline (data science and law). A nine-session extended version of the course was offered in Spring 2021 as a joint course at Cornell Tech, Tel Aviv University, and the Technion with 40 students. In March 2022, a third iteration of the course will be offered to law and data science students from Boston University, Tel Aviv University, the Technion, and Bocconi University, this time in an eight-session format. In this Viewpoint, we describe our pedagogical principles and the course structure in depth, based on the two iterations of the course that have already taken place.


Pedagogical Principles

The three main learning objectives of the course, namely achieving multidisciplinary dialogue among students, acquiring literacy in Responsible AI, and developing a sense of professional responsibility, are categorically different. While the second and third learning objectives (Responsible AI literacy and professional responsibility) are “classic” academic goals based on learning new knowledge and skills, the first objective (multidisciplinary dialogue) is fundamentally different and novel, as it focuses on establishing communication and collaboration between two distinct professions.

Achieving the multidisciplinary dialogue learning objective. It is not necessarily easy to “build a bridge” between data scientists and lawyers. Data science and law are disciplines that require years of study and training. Unless knowledgeable in both, neither party can step into the shoes of the other and truly understand the legal or technological constraints when designing new systems. Practitioners are accustomed to conversing about professional questions with colleagues of their own profession, and at times harbor an aversion toward technology (lawyers) or law (data scientists); as a result, engaging in multidisciplinary dialogue is rare and difficult to initiate.

To address these challenges, we pursued the first learning objective, multidisciplinary dialogue, through two principles:

  • The mixture of students’ and staff’s disciplinary backgrounds: the course was taught by a multi-faculty staff to a combined group of students from law and data science.
  • The nature of the learning activities in class: learning activities were designed to foster a multidisciplinary dialogue (see the sidebar “Examples of Methodological Tools to Foster a Multidisciplinary Dialogue” for examples of how these principles were manifested in practice).

Classes were designed according to the signature pedagogies of each of the disciplines, namely the common practices and styles of instruction of a profession.15 Law students are familiar with case studies describing concrete circumstances that raise legal questions to be answered. Data science students feel comfortable with iterative and interactive exploration of data.

As in real-world scenarios, the tasks were designed such that they could only be addressed by a joint effort. Thus, mixed-discipline teamwork was inevitable. Some of the classes involved adversarial settings, such as potential lawsuits in which student teams acted as the different parties to the suit. Such a competitive environment incentivized students’ collaboration in a playful setting. We ensured teams were neither evaluated nor judged against each other, in order to avoid unhealthy competition. For example, in our first class, dealing with liability of autonomous vehicles, legal arguments on the allocation of fault required technical auditing of a machine learning model for traffic sign recognition, to discover whether the system was sufficiently trained on all weather conditions.
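
To give a flavor of what such an audit might involve, here is a minimal sketch, with an invented test log and invented column names, of the kind of per-condition slicing the exercise calls for:

```python
import pandas as pd

# Hypothetical audit log for a traffic sign classifier: one row per test
# image, recording the weather at capture time and whether the model's
# prediction was correct. (Illustrative data, not from the actual class.)
results = pd.DataFrame({
    "weather": ["clear", "clear", "clear", "rain", "rain", "fog", "fog", "snow"],
    "correct": [True, True, True, True, False, False, False, True],
})

# Accuracy and sample count per weather slice. A sharp accuracy drop, or a
# tiny sample, for some condition supports the legal argument that the
# system was insufficiently trained or tested under that condition.
audit = results.groupby("weather")["correct"].agg(accuracy="mean", n="count")
print(audit)
```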

Overall, the course design utilized challenge-based learning, active learning, and mixed-discipline teamwork. These principles not only support our dialogue approach, but are also grounded in what we know about evidence-based education from the learning sciences.12

Achieving the Responsible AI literacy learning objective. Multidisciplinary dialogue is not only an end in itself, but also a means to accomplish other goals, in particular the Responsible AI learning objectives. In recent years, one can observe how the Responsible AI research communityb has evolved through a series of contributions and critiques coming from a diverse array of disciplines. This dialectic process pushes the field forward, and we designed our learning activities to mimic this dynamic in class.1 Law students were the scaffolders of data science students, and vice versa.

For example, our second class dealt with discrimination in human-resources automated decision-making, tackling one of the greatest challenges of Responsible AI, namely capturing the notion of fairness in a way that is applicable to AI systems.3 Generally speaking, data science and computing tend to focus on concreteness and “crisp” definitions, while legal definitions are always subject to interpretation, giving way to multiple meanings.8,13 In our class, data science students proposed definitions rooted in the evaluation of machine learning models (such as equality of false-positive rates), while law students suggested others (such as counterfactuality: “what if a sensitive attribute of an individual were different?”). Together, the students concluded that restricting the notion of fairness to measurable and quantitative terms is far from sufficient, as fairness has multiple facets and human context should also be taken into account.14 The intense interaction between law and data science students allowed deeper multidisciplinary learning to occur and to scale well, in a way that might not be possible in settings where only the staff is multi-faculty.
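
As a minimal illustration of the gap between the two families of definitions, the following sketch (with hypothetical labels and predictions invented for this example) computes one metric the data science students proposed, equality of false-positive rates, across two groups:

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FPR = FP / (FP + TN): how often truly negative cases get flagged."""
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean())

# Hypothetical screening decisions (1 = flagged for rejection) for
# applicants from two demographic groups, with hand-made ground truth.
y_true_a = np.array([0, 0, 0, 0, 1, 1, 0, 1])
y_pred_a = np.array([0, 1, 0, 1, 1, 1, 0, 1])
y_true_b = np.array([0, 0, 0, 0, 1, 1, 0, 1])
y_pred_b = np.array([0, 0, 0, 0, 1, 1, 0, 1])

print("group A FPR:", false_positive_rate(y_true_a, y_pred_a))  # 0.4
print("group B FPR:", false_positive_rate(y_true_b, y_pred_b))  # 0.0
```

The gap between the two rates is easy to measure and audit; the counterfactual reading of fairness raised by the law students, by contrast, asks a causal question about an individual that no such aggregate statistic can answer.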

Achieving the professional responsibility learning objective. The multidisciplinary nature of Responsible AI raises a challenge pertaining to the third course objective: shaping students’ professional responsibility. The importance of professional responsibility is on the rise, given the ever-growing individual impact of designers of AI systems.

A course taught by a professional in law or ethics might cause data science students to feel less connected: if legal or philosophical training is required in order to address professional responsibility questions, then data science students might be tempted to “leave it to the lawyers” (and vice versa).9 In order to situate the subject as an integral part of the students’ own discipline and allow them to feel professionally connected, teaching the course with mixed staff, to students of both backgrounds, and using mixed signature pedagogies proved effective once again.


Course Structure and Student Evaluation

The pedagogical principles discussed here guided us in designing the format and the actual activities of each class (see the sidebar “Class Format”). As to the course content, a challenge we faced was the lack of consensus as to what Responsible AI is, as is also reflected in the syllabi of other tech-ethics courses.4,6,11 Therefore, our “working framework” for Responsible AI was rather practical (see the accompanying table). We picked the following elements, which appear in many other tech-ethics courses as well as in AI ethical codes and principles published by various organizations:5 robustness; discrimination and fairness; transparency and explainability; and privacy. Accountability and governance, naturally two significant elements of Responsible AI, were not the subject of whole classes but rather running themes throughout the entire course. We acknowledge other elements are equally important and could be chosen for an extended version of the course. We dedicated a class to integrating all four elements, using the Build-it, Break-it, Fix-it method10 (see the sidebar “Examples of Methodological Tools to Foster a Multidisciplinary Dialogue”). Each class was also paired with a vertical or domain in which the case study took place (see the accompanying table), allowing the demonstration of contextualizing Responsible AI; each domain and case study has its own nuances. At the end of each of the first five classes, each team submitted a write-up summarizing their analysis of the case study, bringing together legal argumentation and data science research.

Table 1. Course structure for Spring 2020 class (first offering).

For their final project, teams were asked to develop a new case study that uses datasets and data science techniques to demonstrate legal dilemmas regarding Responsible AI, Law, Ethics, and Society. The assignment was inspired by our own multidisciplinary learning journey in developing this course and its activities. The projects were mentored by the staff, and the students presented them in the final class.

In alignment with the first two learning objectives, the teams were evaluated, both for the in-class assignments and the final project, on two criteria: the extent to which the students integrated both legal and data science perspectives in their deliverable, and the extent to which they applied Responsible AI knowledge and skills. These criteria follow the premise of our course: Responsible AI analysis or design can occur only when integrating (at least) both perspectives. The students received informal oral guidance and feedback while working on the main challenge in class, as well as written feedback on their submitted work.

Overall, students’ impression of the course was positive. During an open discussion, the students said the course was challenging but also rewarding. Some students mentioned that they were deeply affected by the group interactions. As an anecdotal example of these effects, one of the law students mentioned that she now “sees models everywhere.” For the Spring 2020 offering, one of the methods we used to evaluate the course was a concluding survey (35/44, an 80% response rate). Almost all of the students who replied to the survey (33/35, 94%) stated that they would recommend the course to a fellow student to either a very great extent or a great extent. Table 2 summarizes the students’ replies as to whether the course achieved its learning objectives for the Spring 2020 offering. For a first offering of the course, originally planned in the pre-COVID-19 era to be delivered in person but eventually conducted virtually, we find this feedback very encouraging.

Table 2. Descriptive statistics of the students’ replies to the end-of-course survey for the Spring 2020 offering.

In the open feedback section, the students mentioned that the multidisciplinary teamwork was the most important component, and the most common suggestion for improvement was to reduce the intensity of the classes.


Challenges and Next Steps

Developing and teaching Responsible AI to data science and law students involves multiple challenges: some stem from the multidisciplinary nature of the course, others arise from the subject of Responsible AI itself, and others yet are manifested in the intersection between these two aspects.

First, launching a real dialogue is a delicate task. Our opening class appeared to be the toughest. Students from two profoundly different professions found themselves on the same team, instructed to tackle a multidisciplinary challenge together, though for most it was the first time engaging in professional dialogue with the other discipline. While this task was designed as a “soft onboarding” compared to the rest of the course, confusion, and even aversion, was unavoidable. While teamwork developed gradually as early as the first session, the aversion toward the other discipline was not quick to disappear. In our second class, where students represented the different sides of an algorithmic discrimination lawsuit, a data science student considered withdrawing from the course, feeling unable to articulate his thoughts the way the lawyers did, and therefore believing he had nothing to contribute. In our third class, where students acted as regulators deciding whether to demand changes to the explainability of a credit scoring system, a top-of-her-class law student told the staff she had never felt so frustrated, as this was the first time she understood nothing and had no clue how to proceed. While our pedagogical principles set the environment for fostering dialogue, the principles naturally take time to be internalized by the students. Therefore, for the upcoming offering, we introduce a new first class in which the joint task is less disciplinarily intense but still follows our pedagogical principles. The students are required to balance human values and trade off AI system design constraints. Not only is this task designed to facilitate the first multidisciplinary teamwork encounter, it also demonstrates the breadth of Responsible AI issues covered by the course in general and in the first class in particular.

Second, the notion of Responsible AI is evolving. There is no crystallized consensus on what “Responsible AI” consists of and how it should be taught. We had to develop our own working framework for this course, a framework that must be dynamically adapted based on evolving research and practice. On a similar note, student evaluation is another key issue: What does it mean to be competent in Responsible AI?

Looking forward to actualizing our theory of change, integrating Responsible AI into the life cycle of AI systems through a multidisciplinary dialogue approach, the course should be delivered at scale. Therefore, we are releasing the course materials as open education resources under a Creative Commons license (see https://teach.responsibly.ai). This effort includes building a community of instructors and supporting contextualization and localization for institutes in diverse locations and cultures. The course also serves us as an in vivo environment for testing dialogue-fostering methods that can be applied in other settings, for example, as part of the policymaking process.

References

    1. Abebe, R. et al. Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 252–260.

    2. Barocas, S. and boyd, d. Engaging the ethics of data science in practice. Commun. ACM 60, 11 (Nov. 2017), 23–25.

    3. Chouldechova, A. and Roth, A. A snapshot of the frontiers of fairness in machine learning. Commun. ACM 63, 5 (May 2020), 82–89.

    4. Fiesler, C. What do we teach when we teach tech ethics? A syllabi analysis. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (2020), 289–295.

    5. Fjeld, J. et al. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication 2020-1 (2020).

    6. Garrett, N., Beard, N., and Fiesler, C. More than "if time allows": The role of ethics in AI education. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020), 272–278.

    7. Grosz, B.J. et al. Embedded EthiCS: Integrating ethics across CS education. Commun. ACM 62, 8 (Aug. 2019), 54–61.

    8. Hildebrandt, M. Understanding law and the rule of law: A plea to augment CS curricula. Commun. ACM 64, 5 (May 2021), 28–31.

    9. Johnson, D. Who should teach computer ethics and computers & society? ACM SIGCAS Computers and Society 24, 2 (1994), 6–13.

    10. Ruef, A. et al. Build it, break it, fix it: Contesting secure development. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 690–703.

    11. Saltz, J. et al. Integrating ethics within machine learning courses. ACM Transactions on Computing Education (TOCE) 19, 4 (2019), 1–26.

    12. Sawyer, R.K. The Cambridge Handbook of the Learning Sciences. Cambridge University Press. 2014.

    13. Schauer, F. Playing by the Rules: A Philosophical Examination of Rule-based Decision-Making in Law and in Life. Clarendon Press, 1991.

    14. Selbst, A.D. et al. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (2019), 59–68.

    15. Shulman, L.S. Signature pedagogies in the professions. Daedalus 134, 3 (2005), 52–59.

    16. Stoyanovich, J. and Lewis, A. Teaching responsible data science: Charting new pedagogical territory. International Journal of Artificial Intelligence in Education (2021), 1–25.

    a. We use the term data science student as a generalized term referring to students in relevant disciplines, for example, data science, computer science, and computer engineering, with sufficient background in machine learning and a basic understanding of the data life cycle (such as problem formulation, data collection and management, analytics, development, visualization, and deployment).

    b. Sometimes referred to as FATE (Fairness, Accountability, Transparency, Ethics) and similar abbreviations.

    The authors thank the course team, our co-instructor for the Spring 2021 class, Helen Nissenbaum, and the teaching fellows over the years (alphabetically) Alex Chapanin, Guy Berkenstadt, Hofit Wasserman Rozen, Margot Hanley, Nitay Calderon, Shir Lissak, and Sivan Shachar. The authors also thank our students for their active participation, involvement and valuable feedback. Gal also acknowledges the support of the Benjamin and Florence Free Chair. Chagal-Feferkorn acknowledges the support of the Scotiabank Fund for the AI + Society Initiative.
