BLOG@CACM
Computing Profession

The Ethical Responsibilities of the Student or End-User Programmer

Posted by Mark Guzdial

I recently wrote a blog post inspired by Annette Vee's 2017 book Coding Literacy: How Computer Programming is Changing Writing (MIT Press). My post touched on issues of making coding accessible to all, just as we want basic language composition skills to be available to all (see post here). The comments in response were interesting and challenging.

Shriram Krishnamurthi of Brown raised the question that I found the most fascinating. To what extent is a student or an end-user programmer (like a computational scientist or journalist) responsible for code that they share? We can dispense with the easy form of this question: if a programmer writes a program with the intent of doing harm, the programmer is clearly responsible.

There are lots of edge cases that are interesting and challenging:

  • What if a physicist developed an interesting simulation and shared the code, and the code accidentally damaged computers running it? Is the physicist responsible? Even if they have no background in testing code and did not know that the code could possibly cause damage?
  • What if the physicist shared the simulation, and the simulation was completely benign, but the code had a security vulnerability that someone else could exploit to cause harm? Is the physicist responsible for that? Even if they did not know the field of cybersecurity exists?
  • Scratch 3.0 has just been released. What if there is a bug in Scratch such that some combination of blocks could expose a vulnerability? Who is responsible there? The third graders who accidentally combined those blocks? The Scratch team that released a product with that kind of vulnerability? One of Scratch 3.0's new features is the ability to extend Scratch with new blocks and new capabilities. If the vulnerability is in an extension set of blocks, does the responsibility (and any liability) lie with the developers of the extension, with the Scratch team that allowed the extension, or with the student who used the extension?

The answers to these questions have implications for what is taught in the national "CS for All" movement and in curricula like Code.org's and ExploringCS. Do we need to teach all students about testing, so that they can check for potential damage and vulnerabilities? Should cybersecurity be part of our computer science learning objectives and standards? Even if we teach everyone about testing and cybersecurity, mistakes will be made. How do we handle those mistakes and protect the larger society?

I reached out to colleagues with an interest in computing ethics. Some answered in the blog comment thread. H.V. Jagadish of U. Michigan pointed out that companies have a greater responsibility than do individual student or end-user programmers. We can expect that many (maybe all) computationally literate people will write some software, and we cannot reasonably hold all of them to expectations of quality and security. Users can reasonably expect greater quality and security from software developed by a commercial entity. Alan Kay pointed out that code processes are much like biological organisms in the world's ecology. We can't achieve absolute security, but we might think about protecting ourselves from rogue processes (developed by students or otherwise) much as we think about protecting society from disease.

Ben Kuipers, who explores the ethics of AI, drew an analogy to vaccine development. The polio vaccine is based on an inactivated form of the polio virus, but mistakes have sometimes been made. The most famous of these was the Cutter Incident of 1955, when live polio virus was distributed (despite safety tests) and many people developed polio from what they thought was a vaccine (see Wikipedia page). Who's responsible? In the end, Cutter Laboratories was ordered to pay compensation, but was not found negligent (see discussion here). This ruling has scared off developers of vaccines, which society relies upon. In 1986, the US National Vaccine Injury Compensation Program was established, so that persons harmed by vaccines are compensated while the companies are protected and can keep developing vaccines (see CDC page here). Ben suggests that a similar mechanism might be created to protect companies developing self-driving cars from inevitable lawsuits. He further suggests that we might have a similar mechanism for student or end-user programs. We want people to learn programming; it's in their best interests and in society's best interests to have more people learn programming. Inevitably, mistakes will be made. We have to protect the development of knowledge about programming while also compensating victims.

I also reached out to two recent collaborators who work in computing education and ethics. Amy Bruckman at Georgia Tech wrote a CACM Viewpoints article with me in August on providing equitable access to computing education. Sepehr Vakil worked with me on the ECEP Alliance. He's now at Northwestern and has published on equity in computing education based on a justice framing (see link here). He has an article to appear in CACM Viewpoints in March on issues of teaching ethics in computing education.

Both Amy and Sepehr focused on what we should be teaching students. Sepehr wants us to teach computing students much more about ethics and responsibility, but worries about stifling interest and creativity by too great an emphasis on security. Students are clearly not absolved of responsibility when they write programs, but we have to separate error from carelessly inflicting harm. Amy emphasized teaching students to make honest disclosure of assumptions and limitations. If students or end-users share their code, they need to make explicit statements about how much testing they did and what other users might need to watch out for.

Software is powerful. That is exactly why so many non-professionals are learning programming. Anything that powerful can inflict harm, even if accidentally. Professionals have an ethical responsibility to consider the wider impacts of their use of the power of computing and to support the public good (using phrases from the June 2018 ACM Code of Ethics and Professional Conduct). It's an open question how we support learners and end-user programmers in developing software safely and in protecting the others who might use their software.
