BLOG@CACM
Artificial Intelligence and Machine Learning

Students Speak to Ethical Issues

Posted
Robin K. Hill, University of Wyoming

At the end of our one-credit course in Ethics for the Computing Professional, required for computer science majors, I give the students this question on their take-home Final Exam:

Much of standard computing ethics focuses on privacy, security, and intellectual property. If you had a chance to bring one single other concern to the attention of the computing profession and the public, what would it be? Justify this as an ethical problem.

In the fall of 2021, after class had ended, I asked the students for permission to publish their answers, anonymously, and about half replied, all with agreement. Here they are, lightly edited for clarity.


"One other concern that I would bring forward would be accuracy. What I mean by this is that, if we as computing professionals put forth a product, it must work well, not be dangerous, and provide accurate results when needed.

For example, in the case of AI, there have been many applications that can have very negative effects due solely to problems with the models being used. By the definition of many AI algorithms, they are not meant to give perfect results; they are only meant to give the best probability. This has many chances to go wrong when it comes to sensitive or critical operations. For example, AI that predicts crime rates or recidivism rates has a very high chance of disproportionately affecting minority communities because of the data it learns from. Another case of AI going wrong is with self-driving cars that crash and harm people. These are both cases where AI is simply not yet good enough to be pushed forward into the public realm.

Overall, this is an ethical issue that needs to be brought to the attention of developers and the public. We must be careful not to do more harm than good with our products."


"I would choose to bring the concern of technology addiction to attention. I believe that tech companies are aware of certain negative aspects of their products, yet they exploit weaknesses in users so that they continue to use the products. According to Avery Hartmans, Twitter uses a psychological trick called 'variable ratio schedule" to reward users "…but at various times. The user doesn't know when they'll be rewarded… That's what slot machines do" (Hartmans 2018). Tech companies research and exploit this human weakness and others to, by all means, get their users 'addicted' to their product. This is undoubtably a troubling situation.

This can be justified as an ethical problem for several reasons. First of all, the problem has a very widespread effect. It is almost a bare necessity to use tech in today's world (the Digital Divide, for example). This gives big tech access to many users, which gives them a large audience that they can manipulate. Secondly, the moral patients have no choice in the issue, while the agents have complete control. Given that companies are exploiting psychology, patients really have no choice in how to react and are subconsciously pulled into addiction. The agents, the companies, are the only ones in control of what they put out. Lastly, there is no redress mechanism for wrongdoing. These companies can't be held legally accountable for addicting users and causing issues in their personal lives. It is not redressable legally, or in any other form, so users are nearly helpless against stumbling into technology addiction. For these reasons, I believe that technology addiction is an ethical issue that needs more attention in 2022."
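The 'variable ratio schedule' cited above is easy to see in miniature. Here is a minimal sketch in Python; the feed, the reward probability, and the trial count are all hypothetical illustration values, chosen only to show how a payoff that arrives after an unpredictable number of checks produces the slot-machine pattern Hartmans describes.

import random

# Sketch of a variable ratio schedule: a reward arrives after an
# unpredictable number of actions. The reward probability is a made-up
# illustration value, not a claim about any real platform.

def checks_until_reward(p_reward: float) -> int:
    """Count how many checks of the feed pass before a reward appears."""
    checks = 0
    while True:
        checks += 1
        if random.random() < p_reward:  # reward lands at a random, unannounced check
            return checks

random.seed(2022)  # fixed seed so the illustration is repeatable
gaps = [checks_until_reward(p_reward=0.25) for _ in range(10)]
print(gaps)  # the gaps between rewards vary unpredictably from one to the next

Because the user cannot predict which check will pay off, every check carries the possibility of a reward, which is what makes the next check so hard to skip.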


"Another concern I have for computing ethics is abuse of power. What I mean by this is that computers are incredibly powerful things that can let humans do tremendously good things but in some cases bad things. For instance we have computers that can remotely bomb a village in the middle east with a drone directed by a man with an Xbox controller in Britain. Computers are expanding our minds and our power and in ways that we may not be ready for it as when we think of honorable war we think of two equal sides fighting with this technology we just through Intuitionism feel something is wrong that such an action can take place. Computer companies abilities to use up so much power that of entire cities just to power these computers and pay no taxes at the same time is also another abuse of power. Therefore I believe there should be developed a set of standard computing ethics that tackles this and maintains sensible use of this power. We should analyze using our ethical theories whether the harm of commuting these abuse of powers is correct. Such as with consequentialism which would see that these computer companies that are wasting so much needed power are causing harm and brought to task over it. That drone striking could be seen as a true ethical wrong due to the distortion of power at play from the raw computing power the military has been given. A huge problem with computer science is can I do such things, have I got enough processors, can I push this to over 100% when the true ethical problem of this is should I do this and who am I harming."


"I would present the concern of the environment. In the article by Glanz it states that data centers are not efficient and are effecting the environment with the use of burning diesel. I think that the way technology effects the environment should be an ethical focus because the environment effects everyone. We can use consequentialism theory to show that the outcomes for destroying the environment will not always be "good". It is in fact not good; it is bad. The deterioration of the environment can cause harm to the public which can be seen as ethical bad. Companies that explicitly harm the environment which in turn harm the public should be held accountable with standards of computing ethics."


"So, this is a little strange, but I think the idea of presentation of computing knowledge would be an interesting concern to look at. This is a poor title, so I will define what I am meaning. I think that a lot of jobs in the field of computer science call for "computer science degree, or equal experience", and that "equal experience" can be completely self taught through internet tutorials, and then selling your skills as a freelance worker. So the meat of this issue is, is the lack of higher educational experience potentially dangerous / posing a threat to the integrity and quality of code, which leads to a girth of other issues. To go further into this, is it potentially unsafe for privacy and security to have someone who "appears" to have the same experience in the field as someone who actually studied it in school. Like, I had to take this course on ethics, and it has certainly taught me some do's and dont's for my professional career after college. But joe smoe, who took two summers and learned python, now has a job with the FBI in cyber security. Is joe smoe going to be able to be able to protect the data being worked with, and recognize what is wrong and right in that field?

I think there is a potential issue here, in that you cannot guarantee that a self-taught coder will have the same professional understanding of computer science as someone who learned it at a four-year university. There are so many useful things that I have learned through very specific courses that I guarantee I would not have learned through my own research. I think there is potential for harming people through lapses in security and data protection by someone who is less professional, and that makes this an ethical problem."


"Other Concern: Economy

Computing can have drastic impacts on local and global economies. Obviously, there are many stakeholders when it comes to local and global economies, and in some situations computing can alter those economies. In these situations, the economies' stakeholders are moral patients, because they have no control, and the computing professionals behind the economic alteration are the moral agents, because they have control over the situation. An example of this would be automation, and how it can significantly impact an economy. This economic impact should be considered when one examines the ethics of computing."


"I would like to see more discussion and responsibility for social media, games, and other forms of internet use / technology use that are addictive, by design or not. I have witnessed addiction to social media such that it destroys self-esteem, confidence, and grounded sense of self in young people. I have witnessed addiction to video games that has ruined academic success, career success, relationship success, and general life-fulfillment. I have seen that binge-watching YouTube videos in an endless scroll of free content changes the behavior of people around me, and even my own behavior and life have been affected by all three of these things. Around a year or more ago, I chose to get rid of all my personal social media because I didn't like my mindset when I was being consumed by it, and I wanted those hours per day back. I will never go back to the mindless scrolling and I recommend dropping it to my friends and family. I love technology, I wouldn't be in the right major if I didn't, but I can see real harm being done by social media and some (not all) video games and media.

I understand and agree that addiction is a personal battle that must be fought and won by the individual alone, mentally. But I find it strange that tech companies do not even address it. Other companies that produce addictive products, such as alcohol and nicotine companies, have the FDA to regulate them and make sure that warnings appear on the products. They have restrictions on the types of advertising they're allowed to do, and to whom. Because of studies done on those products, legislation such as age restrictions is widely adopted and accepted. There are still adults who are addicted to alcohol, nicotine, and other, illegal products, but there would be a lot more without the efforts that have been put in. Now, why isn't there even a semblance of anything like that for social media? Is it because there hasn't been enough research done on it? Is it because the public doesn't believe that things that aren't chemical can be addictive? Social media is so widely used that the outcome does affect the kind of society we have as a whole, especially in terms of what we place value on. In just one specific example, because of social media, people value content that is outrageous, controversial, and radical, because it is the most interesting. So those are the ideas that get spread the widest and fastest, and now we have seen a massive divide politically in the United States, because radicalism is pushing leftists further left and right-wingers further right, to the point that there is no reconciliation or compromise possible between these two views. This is clearly a degradation of human interaction and promotes misunderstanding, another parameter of an ethical quandary by our definition.

Addiction is a personal responsibility, but I think tech companies should at least discuss the effect their products have on some of their consumers, and hopefully in the future there will be some form of regulation to combat it."


I am pleased by their concern, and find hope for the future in their answers.

 

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.
