
Communications of the ACM


Consumers vs. Citizens in Democracy's Public Sphere

[Illustration: fingerprint with a red tag. Credit: Lightspring]

From foreign intervention in free elections to the rise of the American surveillance state, the Internet has transformed the relationship between the public and private sectors, especially democracy's public sphere. The global pandemic only further highlights the extent to which technological innovation is changing how we live, work, and play. What has too often gone unacknowledged is that the same revolution has produced a series of conflicts between our desires as consumers and our duties as citizens. Left unaddressed, the consequence is a moral vacuum that has become a threat to liberal democracy and human values.

Surveillance in the Internet Age, whether by governments or companies, often relies on algorithmic searches of big data. Statistical machine learning algorithms are group-based. Liberal democracy, in contrast, is individual-based: it is individuals whose rights are the chief focus of constitutional protection. Algorithmic opacity, which can be the product of trade secrets, expert specialization, or probabilistic design [3], poses additional challenges for self-government because it by definition abstracts away the individual on whom a rights-based regime depends. Even with attentiveness to constitutional constraints, NSA surveillance, as Edward Snowden revealed, violated the privacy rights of American citizens, leading to revision of the Patriot Act. Europe's privacy protection standards are higher still, restricting second- and third-hand use of customer data.
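The tension between group-based inference and individual-based rights can be made concrete with a minimal sketch. The numbers, group names, and scoring rule below are all hypothetical; the point is only that a model keyed on group statistics assigns identical scores to very different individuals, abstracting away exactly the person a rights-based regime is meant to protect.

```python
# Minimal sketch (hypothetical data): a group-based risk score ignores
# everything about a person except their group's aggregate statistics.

# Hypothetical base rates learned from historical data, keyed by group.
group_rates = {"group_x": 0.30, "group_y": 0.12}

def risk_score(person):
    """Score an individual purely from their group's aggregate rate."""
    return group_rates[person["group"]]

alice = {"name": "Alice", "group": "group_x", "years_experience": 20}
bob = {"name": "Bob", "group": "group_x", "years_experience": 0}

# Two very different individuals receive the same score: the model
# "sees" only the group, never the individual.
print(risk_score(alice) == risk_score(bob))  # True
```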


Joseph Bedard

The fifth paragraph from the end is ambiguous, but it implies that scientists are primarily responsible for developing existential threats to civilization. Furthermore, the general ending of the article seems to blame engineers and scientists more than anyone else for the problems we are having with information technology today. If this is intended, then it is naive. This article doesn't mention the role that governments (as representatives of the citizens) and entrepreneurs play in the development of new technology.

"Scientists at the dawn of the nuclear age made possible weapons of mass destruction that still today could wreak total destruction." This is a negative framing of nuclear energy. There was an arms race during World War II to develop a nuclear bomb and win the war. In that case, the military was responding to the will of citizens, and thus *we* weaponized nuclear energy. It was a fraction of physicists (working for the military) who developed nuclear bombs, not scientists in general. She also does not mention the benefits that nuclear energy has brought to civilization.

"Scientists in the internet age are developing intelligent machines to do what was previously the work of humans. In the information age, scientists are seemingly on the brink of rendering large segments of society utterly superfluous." Given the negative framing of nuclear energy, these two sentences imply that automation is a bad thing, which history shows is not the case. Automation has had an overwhelmingly positive impact on civilization.

"Since both algorithmic design and data categorization can be amplifiers of prejudice, the perfect algorithm will be no silver bullet for protecting individual rights." Let's improve that idea. Poor algorithmic design and poor data categorization can result from prejudice, whether in a neural network's training data or in the designers' own biases. If technology amplifies our actions, then poor algorithms and poor data can amplify prejudice. Also, the second part of the sentence does not follow from the first: an algorithm that does not protect individual rights is, by my definition, imperfect. Perfect algorithms (without prejudice) are not impossible. It is very unlikely that such an algorithm would emerge from deep neural networks (based on what we know about them so far); it is more likely that it would be created by people.
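The amplification claim can be illustrated with a toy example. Everything here is hypothetical (the data, the 55/45 labeling gap, and the majority-vote "model"); it shows only the mechanism: a decision rule that thresholds group statistics turns a small skew in training labels into an absolute, categorical rule.

```python
# Minimal sketch (hypothetical data): a per-group majority-vote "model"
# amplifies a small prejudice in training labels into a categorical rule.

from collections import Counter

# Hypothetical hiring data as (group, label) pairs: group A is favorably
# labeled 55% of the time, group B only 45% -- a 10-point gap.
train = [("A", 1)] * 55 + [("A", 0)] * 45 + [("B", 1)] * 45 + [("B", 0)] * 55

def fit_majority(data):
    """Learn, per group, the majority label seen in training."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority(train)
# The 55/45 split becomes an absolute rule: every A is accepted and
# every B rejected -- the 10-point gap is amplified to 100 points.
print(model)  # {'A': 1, 'B': 0}
```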

Software engineers in Silicon Valley generally have good intentions; they want to make the world better. This optimistic attitude comes from the entrepreneurs (or from engineers acting as entrepreneurs). Without that level of optimism, very few citizens would take the risk of starting technology companies. The problem is that this kind of optimism is blind to the evil side of humanity. I do agree that ethical awareness is important for software engineers, and the ACM Code of Ethics is a great step in that direction. However, it is not enough. Algorithms that judge, score, or evaluate people should be subject to regulatory review or something akin to clinical trials. Only a fraction of technologies will fall under this definition, and we must be careful to limit the scope of such regulation so that it does not stifle innovation.

