
Communications of the ACM


Consumers vs. Citizens in Democracy's Public Sphere

Fingerprint with a red tag, illustration. Credit: Lightspring

From foreign intervention in free elections to the rise of the American surveillance state, the Internet has transformed the relationship between the public and private sectors, especially democracy's public sphere. The global pandemic only further highlights the extent to which technological innovation is changing how we live, work, and play. What has too often gone unacknowledged is that the same revolution has produced a series of conflicts between our desires as consumers and our duties as citizens. Left unaddressed, the consequence is a moral vacuum that has become a threat to liberal democracy and human values.

Surveillance in the Internet Age, whether by governments or companies, often relies on algorithmic searches of big data. Statistical machine learning algorithms are group-based. Liberal democracy, in contrast, is individual-based: it is individuals whose rights are the chief focus of constitutional protection. Algorithmic opacity, which can be the product of trade secrets, expert specialization, or probabilistic design,3 poses additional challenges for self-government because it by definition abstracts away the individual on whom a rights-based regime depends. Even with attentiveness to constitutional constraints, NSA surveillance, as Edward Snowden revealed, violated American citizens' right to privacy, leading to revision of the Patriot Act. Europe's privacy protection standards are higher still, restricting second- and third-hand use of customer data.
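The contrast between group-based scoring and individual rights can be made concrete with a small illustrative sketch. The names, groups, and numbers below are entirely made up and do not correspond to any deployed system; the point is only that a model which scores people by the statistics of their group assigns every member the same score, regardless of individual behavior.

```python
# Toy illustration (hypothetical data): a "group-based" scorer
# demonstrates how scoring by group statistics abstracts away the individual.
from statistics import mean

# Hypothetical individuals: (name, group, individual repayment rate)
people = [
    ("Ana",    "group_a", 0.95),
    ("Ben",    "group_a", 0.40),
    ("Chloe",  "group_b", 0.90),
    ("Dmitri", "group_b", 0.35),
]

# Group-based model: each person's score is simply their group's average.
by_group = {}
for _, group, rate in people:
    by_group.setdefault(group, []).append(rate)
group_scores = {g: mean(rates) for g, rates in by_group.items()}

for name, group, rate in people:
    print(name, round(group_scores[group], 3))
# Ana and Ben receive the identical score 0.675 despite very different
# individual records: the individual has been abstracted away.
```

Whatever the predictive accuracy of such a model in aggregate, a rights-based regime has no way to contest the score of any one person, because no one person's conduct ever enters the calculation.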

In illiberal regimes, by contrast, the distinction between the public and private spheres is not drawn in the same way, and the individual is not the regime's point of departure. Privacy is routinely sacrificed at the altar of national security and societal goals. For example, Baidu, China's counterpart to Google, has partnered with the military in the China Brain Project, which involves running deep-learning algorithms over the data Baidu collects about its users. According to an account in Scientific American, every Chinese citizen will receive a so-called "Citizen Score," which will be used to determine who gets scarce resources such as jobs, loans, or travel visas.a China also uses facial recognition software to monitor its Uighur Muslim minority for law enforcement purposes.b China's Social Credit System, designed to reward "pro-social" and punish "anti-social" behavior, is becoming operational.c

Since capitalism and democracy developed contemporaneously and symbiotically, the divergence between technological advances and human values has been all too easy to overlook. In the Chinese context, deploying citizen scores and racial profiling in such utilitarian fashion may be legitimate, but in a rights-based democracy, such algorithmic discrimination must be illegitimate. Bell curves do not matter for human rights. Individuals do.

In The Human Condition, Hannah Arendt lamented "the absurd idea of establishing morals as an exact science" by focusing on things that are easily measurable or quantifiable.1 From her perspective, what was most significant about modern theories of behaviorism is "not that they are wrong but that they could become true ... It is quite conceivable that the modern age—which began with such an unprecedented and promising outburst of human activity—may end in the deadliest, most sterile passivity history has ever known."1 The question is not "whether we are the masters or the slaves of our machines, but whether machines still serve the world and its things, or if, on the contrary, they and the automatic motion of their processes have begun to rule and even destroy world and things."1

The widening digital-age conflict between producers/consumers and citizens reflects a heightened tension between market and liberal democratic/republican values. To see this more clearly, it is helpful to think about two models of man: bourgeois man and citizen man. For Marx and his heirs, the class struggle between bourgeois man and working man, between the oppressor and the oppressed, is the central dynamic. Montesquieu, Machiavelli, Montaigne, and the American Founders, however, "ransacked the archives of antiquity," as Arendt puts it, to imagine a different model of man for the new republic. The model man of this new system, which built on the Roman conception of the public sphere, was the citizen of the Athenian polis.2 The American Constitution's architects thus drew on both Greece and Rome in imagining the new republic of the United States. Inclusive citizen engagement in American political life is therefore essential for both self-government and human flourishing. While it is a fact that the original vision excluded blacks and women from citizenship, it is equally true that the same values evolved over time to include all humans of voting age.

It is easy to see how contemporary free market fundamentalism—the idea that free markets are the solution to all challenges of public life—is a logical consequence of trends Arendt astutely identified more than a half-century ago. Whenever the people allow companies to pursue profit maximization relentlessly in a global market without attention to consequences, producers are unwittingly elevated over citizens. Along parallel lines, whenever citizens allow their personal data to be harvested in exchange for a better deal, consumers are inadvertently elevated over citizens. Progressive Supreme Court Justice Louis Brandeis foresaw this accelerated challenge to the health of democracy's public sphere when he described the Gilded Age consumer as "servile, self-indulgent, indolent, ignorant" and thereby easily manipulated by advertising, the opposite of the engaged citizen.4

The March 2016 standoff between Apple and the FBI illustrates the new potential for conflict between business and government interests that technological change has wrought. Apple refused to help the government unlock the iPhone of Syed Farook, who carried out the December 2, 2015 San Bernardino terrorist attack that killed 14 people. Since Apple's market is global, it had no interest in complying with the FBI's request, as its foreign customers are unlikely to pay a premium for a smartphone the U.S. government can access. Yet Apple is also a company headquartered in the U.S., and American citizens have an obvious interest in preventing future terrorist attacks. The same friction between the profit motive and the public interest was present in the decisions of Facebook's senior leadership to downplay Russian interference in the 2016 elections until a free press forced them to own their self-interested choices.

The Internet has thus had at least two major consequences for American constitutional democracy. First, as our public conversations move online, disparate virtual spaces are replacing the public square, undermining democratic deliberation. As we saw with the 2016 elections, Facebook at first looked the other way while its platform was manipulated by the Russians and others to increase polarization and help elect Donald Trump. To get a sense of the magnitude of the problem, Facebook announced in May 2019 that it had deleted more than three billion fake accounts, roughly the combined 2018 populations of the U.S., China, and India.d

The global nature of the ad market for Google and Facebook represents the greatest challenge. Facebook profited when Russian troll farms bought ads in the run-up to the 2016 election, but the American public sphere was simultaneously diminished. Looking to the future, the prospect of an alliance between authoritarian states and large IT monopolies that would effectively merge corporate and state surveillance, as George Soros has warned, could facilitate totalitarian control unlike anything the world has previously seen.e

The move to cloud computing has also had important implications for privacy rights. The Fourth Amendment protects the contents of your laptop and your desk from unreasonable search and seizure by requiring the government to justify to a court why it has a compelling interest in your personal information. What most Americans do not understand is that once you upload material to the cloud, you trade that constitutional protection for a corporate guarantee, and the Fourth Amendment is mute on corporate violations of privacy.

Silicon Valley firms have a vested interest in obscuring this simple fact, selling themselves to the public as promoters of ideals rather than as profit-seeking companies. Until very recently, Google's mantra was "Don't Be Evil," and Facebook still defines its mission as "to make the world more open and connected." Apple recently rebranded its retail outlets as "town squares." This sincere Newspeak made it easier for consumers to trade their personal data unthinkingly for continued free use of the relevant platform.

Engineers and senior corporate leadership alike must be mindful of this decoupling of profit margins from the common good. All Western computer scientists should care about the consumers-vs.-citizens tension, not only because as citizens they value liberal democracy, but also because the long-term sustainability of the companies for which they work depends on it. When platforms or products appear to undermine human values, brands are tarnished in the free world, sometimes irreparably. The Google/Apple collaboration on Bluetooth-based support for contact tracing apps suggests this lesson has been learned.


The challenge will be to reclaim the public sphere for the people and democratic deliberation rather than leave it a locus for self-promotion and manipulation. A necessary condition for meeting that challenge will be to reintroduce practical ethics to scientific knowledge. When technological innovation outstrips the capacity of existing norms and laws, it will take more than science to re-harness science to the public interest.

Yet while Arendt and Brandeis discerned the general trajectory, in some ways we are in uncharted territory. Scientists at the dawn of the nuclear age made possible weapons of mass destruction that could still wreak total destruction today. Scientists in the Internet age are developing intelligent machines to do what was previously the work of humans. In the information age, scientists are seemingly on the brink of rendering large segments of society utterly superfluous.

One thing is certain: Silicon Valley will not be capable of safeguarding human values without public pressure and thoughtful regulation. The conflict of interest is too stark, since the core dilemma often embodies a clash between higher short-term profit margins and doing the right thing for equality before the law. There is strong sentiment, a lingering effect of decades of treating government as the problem rather than the solution, that tech companies can simply engineer processes that used to be the preserve of courts (such as Facebook's recent move to create its own oversight board to judge what content or accounts are approved or removed from the platformf). But it is important to remember what government is for, and that there are some things only government can do well. Simply put, nobody elected or appointed Silicon Valley.

In navigating these challenges, we can start with things we know to be true. Since both algorithmic design and data categorization can be amplifiers of prejudice, the perfect algorithm will be no silver bullet for protecting individual rights. An algorithm cannot fathom the human experience. An algorithm cannot understand the requirements of the democratic system itself.

Put another way, to be fully human in a liberal democracy is to be a citizen first and a consumer second. Politics has no place in scientific research. At the same time, scientists are also citizens who are ideally positioned to evaluate both the perils of AI systems and their potential to better the human condition. Scientists who understand the inherent trade-offs between what we want as consumers or producers and what we need as citizens can be critical allies rather than enemies of pluralism and the freedom of the individual.5

To sustain democracy in the Coronavirus era, Americans need to be citizens first. That is a choice we all must make. Human beings in a free society cannot be reduced to data or algorithms unless we allow ourselves to be.



1. Arendt, H. The Human Condition, University of Chicago Press, Chicago, IL, 1958, 311.

2. Arendt, H. Thinking Without a Bannister, Schocken Books, New York, 2018, 467–468.

3. Dourish, P. Algorithms and their others: Algorithmic culture in context. Big Data & Society 3, 2 (July–Dec. 2016), 6–7.

4. Rosen, J. Louis D. Brandeis: American Prophet. Yale University Press, New Haven, CT, 2016, 76.

5. Vienna Manifesto on Digital Humanism.



Allison Stanger is the Russell Leng '60 Professor of International Politics and Economics at Middlebury College, Cary and Ann Maguire Chair in Ethics and American History at the Library of Congress, Center for Advanced Study in the Behavioral Sciences Fellow at Stanford University, and an External Professor at the Santa Fe Institute.



a. See

b. See

c. See; and

d. See

e. See

f. See; and https://

Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2020 ACM, Inc.


Joseph Bedard

The fifth paragraph from the end is ambiguous, but it implies that scientists are primarily responsible for developing existential threats to civilization. Furthermore, the general ending of the article seems to blame engineers and scientists more than anyone else for the problems we are having with information technology today. If this is intended, then it is naive. This article doesn't mention the role that governments (as representatives of the citizens) and entrepreneurs play in the development of new technology.

"Scientists at the dawn of the nuclear age made possible weapons of mass destruction that still today could wreak total destruction." This is a negative framing of nuclear energy. There was an arms race at the end of WW2 to develop a nuclear bomb and win the war. In that case, the military was responding to the will of citizens, and thus *we* weaponized nuclear energy. It was a fraction of physicists (working for the military) who developed nuclear bombs--not scientists in general. She also does not mention the benefits that nuclear energy has had to civilization.

"Scientists in the internet age are developing intelligent machines to do what was previously the work of humans. In the information age, scientists are seemingly on the brink of rendering large segments of society utterly superfluous." Based on the negative framing of nuclear energy, these two sentences imply that automation is a bad thing, which as we know from history is not the case. Automation has had an overwhelmingly positive impact on civilization.

"Since both algorithmic design and data categorization can be amplifiers of prejudice, the perfect algorithm will be no silver bullet for protecting individual rights." Let's improve that idea. Poor algorithmic design and poor data categorization can result from prejudice--either from the neural network training data, or the designers' biases. If technology amplifies our actions, then poor algorithms and poor data can amplify prejudice. Also, the second part of the sentence is not a result of the first. An algorithm that does not protect individual rights is by my definition imperfect. Perfect algorithms (without prejudice) are not impossible. It is very unlikely that the algorithm will be created via deep neural networks (based on what we know about them so far). It is more likely that it would be created by people.

Software engineers in Silicon Valley have good intentions to make the world better. This optimistic attitude comes from the entrepreneurs (or engineers acting as entrepreneurs). Without this level of optimism, very few citizens would take the risk of starting technology companies. The problem is that this kind of optimism is blind to the evil side of humanity. I do agree that ethical awareness is important for software engineers. The ACM Code of Ethics is a great step in that direction. However, it is not enough. Algorithms that judge, score or evaluate people should be subject to regulatory review or clinical trials. Only a fraction of technologies will fall under this definition, and we must be careful to limit the scope of this regulation so that it does not stifle innovation.
