The ethics crisis in computing was “launched” in 2018. In March of that year, The Boston Globe asserted, “Computer science faces an ethics crisis. The Cambridge Analytica scandal proves it!” This was in response to the Techlash,a where Wall Street Journal columnist Peggy Noonan described Silicon Valley executives as “moral Martians who operate on some weird new postmodern ethical wavelength” and Niall Ferguson, a Hoover Institution historian, described cyberspace as “cyberia, a dark and lawless realm where malevolent actors range.”
But in my January 2019 Communications column,b I dismissed the ethical-crisis vibe. I wrote, “If society finds the surveillance business model offensive, then the remedy is public policy, in the form of laws and regulations, rather than an ethics outrage.” I now think, however, I was wrong. I think I was right to advocate for laws and regulation to address the adverse impacts of computing, but I now believe we do have an ethics crisis in computing.
What changed my mind? First, my anxiety about the ills brought on by computing has risen dramatically, as a perusal of my column over the past five years shows.c I bemoaned that humanity seems to be serving technology rather than the other way around. I argued that tech corporations have become too powerful and their power must be curtailed. I asked that ACM dedicate itself to the public good. I pointed out that Big Tech’s business models are unethical. I explained how technology increases societal polarization. I wailed that computing has blood on its hands. About two years ago, I started giving talksd on how to be an ethical computing technologist.
But until now, I have not pointed at the elephant in the room and asked whether, taking all of the above into consideration, it is ethical to work for Big Tech. ACM’s Code of Ethicse does offer a clear ethical guideline. It opens with the following sentences: “Computing professionals’ actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good.” So, the ethical star we should follow is the support of the public good.
Supporting the public good is not always straightforward. All of us must navigate the trade-off between “me” and “we.” A famous Talmudic quote states: “If I am not for myself, who will be for me? If I am only for myself, what am I?” We must balance optimizing for oneself with optimizing for others, including the public good. So how does working for Big Tech thread this needle? This is the question that people who work for Big Tech must ask themselves.
The profit motive is not inherently against the public good. As Adam Smith pointed out, “They are led by an invisible hand…and thus without intending it, without knowing it, advance the interest of the society.” But the belief in the magical power of the free market always to serve the public good has no theoretical basis.f In fact, our current climate crisis is a demonstrated market failure.g To take an extreme example, Big Tobacco surely does not support the public good, and most of us would agree that it is unethical to work for Big Tobacco. The question, thus, is whether Big Tech is supporting the public good, and if not, what should Big Tech workers do about it.
Of course, there is no simple answer to such a question, and the only reasonable answer to the question of whether it is ethical to work for Big Tech is, “It depends.” But consider Uber. In 2023, Wired magazine reportedh that “…Travis Kalanick, had built an enormous, enthusiastic user base by subsidizing rides with the company’s vast reservoir of VC funding. Under Kalanick, Uber skirted regulations, shrugged off safety issues, and presided over a workplace rife with sexual harassment.” Was it ethical to have worked at Uber under Kalanick? I am sure many Uber employees were not aware of Kalanick’s shenanigans, but many were, and yet they continued to work at Uber. It was only in 2022 that a whistleblower leaked more than 124,000 company files to the Guardian, exposing its misdeeds.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it,” said the writer and political activist Upton Sinclair. By and large, I believe, Big Tech workers do not seem to be asking themselves these hard questions; hence my conclusion that we do indeed suffer from an ethics crisis.
I’m disappointed in the oversimplification of the issue presented in this article. Blaming individual tech workers for the ethics crises perpetuated by their companies is unfair and inaccurate. These workers are often caught between corporate expectations and personal values, with limited options for change.
A more nuanced analysis would consider the systemic issues that drive these problems. The 2022 talk by Vardi mentions how engineers collaborate professionally in groups, unlike physicians who work individually, but his conclusion is that tech workers should simply choose to work elsewhere if they have ethical concerns. This approach neglects the fact that similar power dynamics and ethics challenges exist in other industries.
Big tech may have a disproportionate impact on society, but its influence doesn’t erase the fundamental issues that affect workers across various sectors. We need to look beyond individual failings and examine the systemic structures that perpetuate these problems.
Having worked 17 years at Google and recently moved to Meta, I have had plenty of opportunity to think about this question. One problem with the academic perspective on this topic is the tendency to construe these big companies as monocultures. In my experience, they are built from a diversity of ambitions and perspectives. Personally, I have found roles in these companies that relate directly to making users safe or to understanding and obeying the law(s). They are not the most glamorous roles, nor the easiest way to get promoted, but they share the comforts typical of employment at these companies and have other rewards. I believe it is possible to make these companies better from the inside.
While I see huge unsolved moral challenges for big tech, challenges shared with society, it is too easy to ignore some basic facts. First, there are literally billions of users who really like these services and get tremendous value from them every day. Second, there is a tendency to moral panic around advertising and the use of data in big tech that makes little sense historically or across other industries such as finance and retail. We too easily forget that broadcast media, the communications phenomenon of the 20th century, was ad supported. We also forget that the penny newspapers, the infotech revolution of the 19th century and the cradle of the modern notion of objective journalism, were only viable with advertising.
I worry that the popular preoccupation with big tech may be distracting us from bigger risks. An example is the demise of high-quality, free-to-use information services. I grew up with television, radio, and public libraries that provided very high-quality information resources at essentially zero cost. These institutions are under siege, trending toward a system where most high-quality information services sit behind a paywall. When we challenge ad-based monetization, we push more content behind paywalls. When we fail to demand easy public access to public-domain content and to library copies of copyrighted works, we ignore the principle of public benefit that motivated copyright in the first place. In these ways, the rage against online advertising drives us toward an ecosystem that lacks this foundation of high-quality, zero-cost information services.
It’s hard for me to see how the public is served by putting all high-quality information behind a paywall. That is a moral failure that would set society back over a century. I hope I am wrong, or that we wake up to the challenge before going too much further down that path.
The overall impact of toxicity, misinformation, and related ills, such as harassment, damage to health and mental health, and polarization, has continued to grow over time, surpassing even the early concerns:
https://www.slideshare.net/slideshow/understanding-online-socials-harm-examples-of-harassment-and-radicalization/155671897