Whereas the philosophy of computer science has heretofore been directed largely toward the study of formal systems by means of other formal systems—semantics and denotations, computability and set theory—concerned professionals have also devoted attention to the ethics of computing, taking on issues like privacy, the digital divide, and bias in selection algorithms. Let’s keep it up; there are plenty of such issues.
The recent American election cycle revealed a disturbing trend: the viral spread of egregiously false and misleading stories posing as news. The problem is widely understood: Because sensational stories propagate so rapidly on the Internet, and because social network practices feed each user the items that person wants to see, observers worry that the public, so dependent on social media, is not getting an accurate picture of current events. The Editor’s Letter from Moshe Vardi in the January CACM outlines the problem [Vardi 2017]. The ethics of computing should have something to say about this.
Facebook CEO Mark Zuckerberg had something to say, in an open letter of February 16th: "…the information you are getting through the social system is going to be inherently more diverse than you would have gotten through news stations." [Zuckerberg 2017] How right he was, in the regrettable sense of "diverse" that applies to degree of trustworthiness rather than to the mix of subject matter. His letter shows sincere concern, and sincere conviction that the remedy is more social networking. I agree at least with his statement, after the election, that Facebook "must be extremely cautious about becoming arbiters of truth ourselves." [Zuckerberg 2016].
Should Facebook and other social media be forced to provide vetting of stories? (The term "vetting," common overseas, includes notions of review, scrutiny, and appraisal, leading to approval or rejection.) Imposing some control at the point of distribution is tempting, but such a mandate would be misguided. My objections are that (1) it’s not Facebook’s job, and (2) no one should be getting their news from Facebook in the first place.
Rational decision-making depends on reliable sources of information, people who steadily and thoroughly gather facts and weave them into complete stories. There is already a cadre of professionals who do that—journalists. By "journalism," I mean real journalism, the news outlets that adhere conscientiously to professional principles [SPJ 2014], such as the big-city newspapers (print or online) that serve as the country’s papers of record, not the sensational tabloids that scrape up and peddle the outrageous, nor the ideological outlets that spin for effect.
Facebook was intended for social exchange, for gossip, rumors, and teasing, for banter and jokes, for updates on trivia, the kind of thing that we do in face-to-face conversation, a context in which we understand the norms. Although it has grown into a general-purpose Web communication platform, its mission does not encompass fact-checking or verifying sources or seeking counter-testimony or explaining alternate perspectives. Let me say that Facebook is a perfectly legitimate business. Human nature thrives on a balanced diet of the substantial and the frivolous (and myriad other kinds of communication). Fashion and sports, while not substantial, are not inherently evil, and they enjoy popularity and make money with manifest success; there is no reason that social networking cannot do the same.
Not that anybody is asking us. We acknowledge a sad lack of calls from social networking businesses for philosophers to ride to the rescue. Yet, undeterred, in fact, so accustomed to such treatment that we hardly notice, we forge on to the question of what to do. Should Facebook, in particular, check for facts? It’s clear in Zuckerberg’s letter that he is ready to apply some algorithmic tests to posts, which amounts to automated—rather than human—vetting. That leaves the process open to gaming, which will surely emerge, as surely as there are ambitious Macedonian youth free of the restraints of conscience who grasp how little stands between them and lucrative ad revenues [Smith and Banic 2016]. What program would successfully anticipate all such phenomena, that is, what programmer could successfully articulate all the conditions in advance?
Detection of truth is hard, to put it mildly. It takes the degree of wit found only in wetware to see the satire in the article in The Onion, "Horrible Facebook Algorithm Accident Results In Exposure To New Ideas" [Onion 2016]. AI algorithms for diverse purposes, even when safe from deliberate exploitation or adversarial hacking, have shrinking but non-zero failure rates. Tim O’Reilly thinks that’s good enough: "Note that the program does not have to find absolute truth; it just has to cast a reasonable doubt, just like a human jury." [O’Reilly 2016] That’s exactly the claim that I deny. Jury verdicts are not factive, and a near miss is not acceptable; in fact, it makes the situation worse than an epic fail. A public commitment to vetting (as in the major newspapers) would lead readers to accept the veracity of the articles that appear, under a false sense of security. Such readers become vulnerable to subtle and insidious falsehoods if those are allowed to appear even rarely. Both false positives, that is, stories that meet all the criteria and are therefore declared to pass when they are actually false, and false negatives, that is, stories that fail the criteria and are therefore rejected when they are actually true, are bound to occur, and may inflict great damage; a rough numeric sketch below suggests the scale. Insofar as dichotomous assessment is better done by the human intellect, we should promote that it be done on that platform all the time. No attempt should be made to vet the stories. People will exercise their own critical thinking skills and figure it out.
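To see why "shrinking but non-zero failure rates" offer little comfort, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (daily story volume, share of false stories, error rates) is invented purely for illustration; the point is only that small error rates, applied at social-media scale, still pass and suppress stories by the thousands every day.

```python
# Hypothetical illustration: even an accurate automated vetter misclassifies
# many stories at the scale of a large social network. All figures here are
# invented for the sake of the argument, not measurements of any real system.

def vetting_outcomes(stories_per_day, share_false,
                     false_positive_rate, false_negative_rate):
    """Count the daily misclassifications of a dichotomous (pass/reject) vetter.

    false_positive_rate: fraction of false stories wrongly passed as true.
    false_negative_rate: fraction of true stories wrongly rejected as false.
    """
    false_stories = stories_per_day * share_false
    true_stories = stories_per_day - false_stories
    false_positives = false_stories * false_positive_rate   # false stories certified as true
    false_negatives = true_stories * false_negative_rate    # true stories suppressed as false
    return false_positives, false_negatives

# Suppose one million vetted stories a day, 5% of them false, and a vetter
# that errs only 2% of the time in each direction.
fp, fn = vetting_outcomes(1_000_000, 0.05, 0.02, 0.02)
print(f"False stories certified as true each day: {fp:,.0f}")  # 1,000
print(f"True stories rejected each day:           {fn:,.0f}")  # 19,000
```

Either kind of error, delivered under an official stamp of vetting, does its damage precisely because the reader has been assured that the checking is done.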
When they figure it out, their strengthened judgment will direct them toward serious news sources other than social media, quite appropriately. Zuckerberg says in his letter that "… our community will identify which sources provide a complete range of perspectives so that content will naturally surface more." For the sake of journalism, we can hope that, even if it surfaces on Facebook, the actual news sources are paid as well, as in his suggestion that "[t]here is more we must do to support the news industry…" [Zuckerberg 2017]
The justification for objection (1), raised earlier, is Facebook’s business model, interpreted internally. The justification for objection (2) is also Facebook’s business model, interpreted as it should be, externally, by the public. In short, Facebook’s business model does not embrace journalism. The company should explicitly disavow any claim to veracity in the posts that it carries and refer users to serious news outlets.
Next Post (March): Ethical Theories Spotted in Silicon Valley
References
Onion. 2016. Horrible Facebook Algorithm Accident Results In Exposure To New Ideas, The Onion, 52:35, Sept. 6, 2016. Accessed Feb. 24, 2017.
O’Reilly, Tim. 2016. How I Detect Fake News, Medium, Nov. 23, 2016.
Smith, Alexander and Banic, Vladimir. 2016. Fake News: How a Partying Macedonian Teen Earns Thousands Publishing Lies, NBC News, Dec. 9, 2016.
Society of Professional Journalists (SPJ). 2014. SPJ Code of Ethics. Accessed Feb. 26, 2017.
Vardi, Moshe. 2017. Technology for the Most Effective Use of Mankind. Communications of the ACM, 60:1.
Zuckerberg, Mark. 2016. Untitled. Accessed Nov. 12, 2016.
Zuckerberg, Mark. 2017. Building Global Community. Facebook page, Feb. 16, 2017.
Robin K. Hill is adjunct professor in the Department of Philosophy, and in the Wyoming Institute for Humanities Research, of the University of Wyoming. She has been a member of ACM since 1978.