On those rare occasions when an academic research study receives frenzied media attention, it usually indicates the topic has touched on societal fears in some way. On June 17, 2014, the Proceedings of the National Academy of Sciences (PNAS) published a paper by researchers at Facebook and Cornell University presenting evidence for widespread emotional contagion in online social networks. The study manipulated the proportion of positive or negative posts appearing in Facebook users' news feeds, and the researchers reported that the manipulation had the anticipated effect of demonstrating emotional contagion at scale. The study contributes interesting results to an underserved area of research, but it triggered an understandable flurry of concern because of a failure to obtain consent from participants before attempting to influence their emotions. Public concern was also intensified by the fact that most people were unaware that Facebook already filters user news feeds (by necessity, due to scale), that the company has such unprecedented reach, and that the manipulation involved something as personal as feelings and emotions.
The lack of awareness of information filtering supports an illusion of neutrality in technology design: the notion that computer programs are, by default, bereft of values or moral intent. This and other important issues have received less attention amid the flurry of criticism concerning research ethics (most of the 120 papers Google Scholar identified as citing the study focused on ethics). Other open issues include the lack of transparency created by restrictions on data access and the difficulty of systematic replication without Facebook-level resources. This Viewpoint takes a different approach by discussing the implications of the study for technology design and the criteria on which technologists should base their design decisions.