On those rare occasions when an academic research study receives frenzied media attention, it usually indicates the topic has touched on societal fears in some way. On June 17, 2014, the Proceedings of the National Academy of Sciences (PNAS) published a paper by researchers at Facebook and Cornell University presenting evidence for widespread emotional contagion in online social networks. The study manipulated the percentage of positive or negative posts appearing in Facebook users' news feeds and reported that the manipulations had the anticipated effect, demonstrating emotional contagion at scale. The study contributes interesting results to an underserved area of research, but it triggered an understandable flurry of concern because the researchers failed to obtain consent from participants before attempting to influence their emotions. Public concern was also intensified by the fact that most people were unaware that Facebook already filters user news feeds (a necessity at its scale), that the company has such unprecedented reach, and that the manipulation involved something as personal as feelings and emotions.
The lack of awareness regarding information filtering supports an illusion of neutrality in technology design: the notion that computer programs are, by default, bereft of values or moral intent. This and other important issues have received less attention amid the flurry of criticism pertaining to research ethics (most of the 120 papers Google Scholar identified as citing the study focus on ethics). Some open issues include the lack of transparency given restrictions on data access and the difficulty of systematic replication without Facebook-level resources. This Viewpoint takes a different approach by discussing the implications of the study for technology design and the criteria on which technologists should base their design decisions.
The Facebook-Cornell study emerged from an attempt to better understand widespread emotion contagion in social networks. The authors experimentally studied how two filtering algorithms influenced the emotional expressions of a large number of users (N = 689,003) by manipulating the likelihood of positive or negative posts appearing in the users' news feeds. They then studied the emotional content of users' status updates, which were ostensibly influenced by the emotional content in their news feeds. Emotional expression was measured by the frequency of emotional terms in those posts, computed with the Linguistic Inquiry and Word Count system (LIWC),5 which provides psychologically grounded lists of positive and negative emotional terms (among other categories). After a week, those in the positivity-reduced condition (for whom the number of positive posts was reduced) used fewer positive (0.1%) and more negative (0.04%) emotional terms compared to a control condition in which a similar proportion of posts was reduced at random (that is, without respect to emotional content). In contrast, when negative posts were reduced (the negativity-reduced condition), users used more positive (0.06%) and fewer negative (0.07%) emotional terms compared to the control condition. The authors interpreted these findings to mean that users felt more negative and more positive emotions in the respective conditions; thus, emotion contagion occurred.
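To make the unit of measurement concrete, the following is a minimal sketch, in Python, of the kind of word counting involved. The study used the proprietary LIWC dictionaries; the tiny word lists and the emotion_word_rates function here are illustrative stand-ins, not the actual LIWC categories or software.

    # Minimal sketch of LIWC-style emotion word counting.
    # The real LIWC dictionaries contain hundreds of validated terms;
    # these small word sets are illustrative stand-ins only.
    POSITIVE_WORDS = {"happy", "love", "great", "wonderful", "excited"}
    NEGATIVE_WORDS = {"sad", "angry", "awful", "lonely", "hurt"}

    def emotion_word_rates(post):
        """Return (positive %, negative %) of emotion words in a post."""
        words = [w.strip(".,!?") for w in post.lower().split()]
        if not words:
            return 0.0, 0.0
        pos = sum(1 for w in words if w in POSITIVE_WORDS)
        neg = sum(1 for w in words if w in NEGATIVE_WORDS)
        return 100 * pos / len(words), 100 * neg / len(words)

    # Example: one positive and one negative term among ten words.
    print(emotion_word_rates("So happy to see you, but sad you must leave!"))
    # -> (10.0, 10.0)

In the study, such rates were aggregated per user over the experimental week; the sketch simply illustrates what is, and is not, captured when emotion is operationalized as word frequency.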
If we accept the authors' conclusion that emotion contagion did in fact occur (see the accompanying sidebar), we face a larger question for technology: Can design ever be emotionally neutral, and if not, on what criteria should technologists base design decisions? Beyond the controversy surrounding the way the Facebook study addressed informed consent, it is important not to miss the valuable contribution made by this study. It contributes to a critical area of modern inquiry: How do digital experiences, and the design behind them, affect our emotions? This is significant because one issue neglected in the media discourse is that design, be it of a filter, interface, or algorithm, is arguably never neutral. For example, newspapers use editorial guidelines to filter what information is published, and search engines use ranking algorithms to make similar decisions. Every design decision must be based on some criteria. As researchers in Value Sensitive Design have made clear, the values and goals of designers and stakeholders will shape the design of any technology.3 Thus, the obligation to understand the impacts of our design decisions, and to be transparent about what influences them, becomes imperative.
If design is known to affect emotions, above and beyond this particular study on emotion contagion (see Calvo and Peters1 for a list of examples), how can we study these effects and how should we apply the knowledge gained? If software design is not neutral, how should a software designer decide what constitutes a "good" design, particularly when we are designing interfaces so closely linked to what we care most about: family, friends, and relationships? For instance, if it is in fact impossible for Facebook not to filter information due to scale, how should the filter criteria be determined? Should filters only ever be randomized and not optimized for the user experience? Or can we look deeper and seek to support greater transparency and user autonomy; what if designers allowed users to make more of these decisions themselves? For example, what if users could set the parameters for their news feed filter or aspects of their search algorithms on their own? Transparency and autonomy seem to be underexplored opportunities for respecting individual differences and safeguarding against paternalism or misuse.
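As a thought experiment, the sketch below illustrates what exposing such parameters to users might look like, reusing the emotion_word_rates function from the earlier sketch. All of the names, parameters, and defaults are hypothetical assumptions for illustration, not any real platform's API.

    import random
    from dataclasses import dataclass

    @dataclass
    class FeedPreferences:
        """Hypothetical filter parameters a user could set for themselves."""
        min_positive_rate: float = 0.0    # hide posts below this % of positive terms
        max_negative_rate: float = 100.0  # hide posts above this % of negative terms
        randomize: bool = False           # opt out of emotional filtering entirely

    def filter_feed(posts, prefs):
        """Apply the user's own criteria rather than an opaque platform default."""
        if prefs.randomize:
            shuffled = list(posts)
            random.shuffle(shuffled)  # reorder without any emotional bias
            return shuffled
        return [post for post in posts
                if emotion_word_rates(post)[0] >= prefs.min_positive_rate
                and emotion_word_rates(post)[1] <= prefs.max_negative_rate]

Whether most users would adjust such settings is an open question, but making the filtering criteria visible and adjustable at all would itself be a step toward transparency and autonomy.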
We also posit that, as we look for criteria upon which to base technology design decisions, we should turn to the research on psychological well-being: design decisions should seek to promote (rather than hinder) thriving (an area we call Positive Computing1). It is important to note that well-being is not defined simply as an increase in positive emotions. According to research in psychology, other determinants include empathy, compassion, self-awareness, engagement, autonomy, and connectedness. Negative emotions are also an important component of lasting well-being.2 In this view, empathizing with a friend in need, or receiving that empathy, may contribute more to one's well-being than merely positive expression. Clearly, emotional impact represents a rich, nuanced, and complex space of inquiry, of which we have so far only scratched the surface.
Conclusion
There is still much left to be understood about how our emotional lives play out in digital experience and how the design of systems, interfaces, and interactions shapes our emotional experience. By publishing studies like the one discussed here, companies contribute important knowledge, not just to the academic community, but also to all those who care about the psychological impact of technology. We believe the controversy over the Facebook study is a useful reminder of how important it is to uphold ethical guidelines in research, and of the important role technology plays in our emotional experience. However, we hope it will encourage, rather than deter, further research into understanding ourselves better and understanding how we, as computing professionals, can make design decisions that are of optimal benefit to society.