With the proliferation of smart devices and mobile and social network environments, the social side effects of these technologies, including cyberbullying through malicious comments and rumors, have grown more serious. Malicious online comments have emerged as an unwelcome social issue worldwide. In the U.S., a 12-year-old girl committed suicide in 2013 after being targeted for cyberbullying.20 In Singapore, 59.4% of students experienced some form of cyberbullying in 2013, and 28.5% were the targets of nasty online comments.10 In Australia, Charlotte Dawson, who at one time hosted the "Next Top Model" TV program, committed suicide in 2012 after being targeted with malicious online comments. In Korea, where the damage caused by malicious comments is severe, more than 20% of Internet users, from teenagers to adults in their 50s, posted malicious comments in 2011.9
Recognizing the harm done by malicious comments, many concerned people have launched anti-cyberbullying efforts. In Europe, one such campaign was The Big March, the world's first virtual global effort to establish a child's right to be safe from cyberbullying. The key motivation behind these campaigns is not just to stop the posting of malicious comments but also to motivate people to post benevolent comments online instead. Research in social networking has found that benevolent comments do not stand alone online but coexist in cyberspace with many impulsive and illogical arguments, personal attacks, and slander.14 Such comments are not made in isolation but as part of attacks that amount to cyberbullying.
Both cyberbullying and malicious comments are increasingly viewed as a social problem due to their role in suicides and other real-world crimes. However, the online environment generally lacks a system of barriers to prevent privacy invasion, personal attacks, and cyberbullying, and the barriers that do exist are weak. Social violence is increasingly pervasive online, manifesting itself through social divisiveness.
Research is needed to find ways to use otherwise socially divisive factors to promote social integration. However, most previous approaches to online comments have focused on analyzing them in terms of conceptual definition, current status, and cyberbullying that involves the writing of malicious comments.1,8,13,16,21,22 Still lacking is an understanding of why people post malicious comments in the first place, or why they instead post benevolent comments that promote social integration. Unlike previous studies that focused on cyberbullying itself as a socially divisive phenomenon, this study, which we conducted in Korea in 2014, involved in-depth interviews with social media users in regard to both malicious and benevolent comments. To combat the culture of malicious comments and attacks, our study sought to illuminate the problem in light of the reasons people post comments in the first place. Here, we outline an approach toward shaping a healthier online environment with fewer malicious comments and more benevolent ones that promote social integration.
As an exploratory study, we took an interview approach. Unlike previous studies where the research typically reflected the perspective of elementary, middle, or high school students, we included in-depth interviews with a broader range of age groups. Questions dealt with reasons for benevolent and malicious comments, problems associated with online comments, and suggestions for addressing the problems.
As a qualitative study, we adopted the convenience-sampling approach for selecting interviewees. For qualitative researchers, it is the relevance of interview subjects to the research topic rather than their representativeness that determines how they select participants.4 Interviewees had to be able to explain the reasons or motivations for such postings, so we confirmed that each interview subject had previously posted comments online.
Our 110 interview subjects ranged from students in their teens to adults in their 50s. The number was determined by confirming theoretical saturation,18 indicating no additional relevant responses, or codes, emerged from additional data collection and analysis. We grouped interview subjects into stages of 10 and analyzed the coded interview data stage by stage. After conducting interviews over 11 stages with the 110 subjects, we could no longer find new codes. For this reason, we limited ourselves to the 110 subjects, of whom three did not complete their interviews. We thus included 107 subjects in the analysis (see Table 1). The average interview time per participant was 30 to 40 minutes. We gave participants gift certificates for books to encourage candid responses.
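The stopping rule described above, collecting in stages of 10 subjects and halting once a stage contributes no codes not already seen, can be sketched as follows. The stage data in the example are hypothetical illustrations, not the study's actual codes.

```python
# Sketch of the theoretical-saturation check: coding proceeds in
# stages, and collection stops at the first stage that yields no
# new codes. Stage contents below are hypothetical illustrations.

def saturation_stage(stages):
    """Return the 1-based index of the first stage adding no new codes,
    or None if saturation was not reached."""
    seen = set()
    for i, stage_codes in enumerate(stages, start=1):
        new = set(stage_codes) - seen
        if not new and seen:
            return i  # no new codes emerged at this stage
        seen |= new
    return None

stages = [
    {"encouragement", "self-satisfaction"},  # stage 1
    {"encouragement", "advice"},             # stage 2: adds "advice"
    {"self-satisfaction", "advice"},         # stage 3: nothing new
]
print(saturation_stage(stages))  # -> 3
```

In the study's terms, saturation was confirmed at stage 11, after which no further subjects were recruited.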
Using the open coding approach,19 we subjected transcripts of the interviews to content analysis, permitting inclusion of a large amount of textual information and systematic identification of its properties.7 Coding was performed by two researchers, one of whom, to avoid potential bias, was not involved in data collection. With open coding, each coder examined the interview transcripts line by line to identify codes within the textual data. We then grouped the codes into categories.
The inter-rater agreement scores for the entire coding process averaged 0.79, with Cohen's Kappa scores averaging 0.78, indicating an acceptable level of inter-rater reliability.9 Inter-rater disagreements were reconciled through discussion between the two raters. We then grouped the identified codes into broader categories that reflected commonalities among the codes. We further counted the frequency of relevant responses, or codes, for each category.
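The two reliability statistics reported above, simple percent agreement and Cohen's Kappa (which corrects agreement for chance using each rater's marginal label distribution), can be computed as follows. The rating labels in the example are hypothetical illustrations.

```python
# Minimal sketch of simple agreement and Cohen's Kappa for two
# raters coding the same transcripts. Ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two equal-length label lists."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal distribution
    c1, c2 = Counter(rater1), Counter(rater2)
    labels = set(rater1) | set(rater2)
    p_expected = sum((c1[l] / n) * (c2[l] / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

r1 = ["malicious", "benevolent", "malicious", "benevolent"]
r2 = ["malicious", "benevolent", "benevolent", "benevolent"]
print(cohens_kappa(r1, r2))  # -> 0.5 (observed 0.75, expected 0.5)
```

A Kappa near 0.78, as reported in the study, indicates substantial agreement beyond what the raters' label frequencies alone would produce.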
Table 2 outlines the reasons for posting benevolent comments online. Five of the seven main ones accounted for 85.4% of the total: encouragement (36.3%); self-satisfaction (21.5%); providing advice or help (11.8%); supporting other benevolent comments (8.9%); and actualizing the social good (7.4%). Many respondents said they write benevolent comments to "encourage" or give hope or courage to someone and think such an attitude can yield positive change. People also post benevolent comments to "provide advice and help" to others. Ranked next as the main reasons for posting benevolent comments were "support for other benevolent comments" and "actualizing society's good." Interview subjects did so by agreeing with others' benevolent comments, following others in the selected online context ("online context norm"), or trying to prevent malicious comments and spread benevolent ones. Posting benevolent comments for such reasons made the people doing the posting feel "satisfaction," motivating them to post further benevolent comments.
Table 3 outlines the reasons for posting malicious comments. Five of the seven main ones accounted for 85.0% of the total: resolving a feeling of dissatisfaction (28.1%); hostility (20.3%); low self-control (15.7%); supporting other malicious comments (8.9%); and fun (8.5%). Many interview subjects said people write malicious comments to express "anger and feelings of inferiority" and to get attention. "Hostility toward others" suggests malicious commenters attack and slander a certain person's blog or bulletin board and spread groundless rumors elsewhere. "Low self-control" suggests people post malicious comments irresponsibly for "fun" and to release "stress." "Supporting other malicious comments" and "online context norm," or following the general pattern of the selected online context, demonstrate how often people post malicious comments by simply following others.
The figure here outlines the identified problems with online comments ("anonymity" ranked highest at 42.8%) and suggestions for addressing them. Writing comments online allows people to participate without a certification process, creating an environment in which they can criticize, curse, and malign others in the course of expressing their opinions. "Lack of responsibility" was cited by 30.3% of interview subjects. Most people posting malicious comments do not appreciate their potential seriousness or recognize that such comments are a form of violence; they write impulsively, without a sense of responsibility for the effect. Moreover, "online context climate," or the environmental conditions of the online context, was cited by 11.0% of interview subjects, implying that as people become less caring and more disdainful of others, their internal dissatisfaction can be expressed through malicious comments. There also appears to be a "lack of regulation and punishment" for malicious comments and a rise of "commercial selfishness" (such as programs that trigger witch hunts and prompt advertising by commenters).
Among suggestions from interview subjects on how to deal with malicious online comments, improvement of awareness through "educational programs and campaigns" ranked highest, with 39.7%, meaning organized education for Internet users was seen as a way to ensure a civil public dialogue. A more proactive type of Internet ethical education targeting teenagers should be a priority in schools, as well as at home, to teach commenters to appreciate the seriousness of malicious comments and the potential for cyber violence. Such programs should highlight Internet users' responsibility for their online comments. They should also promote the idea of writing benevolent comments as a way to limit malicious comments and increase benevolent comments. Regarding the seriousness of anonymity in online comments, the "use of real identity" ranked next, with 29.3%. Many interview subjects suggested using real identities (such as real names or photos) as a way to reduce malicious comments in the online context.
Another suggestion from interview subjects was "more and stronger regulation," with 20.8%. Enforcing official punishment may be difficult online, but inadequate punishment is one of the reasons for malicious comments; accordingly, many interview subjects endorsed stronger regulation and legal punishment for malicious comments. Yet another suggestion was "role of management," with 6.4%. Many interview subjects highlighted the role of managers of social media providers in monitoring and deterring online comments, especially malicious ones. Social media management should have a role in reducing malicious comments and promoting benevolent comments in a social media context; for example, a management team might develop an algorithm for filtering online comments and identifying malicious writers, then implement a supporting system. Other suggestions included "a reward program for reporting malicious comments," "clarifying the range of privacy," "abolishing any program that triggers a witch hunt," and "managing the culture of the online community." Despite the value of our findings, it would be useful to further test their robustness by replicating the study in countries other than Korea, in light of cultural differences.
Motivations for malicious comments identified in the study involved targeting people's mistakes. Conversely, most benevolent comments involved encouragement and compliments to help people in difficult or risky situations, showing malicious comments are a primary reason for the degradation of online social networks. Moreover, the abolition of anonymity and intensification of punishment in social media can be effective in reducing malicious comments and rumors. However, potential violation of freedom of expression also risks trivializing the online social network itself. Because anonymous forms of freedom of expression have always been controversial in theoretical and normative spheres of social research,5,6,11,12,17 careful consideration of any limiting of comments is necessary before a ban might be contemplated.
To establish an environment of healthy online commentary, what is needed is a quick measure of the damage caused by the spread of false or toxic information, along with socioeconomic support for potential victims. Measures are required to minimize damage when toxic information is posted in social media and to investigate the legal steps needed to pursue awards for financial damage. Beyond imposing legal punishment on people who post malicious comments, addressing victims' mental suffering is even more urgent; a well-organized legal support program for such damage is necessary.
Also needed are ways to enforce social media users' right to control their personal information, as well as to verify distorted information. Such systems would involve monitoring to detect distorted information, services for filtering information, an "information confirmation system," and laws supporting the rights of users concerning their consent and choices in how their personal information is used. Regarding malicious comments, Internet portal sites should suggest preventive measures (such as systems to report malicious comments, disclose commenters' IP addresses, and create lists of prohibited words).
Education in socially appropriate and legal use of social media is necessary to minimize the social, cultural, and economic gaps among the approaches and applications prevalent today. Education along these lines should emphasize not only the role of the producer but also of information users and instill a sense of responsibility for information. Such an effort could be accompanied by encouraging benevolent comments. Educational programs and campaigns could also be directed at motivating the posting of benevolent comments. Our study found Internet users post benevolent comments mainly to encourage and help others, often because other Internet users have already done so, and to gain a sense of satisfaction from the action. Efforts to develop social norms of posting benevolent comments should also consider the reasons identified here for positive posts. People, especially teens, tend to take collective action in the use of social media.15 Our study further found that people post benevolent comments and malicious comments due to the online context, as in Table 2 and Table 3. It is therefore important for all online sites that accept comments to develop social norms of posting benevolent comments through educational programs and campaigns.
Unrestricted by time and space, online communication has led to increasing numbers of both benevolent and malicious comments, with the latter including impromptu and irrational personal abuse and defamation. Because malicious comments can provoke pain and even violence in cyberspace, they have emerged as a serious social issue, including allegedly causing their targets to commit suicide. To combat the abuse represented by the culture of malicious comments and attacks, our study investigated their sources and role in social disintegration. Our study also suggested ways to address the problem and increase the number of benevolent comments that can contribute to social integration and harmony.
The study has several implications for research, as it was among the first to comprehensively consider the reasons for posting comments, identify related problems, and explain how to address them. Previous research explored general reasons (such as to redirect feelings, revenge, jealousy, and boredom) for cyberbullying based on data collected from high school students,22 but there was a lack of understanding among researchers as to the motivators that lead to posting malicious comments and benevolent comments, respectively. Our study thus adds value to the literature by explaining the reasons for malicious and benevolent comments and how they differ. Although previous research classified types of cyberbullying and investigated the consequences of cyberbullying,10,16 missing was an understanding of the problems related to online comments in general. This study thus adds value to the literature by identifying the relevant problems and advancing our understanding of the phenomenon. Meanwhile, several studies have discussed strategies school guidance counselors and parents might use to prevent cyberbullying and proposed coping strategies for students.2,16 Extending the previous research, we have contributed by explaining how to manage the problems of online comments as a way to reduce malicious ones and promote benevolent ones.
Our results also suggest how social media service providers, educational institutions, and government policymakers might be able to establish a positive, nonthreatening online comment culture. Educators must understand why students post malicious, as well as benevolent, comments. They can consider updating their curricula or teaching content by adding ethical issues related to online comments. That is, based on the reasons we identified, they can teach students why to post (such as for self-satisfaction and society's advancement through benevolent comments), what to post (such as support), and how to post (such as self-expression). They can likewise teach why not to post (such as social problems), what not to post (such as rage), and how not to post (such as poor self-control). Students especially should be educated in a way that instills a sense of responsibility for their postings. If they come to have a sense of responsibility and perceive that posting malicious comments is a form of violence, cyberbullying would likely be reduced. Educators might also consider launching campaigns that promote the posting of benevolent comments, establishing social norms of conduct that would reach many other people online and be accepted by them.
Policymakers should understand government regulations and corresponding legal punishment can be useful in regulating cyberbullying, especially in the form of malicious online comments. Our results further suggest cyberbullying, especially through such comments, should be regulated at the government level. Many people also believe legal penalties for posting malicious comments should be strengthened. Both regulation and punishment have a role to play in reducing and even preventing malicious online comments.
For social media service providers, including information systems developers, our results suggest they should consider requiring real identities for postings. When people access social media and post comments anonymously, they think less about what they post. Requiring true identities would cause them to be more careful and responsible. Our results also suggest providers of social media services can apply text filters to their systems. Because certain texts are used repeatedly in cyberbullying or malicious comments, providers should be persuaded to develop a system to detect such text and alert moderators when action against the people posting it may be warranted. Such a filtering function could reduce the number of all kinds of malicious comments. Conversely, social media service providers should consider posting lists that rank users most active in posting benevolent comments on their sites. Because people generally enjoy self-expression, these rankings could motivate more people to post positive comments as a way to develop a new social norm in which malicious comments are unwelcome and the people posting them are scorned.
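A text filter of the kind suggested above could, in its simplest form, match comments against a maintained lexicon of terms seen repeatedly in malicious posts and hold matches for moderator review. The sketch below assumes a hypothetical word list and moderation policy; a production system would use a curated lexicon or a trained classifier rather than this placeholder.

```python
# Minimal sketch of a keyword-based comment filter, as suggested for
# social media providers. The blocked-term list and the "hold for
# review" policy are hypothetical illustrations.
import re

BLOCKED_TERMS = {"idiot", "loser"}  # placeholder for a maintained lexicon

def flag_comment(text):
    """Return the blocked terms found in a comment, if any."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(words & BLOCKED_TERMS)

def moderate(comment):
    """Publish clean comments; hold flagged ones for moderator review."""
    hits = flag_comment(comment)
    if hits:
        return f"held for review (matched: {', '.join(hits)})"
    return "published"

print(moderate("You are an idiot"))      # held for review (matched: idiot)
print(moderate("Great post, thanks!"))   # published
```

Simple word matching is easy to evade and prone to false positives, which is why the study's suggestion pairs filtering with human moderators who decide when to act.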
This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2012S1A3A2033291).
9. Korea Internet and Security Agency. Internet Ethical Culture Research, Seoul, Korea (2012); http://isis.kisa.or.kr/board/index.jsp?pageId=040100&bbsId=7&itemId=786
13. Park, H.J. A critical study on the introduction of the cyber contempt. Anam Law Review 28, 1 (2009), 315-347; http://kiss.kstudy.com/journaL/thesis_name.asp?tname=kiss2002&key=2751961
14. Poster, M. Cyber Democracy: Internet and Public Sphere. University of California, Irvine (1995); http://www.hnet.uci.edu/mposter/writings/democ.html
20. The Guardian. Florida cyberbullying: Girls arrested after suicide of Rebecca Sedwick, 12. The Guardian (Oct. 15, 2013); http://www.theguardian.com/world/2013/oct/15/florida-cyberbullying-rebeccasedwick-two-girls-arrested
©2015 ACM 0001-0782/15/11