November 2019 -- Cheshvan-Kislev 5780,  Volume 25, Issue 11

©2019 Shoreline Publishing, Inc.      629 Fifth Avenue, Suite 213, Pelham, NY 10803      P: 914-738-7869

ADL Issues New Report About Online Hate

Online hate and harassment have increasingly become a common part of the online experience, though public attention has usually focused on harassment of celebrities and public figures. However, the Anti-Defamation League's (ADL) recent work has shown that a substantial swath of the American public has experienced online harassment, with 37 percent of adults having experienced severe online harassment, defined by the Pew Research Center as including physical threats, sexual harassment, stalking and sustained harassment.


For this study, called The Trolls are Organized and Everyone’s a Target: The Effects of Online Hate and Harassment, the ADL wanted to examine the effects of online hate and harassment on private individuals—the type of people whose experiences represent the bulk of that statistic. They engaged in an extensive literature review and conducted 15 in-depth qualitative interviews to better understand and chronicle the full experience of being a target of online harassment. They explored the personal stories of targets of online hate in an attempt to paint a more complete picture of the ways in which harassment can envelop multiple facets of a person’s life.


Five findings stand out from the literature review and interviews:


1. Online hate incidents are frequently connected to the target’s identity. Whether it was simply being a Jewish business owner or authoring a blog post on feminism, the online hate incidents experienced by the interviewees were frequently centered on issues of identity.


2. Harassers use platform design to their advantage. Coordinated attacks often caused harm to a target by leveraging key features of social media platforms. These included the ability to remain anonymous online, the ability of one person to create multiple accounts, the absence of any limit on the number of messages one user can send to another, and the use of personal networks as weaponized audiences.


3. Online hate can cause significant emotional and economic damage. Targets of harassment reported deep and prolonged emotional difficulties. Additionally, harassers often targeted individuals’ economic wellbeing by trying to tarnish their reputation or by contacting their employers.


4. Harassers attack and impact others in the target’s community. Interviewees revealed experiences of harassment in which perpetrators would also attack their relatives, friends and employers. Targets were highly disturbed by the spillover of hate into their offline lives and felt that the widening radius of attack was meant to cause them further harm.


5. Social media platforms are not adequately designed to remove or efficiently review hateful content. Respondents were universally unhappy with the processes and functions of the reporting systems on major social media platforms. Interviewees expressed frustration in having to wait weeks for the content moderation teams to respond to their reports of harassment. They also felt that the ability to report only one piece of content at a time created a bottleneck in content flagging.


The interviews and the review of the literature also point to ways to prevent or mitigate the impact of hate and harassment on victims. These include:


A. Increase users’ control over their online spaces. Interviewees felt like they had no control over their profiles, pages or accounts to prevent attackers from targeting them relentlessly. Platforms could provide more sophisticated blocking features like blocking a user’s ability to stalk someone across a platform. Platforms should also allow users to designate friends’ accounts as co-moderators, with specific permissions to assist with harassment management and moderation.


B. Improve the harassment reporting process. Companies should redesign their reporting procedures to improve the user experience. Platforms should provide step-by-step tracking portals, so users can see where their abuse report sits in the queue of pending reports. Platforms should also allow bulk reporting of content, consider harassment occurring to the target on other platforms, and respond to targets quickly. Platforms could set up hotlines for people under attack who need immediate assistance and assign case managers to help targets of hate through the process.


C. Build anti-hate principles into the hiring and design process. Safety, anti-bias and anti-hate principles should be built into the design, operation and management of social media platforms. Platforms should prioritize diversity in hiring designers, including individuals who have been targets of online harassment. Platforms should create user personas and use cases that address the needs of vulnerable populations, and should weigh tool functionality against increased opportunities for harassment before implementing new features. The report also recommends that input from a diverse set of community representatives and outside experts be solicited before additions to or changes in platform features are made.


While interviewees did not directly comment on the government’s role in passing legislation that holds perpetrators accountable for their actions, it is important that federal and state governments strengthen laws that protect targets of online hate and harassment. Many forms of severe online misconduct are not consistently covered by cybercrime, harassment, stalking and hate crime laws. Legislators have an opportunity, consistent with the First Amendment, to create laws that hold perpetrators of severe online hate and harassment more accountable for their offenses.