October 2020 -- Tishrei-Cheshvan 5781, Volume 26, Issue 10

©2020 Shoreline Publishing, Inc.      629 Fifth Avenue, Suite 213, Pelham, NY 10803      P: 914-738-7869      hp@shorelinepub.com

ADL Releases Online Hate Index Report

During this bitterly divisive 2020 U.S. presidential election season, it is crucial to understand the information that Americans are exposed to online about political candidates and the topics they are discussing. It is equally important to explore how online discourse might be used to intentionally distort information and to create and exploit misgivings about particular identity groups based on religion, race, or other characteristics.

The Anti-Defamation League (ADL) has brought together the topic of online attempts to sow divisiveness and misinformation around elections on the one hand, and antisemitism on the other, in order to examine the antisemitic tropes and misinformation used to attack incumbent Jewish members of the U.S. Congress who are running for re-election. This analysis was aided by the Online Hate Index (OHI), a tool currently in development within the ADL's Center for Technology and Society (CTS) that is being designed to automate the detection of hate speech on online platforms. Applied to Twitter in this case study, the OHI provided a score for each tweet denoting the confidence (in percentage terms) with which the tweet could be classified as antisemitic.
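
The report does not detail the OHI’s internals, but the general workflow it describes (a per-tweet confidence score, with high-scoring tweets surfaced for review) can be sketched roughly as follows. This is a minimal illustration in Python: the threshold, keyword list, and scoring heuristic are assumptions made for demonstration purposes, not the ADL’s actual classifier.

```python
# Minimal sketch of a confidence-scoring workflow for tweets, NOT the ADL's
# actual OHI model. The threshold, keyword list, and scoring heuristic below
# are illustrative assumptions; the real OHI is a machine-learning classifier.

PROBLEMATIC_THRESHOLD = 70.0  # assumed review cutoff, in percent


def score_tweet(text):
    """Return a stand-in confidence score (0-100) that a tweet is antisemitic."""
    tropes = ["soros", "rothschild", "deep state", "dual loyalty"]  # placeholder list
    hits = sum(1 for trope in tropes if trope in text.lower())
    return min(100.0, hits * 40.0)


def classify(tweets):
    """Score each tweet and flag high-confidence results for human review."""
    results = []
    for tweet in tweets:
        score = score_tweet(tweet)
        results.append({
            "tweet": tweet,
            "confidence_pct": score,
            "flag_for_review": score >= PROBLEMATIC_THRESHOLD,
        })
    return results


if __name__ == "__main__":
    sample = [
        "Great town hall with the senator last night.",
        "Another puppet of Soros and the Rothschild bankers.",
    ]
    for row in classify(sample):
        print(row)
```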

This study presents a snapshot in time of “Problematic” content, which for the purposes of the report the ADL defines as including both antisemitic tweets and tweets that include antisemitic tropes but require more context to be definitively categorized as antisemitic. The findings are based on a review of 5,954 tweets directed at all 30 Jewish incumbents up for re-election on November 3, 2020. All of the tweets in the sample were posted between July 23 and August 22, 2020.

Tweets questioning the loyalty, honesty, ideology, and faith of Jewish incumbents made up 48 percent of all tweets labeled Problematic. There appears to be a concerted effort to portray Jewish incumbents as less patriotic and more dishonest, due in part to their Jewish background. Many of these tweets also claimed that Jewish incumbents are Communists and Marxists in hiding, accused lawmakers of dual loyalty, or questioned their Jewish faith if they were photographed next to Muslims.

Misinformation related to Jewish Hungarian-American financier and philanthropist George Soros, whose Open Society Foundations operates in the United States and abroad, constitutes an astounding 39 percent of all tweets targeting Jewish incumbents that were labeled Problematic in the sample set. These tweets push a series of debunked antisemitic conspiracy theories tied to Soros.

Fifteen percent of tweets analyzed also included tropes related to the broad conspiracy theory that Jews control key political, financial, and media systems and exploit them for their advantage to the detriment of others. These tweets allege that Jewish incumbents are part of the “Deep State” or claim the American political and financial systems are controlled by the Rothschild family to benefit the Jewish community.

Outside of antisemitic conspiracy theories, tropes, and misinformation, the sample set also includes content that employed Explicit Antisemitic Language. While such tweets accounted for only 7 percent of all Problematic tweets, Twitter has yet to remove them, even though these posts contain explicit forms of antisemitism that violate the platform’s Rules and Policies.

Recommendations for Social Media Companies:

Develop strong policies and create distinct rubrics for different forms of hate targeting marginalized and minority groups: Social media companies must develop decision-making rubrics for their content reviewers and AI tools that are tailored to the needs of different identity-based groups. These rubrics should cover a comprehensive set of tropes and phrases that are used to target different identity groups.

Collect and share data on identity-based hate: Developing ways to counter online hate requires that we know which groups are targeted, the extent to which they are targeted, and the nature of the attacks. Without this information, it is impossible for platforms, researchers, and civil society to address these problems in a way that is informed by empirical evidence.

Improve both manual and automated processes for classifying hate: In addition to creating better rubrics for specific forms of hate speech, social media platforms should assume greater responsibility to enforce their policies and to do so accurately at scale.

Run informational interventions on their platforms: Companies should experiment with a new set of features that help users navigate the world of disinformation. They can do this through interventions that provide accurate information on candidates and identity-based groups to safeguard the democratic system.

Expand tools and services for targets of hate: At present, platforms are doing little to nothing for targets of hate; they should expand the tools and services available to users who are targeted.

Design to reduce influence and impact of hateful content: Social media companies should redesign their platforms and adjust their algorithms to reduce the prevalence and influence of hateful content and harassing behavior.

Recommendations for Lawmakers and Candidates:

Dedicate resources to studying the impacts of online hate: Congress should commission a report to study how the online hate ecosystem impacts the election process, how misinformation sways voters, and how aspiring political candidates at every level are impacted by content that targets them based on their identity.

Incorporate informational interventions in election campaigns: At the outset of a campaign, candidates should use their reach to counter disinformation and hate speech in real time on social media. Additionally, candidates should amplify accurate information and educate the electorate on the impact of hate speech disguised as political speech and how it reverberates across different identity-based groups.

To view the entire study, go to adl.org.