Diverting Hate

Misogynistic Extremism on Twitter Before Musk, & How it May Worsen

Introduction

Misogynistic hate has become a growing national security concern in the United States and in other parts of the globe. In March 2022, the National Threat Assessment Center (NTAC) highlighted the role misogyny plays in motivating individuals to commit mass violence. In September 2022, the Organization for Security and Co-operation in Europe (OSCE) highlighted the overlap between misogynistic extremist attitudes and attitudes that support violent extremism and radicalization leading to terrorism. While the growing attention is promising, more work is needed to monitor, research, and counter misogynistic extremism.


Anti-feminist and male supremacist radicalization pathways are complex and nuanced, and online on-ramps are uniquely amplified by explosive narratives perpetuated by engagement algorithms; this process allows an individual to go down a rabbit hole that leads to echo chambers of misogynistic hate.


Twitter & Free Speech Before and After Musk

After months of negotiations and lawsuits, Elon Musk bought Twitter for 44 billion dollars on October 27, 2022. Prior to his acquisition of the social media platform, Musk noted he wanted to make changes to content moderation, "advocating for free speech" and for Twitter to become a platform for free speech. Since October 27, Musk has instructed managers to carry out a mass layoff of roughly 50 percent of employees company-wide. The content moderators remaining at the company had their access to moderation tools severely restricted and are currently unable to discipline accounts that violate the hateful conduct and misinformation policies unless the violation involves harm. The company stated that it halted access to moderation tools in order to reduce "insider risk" during the transition; however, it is unclear whether this access, which is crucial to ensuring hate and misinformation do not spread on the platform, has been restored to content moderators.


Through the Internet Archive’s Wayback Machine, our team has been able to identify when changes were made to Twitter’s site in relation to freedom of speech.


On October 28, 2022, the company’s About page directed site visitors to a page entitled “Our Company” (screenshot provided). On this page, under Promoting health, the company’s guiding principles explicitly state: “Freedom of speech is a fundamental human right – but the freedom to have that speech amplified by Twitter is not. Our rules exist to promote healthy conversations.”


Twitter Policy Page | Wayback Machine, October 28, 2022

On October 29, 2022, the company’s About page no longer directed visitors to “Our Company”; instead, the selection of subpages had changed and included “Twitter for Good” (screenshot below). The tone related to free expression is starkly different from the day prior. Under Internet safety and education, the company shares its support for “organizations that tackle issues like bullying, abuse, and hate speech,” and declares that it will “support initiatives that defend and respect all voices by promoting free expression and defending civil liberties.” The latter statement is repeated under Freedom of speech and civil liberties.

Twitter Policy Page | Wayback Machine, October 29, 2022

The change to the platform’s site took place on October 29, 2022, and has remained in place through the publication of this article.


What is Twitter’s policy against misogynistic hate? (Pre-Musk to present)


Twitter’s hateful conduct policy (now rarely enforced) identifies the following protected groups: race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, and serious disease. In other words, under the hateful conduct policy, no user is allowed to threaten or promote violence against individuals based on those categories. The policy also bans accounts whose sole purpose is to incite harm against individuals based on their affiliation with the listed categories.


Twitter technically has a zero tolerance policy against violent threats, and the consequence of making a violent threat against an identifiable individual is immediate and permanent removal from the platform. The platform also prohibits users from wishing harm on individuals belonging to a protected category, or on the protected category in general. As Musk reinstates formerly banned bad actors such as Andrew Tate, Donald Trump, the Babylon Bee, and Ye [1], this zero tolerance policy is being watered down.


Lastly, the hateful conduct policy prohibits the repeated use of slurs and tropes against protected categories. In this portion of the policy, Twitter also acknowledges its ability to limit the visibility of users who make moderate use of slurs or tropes. However, the platform does not commit itself to limiting the visibility of these users; rather, it states it “may” do so by down-ranking tweets, making them ineligible for amplification in top search results, and excluding these tweets and accounts from promotional materials.


Misogynistic language is abundant on Twitter:


Misogynistic language is not new to Twitter. The language used within the manosphere varies greatly, ranging from co-opted everyday, mainstream language to deeply concerning, coded language that is derogatory toward women. Below are a few of the more concerning terms our team has identified within the Twitter manosphere.

  • 666 Rule: Argues that women look for a man who is at least 6 feet tall, has a 6-figure income, and has a 6-inch penis. The Incel community labels men who have all three, plus 6-pack abs, as an “Alpha Male,” men with two to three as a “Beta Male,” and men with just one as doomed to permanent incelhood.

  • The Wall: A metaphorical wall that a woman supposedly hits once she starts to visibly age and her looks begin to decline. It is common in anti-woman narratives, especially narratives about older women’s ability to find suitors.

  • Hypergamy: Used to describe the practice of women marrying men of higher socio-economic status. Individuals in the manosphere use hypergamy to justify hatred of women.

If Twitter ever rebuilds its Trust & Safety team and efforts, here are a few places to start:

  1. Distinguish between gender and misogyny, providing definitions for both and for all other categories named in the hateful conduct policy.


Twitter identifies protected categories, which include gender and gender identity; however, as stated in the OSCE report, ‘gender’ is often conflated with ‘woman,’ even though gender is much broader than that. Therefore, it is essential that Twitter’s hateful conduct policy explicitly include misogynistic hate and provide definitions that draw attention to hatred of women as its own form of supremacy. Beyond misogynistic hate, Twitter’s policy would be more powerful and easier to enforce if every category it identifies had a clear definition.


  2. Identify language commonly used in manosphere-related hate groups in order to flag tweets that should be down-ranked, made ineligible for amplification, and excluded from promotional emails.

Groups and individuals in the manosphere use coded language to express their misogynistic beliefs, which creates an extra step in identifying what language contributes to dangerous content. Twitter can only limit the visibility of tweets and accounts it identifies as violating its hateful conduct policy, and with the manosphere, that is only possible if Twitter recognizes how dangerous this coded language is. (A minimal illustrative sketch of this kind of flagging appears after this list.)


  3. Maintain a content moderation team, with access to all moderation tools, to ensure misogynistic hate does not spread on the platform.

On October 28, 2022, Elon Musk announced the creation of a “Content Moderation Council” that will make decisions on the platform’s moderation and will include “widely diverse viewpoints.” The council is still in its early stages, and very little information about it has been made public. In the meantime, it is vital that frontline content moderation staff have their access restored to the tools necessary to determine violations of the hateful conduct policy, and to examine user history when deciding whether an account is allowed space on the platform.
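To make the second recommendation above more concrete, here is a minimal, illustrative Python sketch of how a moderation pipeline might surface tweets containing known manosphere terms for human review and possible down-ranking. The term list, patterns, and function name are hypothetical assumptions for illustration only; they do not reflect Twitter’s actual tooling, and coded language shifts quickly enough that it requires ongoing expert curation and context-aware review rather than simple keyword matching.

```python
import re

# Hypothetical starter lexicon drawn from the terms discussed above.
# A real lexicon would need continual expert curation as coded language evolves.
MANOSPHERE_TERMS = [
    r"666\s*rule",
    r"hit(ting)?\s+the\s+wall",
    r"hypergamy",
]

PATTERN = re.compile("|".join(MANOSPHERE_TERMS), re.IGNORECASE)


def flag_for_review(tweet_text: str) -> dict:
    """Return any matched terms so a human moderator can judge the context."""
    matches = [m.group(0) for m in PATTERN.finditer(tweet_text)]
    return {"flagged": bool(matches), "matched_terms": matches}


if __name__ == "__main__":
    example = "She hit the wall; hypergamy never ends."
    print(flag_for_review(example))
    # -> {'flagged': True, 'matched_terms': ['hit the wall', 'hypergamy']}
```

Keyword matching alone would over-flag benign uses of phrases like “hit the wall,” which is why a sketch like this only surfaces matches for moderators rather than taking enforcement action automatically.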


Misogynistic extremism was not hard to find before Musk, but with his relaxed approach to hate speech, it is likely to flourish under the new ownership. Defining misogynistic hate in the hateful conduct policy and identifying the language used within misogynistic extremist circles are necessary steps to ensure this dangerous content is not amplified on the platform. In the coming months, it will be important to watch for changes in the written policy and to see how advertisers on the platform react to content moderation changes. Advertisers do not want to be associated with hate, and many have already left the platform.


Entering 2023, the questions are: Could the relationship between the platform and its number one source of revenue, advertisers, be what keeps Twitter from turning into a platform of never-ending rabbit holes leading to misogynistic echo chambers? Or does it no longer matter to Twitter that advertisers leave and misogyny stays?


Footnotes

  1. On December 1, 2022, Ye’s Twitter account was suspended after he shared an image of a swastika inside the Star of David. Elon Musk later announced that Ye was suspended from the platform because his tweet attempted to incite violence. In early October, Ye’s Twitter account had been locked after he posted an antisemitic tweet.




