Twitter Safety Head Acknowledges ‘Surge In Hateful Conduct’ Amid Reports Of Company Limiting Access To Moderation Tools


A Twitter executive on Monday acknowledged the platform has seen a “surge in hateful conduct” recently, a statement which comes days after billionaire and self-professed “free speech absolutist” Elon Musk acquired the platform with a promise to make changes to how it moderates content.

Key Facts

Yoel Roth, Twitter’s Head of Safety and Integrity, said he and his team have been working to address the issue since Saturday.

The safety team has taken down more than 1,500 accounts during this period and has managed to reduce the reach of such content “to nearly zero,” Roth claimed.

Roth added that this surge was driven by a “focused, short-term trolling campaign” with many of the deleted accounts being “repeat bad actors.”

Earlier on Monday, Bloomberg reported that Twitter had restricted most of its Trust and Safety staffers from accessing internal content moderation tools.

Staff members whose access has been restricted are unable to take action against Twitter users who have been flagged for violating the platform’s rules, whether by other users or by Twitter’s automated tools.

Roth corroborated the report in a tweet, stating this is “exactly” what any company undergoing a corporate transition would do, and added that Twitter’s rules were still being enforced.

Big Number

215. That’s how many times a racial slur was mentioned on Twitter every five seconds when hateful content on the platform peaked on Saturday evening, Bloomberg reported—a 1,700% spike. Several activists also pointed out that Musk’s takeover had prompted bad actors to test the waters by tweeting out racist, homophobic and anti-Semitic slurs.


At a time when Twitter’s content policy was under scrutiny, Musk himself drew sharp criticism over the weekend after tweeting an unfounded conspiracy theory about the attack on House Speaker Nancy Pelosi’s husband. The now-deleted tweet included a link to an article from the Santa Monica Observer—a website known for publishing fake news.

Key Background

Questions about Twitter’s handling of content moderation come just days before the midterm elections, raising fears that the platform could be used to spread disinformation or incite violence. During the 2020 elections, Twitter added a label to all claims about the election result before it was officially called—including those made by the candidates. After the results were called by all major news outlets, Twitter started labeling former President Donald Trump’s misleading tweets about election fraud. After taking over the social media company last week, Musk indicated he was not in any rush to implement changes to the platform’s content moderation system, stating that these decisions will be made by a moderation council which will be set up by the company to include “diverse viewpoints.”

Further Reading

Twitter Limits Content-Enforcement Work as US Election Looms (Bloomberg)

A Verified Badge On Twitter May Cost Users $20 A Month, Report Says (Forbes)