TikTok Blocks Influencers From Creating Paid Political Content Ahead Of Midterms—Here’s How Social Media Platforms Are Preparing

Topline

TikTok on Wednesday unveiled a strategy for fighting misinformation ahead of the midterm elections, including a ban on paid political content from influencers, joining Facebook and Twitter, which have also outlined steps to prepare for the November election after facing a barrage of criticism for fueling the spread of false information during previous elections.

Key Facts

TikTok will launch an Elections Center this week to “connect people” with “authoritative information” about voting in more than 45 languages, which the company will link to through labels, TikTok’s Head of Safety Eric Han wrote in a blog post.

TikTok is also reinforcing its 2019 ban on paid political advertising to cover paid content from influencers, after the company faced challenges educating influencers about the rule during the 2020 election, Han said.

The plans come a day after Facebook’s parent company Meta briefly shared some of its own preparations for the election, including a ban on “political, electoral and social issue ads” in the week before Election Day on November 8, as it did in 2020.

Twitter, meanwhile, said last week it would reinforce the “Civic Integrity Policy” it launched in 2020, which includes declining to recommend or amplify “harmful” and “misleading” information, and said it will add new labels to such content that link to credible information.

Tangent

TikTok’s new strategy comes three days after The New York Times reported that the platform has the potential to become an “incubator” of false information during the midterm elections, according to interviews with researchers who monitor the online spread of misinformation. TikTok’s recommendation algorithm, short video format and millions of users could all contribute to the spread of misinformation, and videos containing false claims of potential voter fraud in November have already reached many users, the Times reported.

Key Background

Social media companies took a series of new steps in the wake of the 2020 election as misinformation and false allegations of election fraud spread rapidly online. Several platforms, including Twitter, Facebook, Instagram and YouTube, moved to block former President Donald Trump from posting after the January 6 insurrection. Despite these new policies, social media posts containing false claims of election fraud are still rampant, according to experts cited by the Washington Post last week, and companies have faced criticism from experts and advocates for failing to take more steps to rein in this misinformation. Twitter sparked controversy earlier this year when it said it had stopped enforcing its Civic Integrity Policy—which could suspend or sometimes ban users for spreading false information about the 2020 election—in March 2021. Other experts, meanwhile, worry TikTok’s video and audio content could be more challenging to moderate than text. Twitter said this week it has had some success with its new policies: the company said labels—which it first introduced last year—helped decrease replies to tweets containing misinformation by 13% and reduced retweets and likes of those tweets by 10% and 15%, respectively.

Further Reading

TikTok Bans Paid Political Influencer Videos Ahead of US Midterms (Bloomberg)

Twitter activates election policy enforcement for US midterms (CNN)

On TikTok, Election Misinformation Thrives Ahead of Midterms (New York Times)

U.S. Midterms Bring Few Changes From Social Media Companies (Associated Press)

Source: https://www.forbes.com/sites/madelinehalpert/2022/08/17/tiktok-blocks-influencers-from-creating-paid-political-content-ahead-of-midterms-heres-how-social-media-platforms-are-preparing/