Facebook and TikTok are approving ads that researchers say contain “blatant” misinformation about midterm voting.


New York (CNN Business)

Facebook and TikTok failed to block ads containing misinformation about when and how to vote in the US midterms, as well as about the integrity of the voting process, according to a new report from the human rights watchdog Global Witness and the Cybersecurity for Democracy (C4D) team at New York University.

In one experiment, researchers submitted 20 ads containing false claims to Facebook, TikTok, and YouTube. The ads targeted battleground states such as Arizona and Georgia. While YouTube detected and rejected every test submission and suspended the channel used to place them, the other two platforms fared far worse, according to the report.

Researchers found that TikTok approved 90% of the ads containing false or misleading information. Facebook, meanwhile, approved a “significant number,” according to the report.

The ads, submitted in English and Spanish, included false claims such as that voting days would be extended and that social media accounts could be used as a means of voter verification. The ads also contained claims designed to discourage voter participation, such as that election results could be hacked or that the outcome was already decided.

The researchers withdrew the ads once they had made it through the approval process but before they were published, so even the ads that were approved were never actually shown to users.

“YouTube’s performance in our experiment shows that detecting harmful election misinformation is not impossible,” said Laura Edelson, co-director of NYU’s C4D team, in a statement accompanying the report. “But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad election information before it reaches voters.”

In response to the report, a spokesperson for Facebook parent Meta said the tests were “based on a very small sample of ads, and are not representative given the number of political ads we review every day around the world.” The spokesperson added: “Our ad review process includes multiple layers of analysis and detection, both before and after an ad is published.”

A TikTok spokesperson said the platform “is a place for authentic and entertaining content, which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics and other experts, which helps us continually strengthen our processes and policies.”

Google did not immediately respond to CNN’s requests for comment.

Although limited in scope, the experiment could renew concerns about the steps the major social platforms are taking to combat misinformation not only about candidates and issues but also about the voting process itself, with the midterm elections just weeks away.

TikTok, which has grown in influence, and come under greater scrutiny in US politics, over recent election cycles, launched an Elections Center in August to “connect people engaging with election content with authoritative information,” including directions on where and how to vote, and began adding labels identifying content related to the midterm elections, according to a company blog post.

Last month, TikTok took more steps to ensure the veracity of political content ahead of the midterms. The platform began requiring “mandatory verification” for US-based political accounts and implemented a blanket ban on all political fundraising.

“As we’ve stated before, we want to continue to develop and foster policies that promote a positive environment that brings people together, not divides them,” Blake Chandlee, TikTok’s president of global business solutions, said in a blog post at the time. “Today we do just that by working to keep harmful misinformation off the platform, banning political advertising and connecting our community with authoritative information about the election.”

Meta said in September that its plan for the midterms would include removing false claims about who can vote and how, as well as calls for election-related violence. But Meta stopped short of banning claims of rigged or fraudulent elections, and the company told The Washington Post that those types of claims would not be removed.

Google also took steps in September to protect against election misinformation, including surfacing and displaying reliable information more prominently across services including Search and YouTube.

Large social media companies typically rely on a mix of artificial intelligence systems and human moderators to review the massive number of posts on their platforms. But despite their similar approaches and goals, the report is a reminder that the platforms can differ widely in how they enforce their content policies.

According to the researchers, the only ad TikTok rejected was one claiming that voters had to have received a Covid-19 vaccine in order to vote. Facebook, by contrast, accepted that submission.