New York (CNN Business) —
Facebook and TikTok failed to block ads containing “blatant” misinformation about when and how to vote in the US midterms, as well as about the integrity of the voting process, according to a new report from the human rights watchdog Global Witness and New York University’s Cybersecurity for Democracy (C4D) team.
In one experiment, researchers submitted 20 advertisements with inaccurate claims to Facebook, TikTok and YouTube. The ads targeted battleground states such as Arizona and Georgia. While YouTube was able to detect and reject each test submission and suspend the channel used to post them, the other two platforms did significantly worse, according to the report.
TikTok approved 90% of the ads containing blatantly false or misleading information, the researchers found. Facebook, meanwhile, approved a “significant number,” according to the report, though still fewer than TikTok.
The ads, submitted in English and Spanish, falsely stated that voting days would be extended and that social media accounts could be used as a means of voter verification. They also contained claims designed to discourage turnout, such as assertions that election results could be hacked or that the outcome had been decided in advance.
The researchers deleted the ads once they had gone through the approval process, so none of the ads containing misinformation were actually shown to users.
“YouTube’s performance in our experiment demonstrates that it is not impossible to detect harmful election misinformation,” said Laura Edelson, co-director of NYU’s C4D team, in a statement accompanying the report. “But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad election information before it reaches voters.”
In response to the report, a spokesperson for Facebook parent Meta said the tests “were based on a very small sample of ads and are not representative given the number of political ads we review daily across the world.” The spokesperson added, “Our ad review process involves multiple layers of analysis and detection, both before and after an ad goes live.”
A TikTok spokesperson said the platform “is a place for authentic and entertaining content, which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics, and other experts, which helps us continually strengthen our processes and policies.”
Google said it had “developed extensive measures to address misinformation on our platforms, including misrepresentations about elections and voting procedures.” The company added, “We know how important it is to protect our users from this type of abuse – especially ahead of major elections like those in the United States and Brazil – and we continue to invest in and improve our enforcement systems to better detect and remove this content.”
Although limited in scope, the experiment could renew concerns about the steps some of the largest social platforms have taken to combat not only misinformation about candidates and issues, but also seemingly clear-cut misinformation about the voting process itself, with only a few weeks to go until the midterms.
TikTok, whose influence in US politics, and the scrutiny it faces, have grown in recent election cycles, launched an Elections Hub in August to “connect people who interact with election content with authoritative information,” including guidance on where and how to vote, and added labels to clearly identify content related to the midterm elections, according to a company blog post.
Last month, TikTok took additional steps to protect the integrity of political content ahead of the midterm elections. The platform began requiring “mandatory verification” for US-based political accounts and implemented a blanket ban on all political fundraising.
“As we’ve stated before, we want to continue to develop policies that foster and promote a positive environment that brings people together, not divides them,” said Blake Chandlee, president of Global Business Solutions at TikTok, in a blog post at the time. “We currently do this by working to keep harmful misinformation off the platform, banning political advertising, and connecting our community with authoritative election information.”
Meta said in September that its midterm plans would include removing misrepresentations about who can vote and how, as well as calls for violence linked to an election. But Meta stopped short of banning claims of rigged or fraudulent elections, and the company told The Washington Post that these types of claims would not be removed from content involving the 2020 election. Going forward, however, Meta has banned US ads that “question the legitimacy of an upcoming or ongoing election,” including the midterms, according to company policy.
Google also took action in September to guard against election misinformation, elevating trustworthy information and displaying it more prominently across services including Search and YouTube.
Large social media companies typically rely on a mix of artificial intelligence systems and human moderators to vet the vast volume of posts on their platforms. But even with similar approaches and goals, the study is a reminder that platforms can differ wildly in how they enforce their content policies.
According to the researchers, the only ad they submitted that TikTok rejected claimed that voters had to have received a Covid-19 vaccine in order to vote. Facebook, by contrast, accepted that same submission.