
16 Aug 2022

Global Witness

Brazil: Facebook failing to tackle election disinformation ads ahead of presidential election, says Global Witness

Facebook’s self-proclaimed efforts to safeguard election integrity in Brazil appear to be failing. Our new investigation finds that Facebook appallingly failed to detect election-related disinformation in ads in the run-up to the Brazilian election on 2 October 2022.


This follows a similar pattern we uncovered in Myanmar, Ethiopia, and Kenya concerning Facebook’s inability to detect hate speech in volatile political environments – but what’s different this time is that the ad content contained outright false election information (such as the wrong election date) and information designed to discredit the electoral process, thereby undermining election integrity.

Facebook touts its election integrity efforts, claiming to have “advanced security operations to take down manipulation campaigns and identify emerging threats”, but our findings are a stark reminder of how easy it is for bad actors to circumvent these measures. Given the high-stakes nature of the Brazilian election, Facebook is failing to adequately protect Brazilians from a disinformation nightmare.


A Meta spokesperson said in response to our findings that they “are and have been deeply committed to protecting election integrity in Brazil and around the world”. They said that they have prepared extensively for the upcoming election in Brazil including launching tools to label election-related posts and establishing a direct channel for the Superior Electoral Court to send them potentially harmful content for review. They cited figures for the number of posts they removed in the last election for violating their policies. Their full response is included in the endnote.


Our findings in Myanmar, Ethiopia and Kenya show that Facebook’s content moderation efforts are seriously lacking – now reinforced in Brazil, where the bar for advertising with explicit political content is ostensibly even higher. This follows reports from employees that Zuckerberg is no longer prioritising safeguarding elections, instead focusing on the so-called ‘metaverse’ - Meta’s new frontier of growth.

Our findings also suggest that Facebook’s account authorisation process – a supposedly compulsory measure for anybody wanting to post political or social issue ads – is effectively opt-in, and easily circumvented. This means that Facebook’s own ad library, its “most comprehensive ads transparency surface”, does not give full transparency into who is running ads, who was targeted, how much was spent, and how many impressions the ads received. This information is vital so that researchers, journalists, and policy makers can investigate what’s going on and suggest interventions to help protect democratic systems.


We call on Facebook to:

Urgently increase the content moderation capabilities and integrity systems deployed to mitigate risk before, during and after the upcoming Brazilian election – and ensure that the moderators understand the appropriate cultural context and nuance of Brazilian politics.

Immediately strengthen its ad account verification process to better identify accounts posting content that undermines election integrity.

Properly resource content moderation in all the countries in which they operate around the world, including paying content moderators a fair wage, allowing them to unionise and providing psychological support.

Routinely assess, mitigate and publish the risks that their services pose to people’s human rights and other societal-level harms in all countries in which they operate.

Publish information on what steps they’ve taken in each country and for each language to ensure election integrity.

Include full details of all ads (including intended target audience, actual audience, ad spend, and ad buyer) in their ad library.

Allow verified, independent third-party auditing so that Meta can be held accountable for what it says it is doing.

Publish their pre-election risk assessment for Brazil.

Respond to the 90+ Brazilian civil society organisations’ policy recommendations in their report The Role Of Digital Platforms In Protecting Electoral Integrity In The 2022 Brazilian Election.