

Article

6 September 2022

Author:
Jan Rydzak & Leandro Ucciferri, Ranking Digital Rights

RDR affirms that Meta can do better if it tries, and responds to Meta's claims about the standards and findings of its Big Tech Scorecard

"Meta Can Do Better, If They Try", 6 September 2022

Ranking Digital Rights wishes to address Meta’s response to the letter campaign led by Access Now, in coordination with RDR. This year, Access Now called on Meta to be more transparent about government censorship demands, particularly those targeting WhatsApp and Facebook Messenger. While several companies issued responses, Meta’s was unique in raising questions about RDR’s standards and findings.

Meta’s response made a number of claims that we have decided to address directly below.

  1. Meta’s claim: RDR’s standards are unattainable.
    What our data says: Meta notes that “it’s important that there be ambitious goals…but also that at least some of these be attainable.” Yet all of the goals set forth in RDR’s indicators are attainable; they simply require that corporate leadership dedicate time and willpower to fulfilling them.
  2. Meta’s claim: The Big Tech Scorecard doesn’t give points for publishing the results of human rights due diligence processes.
    What our data says: Meta claims that the Scorecard does not consider “criteria related to communicating insights and actions from human rights due diligence to rights holders.” It is true that our human rights impact assessment (HRIA) indicators focus on procedural transparency rather than simply the publication of results. We do recognize that Meta has coordinated with reputable third parties such as BSR and Article One Advisors to publish several abbreviated country-level assessments as well as to guide its work on expanding encryption. However, it has yet to demonstrate the same degree of transparency on issues that are fundamental to how it operates, including targeted advertising and algorithms. In addition, its country-level assessments have notable gaps. Human rights groups have raised serious questions about the lack of information Meta shared from its India HRIA in its inaugural human rights report. Societies where Meta has a powerful and rapidly growing presence deserve more than a cursory view of the company’s impact, especially when Meta is being directly linked to such explicit human rights harms.
  3. Meta’s claim: RDR should have given Meta a higher score for its purported commitment to human rights standards in the development of AI.
    What our data says: Meta points to its Corporate Human Rights Policy, arguing that it “clearly specifies how human rights principles guide Meta’s artificial intelligence (AI) research and development” and questioning why our Scorecard “indicate[s] [Meta] do[es] not commit to human rights standards in AI development.” The problem is: Meta’s human rights “commitment” on AI falls short of actually committing. Our findings acknowledge an implied commitment to these standards (which equates to partial credit).
  4. Meta’s claim: RDR unfairly expects “private messaging” services to meet the same transparency standards as other services.
    What our data says: By inquiring about the factors RDR considers when “requir[ing] private messaging services, including encrypted platforms, to conform to the same transparency criteria as social media platforms,” Meta seems to be implying that we do not understand how their products work or that our indicators are not fit for purpose with respect to so-called “private messaging” services like Messenger and WhatsApp.
    To start with, Facebook Messenger, the more popular of the two apps in the U.S., is not even an encrypted communications channel (at least not yet). Meanwhile, many users are not fully aware of how “private” (or not) a messaging service is when they sign up for it. There is abundant evidence that Meta monitors Messenger conversations, ostensibly for violative content, but the precise mix of human and automated review involved remains a mystery. As efforts to strip people of their reproductive rights continue to grow, Meta has a responsibility to shine a light on government demands for users’ messages and information. Law enforcement in U.S. states where abortion is now illegal have successfully obtained Messenger chats that eventually led to criminal charges. Finally, even for encrypted platforms like WhatsApp, our standards call for companies to be as transparent as possible regarding automated filtering, account restrictions, and other enforcement actions. Transparency on such basic protocols shouldn’t be too big of an ask.

Meta also notes its plan to build out its disclosures on government demands for content restrictions. This is an encouraging sign. In particular, Meta announced that it plans to publish data on content that governments have flagged as violating the company’s Community Standards—a tactic governments often use to strong-arm companies into compliance without due process. It also committed to start notifying users when content is taken down for allegedly violating a law. Our indicators have long called for companies to enact these two measures. Still, much work remains, not all of which is reflected in Meta’s plans.

The concerns Meta has raised pertain only to how our standards apply, in this case, to transparency on government censorship demands. This means that our most fundamental concern about Meta’s human rights record remains unaddressed: The company’s business model still relies almost entirely on targeted advertising. Meta does not report on the global human rights impacts of its targeting systems and publishes no data on how it enforces its advertising policies. These omissions are unjustifiable.

Without addressing the problems that lie at the root of many of its human rights impacts or recognizing the need for systemic change, Meta will continue to “nibble around the edges,” as shareholders have argued in recent calls to action.
