

Report

28 May 2025

By Equidem (UK)

Scroll. Click. Suffer: The Hidden Human Cost of Content Moderation and Data Labelling

Launched on May 28, 2025, ‘Scroll. Click. Suffer.’ is Equidem’s in-depth investigation into the hidden workforce behind AI and social media: data labellers and content moderators. Based on interviews with 113 workers across Colombia, Kenya, and the Philippines, the report exposes the extreme occupational, psychological, sexual, and economic harms faced by those moderating violent content and training AI models for platforms such as Meta, TikTok, and ChatGPT.

From PTSD and substance dependency to union-busting and exploitative contracts, it reveals an industry architecture that normalises harm and conceals abuse. Tech giants outsource risk down opaque, informal supply chains, while workers are silenced by NDAs, punished for speaking out, and denied basic protections.

Grounded in worker testimonies and legal analysis, the report outlines a clear roadmap for reform and calls on lead firms, investors, governments, and the ILO to centre the rights of digital workers.

[Equidem shared the report’s findings with ByteDance, Meta, OpenAI, and Remotasks. ByteDance did not respond to Equidem. Meta said it takes “the support of content reviewers seriously” and that it “requires all the companies” it works with to “provide 24/7 on-site support with trained practitioners” and other safety measures. OpenAI said it conducted an investigation and found no evidence to support the claims. Remotasks also outlined safety measures in place, including a “24/7 anonymous hotline”, and noted “ongoing improvements” the firm says it has undertaken. The responses can be read in more detail in the report.]
