

Report

28 May 2025

Author:
Equidem (UK)

Scroll. Click. Suffer: The Hidden Human Cost of Content Moderation and Data Labelling

Launched on May 28, 2025, ‘Scroll. Click. Suffer.’ is Equidem’s in-depth investigation into the hidden workforce behind AI and social media: data labellers and content moderators. Based on interviews with 113 workers across Colombia, Kenya, and the Philippines, the report exposes the extreme occupational, psychological, sexual, and economic harms faced by those moderating violent content and training AI models for platforms such as Meta, TikTok, and ChatGPT.

From PTSD and substance dependency to union-busting and exploitative contracts, it reveals an industry architecture that normalises harm and conceals abuse. Tech giants outsource risk down opaque, informal supply chains, while workers are silenced by NDAs, punished for speaking out, and denied basic protections.

Grounded in worker testimonies and legal analysis, the report outlines a clear roadmap for reform, and calls on lead firms, investors, governments, and the ILO to centre the rights of digital workers.

[Equidem shared the report's findings with ByteDance, Meta, OpenAI, and Remotasks. ByteDance did not respond to Equidem. Meta said it takes “the support of content reviewers seriously” and that it “requires all the companies” it works with to “provide 24/7 on-site support with trained practitioners” and other safety measures. OpenAI said it conducted an investigation and found no evidence to support the claims. Remotasks also outlined safety measures in place, including a “24/7 anonymous hotline”, and noted “ongoing improvements” it says the firm has undertaken. The responses can be read in more detail in the report.]
