


6 Feb 2023

Michael H. Keller & Kate Conger, The New York Times

Twitter continues to fail to curb child sexual abuse content after eliminating experts & critical detection software

"Musk Pledged to Cleanse Twitter of Child Abuse Content. It’s Been Rough Going.", 6 February 2023

Over 120,000 views of a video showing a boy being sexually assaulted. A recommendation engine suggesting that a user follow content related to exploited children. Users continually posting abusive material, delays in taking it down when it is detected and friction with organizations that police it.

All since Elon Musk declared that “removing child exploitation is priority #1” in a tweet in late November.

Under Mr. Musk’s ownership, Twitter’s head of safety, Ella Irwin, said she had been moving rapidly to combat child sexual abuse material...

But a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that the authorities consider the easiest to detect and eliminate.

After Mr. Musk took the reins..., Twitter largely eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by the authorities... Twitter also stopped paying for some detection software considered key to its efforts.

All the while, people on dark-web forums discuss how Twitter remains a platform where they can easily find the material while avoiding detection...

In a Twitter audio chat with Ms. Irwin in early December, an independent researcher working with Twitter said illegal content had been publicly available on the platform for years and garnered millions of views. But Ms. Irwin and others at Twitter said their efforts under Mr. Musk were paying off.

The effort accelerated in January, Twitter said, when it suspended 404,000 accounts.

Ms. Irwin, in an interview, said the bulk of suspensions involved accounts that engaged with the material or were claiming to sell or distribute it, rather than those that posted it. She did not dispute that child sexual abuse content remains openly available on the platform...


She added that Twitter was hiring employees and deploying “new mechanisms” to fight the problem.

Wired, NBC and others have detailed Twitter’s ongoing struggles with child abuse imagery under Mr. Musk. ...Senator Richard J. Durbin, Democrat of Illinois, asked the Justice Department to review Twitter’s record in addressing the problem.

To assess the company’s claims of progress, The Times created an individual Twitter account and wrote an automated computer program that could scour the platform for the content without displaying the actual images, which are illegal to view. The material wasn’t difficult to find. In fact, Twitter helped promote it through its recommendation algorithm — a feature that suggests accounts to follow based on user activity.

In the first few hours of searching, the computer program found a number of images previously identified as abusive — and accounts offering to sell more. The Times flagged the posts without viewing any images, sending the web addresses to services run by Microsoft and the Canadian center.
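The services mentioned here typically match content against databases of previously identified material by comparing hashes, so a post can be flagged without anyone viewing the image. As a rough illustration only: the sketch below uses exact cryptographic hashing (real systems such as Microsoft's PhotoDNA use perceptual hashes that tolerate resizing and re-encoding), and every name and value in it is hypothetical placeholder data, not anything from the article.

```python
import hashlib

# Hypothetical list of known-bad hashes. The values here are
# placeholders computed from harmless example strings.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-file").hexdigest(),
}

def flag_if_known(content: bytes) -> bool:
    """Return True if the content's hash appears in the known list.

    The raw bytes are hashed and immediately discarded; nothing is
    displayed or stored, mirroring the 'flag without viewing' approach
    described above.
    """
    return hashlib.sha256(content).hexdigest() in KNOWN_HASHES

print(flag_if_known(b"example-known-file"))  # matches the placeholder list
print(flag_if_known(b"unrelated-content"))   # no match
```

Exact hashing only catches byte-identical copies, which is why perceptual-hashing systems exist; the principle of matching against a curated database without rendering the content is the same.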

In all, the computer program found imagery of 10 victims appearing over 150 times across multiple accounts, most recently on Thursday. The accompanying tweets often advertised child rape videos and included links to encrypted platforms.

Alex Stamos, the director of the Stanford Internet Observatory and the former top security executive at Facebook, found the results alarming. “Considering the focus Musk has put on child safety, it is surprising they are not doing the basics,” he said.

Separately, to confirm The Times’s findings, the Canadian center ran a test to determine how often one video series involving known victims appeared on Twitter. Analysts found 31 different videos shared by more than 40 accounts, some of which were retweeted and liked thousands of times.

The center also did a broader scan against the most explicit videos in its database. There were more than 260 hits, with more than 174,000 likes and 63,000 retweets.

“The volume we’re able to find with a minimal amount of effort is quite significant,” said Lloyd Richardson, the technology director at the Canadian center. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”

In 2019, The Times reported that many tech companies had serious gaps in policing child exploitation on their platforms. This past December, Ms. Inman Grant, the Australian online safety official, conducted an audit that found many of the same problems remained at a sampling of tech companies.

The Australian review did not include Twitter, but some of the platform’s difficulties are similar to those of other tech companies and predate Mr. Musk’s arrival, according to multiple current and former employees.

Twitter, ... started using a more comprehensive tool to scan for videos of child sexual abuse last fall, they said, and the engineering team dedicated to finding illegal photos and videos was formed just 10 months earlier. In addition, the company’s trust and safety teams have been perennially understaffed, though the company continued expanding them even amid a broad hiring freeze that began last April, four former employees said.

Over the years, the company did build internal tools to find and remove some images, and the national center often lauded the company for the thoroughness of its reports.

The platform in recent months has also experienced problems with its abuse reporting system, which allows users to notify the company when they encounter child exploitation material.

The Times used its research account to report multiple profiles that were claiming to sell or trade the content in December and January. Many of the accounts remained active and even appeared as recommendations to follow on The Times’s own account. The company said it would need more time to unravel why such recommendations would appear.

To find the material, Twitter relies on software created by an anti-trafficking organization called Thorn. Twitter has not paid the organization since Mr. Musk took over... Twitter has also stopped working with Thorn to improve the technology.

Ms. Irwin declined to comment on Twitter’s business with specific vendors.

Twitter’s relationship with the National Center for Missing and Exploited Children has also suffered, according to people who work there.

After the transition to Mr. Musk’s ownership, Twitter initially reacted more slowly to the center’s notifications of sexual abuse content, according to data from the center, a delay of great importance to abuse survivors, who are revictimized with every new post.

Late last year, the company’s response time was more than double what it had been during the same period a year earlier under the prior ownership, even though the center sent it fewer alerts.

The Canadian center, which serves the same function in that country, said it had seen delays as long as a week.

In addition, Twitter and the U.S. national center seem to disagree about Twitter’s obligation to report accounts that claim to sell illegal material without directly posting it.

The company has not reported to the national center the hundreds of thousands of accounts it has suspended because the rules require that they “have high confidence that the person is knowingly transmitting” the illegal imagery and those accounts did not meet that threshold, Ms. Irwin said.

Mr. Shehan of the national center disputed that interpretation of the rules, noting that tech companies are also legally required to report users even if they only claim to sell or solicit the material. So far, the national center’s data show, Twitter has made about 8,000 reports monthly, a small fraction of the accounts it has suspended.

Ms. Inman Grant, the Australian regulator, said she had been unable to communicate with local representatives of the company because her agency’s contacts in Australia had quit or been fired since Mr. Musk took over. She feared that the staff reductions could lead to more trafficking in exploitative imagery.

Ms. Irwin said the company continued to be in touch with the Australian agency, and more generally she expressed confidence that Twitter was “getting a lot better” while acknowledging the challenges ahead.