X Corp & TikTok allegedly used to spread disinformation amid the Israel-Hamas conflict
"Israel-Hamas war and the impact of online disinformation", 13 October 2023
False and misleading information has surged online since the militant Islamist group Hamas launched its surprise attack on Israel, manipulating world opinion, fomenting local confusion and bolstering calls for retribution, experts say.
Israel has since rained down retaliatory strikes on the Palestinian enclave of Gaza, leaving 180,000 homeless and 2.3 million without electricity or water.
At least 1,200 Israelis and 1,200 Palestinians have been killed in the conflict, according to reports.
Rights groups and researchers have warned against social media users sharing misleading or baseless claims, including miscaptioned imagery or altered documents, in an effort to shape public perception.
European Union industry chief Thierry Breton this week urged social media leaders Elon Musk and Mark Zuckerberg to tackle the spread of disinformation on their respective platforms - X, Facebook, and Instagram - to comply with the EU's new online content rules under the Digital Services Act.
What disinformation is spreading?
There have been four main narratives that have spread across social media, according to Jack Brewster, an editor for news rating website and misinformation tracker NewsGuard...
The Arab Center for Social Media Advancement, a non-profit known as 7amleh, also tracked inaccurate accounts of Jewish babies being held captive in Gaza, as well as unverified claims of sexual abuse.
For families in the Middle East, disinformation can have a personal toll.
What has helped fuel disinformation?
Across social media, dis- and misinformation have been spread about the violence in an echo of the fake news unleashed in the early stages of the Russia-Ukraine war, Brewster said.
The most notable change in the social media space is how X, formerly Twitter, is being used to spread disinformation, tech and media experts said.
Other social media platforms, such as TikTok, have been used to share out-of-context videos.
TikTok did not provide comment when contacted by Context.
X directed Context to statements by CEO Linda Yaccarino saying the company had "redistributed resources and refocused internal teams ... to address this rapidly evolving situation."
Theodora Skeadas, a former public policy staffer at Twitter who worked on content moderation, said that staffing cuts had significantly undermined the platform's capacity to tackle the deluge of doctored posts and misleading videos and images.
How are platforms tackling the problem?
X has said that more than 500 unique Community Notes, a feature that lets users add context to potentially misleading content, have been posted about the conflict.
But Skeadas said community notes "can't keep up with the volume of posts during a crisis".
YouTube has said that graphic content may be allowed on the platform if it has sufficient news value, but that it is removing videos that violate its rules.
Snap says it is monitoring for misinformation and incitement to violence.
Meta, which owns Instagram and Facebook, said a team of experts, including Hebrew and Arabic speakers, was monitoring the "rapidly evolving situation in real-time".
What are the real-world consequences?
The main aim of false narratives is to manipulate public opinion and justify collective punishment, Nadim Nashif, executive director of 7amleh, told Context.
"These phenomena have a considerable impact on ... access to information, something quite worrying in a context in which Palestinian narratives are censored and/or unable to make it to the online realm," he said.
This can lead to further calls for violence and to actual harm, as well as obscuring human rights violations and preventing justice from being served, he said.