
Social Media Companies Get 'Big Fat F' in Moderating Israel-Hamas War Content, Say Hate-Speech Watchers


Foaz Kayed (left) and Laila Al rally alongside hundreds of demonstrators outside the Federal Building in San Francisco on Oct. 19, 2023, calling for the end of United States military aid to Israel. (Beth LaBerge/KQED)

A growing number of academics and civil-discourse advocates are sounding the alarm over a surge in hate speech and disinformation across all major social media platforms as the Israel-Hamas war escalates.

Consider the most recent dramatic example: the hours following the Oct. 17 blast at a hospital in Gaza that killed scores of civilians. As journalists and respected investigative groups tried to make sense of the incident, social media exploded with unfounded accusations from Hamas and its supporters that Israel had fired a missile that killed close to 500 people. Those same accounts then cast doubt on subsequent evidence suggesting that the hospital was most likely hit by an errant rocket fired by Palestinian militants and that the death toll — while still strikingly high — was significantly lower than initially reported.

While incontrovertible confirmation of who perpetrated this particular tragedy may not come for some time — if ever — it’s clear that the chaotic online discourse around it further inflamed tensions.

Eroding trust

“It’s not just that there are fraudulent pieces of information out there. When the authentic pieces of information come out, we don’t know if we should trust it,” said Hany Farid, a UC Berkeley School of Information professor specializing in detecting manipulated media and deep fakes. “And that makes reasoning about what is happening really difficult. Nobody fundamentally knows what’s going on anymore, and that’s insane.”

Over the last year, major social media platforms have gutted their content moderation teams, a shift that many say is in part responsible for the proliferation of photos and videos of this war that turn out to be recycled from other conflicts — or are sometimes even clipped from video games.

“Let’s start with Twitter. (I refuse to call it X.) They just get a big fat F,” Farid said. “It is clear that Twitter has become more of a hellhole than it was pre-Musk, and it continues to decline.”

Since Elon Musk bought Twitter last year — and then changed its name to “X” — many observers say the social media platform, long influential among journalists, has increasingly become a de facto rebroadcaster of unfiltered war propaganda posted on even more loosely moderated, conspiracy-prone platforms like Telegram.

But if X gets an “F” from hate-speech watchers during this latest conflict, Meta, which owns Facebook, Instagram and WhatsApp and has considerably greater reach, gets something just north of an F, said Callum Hood, head of research for the Center for Countering Digital Hate.

“If I know that one of the most popular posts on Facebook — according to data that I know they have access to, as well — is footage of an execution, with no warnings on it, at all, I have very serious concerns about what they’re doing,” he said.

In a statement to KQED, a Meta spokesperson pointed to a company blog post about its special operations center staffed with experts, including fluent Hebrew and Arabic speakers, “working around the clock to monitor our platforms while protecting people’s ability to use our apps to shed light on important developments happening on the ground.”

‘These are not new problems’

Content moderation is no easy task, especially when individuals with strong opinions post or repost factually inaccurate material, said Jillian York, director for international freedom of expression with the San Francisco-based Electronic Frontier Foundation. Last week, her group posted an open letter calling on social media companies to better handle misinformation, particularly during major international conflicts.


“These are not new problems,” York said. “We want platforms to ensure that their content moderation practices are transparent and consistent. We want them to sufficiently resource in every location in which they operate.”

Every researcher KQED spoke to also lamented the lack of federal regulation of social media platforms. They noted how, in contrast, the European Union’s Digital Services Act went into effect a couple of months ago, requiring large platforms to employ robust procedures to tackle systemic risks and abuse.

In a blog post, Meta acknowledged growing concerns among users that Facebook and Instagram appeared to be algorithmically curtailing the reach of certain posts, a technique known as “shadow banning.” The company characterized those incidents as “bugs,” which it says have since been fixed.

“This bug affected accounts equally around the globe – not only people trying to post about what’s happening in Israel and Gaza – and it had nothing to do with the subject matter of the content,” Meta said in its blog post.


But researchers say their ability to monitor what’s actually gaining traction on Meta’s platforms through the company’s application programming interfaces, or APIs, has been limited. CrowdTangle, an analytics tool researchers have found especially useful for monitoring content, is one they say Meta bought but has since failed to maintain.

“Facebook and Instagram is harder to study than ever. The truth is, I don’t think any organization has a very good grip on how disinformational hate is spreading on Facebook or Instagram right now because every possible tool that we once had for investigating it, they’re unusable,” Hood said. “Overall, maybe there’s less on these platforms, but we can’t actually say.”

According to Hood and other researchers, a similar lack of transparency makes it impossible to independently assess the efforts of TikTok, which recently announced it had launched a command center bringing together “key members” of its “40,000-strong global team of safety professionals” and was working to remove posts that support or incite violence.

Hood and Farid, among many other observers, say these recent efforts are largely ineffective because they are overlaid on top of an ad-based business model designed to keep users on the platforms by promoting engaging content, regardless of its veracity.

‘Stop getting your information from social media’

“People should be angry that when they go online, they are being lied to. They are being manipulated by other people, by state-sponsored actors, and by the very platforms, and we are no longer informed citizens,” Farid said. “We’re not arguing about how to do something or if to do something. We’re arguing about 1 + 1 = 2.”

In contrast, Farid adds, most news organizations have structural incentives to get the facts right, even though a large proportion of Americans don’t trust them either: journalists are concerned with maintaining their credibility with news consumers, and they competitively scrutinize rivals’ coverage to probe for weaknesses.

“When things are unfolding as fast as they are, stop getting your information from social media,” he said. “I’m not saying that The Washington Post and The New York Times and the San Francisco Chronicle always get it right. But at least they’re trying to get it right. And you can’t say that about social media.”

Farid says he finds hope for the future in emerging content-authentication protocols and technologies. He points to new efforts like the Coalition for Content Provenance and Authenticity (C2PA), an alliance of Adobe, Intel, Microsoft and other major tech companies developing technical standards for certifying the provenance of media content.

“So if I am in Gaza, and I film the bombing of a hospital, I can now verify when that was taken, who took it, where it was taken, and what was recorded,” Farid said. “That technology, we know how to do it. It just has to get deployed.”
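To make that idea concrete, here is a minimal sketch of the core mechanism behind such provenance standards: bind a media file’s cryptographic hash and its capture metadata together under a digital signature, so that any later edit to the file breaks verification. This is not the actual C2PA specification; the field names, metadata and signing flow below are illustrative assumptions, built on the widely used Python `cryptography` package.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(media_bytes, metadata, private_key):
    """Hash the media file, attach capture metadata, and sign both together."""
    payload = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # e.g. capture time, device, location (illustrative)
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(canonical).hex()}


def verify_manifest(media_bytes, manifest, public_key):
    """Return True only if the signature checks out and the file is unchanged."""
    payload = manifest["payload"]
    if hashlib.sha256(media_bytes).hexdigest() != payload["media_sha256"]:
        return False  # the file was altered after signing
    canonical = json.dumps(payload, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), canonical)
        return True
    except InvalidSignature:
        return False


# Illustrative use: a camera app signs a clip at capture time; anyone holding
# the matching public key can later check when and by whom it was recorded.
key = Ed25519PrivateKey.generate()
clip = b"...raw video bytes..."
manifest = make_manifest(clip, {"taken_at": "2023-10-17T19:00:00Z"}, key)
print(verify_manifest(clip, manifest, key.public_key()))                # True
print(verify_manifest(clip + b"tampered", manifest, key.public_key()))  # False
```

In C2PA’s actual design, signed manifests travel with the file and can chain through successive edits; the sketch above captures only the sign-and-verify core that would make Farid’s “who, when, where” claims independently checkable.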

