
With Mass Social Media Layoffs, Researchers Warn of Rise in Hate Speech


Illuminated keys on a black keyboard spell out "Hate Speech" in a 3D illustration. (Paul Campbell via iStock/Getty Images Plus)

Social media platforms have historically had a hard time neutralizing the threat of hate speech before it leads to real-world violence. Now, researchers at the Network Contagion Research Institute and the Rutgers University Center for Critical Intelligence Studies are warning that mass layoffs, especially at social media platforms like Twitter and Meta, are leaving the door wide open to a growing phenomenon called “cyber-swarms.”

Cyber-swarms, as defined by the institute, are surges of hateful memes, videos and comments on social media directed at certain ethnic and religious groups. They may be coordinated, but the real-world violence that follows may be conducted on impulse by individuals inspired by what they see online.

“A lot of these trust and safety teams don’t have a policy. They just put out fires, and sometimes they don’t even know where a fire is burning,” said Network Contagion Research Institute Director Joel Finkelstein. The institute’s latest report, released this week, looks back at the platforms’ failures in July and August, just a few months before layoffs decimated their trust and safety ranks in the U.S. and abroad.

The report, “Cyber Social Swarming Precedes Real World Riots in Leicester: How Social Media Became a Weapon for Violence,” explains how “ethnic riots in one place can now leak into cyber-swarming, and turn into ethnic violence somewhere else,” Finkelstein said.

The cyber-swarm in July fomented violence between Hindus and Muslims on Twitter, TikTok, YouTube, and Instagram. Real-world incidents surged in the following weeks — primarily in the U.K. but in the U.S. as well, including one tirade* at a Taco Bell in Fremont in August, recorded by the man it was directed at, Krishnan Iyer.


*(Be advised the footage contains highly offensive and triggering language. The Alameda County district attorney’s office filed multiple charges against 37-year-old Singh Tejinder days later.)

Tejinder, who can be heard speaking in Punjabi, spewed a stream of expletive-laced insults at Iyer, calling him a “dirty Hindu,” among other slurs commonly seen on social media networks.

“Even before these cuts (to Silicon Valley companies), Hindus as a community have been facing a regular barrage of hate on online platforms for several years,” wrote Pushpita Prasad of the Coalition of Hindus of North America. “Moderation policies online have not been very sensitive to Hindu concerns and fears.”

The recent attacks on Hindu communities are just one example of how hate speech against ethnic and religious groups can proliferate online if left unchecked.

Another, more recent example: Twitter trolls emboldened by Elon Musk’s $44 billion acquisition prompted a 500% surge in the use of a particularly offensive racial epithet directed at African Americans, according to the NCRI.

The surge began “as soon as Elon Musk walked in the door,” said Finkelstein, who added that the institute reached out to Twitter staff to alert them to the finding. “You’d think that’s something everybody would notice. They hadn’t noticed it. They noticed it because LeBron James took our tweet and literally singled out Elon Musk.”

Twitter no longer has a press relations department to respond to reporters’ questions. A Meta spokesperson declined to say how many of the layoffs affected the company’s trust and safety teams in the U.S. and abroad, but did write, “Our integrity efforts remain a top priority, which is why we continue to have over 40,000 people devoted to safety and security efforts and we will continue to invest in this work.”

Electronic Frontier Foundation Legal Director Corynne McSherry says the foundation is still trying to determine exactly which teams at Meta were hit hardest by the layoffs.

As for Twitter, McSherry said, “What they really needed to do was probably double their trust and safety teams, for a start. To actually do it well requires humans making nuanced decisions. They didn’t have enough staff in the first place and they certainly don’t now. I worry a lot that they’re thinking — at both companies — ‘We’ll just automate more.’ What we’ve seen is that just doesn’t work.”

Finkelstein says social media companies need to refocus on preventing violence, instead of just trying to avoid blowback from the press and from politicians. “When trust and safety teams are like, ‘Let’s put out this political fire, ’cause we can get in trouble,’ you will have a garden manicured by political concerns. And the harms that are really happening, no one really cares about.”

With mass layoffs gutting the very trust and safety teams that are supposed to watch out for threats, what can vulnerable communities expect from Silicon Valley now? Not much, said Denver Riggleman, an institute advisor, former Republican congressman from Virginia and technical lead on the Jan. 6 committee.

Riggleman calls for federally funded public-private partnerships to scale up predictive modeling of the kind the institute used to warn of the summer’s cyber-swarm.

“Almost a social media warning dashboard. This isn’t something regulatory, but it’s informational. It’s situational awareness,” Riggleman said.
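
Neither Riggleman nor the institute has published a design for such a dashboard, so the sketch below is purely illustrative: a minimal Python example of the kind of informational spike alert he describes, flagging days when mentions of a tracked term jump far above their recent baseline. The function name `swarm_alerts`, the window and threshold values, and the `series` data are all hypothetical assumptions for this sketch, not NCRI's actual methodology.

```python
from statistics import mean, stdev

def swarm_alerts(daily_counts, window=14, threshold=3.0):
    """Flag days whose count spikes far above the trailing baseline.

    daily_counts: list of (date_label, count) pairs, oldest first.
    window: number of preceding days used as the baseline.
    threshold: standard deviations above baseline that trigger an alert.
    A toy z-score detector for illustration only, not NCRI's model.
    """
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = [count for _, count in daily_counts[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        label, count = daily_counts[i]
        if sigma > 0 and (count - mu) / sigma >= threshold:
            alerts.append((label, count, round((count - mu) / sigma, 1)))
    return alerts

# Fabricated daily counts of a tracked slur; the last day is a swarm-like spike.
series = [(f"day {d}", c) for d, c in enumerate(
    [40, 38, 45, 41, 39, 44, 42, 40, 43, 41, 39, 44, 42, 40, 250])]
print(swarm_alerts(series))  # flags ('day 14', 250, ...) as an alert
```

The arithmetic is the easy part; any real warning system would live or die on its data feeds and on human review of what it flags, which is exactly the nuanced, people-driven work McSherry says the platforms are now short-staffed to do.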

