Facebook logos are pictured on the screens of a smartphone and a laptop computer in central London on Nov. 21, 2016. (Justin Tallis/Getty Images)
The Russians who posed as Americans last year on social media tried several different disguises — from gun rights activists to Black Lives Matter supporters — all in an attempt to influence voters during the 2016 election. How was all of this done?
That’s what the Senate Intelligence Committee will be asking Facebook, Twitter and Google at a hearing this week. Members of the committee want to know how Russian operatives were able to use the social media platforms to spread propaganda and what the media companies are doing to stop it from happening again.
Some users, like Kimberly Foster, had an inkling last year that trolls were spreading propaganda messages.
Foster is the CEO of “For Harriet,” a digital platform for black people. She has more than 31,000 followers on Twitter, and she tweets a lot. But when it comes to retweets, Foster has learned to be careful.
“I’m really dedicated to making sure that the information that I engage with and the people that I connect with are real,” she said.
A few months ago, Foster started seeing other Twitter influencers retweet a person named “Crystal Johnson.” The account posted all kinds of incendiary content from both the right and the left, mostly about race. And even though the Crystal Johnson profile picture was that of a black woman, Foster thought there was something off about it.
“When I see accounts like Crystal Johnson with an image that is clearly a stock photo and no original content, it really is the hallmark of a fake account,” Foster said.
Foster even tweeted a few months ago that she thought the account was fake. And she was right. Twitter suspended the Crystal Johnson account and 200 others believed to be linked to Russian operatives. And under pressure from members of Congress, Facebook announced it would hand over 3,000 political ads bought by Russian sources that ran during the presidential election campaign.
It’s believed that Russian operatives used topics like race to divide the country.
“The issue of race and racial injustice is at the forefront of so many of our minds right now,” Foster said. “It makes perfect sense to me that the Russians utilized these platforms to incite racial discord.”
One example is an account called Blacktivist on Facebook and Twitter. Organizers of the page, believed to be Russian trolls, presented themselves as Black Lives Matter supporters. Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, published research showing a snapshot of the reach Russian operatives were able to get through social media.
For example, Blacktivist content, according to Albright’s research, was shared more than 103 million times during and after the presidential campaign.
The tweets, however, weren’t just about race. Craig Timberg, a technology reporter with the Washington Post, has been covering this story and found other sophisticated ways Russian operatives used the social media platforms to incite discord.
Timberg gives one example of an anti-Hillary Clinton ad paid for by the Mercer family, who are big Donald Trump supporters.
“The ad ran on television in some battleground states,” Timberg said. “But then it got tweeted out by a Twitter account that was supposedly the Tennessee Republican Party, but it was in fact controlled by Russian trolls.”
And once it got tweeted out by Russian trolls, it was then retweeted by lots of other people, including Michael Flynn, who later became the national security adviser to President Trump.
“Keith Olbermann and Ann Coulter and Nicki Minaj, you name it, a whole host of people across the celebrity and political and journalism worlds fell for this phony account, and that’s just one of thousands upon thousands upon thousands of social media accounts that the Russians created and controlled,” Timberg said.
As part of the Senate Intelligence Committee hearing, Facebook, Twitter and Google will also be asked to explain the steps they’re taking to mitigate foreign interference in the future.
“I am worried that disinformation disrupts democracy,” said Malkia Cyril, founder and executive director of the Center for Media Justice in Oakland. “This is a global affair, and it’s something we should all be afraid (of).”
Cyril and others have been pushing companies like Facebook to hire more humans and rely less on algorithms to assist in weeding out disinformation on their platforms.
“Human beings can see context,” Cyril said. “So Facebook, Twitter and other social media outlets should be hiring folks who are in the editorial process looking at content and making some decisions about what’s fake and what’s real.”
Facebook founder Mark Zuckerberg said in a video about the Russian-bought ads that the social media giant will require more transparency from those who purchase ads on the site. For instance, Facebook will allow users to visit an advertiser’s page and see all the ads that advertiser is running across the network.
As for the general public, Foster said, people also have to be smarter social media consumers.
“I think we absolutely need to be telling people to vet their sources, to actually click links before we share them,” Foster said.