Are Facebook, Twitter Doing Enough to Stop Misinformation and Fraud?

Facebook and Twitter were hauled before Congress to explain how their platforms were weaponized in the 2016 U.S. election by Russia-linked trolls to try to sway the results.

Since then, they've pledged to clean up their platforms. Leading up to the pivotal Nov. 6 midterm elections in the U.S., Facebook and Twitter have publicized their efforts to eliminate bogus accounts and misinformation posted by nefarious actors and to provide better transparency about political ad spending. The internet companies are also trying to burnish their images as good corporate citizens by promoting get-out-the-vote campaigns.

It's a far cry from two years ago, when Facebook chief Mark Zuckerberg argued after Donald Trump's presidential victory that “the idea that fake news on Facebook… influenced the election in any way is a pretty crazy idea.”

To be sure, social-media firms didn't invent the dark art of political misinformation. Led by President Trump, “The Republican Party today has become a vast repository of conspiracy theories, fake news, false accusations and paranoid fantasies,” Fareed Zakaria wrote in a Washington Post column last week.

But critics say social networks, given their vast reach and the ease with which falsehoods and hoaxes can still propagate across their systems, aren't doing enough — and that there are still big holes in their tools and strategies for eliminating fraud and misinformation aimed at swaying voters.

Vice News and Business Insider last week separately published results of tests they ran to place bogus political ads on Facebook. The social-media giant approved the fake ads as submitted, including Vice News' ads on behalf of all 100 sitting U.S. Senators (which it submitted, but did not actually purchase) and BI's U.K. ads that purported to be from Cambridge Analytica, the now-defunct political consultancy that had improperly obtained data on millions of Facebook users.

On Friday (Nov. 2), Sens. Amy Klobuchar (D-Minn.) and Mark Warner (D-Va.) released a letter addressed to Zuckerberg saying in part, “We write to express concern about significant apparent loopholes in Facebook's ads transparency tool and to urge you to promptly address this issue.”

Facebook said the fake political ads from Vice News and BI violated its policies, but the company couldn't explain how the ruses successfully evaded detection.

“At this point, it should be clear that asking Facebook to do better is about as useful as asking water to flow uphill,” said Sarah Miller, co-chair of Freedom From Facebook, a coalition of progressive advocacy groups including Open Markets Institute, Demand Progress, MoveOn and the Communications Workers of America. “The FTC has the authority to rein in Facebook today, if it wanted to — and it's a scandal that they have not yet taken meaningful action to do so.”

Facebook this past May launched its political ad-labeling initiative, which is intended to provide clear disclosures about who's purchased an ad on Facebook and Instagram. It set up an archive (at facebook.com/politicalcontentads) that lets anyone search ads with political or issue-oriented content that have run in the U.S. on its platforms; the archive retains ads for up to seven years.

The company said it has taken down dozens of accounts, pages and groups linked to Iran and Russia that were targeting U.S. users with divisive political messages, as part of its efforts to demonstrate it's doing all it can to stop election meddling.

According to Facebook, the company itself has been the single-biggest spender on political advertising on its own services — about $13.8 million — from May through Oct. 27. That was followed by Beto O'Rourke, the Texas Democrat running against Sen. Ted Cruz, who spent $6.35 million over the same time frame; and Donald Trump's campaign organizations, which spent $5.2 million on Facebook ads.

Facebook says it has set up a war room in Menlo Park, Calif., staffed with two dozen employees, to look for and stop election interference in “real time.” That “builds on almost two years of hard work and significant investments, in both people and technology, to improve security on Facebook, including during elections,” Samidh Chakrabarti, director of product management, civic engagement, wrote in an Oct. 18 blog post. “We're committed to the challenge.”

Twitter, in a Nov. 1 blog post, highlighted its steps to “safeguard the integrity of conversations surrounding the U.S. elections.” It cited ongoing efforts to eliminate spammy and fake accounts as well as its release of research about all accounts and related content associated with potential information operations it has found on its service since 2016. On Friday, Twitter said that over the previous two months it had deleted more than 10,000 bot accounts that were falsely registered to look like they were from Democrats and that had been posting messages discouraging people from voting on Nov. 6.

But the swamp of misinformation online has proven extremely hard to drain. After Twitter last week launched a dedicated page with info related to the U.S. midterm elections, BuzzFeed News found that it initially aggregated tweets from well-known conspiracy theorists.

Facebook and Twitter deserve credit for at least trying to rectify the problems, said Nihal Krishan, a political journalist currently on staff at the New Republic.

“Overall, the intentions are very much in the right place by Facebook and Twitter, to try to disclose more and have more transparency in the past few months,” he said.

But despite their efforts, it remains early days. “It's still kind of amateur hour,” said Krishan, adding that it will probably be a year before the platforms' ad-transparency reporting features are mature enough to be useful to ordinary people. “There are still plenty of flaws in their transparency tools and new problems are being found constantly.”
