Washington is clashing with Silicon Valley once again.
U.S. lawmakers investigating Russia’s interference in the 2016 presidential election have focused their attention on some of the world’s most influential internet companies, including Facebook, Google and Twitter, where Kremlin-backed agents sought to spread misinformation and provoke political discord.
These tech giants are now under fire for failing to stop — or even realize — the ways their services had been used to these ends. Now they’re scrambling to update their policies to ensure that Russia, or any other foreign government, can’t weaponize the web again.
So how, exactly, did this even happen? Here’s a rundown:
Who’s investigating, and why?
For one thing, there isn’t just one investigation into Russia’s interference in the 2016 election.
The primary probe has unfolded at the Justice Department under the watch of Robert Mueller. A former FBI director, Mueller began serving as special counsel this May at the request of Deputy Attorney General Rod Rosenstein — an appointment that came after Jeff Sessions, the president’s attorney general, recused himself on matters relating to the Russia investigation.
Mueller’s mandate is broad: It focuses on determining to what extent, if any, Russia interfered in the 2016 election, from the Kremlin’s contacts with key Trump officials to the ways in which Russian forces may have spread misinformation on social media.
As that official inquiry proceeds, though, Congress is forging ahead with its own investigations. Leading the charge is the Senate Intelligence Committee, which earlier this year heard testimony from the likes of ousted FBI Director James Comey. The panel is helmed by Sen. Richard Burr, a Republican from North Carolina, and Sen. Mark Warner, a Democrat from Virginia.
Their counterparts in the House — fittingly, the House Intelligence Committee — similarly are scrutinizing Russia’s role in the election. Its chairman has recused himself from the investigation, so the top Republican there is Rep. Mike Conaway from Texas, and the leading Democrat is Rep. Adam Schiff from California.
These panels have a number of special, powerful privileges, not the least of which is access to classified documents. A third congressional committee studying Russian interference — the Senate Judiciary Committee — has played a less prominent role. It is chaired by Sen. Chuck Grassley, a Republican from Iowa, with Sen. Dianne Feinstein, a Democrat from California, as its ranking member.
What have tech companies found?
Facebook in September said it had discovered 470 profiles explicitly tied to Russian agents. Those profiles purchased approximately 3,000 advertisements ahead of Election Day, and the ads were viewed by about 10 million U.S. users before and after Trump’s victory.
Many of those ads — none of which have been made public — sought to stoke racial, religious or other social and political tensions. Sources have said the Russian-backed ads even took both sides of contentious issues, like Black Lives Matter or gun control, in a bid to intensify public debate and foment discord. Sources speaking with Recode, along with a series of reports from other outlets like CNN, have suggested these ads and other forms of Russia-supplied content explicitly targeted crucial election swing states.
Additionally, Facebook found another 2,200 ads of interest that did not violate its policies.
Initially, the social giant provided the full information on Russian-backed content only to Mueller’s team at the DOJ, frustrating congressional investigators who felt they had been circumvented.
By Oct. 1, though, Facebook delivered copies of the 3,000 ads along with other data to the House and Senate Intelligence Committees as well as the Senate Judiciary Committee. Leading the company’s efforts to study Russian election meddling is Alex Stamos, Facebook’s chief security officer.
Twitter also reviewed its platform for Russian interference, thanks in no small part to threat data shared by Facebook. And it found 200 accounts tied in some way to the Russia-backed profiles that Facebook previously had flagged.
Twitter took that information to House and Senate investigators in a briefing at the end of September led by Colin Crowell, Twitter’s vice president of global public policy. But the company’s efforts initially drew sharp criticism from Warner and Schiff, who felt Twitter should have done a more exhaustive search of its sales records for potential Russian meddling. (The two have since tempered their tone.)
Nevertheless, Twitter also turned over to the committees the text of ads — in the form of promoted tweets — purchased by the Russian government-backed media network RT ahead of the election. The network spent $274,100 on Twitter ads last year, according to the tech company.
But Twitter has not stopped RT from continuing to advertise on its site. (Nor has Facebook, in fact.) The U.S. government’s top intelligence agencies have previously flagged RT as a Kremlin propaganda arm.
Google has not revealed anything about its internal investigation, but a probe is under way. The inquiry appears to be focused on search advertising and YouTube, sources have said. And Google has been querying its records with the help of data furnished by Facebook, other sources confirmed.
Other tech giants have been less forthcoming about what may have happened on their websites ahead of the election — or whether they’ve even searched for potential misuse. Oath, formerly Yahoo, and Reddit both declined to answer detailed questions about their sales records and user accounts. Reddit, in particular, served as a major source for some of the conspiracy-minded, hate-tinged, alt-right content that proliferated on social media in advance of the election.
Snap, however, did query its data, and a spokeswoman told Recode it found no Russian-bought ads on its app.
How did Russian-tied agents do this on Facebook, exactly?
Facebook sells virtually all of its advertising with self-serve software programs, which means that anyone with a couple of dollars can buy a targeted Facebook ad — without any help or oversight from a company employee.
In the case of the election, the owners of numerous Pages with Russian ties bought ads on Facebook without anyone noticing. Facebook made almost $27 billion in advertising revenue last year, so $100,000 worth of ads, especially spread out among hundreds of buyers, wouldn’t raise any flags unless Facebook knew what it was looking for.
This automated sales process is basically the same at other advertising companies like Google and Twitter; it helps the companies — which have hundreds of millions or billions of users — scale more quickly. The problem is that it creates an opportunity for abuse, including a situation earlier this year in which ProPublica was able to target an ad campaign to “Jew haters.”
How have tech companies responded?
Facebook has promised a number of changes to its advertising policies in the wake of this investigation.
It’s pledged to hire 1,000 more ad moderators who will review and remove inappropriate ads, and it claims it will roll out a new feature that will allow users to see all of the ads that any organization is promoting on the service. It has also promised to invest more in machine learning and artificial intelligence software to find these kinds of ads automatically.
Perhaps most importantly, Facebook said it will now require all advertisers who wish to buy political ads to provide “more thorough documentation.” That should, theoretically, prevent foreign entities from paying for any kind of political message.
Of course, one of the reasons the Russian ads were never detected in the first place is that they weren’t technically political ads promoting any specific candidate — they pushed social issues to create animosity among voters.
Facebook admits this kind of content won’t go away entirely. “Even when we have taken all steps to control abuse, there will be political and social content that will appear on our platform that people will find objectionable, and that we will find objectionable,” Facebook wrote in a blog post this week. “We permit these messages because we share the values of free speech.”
Twitter claims it does a number of things to try to prevent spam and bots from circulating information online, and it promised to “roll out several changes” in how it handles bots and “suspicious tweets.” It has not yet announced any specific changes to its advertising or safety policies since we learned about Russia’s use of the platform during the election. The company did say it “supports making political advertising more transparent,” though, again, it hasn’t said what that means practically.
Google has not finished its internal review, so it has not announced any changes to its practices.
What is the U.S. government doing to address the problem?
Federal law is clear: Foreign nationals cannot donate to candidates or purchase political ads — “electioneering communications,” as the government calls them. The term “foreign nationals,” of course, includes government officials and agents, in Russia or elsewhere.
But the 2016 presidential election proved that foreign malefactors can circumvent U.S. law, no matter what it says. Automated ad purchasing systems — plus limited rules around political ad disclosure — can make it easy for foreign political dollars to slip through the cracks on social media sites.
Lawmakers like Burr and Warner remain as concerned as ever that these companies might again serve as weak points for future election meddling, an alarm they sounded at a press conference on Wednesday. Warner’s own state of Virginia has major elections next month.
To that end, there are two efforts under consideration in Washington, D.C., to tighten political advertising rules.
The first is an attempt from lawmakers on Capitol Hill to impose new regulations on digital ads. That idea is the brainchild of Warner and fellow Democratic Sen. Amy Klobuchar. They have not officially introduced a bill, but their proposal is expected to focus in large part on transparency.
Under the plan, as Warner has described it to Recode, large companies like Facebook, Google and Twitter would have to save copies of all ads running on their sites and make them available for public inspection. That mirrors a similar, much older rule that already governs political ads in newspapers and on TV networks. Warner and Klobuchar seek to require disclosure of more information about the origins of those ads, too.
At the same time, the country’s election regulator, the Federal Election Commission, has kicked off a renewed public debate over the information that advertisers should disclose about themselves and their campaigns on social media.
The battle dates back to 2011, when Facebook sought an exemption from FEC rules that require political advertisers to disclose who paid for their efforts.
At the time, Facebook argued ads on social media sites would be so small that disclosure in the text of the ad itself would be impossible. The social giant felt the regulatory burden, if it applied to ads on its platform, would stunt digital advertising altogether. And Facebook cited a similar, earlier petition from Google for permission to skirt the FEC regulations.
As with most matters at the FEC, its commissioners never came to a decision. But as a number of campaign-finance watchdogs have noted — and Bloomberg, among others, has reported — Facebook allowed ads without significant disclosure on its platform anyway.
The FEC has a chance to specify more precise, tougher rules of the road. But once again, it may depend on whether the partisan-hobbled commission can overcome its tendency toward gridlock.
What happens next?
All three tech giants — Facebook, Google and Twitter — are continuing with their internal investigations. And Google is set to brief House and Senate lawmakers in the coming weeks, sources have said, though a date does not appear to have been set.
Crucially, these companies face the prospect of two tough congressional hearings on the horizon. Both the House Intelligence Committee and the Senate Intelligence Committee plan to grill the companies on Nov. 1.
Facebook and Twitter have confirmed they plan to attend the Senate hearing; Google has not responded to requests for comment. Either way, it is not clear whether those companies plan to send their chief executives or other representatives for the grilling, which could prove brutal.