Facebook is now grappling with problems Mark Zuckerberg never foresaw when he launched the social network in 2004. The CEO has spent much of the past year apologizing for the Cambridge Analytica data scandal and a host of issues around misinformation, fake accounts, data misuse, and Facebook’s role in the 2016 election.
He went on an apology tour across the country, testified before Congress, and has now published the first in a series of notes explaining Facebook’s myriad problems and what the company is doing about them.
Today’s topic is election interference on the platform. Much of Zuckerberg’s 3,000-plus-word post covers the same talking points Facebook COO Sheryl Sandberg gave during Congressional testimony last week alongside Twitter CEO Jack Dorsey. Nonetheless, Zuck writes frankly about the breadth and depth of Facebook’s security problems when it comes to the platform’s outsized role in modern elections.
The note covers five primary areas: fake accounts, false information, advertising transparency and verification, an independent election research commission, and how the social giant is coordinating with governments and other companies.
“In 2016, our election security efforts prepared us for traditional cyberattacks like phishing, malware, and hacking. We identified those and notified the government and those affected,” wrote Zuckerberg. “What we didn’t expect were foreign actors launching coordinated information operations with networks of fake accounts spreading division and misinformation. Today, Facebook is better prepared for these kinds of attacks.”
On the fake account front, Zuckerberg returned to one of his oft-used responses during April’s congressional testimony. Facebook uses some version of what it calls artificial intelligence for everything from flagging fake news to detecting offensive memes, and Zuck said Facebook’s machine learning systems have blocked more than a billion fake accounts in total and millions more each day.
He called detecting the bulk creation of fake accounts an “arms race,” and said it’s still difficult to identify sophisticated actors who build fake account networks manually or co-opt legitimate accounts as part of coordinated post-boosting campaigns. To that end, the company has doubled its safety and security team in the past year from 10,000 to more than 20,000 employees.
Zuckerberg talked about the trial and error Facebook has gone through in trying to improve its fake account identification process, from flagging and investigating to takedowns and notifying governments and users. He mentioned specific campaigns linked to Russia’s Internet Research Agency (IRA) troll farm, propaganda accounts linked to Iranian state media, and fake account networks shut down in Brazil and Myanmar.
Zuckerberg didn’t talk as much about specifics regarding fake news, but he broke the spread of misinformation into three categories: fake accounts; spammers, whose economic incentives Facebook tries to reduce by blocking them from making ad money; and users who are unaware they’re sharing false information.
“Beyond elections, misinformation that can incite real world violence has been one of the hardest issues we’ve faced,” wrote Zuckerberg. “In places where viral misinformation may contribute to violence we now take it down. In other cases, we focus on reducing the distribution of viral misinformation rather than removing it outright.”
While he didn’t mention WhatsApp specifically, the Facebook-owned messaging app perpetuated false child kidnapping rumors in India that led to mob murder in rural villages. WhatsApp has since restricted message forwarding.
Concerning how ordinary users perpetuate fake news, Zuck talked about Facebook’s use of human fact-checkers certified by the non-partisan International Fact-Checking Network (IFCN). Posts rated as false are demoted and lose on average 80 percent of their future views, he wrote. However, Facebook’s fact-checking has had problems on that “partisan” front, recently and incorrectly marking a story as false following a fact-check from a conservative magazine with IFCN approval.
Ads, Independent Commissions, and Coordination
Facebook’s new political advertisement verification policy has been widely publicized. Zuckerberg stressed that users can now see when an ad is paid for by a PAC or third-party group, and anyone running political or issue ads in the US must now verify their identity and location. He also talked up how these new transparency tools can help journalists, watchdogs, academics, and others report abuse and hold political advertisers accountable.
Interestingly, Zuckerberg said Facebook initially talked about banning political ads altogether, but that the decision not to do so was not motivated by ad revenue.
“Initially, this seemed simple and attractive. But we decided against it—not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads—but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process,” he wrote.
Zuckerberg also detailed Facebook’s independent election research commission, announced back in April to study exactly what role Facebook plays in the election process. Of course, he said that this time there will be far more control over what data researchers can access.
Finally, the post covers how Facebook is working with governments and other companies to stop election interference campaigns. The tl;dr is that Facebook is still having a lot of problems on this front. Zuck said tighter coordination would be very useful, but that “real tensions still exist” in working with governments and law enforcement to share intelligence.
Ultimately, Zuckerberg discussed Facebook’s progress in identifying and removing fake accounts ahead of elections in Brazil, France, Germany, Mexico, and the state of Alabama, and in thwarting foreign election influence campaigns from Russia and Iran. But he also said “we face sophisticated, well-funded adversaries. They won’t give up, and they will keep evolving.”