
‘The Facebook Papers’: What They Are and Takeaways From the Leaks

Internal company documents obtained by a former Facebook employee-turned-whistleblower reveal an internally conflicted company, where data on the harms it causes is abundant but solutions are halting at best

Chesnot/Getty Images file photo: Thousands of pages of internal Facebook documents obtained by Frances Haugen cover topics including how its platforms might harm children and its alleged role in inciting political violence.

The Facebook Papers project represents a unique collaboration among 17 American news organizations, including The Associated Press and NBC News. Journalists from a variety of newsrooms, large and small, worked together to gain access to thousands of pages of internal company documents obtained by Frances Haugen, the former Facebook product manager-turned-whistleblower.

A separate consortium of European news outlets had access to the same set of documents, and members of both groups began publishing content related to their analysis of the materials at 7 a.m. EDT on Monday, Oct. 25. That date and time was set by the partner news organizations to give everyone in the consortium an opportunity to fully analyze the documents and report out relevant details, and to give Facebook’s public relations staff ample time to respond to questions raised by that reporting.

Each member of the consortium pursued its own independent reporting on the document contents and their significance. Every member also had the opportunity to attend group briefings to gain information and context about the documents.

The launch of The Facebook Papers project follows similar reporting by The Wall Street Journal, sourced from the same documents, as well as Haugen’s appearance on the CBS television show “60 Minutes” and her Oct. 5 Capitol Hill testimony before a U.S. Senate subcommittee.

The papers themselves are redacted versions of disclosures that Haugen has made over several months to the Securities and Exchange Commission, alleging Facebook was prioritizing profits over safety and hiding its own research from investors and the public.

These complaints cover a range of topics, from Facebook’s efforts to continue growing its audience, to how its platforms might harm children, to its alleged role in inciting political violence. The same redacted versions of those filings are being provided to members of Congress as part of their investigation. That work continues as Haugen’s legal team redacts the SEC filings, removing the names of Facebook users and lower-level employees, and turns them over to Congress.

The Facebook Papers consortium will continue to report on these documents as more become available in the coming days and weeks.

“AP regularly teams up with other news organizations to bring important journalism to the world,” said Julie Pace, senior vice president and executive editor. “The Facebook Papers project is in keeping with that mission. In all collaborations, AP maintains its editorial independence.”


'Carol's Journey to QAnon' research: Facebook-created fake users showed how its algorithm radicalized people

In 2019, researchers at Facebook created fake accounts as part of an experiment to test how the social media app’s algorithm could spread misinformation and potentially radicalize users.

The study, entitled “Carol’s Journey to QAnon—A Test User Study of Misinfo & Polarization Risks Encountered through Recommendation Systems,” described the results of an experiment conducted with a test account established to reflect the views of a prototypical “strong conservative” — but not extremist — 41-year-old North Carolina woman. This test account, using the fake name Carol Smith, indicated a preference for mainstream news sources like Fox News, followed humor groups that mocked liberals, embraced Christianity and was a fan of Melania Trump.

Within a single day, page recommendations for this account generated by Facebook itself had evolved to a “quite troubling, polarizing state,” the study found. By day 2, the algorithm was recommending more extremist content, including a QAnon-linked group, which the fake user didn’t join because she wasn’t innately drawn to conspiracy theories.

A week later, the test subject’s feed featured “a barrage of extreme, conspiratorial and graphic content,” including posts reviving the false Obama “birther” claim and linking the Clintons to the murder of a former Arkansas state senator. Much of the content was pushed by dubious groups run from abroad or by administrators with a track record of violating Facebook’s rules on bot activity.

Those results led the researcher, whose name was redacted by the whistleblower, to recommend safety measures ranging from removing content with known conspiracy references and disabling “top contributor” badges for misinformation commenters to lowering the threshold number of followers required before Facebook verifies a page administrator’s identity.

Among the other Facebook employees who read the research, the response was almost universally supportive.

“Hey! This is such a thorough and well-outlined (and disturbing) study,” one user wrote, their name blacked out by the whistleblower. “Do you know of any concrete changes that came out of this?”

Apple threatened to pull Facebook, Instagram apps over Mideast slave markets

In 2019, Apple threatened to pull Facebook and Instagram from its app store over concerns about the platforms being used as a tool to trade and sell maids in the Mideast.

The threat from Apple came after a BBC Arabic service report that year found domestic workers being bought and sold via Instagram.

After publicly promising to crack down, Facebook acknowledged in internal documents obtained by The Associated Press that it was “under-enforcing on confirmed abusive activity” that saw Filipina maids complaining on the social media site of being abused. It also acknowledged that it had known about the problem before the BBC report.

“In our investigation, domestic workers frequently complained to their recruitment agencies of being locked in their homes, starved, forced to extend their contracts indefinitely, unpaid, and repeatedly sold to other employers without their consent,” one Facebook document read. “In response, agencies commonly told them to be more agreeable.”

Apple relented, and Facebook and Instagram remained in the app store. Yet ads exploiting foreign workers in the Mideast continue to appear even today. In a statement to the AP, Facebook said it took the problem seriously, despite the continued spread of such ads. Apple did not respond to requests for comment.

Zuckerberg opposed pushing Spanish-language voting resources on WhatsApp in 2020

Facebook CEO Mark Zuckerberg opposed pushing a Spanish-language version of Facebook's "voting information center" on WhatsApp, arguing that it wasn't "politically neutral," The Washington Post reported.

Ahead of the 2020 U.S. elections, employees at WhatsApp proposed the creation of a Spanish-language "voting information center," similar to what Facebook had created for its own platform and Instagram.

The encrypted messaging service, which is owned by Facebook, suggested using a chat bot or link to send information about how to register to vote, become a poll worker or request an absentee ballot to millions of Spanish-speaking WhatsApp users who communicate regularly through the platform.

But according to a person familiar with the project who spoke on the condition of anonymity and internal documents reviewed by the Post, Zuckerberg said the effort would make the company appear partisan.

Instead, the company implemented a pared-down version through partnerships with outside groups that allowed users to flag election misinformation or access factual information about voting by messaging a chat bot.

Facebook's language gaps in foreign countries weaken screening of hate speech, terrorism

Across the Middle East, journalists, activists and others have long accused Facebook of censoring their speech. In India and Myanmar, political groups use the social network to incite violence. All of it frequently slips through Facebook’s efforts to police its social media platforms because of a shortage of moderators who speak local languages and understand cultural contexts.

Now, internal company documents from the former Facebook employee-turned-whistleblower Frances Haugen show the problems plaguing the company’s content moderation are systemic, and that Facebook has understood them for years while doing little about it.

The company has failed to develop artificial-intelligence solutions that can catch harmful content in different languages. As a result, terrorist content and hate speech proliferate in some of the world’s most volatile regions. Elsewhere, the company’s language gaps lead to overzealous policing of everyday expression.

The company says it has invested in language and topical expertise in recent years but concedes that Arabic content moderation remains a particular concern.

But the documents show the problems are not limited to Arabic. In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic violence, the company’s internal reports show it failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

In India, the documents show moderators never flagged anti-Muslim hate speech broadcast by Prime Minister Narendra Modi’s far-right Hindu nationalist group because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address them. Many critics and digital experts say it has failed to do so, especially in cases involving members of Modi’s ruling Bharatiya Janata Party, or BJP.

'Outdated Network': Facebook losing its grip on young people in the U.S.

Facebook has been struggling to address stagnating user growth and shrinking engagement for the product in key areas such as the United States and Europe. Worse, the company is losing the attention of its most important demographic — teenagers and young people — with no clear path to gaining it back, its own documents reveal.

Young adults engage with Facebook far less than their older cohorts, seeing it as an “outdated network” with “irrelevant content” that provides limited value for them, according to a November 2020 internal document. It is “boring, misleading and negative,” they say.

In other words, the young see Facebook as a place for old people.

Facebook’s user base has been aging faster, on average, than the general population, the company’s researchers found. Unless Facebook can find a way to turn this around, its population will continue to get older and young people will find even fewer reasons to sign on, threatening the monthly user figures that are essential to selling ads. Facebook says its products are still widely used by teens, although it acknowledges there’s “tough competition” from TikTok, Snapchat and the like.

To continue expanding its reach and power, Facebook has pushed for high user growth outside the U.S. and Western Europe. But as it expanded into less familiar parts of the world, the company systematically failed to address or even anticipate the unintended consequences of signing up millions of new users without also providing the staff and systems needed to identify and limit the spread of hate speech, misinformation and calls to violence.

Backed into a corner by hard evidence from the leaked documents, the company has doubled down on defending its choices rather than trying to fix its problems.

“We do not and we have not prioritized engagement over safety,” Monika Bickert, Facebook’s head of global policy management, told the AP this month following congressional testimony from Haugen.

Amid the Capitol riot, Facebook faced its own insurrection

For weeks, Jan. 6 riot participants had vowed — on Facebook itself — to stop Congress from certifying Joe Biden’s election victory, with little response from the company.

Yet when the insurrection finally broke out, Facebook seemed as surprised as anyone else, leading employees to vent their frustration over what some saw as the company's halting and inconsistent response to rising U.S. extremism.

“Haven’t we had enough time to figure out how to manage discourse without enabling violence?” one employee wrote on an internal message board at the height of the Jan. 6 turmoil. “We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”

What Facebook called “Break the Glass” emergency measures put in place on Jan. 6 were essentially a toolkit of options designed to stem the spread of dangerous or violent content that the social network had first used in the run-up to the bitter 2020 election. As many as 22 of those measures were rolled back at some point after the election, according to an internal spreadsheet analyzing the company's response.

An internal Facebook report following Jan. 6 faulted the company for having a “piecemeal” approach to the rapid growth of “Stop the Steal” pages, related misinformation sources, and violent and inciting comments.

___

The Associated Press/NBC