- Facebook executives are defending the company's decision to bring some content moderators back into offices in the midst of the coronavirus pandemic.
- Facebook vice president of integrity Guy Rosen said the company needs some of its most sensitive content reviewed from an office setting but has taken steps to make those environments safe.
- Rosen said that the majority of those workers are still working from home.
Facebook executives on Thursday defended their decision to bring some content moderators back into offices in the midst of the coronavirus pandemic, one day after more than 200 contractors published an open letter criticizing the move.
Facebook vice president of integrity Guy Rosen said the company needs some of its most sensitive content reviewed from an office setting and has taken steps to make those environments safe.
"We're not able to route some of the most sensitive and graphic content to outsourced reviewers at home," Rosen said on a press call. "This is really sensitive content. This is not something you want people reviewing from home with their family around."
In their letter, addressed to CEO Mark Zuckerberg, the moderators claimed Facebook was needlessly risking their lives. The moderators demanded that Facebook maximize working from home, offer hazard pay and end outsourcing.
Asked what portion of the company's content review workforce has returned to their offices, Rosen declined to give a specific figure but said most of those workers were still working from home.
"We've always had some employees in critical jobs who needed to come into some of our global facilities; these are things like data centers and so forth, and we have very strict safety standards that we are employing at those locations," Rosen said.
Other workers who have returned to offices include security teams and hardware engineers, Rosen said. Among the health and safety measures in place are significantly reduced capacity, physical distancing, mandatory temperature checks, mandatory masks, daily deep cleanings, hand sanitizing and air filter changes, he said. The company has also brought on full-time employees to help with content review, asking them to handle more sensitive content.
"The facilities meet or exceed the guidance on a safe work space," Rosen said.
The company also addressed the open letter's criticism that Facebook's AI software is not up to the task of detecting all the types of content that breach the company's policies.
"Our investments in AI are helping us detect and remove this content to keep people safe," Rosen said. "The reason ... we are bringing some workers back to the offices is exactly in order to ensure we can have that balance of people and AI working on these areas."
Additionally, the company on Thursday for the first time disclosed how much hate speech content it removed from Facebook and Instagram. During the third quarter, the company removed 22.1 million pieces of hate speech. In total, 0.11% of content viewed on Facebook was hate speech.
The company also provided an update on its efforts to moderate content related to the U.S. election and Covid-19.
Facebook also said that since March it has removed more than 265,000 pieces of content in the U.S. from Facebook and Instagram for violating the company's voter interference policies. Rosen said that 140 million people visited the company's voting information center, with 33 million visiting it on Election Day. The company also placed warning labels on more than 180 million pieces of debunked misinformation.
Between March and October, Facebook removed more than 12 million pieces of content from Facebook and Instagram that contained Covid-19 misinformation that could lead to imminent physical harm. This includes content related to exaggerated cures or fake preventative measures. The company also displayed warnings on about 167 million pieces of content containing debunked Covid-19 misinformation.
Rosen said 95% of users do not click to view content that is covered by a warning screen.