Facebook’s New Transparency Report


Facebook’s new transparency report now includes data on takedowns of ‘bad’ content, including hate speech

Facebook this morning released its latest Transparency Report, in which the social network shares information on government requests for user data. Those requests increased globally by around 4 percent compared with the first half of 2017, though U.S. government-initiated requests stayed roughly the same. In addition, the company added a new report to accompany the usual Transparency Report, detailing how and why Facebook enforces its Community Standards, specifically in the areas of graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

In terms of government requests for user data, the global increase led to 82,341 requests in the second half of 2017, up from 78,890 during the first half of the year. U.S. requests stayed roughly the same at 32,742, though 62 percent included a non-disclosure clause that prohibited Facebook from alerting the user. That's up from 57 percent in the earlier part of the year, and from 50 percent in the report before that, suggesting these non-disclosure requirements are becoming far more common among law enforcement agencies.

The number of pieces of content Facebook restricted based on local laws declined during the second half of the year, falling from 28,036 to 14,294. That is not surprising: the previous report showed an unusual spike in this sort of request, driven by a school shooting in Mexico that led the government to ask for content to be removed.

There were also 46 disruptions of Facebook services in 12 countries in the second half of 2017, compared to 52 disruptions in nine countries in the first half.

And Facebook and Instagram took down 2,776,665 pieces of content based on 373,934 copyright reports, 222,226 pieces of content based on 61,172 trademark reports and 459,176 pieces of content based on 28,680 counterfeit reports.

However, the more interesting data this time around comes from a new report Facebook is appending to its Transparency Report: the Community Standards Enforcement Report, which focuses on the actions of Facebook's review team. This is the first time Facebook has released numbers on its enforcement efforts, and it follows the company's publication of its internal moderation guidelines three weeks ago.

In that 25-page document, published in April, Facebook explained how it moderates content on its platform, specifically around areas like graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. These are areas where Facebook is often criticized when it screws up – like when it took down the newsworthy "Napalm Girl" historical photo because it contained child nudity, before realizing the mistake and restoring it. More recently, the company has been criticized for contributing to the violence in Myanmar, where extremists' hate-filled posts incited attacks. Facebook also addressed that issue today with an update to Messenger, which now allows users to report conversations that violate its Community Standards.

Today’s Community Standards report details the number of takedowns across the various categories it enforces.

Facebook says spam and fake account takedowns are the largest category, with 837 million pieces of spam removed in Q1 – almost all of it removed proactively, before users reported it. Facebook also disabled 583 million fake accounts, the majority within minutes of registration. Even so, around 3 to 4 percent of the accounts on the site during this period were fake.

The company is likely hoping the scale of these metrics makes it seem like it’s doing a great job, when in reality, it didn’t take that many Russian accounts to throw Facebook’s entire operation into disarray, leading to CEO Mark Zuckerberg testifying before a Congress that’s now considering regulations.
