Facebook includes Instagram in its transparency report for the first time


Facebook is including Instagram in its transparency report for the first time, releasing data on how the company moderates content related to child exploitation, self-harm, terrorist propaganda, and drug and firearm sales. Notably absent from the report is information on how fake accounts, hate speech, and violent content are handled on the photo-sharing app.

The information is part of Facebook's quarterly "community standards" transparency report, which tracks the company's ongoing efforts to moderate content on the platform. The last report, released in May, showed a sharp increase in the number of abusive accounts on Facebook and a downtick in the number of violent posts the company detected and removed.

The addition of Instagram to the report is critical as misinformation about the 2020 election continues to spread on social media. Last month, the left-leaning human rights group Avaaz reported that stories containing misinformation were viewed an estimated 158.9 million times on the social network, continuing to spread even after they were proven false. The finding was worrisome given that the company has invested heavily in AI systems to detect misleading information since the 2016 election.

It also brings into sharper focus the specific danger that misinformation on Instagram presents. In 2016, Russian trolls posted more than 3,000 ads across both Facebook and Instagram. As memes and photos become an even larger component of election interference efforts, Instagram has become a prime target ahead of the 2020 election. The platform also has fewer resources to fight misinformation than Facebook, making the ongoing spread of fake news far more likely.

It's unfortunate that Facebook left out information on the proliferation of fake accounts on Instagram, particularly since Facebook did share that information for its namesake platform. On Facebook, fake accounts continue to make up about 5 percent of monthly active users, and the company says it catches the majority "within minutes" of registration.

Facebook is apparently getting better at detecting self-harm content on Instagram before it spreads, the report says. Since May, it has removed about 845,000 pieces of suicide-related content, 79 percent of which it found proactively.

Across the board, the company is better at proactively finding content that violates its community standards on Facebook than on Instagram. For example, while Facebook says it catches 99 percent of content related to the sexual exploitation of children on the social network, it catches 94.6 percent on Instagram.

The company's ability to detect and take down posts hawking drugs or firearms is also stronger on Facebook. Since May, it says it has removed about 2.3 million pieces of content related to firearm sales on the platform, 93.8 percent of which it found before users reported it. Compare that to the 58,600 pieces of similar content it found and removed on Instagram, 91.3 percent of which it caught before users did.

As Facebook continues to draw criticism for refusing to moderate some content — notably, political ads — the transparency report is an important reminder of where the company is willing to crack down. Next time, the report might even include critical information on the prevalence of fake accounts on Instagram as the 2020 election ramps up.