YouTube’s new Transparency Report reveals centrality of automated notices and automated takedowns

Over the past few years YouTube has relied on a combination of human intervention and technology to “flag” content considered inappropriate in light of its Community Guidelines. In particular, content can be flagged by YouTube’s automated flagging systems, by members of the Trusted Flagger programme (which includes NGOs, government agencies and individuals), or by ordinary users within the YouTube community.

Google/YouTube has recently released a new Transparency Report, which adds to its existing reports on copyright, the 'right to be forgotten', and government requests.

It concerns flagging of content that is sexual, spam or misleading, hateful, abusive, or violent or repulsive (requests relating to copyright are excluded).

The report specifies that about 80% of the videos that violated the site’s guidelines in 2017 had first been detected by artificial intelligence (AI) systems. Furthermore, of the approximately 8,000,000 videos removed between October and December 2017, approximately 6,600,000 were first flagged through automated flagging systems.
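
These two figures are consistent with one another: a back-of-the-envelope check (sketched below in Python, using the rounded numbers reported above) gives a share of roughly 82–83% of removed videos first flagged automatically, in line with the “about 80%” figure.

```python
# Rough check of the reported figures (rounded values from the Transparency Report)
total_removed = 8_000_000   # videos removed, October-December 2017
auto_flagged = 6_600_000    # of which first flagged by automated systems

share = auto_flagged / total_removed
print(f"Share first flagged automatically: {share:.1%}")  # ~82.5%, consistent with "about 80%"
```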

Unlike human “flaggers”, automated systems enable YouTube to act more quickly and accurately in enforcing its policies. Google states that: “These systems focus on the most egregious forms of abuse, such as child exploitation and violent extremism. Once potentially problematic content is flagged by our automated systems, human review of that content verifies that the content does indeed violate our policies and allows the content to be used to train our machines for better coverage in the future. For example, with respect to the automated systems that detect extremist content, our teams have manually reviewed over two million videos to provide large volumes of training examples, which improve the machine learning flagging technology.”
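
The quoted passage describes a human-in-the-loop process: automated systems flag candidate videos, human reviewers confirm (or reject) the flag, and the confirmed decisions feed back into the training data. The sketch below is a minimal, purely illustrative rendering of that loop; all class and function names are hypothetical and do not reflect YouTube’s actual systems.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    auto_flag_score: float  # confidence score from an automated classifier (hypothetical)

@dataclass
class ReviewQueue:
    """Human-in-the-loop moderation loop, as described in the quoted passage (illustrative only)."""
    threshold: float = 0.9
    training_examples: list = field(default_factory=list)

    def process(self, video: Video, human_confirms_violation) -> str:
        # Step 1: the automated system flags potentially problematic content
        if video.auto_flag_score < self.threshold:
            return "not flagged"
        # Step 2: human review verifies whether the content actually violates policy
        violates = human_confirms_violation(video)
        # Step 3: the reviewed decision becomes a training example for future models
        self.training_examples.append((video.video_id, violates))
        return "removed" if violates else "kept"

# Example usage with a stand-in reviewer decision
queue = ReviewQueue()
decision = queue.process(Video("abc123", 0.97), human_confirms_violation=lambda v: True)
print(decision, len(queue.training_examples))  # removed 1
```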

Discussion around notice and takedown, filtering and related mechanisms has been heating up, not least in light of the EU value gap proposal (see the latest development here; IPKat posts here) and the broader question of how the vast amounts of content made available and shared online every day can be handled at all. Against this backdrop, the current Transparency Report is particularly interesting for the absolute centrality of bots, both in relation to takedown requests and in the handling of such requests. It also raises the question of how much of the material taken down actually stays down, but that may be a new chapter in the never-ending story of online rights enforcement …