YouTube’s new Transparency Report reveals centrality of automated notices and automated takedowns
Over the past few years YouTube has relied on a combination of human intervention and technology to “flag” content that is considered inappropriate in light of its Community Guidelines. In particular, content can be flagged by YouTube’s automated flagging systems, by members of the Trusted Flagger programme (which includes NGOs, government agencies and individuals), or by ordinary users within the YouTube community.
Google/YouTube has recently released a new Transparency Report, which joins its existing reports on copyright, the 'right to be forgotten', and government requests.
It covers flagging of content that is sexual, spam or misleading, hateful, abusive, or violent or repulsive (requests based on copyright are excluded).
The report specifies that about 80% of the videos that violated the site’s guidelines in 2017 were first detected by artificial intelligence (AI) systems. Of the roughly 8,000,000 videos removed between October and December 2017, approximately 6,600,000 were first flagged by automated flagging systems.
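As a quick check on the quarterly figures quoted above (using only the numbers reported), the automated share works out to roughly 82%, consistent with the 80% headline figure for 2017 as a whole:

```python
# Figures quoted from the report for October–December 2017
total_removed = 8_000_000
first_flagged_automatically = 6_600_000

share = first_flagged_automatically / total_removed
print(f"{share:.1%}")  # prints "82.5%" of removals first flagged by automated systems
```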
Compared with human “flaggers”, automated systems enable YouTube to enforce its policies more quickly and accurately. Google states that: “These systems focus on the most egregious forms of abuse, such as child exploitation and violent extremism. Once potentially problematic content is flagged by our automated systems, human review of that content verifies that the content does indeed violate our policies and allows the content to be used to train our machines for better coverage in the future. For example, with respect to the automated systems that detect extremist content, our teams have manually reviewed over two million videos to provide large volumes of training examples, which improve the machine learning flagging technology.”
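Purely as an illustration of the workflow Google describes (automated flagging, followed by human review, with reviewed items fed back as training data), a minimal sketch of such a loop might look like the following. All names, categories and functions here are hypothetical placeholders, not YouTube’s actual systems or taxonomy.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical policy categories, loosely mirroring those named in the report
POLICY_CATEGORIES = {"sexual", "spam_or_misleading", "hateful_or_abusive", "violent_or_repulsive"}

@dataclass
class Video:
    video_id: str
    content: str  # stand-in for the actual media or extracted features

@dataclass
class Flag:
    video_id: str
    source: str    # "automated", "trusted_flagger" or "user"
    category: str  # one of POLICY_CATEGORIES

def automated_flagger(video: Video) -> List[Flag]:
    """Placeholder classifier. In the workflow described in the report,
    a machine-learning model would produce candidate flags here."""
    return []

def human_review(flag: Flag) -> bool:
    """Placeholder for the human review step that confirms or rejects
    a machine-generated flag."""
    return False

def moderate(videos: List[Video], training_set: List[Flag]) -> List[str]:
    """Illustrative loop: automated flags go to human review; confirmed
    flags lead to removal and are added back to the training set, so that
    reviewed decisions improve future automated flagging."""
    removed = []
    for video in videos:
        for flag in automated_flagger(video):
            if human_review(flag):
                removed.append(video.video_id)
                training_set.append(flag)
                break
    return removed
```

The point of the sketch is simply the division of labour the report emphasises: machines surface candidates at scale, humans make the final call, and those human decisions become training data for the next iteration of the models.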