YouTube Updates Anti-Harassment Policy to Deal With Toxic Content
Following heavy criticism over its perceived failure to deal with threats and harassment on its platform, YouTube on Wednesday announced a slew of policy changes that it says will help reduce undesirable content. As part of the update, the company said it will take a stronger stance against threats, personal attacks and toxic comments on its platform going forward.
According to YouTube, its latest rules prohibit not only explicit threats but also veiled or implied ones. “This includes content simulating violence toward an individual or language suggesting physical violence may occur. No individual should be subject to harassment that suggests violence”, said the company.
The video streaming platform also said that it is updating its hate speech policy to further clamp down on “demeaning language”. According to an official blog post, the company will no longer allow content that maliciously insults someone based on race, gender expression, or sexual orientation. “This applies to everyone, from private individuals to YouTube creators, to public officials”, said the company.
To formulate its anti-harassment policy, YouTube says it met with a number of creators on the platform, as well as experts who shared their perspectives. The company also says it held meetings with researchers who study online bullying, organizations that advocate on behalf of journalists, free speech proponents who oppose social media censorship, and policy advocacy groups from across the political spectrum.
In what appears to echo Twitter’s recent stance against hate speech, YouTube said its updated rules are meant to protect the community and encourage healthy debate rather than allow trolls and extremists to drag everybody down the rabbit hole.