YouTube has removed more than 100,000 videos and 17,000 channels containing hate speech since launching an initiative to purge such content from the popular site.
The Google-owned video platform has been publishing regular updates on the impact of its tougher stance on hate speech, saying it is now removing harmful content faster than ever before.
The number of channels and videos taken down has roughly quintupled, and YouTube has also stepped up its removal of inappropriate comments, taking down more than 500 million that it deemed harmful.
In a blog post, YouTube said the spike in removals was largely due to older videos, channels and comments that had been allowed before it updated and began enforcing its hate speech policy in June.
The removals are carried out by human review teams working with an artificial-intelligence-powered flagging tool, which helps them identify inappropriate videos before they can attract significant views.
YouTube added that machine-learning technology has proved useful for spotting patterns and taking down content quickly, but admitted the approach is not perfect, in part because hate speech is often “highly dependent on context”.
In June, YouTube said the main focus of its hate speech removals was content glorifying the Nazis, but this had drawbacks: a good deal of educational material about the Second World War was removed as well, because the AI could not reliably tell the difference.
YouTube said it would continue reviewing flagged videos and other material, and would refine its enforcement efforts to avoid such mistakes, while remaining committed to using both humans and AI in the crackdown.
A Google spokesperson stated that “across Google, we’ve tasked over 10,000 people with detecting, reviewing, and removing content that violates our guidelines.”
In August, YouTube chief executive Susan Wojcicki announced that the company was experimenting with a new algorithm designed to reduce the number and spread of potentially harmful videos in several countries, most notably the UK.
The algorithm change has already been running in the United States for six months, where, according to several reports, it has cut views from recommendations of this type of harmful material, dubbed “borderline content”, by 50%.