YouTube says it will build army of 10,000 workers to police disturbing kids videos
Tuesday - 05/12/2017 20:49
THERE is a growing number of perverted and sinister videos targeting kids on YouTube and the company is desperately trying to fix the issue.
YOUTUBE has a big problem on its hands.
The Google-owned video sharing platform has become littered with depraved and sinister videos designed to target kids that often masquerade as harmless children’s content, allowing them to amass a huge number of views without being detected.
In one such video, popular children’s cartoon character Peppa Pig is seen hanging by a noose, the apparent victim of a lynch mob. As the video continues Peppa begins swearing profusely, violently stabs her brother before her family acts out a sex scene.
Aside from such animated clips, a phenomenon dubbed “Elsagate” online, there is another sinister subgenre of child exploitation videos on the site, showing real children in vulnerable and distressing situations such as being held down and given injections, or being kidnapped. They often contain subliminal messages clearly designed to disturb young kids as well as sexualise children. Particularly concerning is the fact that there have been reports of paedophiles posting explicit comments on children’s videos on the site as the seedy content continues to spread.
The Elsagate phenomenon has sparked theories and discussion online about the motives behind the perverted clips, which are often quite elaborate, with some claiming they are intended to condition or groom children.
YouTube has come under pressure to address the problem and says by next year it will have an army of 10,000 workers to combat the disturbing content on its site.
“I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” YouTube CEO Susan Wojcicki said in a blog post this week.
As a result, the video streaming platform is set to go on a hiring spree for content moderators, though Ms Wojcicki’s blog post did not say how many the company already employs. YouTube spokeswoman Michelle Slavich said on Tuesday that some have already been hired, and that the team will be a combination of employees and contractors.
Ms Wojcicki said the company will apply lessons learned from combating violent and extremist videos to other “problematic” videos. YouTube will expand its use of machine-learning technology, a form of artificial intelligence, to flag videos or comments that show hate speech or harm to children.
“We have begun training machine-learning technology across other challenging content areas, including child safety and hate speech,” she wrote.
Several advertisers have pulled ads from the platform in the past few weeks as a result of stories about videos showing harm to children, hate speech and other topics they don’t want their ads next to.
YouTube said this week that it is also taking steps to try to reassure advertisers that their ads won’t run next to gross and appalling videos.
Google isn’t the only tech company that’s stepping up human content reviews to help it police its platform after criticism. Facebook in May said it would hire 3,000 more people to review videos and posts, and, later, another 1,000 to review ads after discovering Russian ads meant to influence the US presidential election.