Meta, the parent company of Facebook and Instagram, has responded to recent concerns raised by a Wall Street Journal article, disputing claims that the social media giant faces challenges in effectively removing child predators and content related to child exploitation from its platforms.
The article highlighted that issues persisted despite Meta’s establishment of a child-safety task force in June, with Instagram’s algorithms reportedly connecting accounts dedicated to the creation, purchase, and trade of underage sexual content. The Journal’s five-month investigation found that Meta’s recommendation systems continued to promote such content even after related hashtags were removed.
In response to these allegations, Meta firmly denies any liability for the dissemination of child exploitation material, emphasizing its proactive measures to reduce, remove, and eliminate such content from its social media sites. In a blog post, the company outlined actions taken by the task force, including a thorough review of existing policies, an examination of technology and enforcement systems, and the implementation of changes to enhance protections for young people.
“We created a task force to review existing policies; examine technology and enforcement systems we have in place; and make changes that strengthen our protections for young people, ban predators, and remove the networks they use to connect with one another,” stated Meta on its website. “The task force took immediate steps to strengthen our protections, and our child safety teams continue to work on additional measures.”
A Meta spokesperson addressed the issue of child exploitation, acknowledging it as a horrific crime and underscoring the company’s commitment to combating online predators. The spokesperson highlighted Meta’s efforts, including the hiring of specialists dedicated to online child safety, development of new technology to identify predators, and collaboration with other companies and law enforcement.
“Child exploitation is a horrific crime and online predators are determined criminals,” said the Meta spokesperson. “We work hard to stay ahead. That’s why we hire specialists dedicated to online child safety, develop new technology that roots out predators, and we share what we learn with other companies and law enforcement. We are actively continuing to implement changes identified by the task force we set up earlier this year.”
The Wall Street Journal detailed its efforts to uncover disturbing sexual content involving children in various forums, revealing instances where, after its reporters viewed public Facebook groups discussing inappropriate topics about children, Meta’s algorithms recommended groups with names such as ‘Little Girls,’ ‘Beautiful Boys,’ and ‘Young Teens Only.’
According to the Journal, researchers at the Stanford Internet Observatory found that when Meta disables hashtags related to pedophilia, its systems often fail to detect, and in some cases even suggest, new hashtags with minor variations. The report noted that after Meta disabled #Pxdobait, search recommendations suggested that users searching the phrase add a specific emoji at the end.