Failures to Keep Children Safe

From BigTechWiki
Revision as of 17:07, 23 February 2022 by Btw admin (talk | contribs)
  • Instagram and Facebook have failed to keep children safe on their platforms, with high levels of child pornography and “grooming” of underage users. Facebook was responsible for the majority of reported instances of child pornography or trafficking in 2018, and Instagram was cited as the leading platform for child grooming in the U.K. in 2019.[1]
    • During the COVID-19 pandemic, Instagram failed to respond to complaints about predatory behavior, citing a lack of resources and telling users it could not “prioritize all reports right now.”
    • In 2020, despite warnings from children’s advocates and FBI Director Christopher Wray, Facebook encrypted all of its messaging services, including Facebook Messenger, across all platforms. This move made it more difficult for either the company or law enforcement to identify and combat child abuse on the platforms.
  • Facebook’s previous experiment in a social media app designed for children, “Messenger Kids,” failed to keep children safe. A technical flaw allowed children to be in group chats with unauthorized strangers, violating one of the core safety features of the app.[2]
    • Facebook claimed that the app was vetted by “experts” prior to its launch, but failed to disclose that many of those experts received funding from the company. Facebook did not reach out to many prominent children’s advocates, including Common Sense Media and the Campaign for a Commercial-Free Childhood.
  • Facebook manipulated underage users into paying for services within games, calling the practice “friendly fraud.” Children as young as five years old were tricked into making in-game purchases without realizing Facebook had stored their parents’ credit cards. This led to clawback rates of 9%, well above the Federal Trade Commission’s red-flag threshold for deceptive business practices.[3]
    • Internal Facebook communications showed that employees were aware of the practice and proposed reforms; however, Facebook rejected the reforms because they would hurt revenue. Nearly three years after “friendly fraud” was first discussed internally, clawback rates remained unchanged, suggesting that Facebook did not address the problem.
  • Instagram consistently failed to keep younger users safe, with pervasive online bullying on the platform that exploited the ease of creating anonymous accounts, the virality of cyberbullying posts, and the lack of oversight of comments and direct messages.