Facebook's False Claim To Supporting Regulation

  • For years, Facebook’s approach to looming regulation in the U.S. and abroad was to deny any need for regulation at all.
    • In 2017, Facebook lobbyists and the company’s allied Internet Association warned that the proposed BROWSER Act, legislation that would have required websites and ISPs to obtain users’ consent before sharing their browsing data with other entities, threatened innovation.[1]
  • In 2016, less than a month after a judge ruled against Facebook in a lawsuit alleging violations of the Illinois Biometric Information Privacy Act (BIPA), the company backed a proposal in the state legislature to significantly weaken the eight-year-old law. Facebook donated to all four co-sponsors of the rollback proposal and leaned on allied trade associations like the Illinois Chamber of Commerce and CompTIA to push for changes to BIPA.[2]
  • According to reporting by the Center for Public Integrity, CompTIA and Facebook played a role in weakening similar biometric privacy proposals in Montana and Washington as well. In Texas, where the Republican Attorney General was solely responsible for enforcing the state’s biometric privacy law, CompTIA donated $5,000 to the state Republican Party.
  • Facebook’s “no regulations whatsoever” approach changed in the aftermath of the 2018 Cambridge Analytica scandal, in which an investigation by The Guardian revealed that a political consulting firm had harvested the data of millions of Facebook users without their consent. The incident forced Facebook to go on a public apology tour and landed CEO Mark Zuckerberg before the Senate Commerce and Judiciary Committees. In that April 2018 hearing, senators criticized Facebook’s failure to protect users’ data and its role in enabling Russian interference in the 2016 election. Zuckerberg acknowledged the company’s shortcomings and said he welcomed additional regulation to address them.
  • Facebook paired Zuckerberg’s testimony with a public campaign to generate goodwill among skeptics in government and the media and to shape forthcoming regulatory policy by funding think tanks. The company funded well-known policy centers like the Cato Institute and the Future of Privacy Forum. These organizations hosted events that gave company allies opportunities to interact with policymakers directly and produced public commentary on the privacy debate aligned with the company’s goals.
  • Facebook’s push for federal privacy regulations was intended to preempt states from passing their own, stricter laws; company executives said as much in congressional testimony. Just weeks before Zuckerberg’s April 2018 testimony, the company had poured $200,000 into the Committee to Protect California Jobs, a political action group organized to defeat a ballot initiative that would give Californians greater control over their privacy. Facebook pulled its support for the group following Zuckerberg’s testimony but declined to say whether it was involved in similar efforts elsewhere.
  • In 2020, the Washington Post reported that Facebook had launched an astroturf group, American Edge, which aimed to “convince policymakers that Silicon Valley is essential to the U.S. economy and the future of free speech.” American Edge ran ads likening proposals to regulate Silicon Valley to legislative overreach in the manufacturing industry. The group painted the tech industry as integral to free speech and American influence abroad and warned that potential regulations threatened to harm “companies that share American values as they compete in the global marketplace.”[3]
  • Meanwhile, in March 2021, Mark Zuckerberg appeared before a House subcommittee and declared that Congress should make “thoughtful reform” to Section 230 of the Communications Decency Act. Section 230 shields technology platforms from being held liable for content posted by individual users. Zuckerberg proposed revising it to make companies’ liability protection conditional on demonstrating they have “systems in place” for identifying and removing unlawful content. Under his proposal, platforms would not be held liable “if a particular piece of content evades detection.” NBC News reported the policy would “theoretically shore up Facebook's power, as well as that of other internet giants like Google, by requiring smaller social media companies and startups to develop robust content moderation systems that can be costly.”