Internal Warnings Showed Encryption Would Hide 7.5M Annual Abuse Reports
Unsealed documents from a New Mexico child exploitation trial reveal that Meta Platforms executives pushed forward with end-to-end encryption on Messenger despite internal warnings that the move would gut the company's ability to detect child abuse. In 2023, employees flagged that the change would make an estimated 7.5 million annual child sexual abuse material (CSAM) reports on the platform undiscoverable. The real-world impact materialized quickly: after the feature's rollout in December 2023, the National Center for Missing and Exploited Children (NCMEC) reported a 6.9 million drop in CSAM reports from Meta in 2024.
Concerns were documented years earlier. In a March 2019 internal message, Meta's then-head of content policy, Monika Bickert, called the encryption plan "so irresponsible," stating, "there is no way to find the terror attack planning or child exploitation." That same year, Global Head of Safety Antigone Davis warned that encrypting Messenger, which is connected to Facebook's public social graph, would be "far, far worse" for child safety than on the closed-network WhatsApp. An internal analysis from February 2019 projected that encryption would have reduced CSAM reports by 65% in the prior year, from 18.4 million to 6.4 million.
New Mexico Trial Exposes Years of Safety Deficiencies
The ongoing trial in Santa Fe, which began February 9, 2026, has surfaced a history of operational failures beyond the encryption decision. Prosecutors presented evidence of a backlog of 247,000 cyber tip reports between May 2017 and July 2021, which kept potentially time-sensitive information from reaching law enforcement promptly. Further testimony highlighted how poor report quality from Meta led 31 of the country's 61 Internet Crimes Against Children task forces to opt out of receiving certain low-priority tips in 2022.
The state's case centers on an undercover investigation, "Operation MetaPhile," in which agents posing as underage girls were solicited for sex through platform features. One undercover account accumulated 7,000 followers in a month without being shut down. The prosecution's narrative of systemic neglect was reinforced by former Meta vice president Brian Boland, who testified, "I absolutely did not believe that safety was a priority, which is the primary reason that I left" in 2020.
Legal Strategy Shifts to Attack 'Defective' Product Design
Meta's legal challenges extend beyond New Mexico and represent a fundamental threat to its business model. The case is part of a wider legal trend that frames social media platforms not as neutral content hosts but as manufacturers of a "defective product." This argument, central to multidistrict litigation in California involving thousands of plaintiffs, focuses on addictive design features such as infinite scroll and algorithmic recommendations. By targeting the product's design rather than the content it carries, plaintiffs aim to bypass Section 230 of the Communications Decency Act, which has historically shielded platforms from liability for user-generated content.
This legal strategy mirrors the successful litigation against tobacco companies in the 1990s, which argued that companies knowingly designed and marketed a harmful, addictive product. With states like California and New York now mandating warning labels for social media, and courts increasingly distinguishing between platform design and user content, Meta faces a multi-front battle that could force significant changes to its core products and expose it to substantial financial damages.