Source: The Conversation (Au and NZ) – By Carolina Rossini, Professor of Practice and Director for Program, Public Interest Technology Initiative, UMass Amherst
The verdict delivered in a Los Angeles courtroom on March 25, 2026, caps what may become one of the most consequential legal challenges that Big Tech has ever faced.
This is an inflection point in the global debate over Big Tech liability: For the first time, an American jury has been asked to decide whether platform design itself can give rise to product liability – not because of what users post, but because of how the platforms were built. The jury found that Meta and Google knew the design or operation of Instagram and YouTube was or was likely to be dangerous when used by a minor, and that the platforms failed to adequately warn of that danger.
As a technology policy and law scholar, I believe that the decision will likely generate a powerful domino effect in the United States and across jurisdictions worldwide.
The jury awarded the plaintiff US$3 million in damages and recommended to the court an additional $3 million in punitive damages. The jury split responsibility for the award between the companies: 70% from Meta and 30% from Google. A Meta spokesman stated that the company disagrees with the verdict and is evaluating its legal options.
Separately, a jury in New Mexico on March 24 found that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms.

The case
The plaintiff in the Los Angeles case is a 20-year-old California woman identified by her initials, K.G.M. She said she began using YouTube around age 6 and created an Instagram account at age 9. Her lawsuit and testimony alleged that the platforms’ design features, which include likes, algorithmic recommendation engines, infinite scroll, autoplay and deliberately unpredictable rewards, got her addicted. The suit alleges that her addiction fueled depression, anxiety, body dysmorphia – when someone sees themselves as ugly or disfigured when they aren’t – and suicidal thoughts.
TikTok and Snapchat settled with K.G.M. before trial for undisclosed sums, leaving Meta and Google as the remaining defendants. Meta CEO Mark Zuckerberg testified before the jury on Feb. 18.
The stakes extend far beyond one plaintiff. K.G.M.’s case is a bellwether trial, meaning the court chose it as a representative test case to help determine verdicts across all connected cases. Those cases involve approximately 1,600 plaintiffs, including more than 350 families and over 250 school districts. Their claims have been consolidated in a California Judicial Council Coordination Proceeding, No. 5255. This means potential awards could run into the billions of dollars.
The California proceeding shares legal teams and an evidence pool, including internal Meta documents, with a federal multidistrict litigation that is scheduled to advance in court later this year and brings together thousands of federal lawsuits.
Legal innovation: Design as defect
For decades, Section 230 of the Communications Decency Act shielded technology companies from liability for content that their users post. Whenever people sued over harms linked to social media, companies invoked Section 230, and the cases typically died early.
The K.G.M. litigation used a different legal strategy: negligence-based product liability. The plaintiff argued that the harm arises not from third-party content but from the platforms’ own engineering and design decisions, the “informational architecture” and features that shape users’ experience of content. Infinite scrolling, autoplay, notifications calibrated to heighten anxiety and variable-reward systems operate on the same behavioral principles as slot machines.
These are conscious product design choices. The plaintiff contended – and the jury agreed – that the platforms should be subject to the same safety obligations as any other manufactured product, thereby holding their makers accountable for negligence, strict liability or breach of warranty of fitness.
Judge Carolyn Kuhl of the California Superior Court agreed that these claims warranted a jury trial. In her Nov. 5, 2025, ruling denying Meta’s motion for summary judgment, she distinguished between features related to content publishing, which Section 230 might protect, and features like notification timing, engagement loops and the absence of meaningful parental controls, which it might not.
Here, Kuhl established that the conduct-versus-content distinction – treating algorithmic design choices as the company’s own conduct rather than as the protected publication of third-party speech – was a viable legal theory for a jury to evaluate. This fine-grained approach, evaluating each design feature individually and recognizing the increased complexities of technology products’ design, represents a potential road map for courts nationwide.
What the companies knew
The product liability theory depends partly on what companies knew about the risks of their designs. The 2021 leak of internal Meta documents, widely known as the “Facebook Papers,” revealed that the company’s own researchers had flagged concerns about Instagram’s effects on adolescent body image and mental health.
Internal communications disclosed in the K.G.M. proceedings have included exchanges among Meta employees comparing the platform’s effects to pushing drugs and gambling. Whether this internal awareness constitutes the kind of corporate knowledge that supports liability is a central factual question for the jury to decide.

There is a clear analogy to tobacco litigation. In the 1990s, plaintiffs succeeded against tobacco companies by proving they had concealed evidence about the addictive and deadly nature of their products. The plaintiff in K.G.M. is making the same core argument: Where there is corporate knowledge, deliberate targeting and public denial, liability follows.
K.G.M.’s lead trial attorney, Mark Lanier, is the same lawyer who won multibillion-dollar verdicts in the Johnson & Johnson baby powder litigation, signaling the scale of accountability the plaintiffs are pursuing.
The science: Contested but consequential
The scientific evidence on social media and youth mental health is real but genuinely complex. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) does not classify social media use as an addictive disorder. Researchers like Amy Orben have found that large-scale studies show small average associations between social media use and reduced well-being.
Yet Orben herself has cautioned that these averages might mask severe harms experienced by a subset of vulnerable young users, particularly girls ages 12 to 15. The legal question under the negligence theory is not whether social media harms everyone equally, but whether platform designers had an obligation to account for foreseeable interactions between their design features and the vulnerabilities of developing minds, especially when internal evidence suggested they were aware of the risks.
Two principles of negligence law frame that question. First, a manufacturer has a duty to exercise reasonable care in designing its product, and that duty extends to harms that are reasonably foreseeable. Second, the plaintiff must show that the type of injury suffered was a foreseeable consequence of the design choice. The manufacturer doesn’t need to have foreseen the exact injury to the exact plaintiff, but the general category of harm must have been within the range of what a reasonable designer would anticipate.
This is why the Facebook Papers and internal Meta research are so legally significant in K.G.M.’s case: They go directly to establishing that the company’s own researchers identified the specific categories of harm – depression, body dysmorphia, compulsive use patterns among adolescent girls – that the plaintiff alleges she suffered. If the company’s own data flagged these risks and leadership continued on the same design trajectory, that would considerably strengthen the foreseeability element.
Why it matters
Even if the science is unsettled, the legal and policy landscape is shifting fast. In 2025 alone, 20 U.S. states enacted new laws governing children’s social media use. And this wave is not confined to the U.S.: Countries such as the U.K., Australia, Denmark, France and Brazil are also moving forward with specific legislation, including bans on social media use for those under 16.
The K.G.M. trial represents something more fundamental: the proposition that algorithmic design decisions are product decisions, carrying real obligations of safety and accountability. If this verdict causes that framework to take hold, every platform will need to reconsider not just what content appears, but why and how it is delivered.
This is an updated version of an article originally published on March 6, 2026. It was updated to include the jury’s verdict.
– ref. Jury finds Instagram and YouTube addictive in lawsuit poised to reshape social media – platform design meets product liability – https://theconversation.com/jury-finds-instagram-and-youtube-addictive-in-lawsuit-poised-to-reshape-social-media-platform-design-meets-product-liability-277066