What the Meta and Google verdict means for social media design

A Los Angeles jury has delivered a verdict in the first bellwether social-media-addiction case to go to trial. On March 25 jurors found Meta and Google negligent in designing Instagram and YouTube and in failing to warn users about their risks. They awarded the plaintiff $6 million in damages, with Meta assigned 70 percent of the liability and Google 30 percent.

The verdict alone does not set precedent, and both companies say they will appeal. But it turns a long-running argument about social media into a live legal question: Should the law treat the modern feed as protected publishing or as a product whose design can be judged for safety?

It is also a test case in a much larger fight: roughly 1,600 cases are pending in California alongside more than 10,000 individual cases and some 800 school district claims nationwide. The day before the Los Angeles verdict was reached, a New Mexico jury found Meta liable under the state’s consumer protection law for misleading consumers about the safety of Facebook, Instagram and WhatsApp and for enabling child sexual exploitation on those platforms.


The plaintiff, identified by the initials K.G.M. and now 20 years old, testified that she began using YouTube at age six and Instagram at age nine. But rather than focus on the specific videos and posts she saw, her lawyers focused on the design of the products themselves—features such as infinite scroll and autoplay and the systems built to keep serving up more.

That framing is how the plaintiff sought to sidestep Section 230 of the Communications Decency Act of 1996, which shields Internet companies from liability over user-generated content. Instead of treating Instagram and YouTube chiefly as hosts for other people’s speech, the lawsuit treats some of their core features as design choices with foreseeable harms—especially when children are using them.

Gregory Dickinson, an assistant professor of law at the University of Nebraska, who specializes in Section 230 and product liability, says the line between content and product design—for instance, between what a book contains and how it is printed—does exist in the case law, even if the boundary remains unsettled. He thinks developers of social media platforms land closer to book printers—and that the analogy actually understates the case. “Imagine a slot machine that knew all your favorite games, buzzed in your pocket when your friends started playing and automatically spun the next round unless you opted out,” he says. “That gets you closer to what social media is doing.” The claim is about the machine itself. Section 230’s “core function was to prevent crushing content-moderation burdens from being imposed on Internet intermediaries,” he says. “If the claim is instead, ‘You should not have built this specific engagement-maximizing feature in the first place,’ then Section 230 is much less necessary.”

Eric Goldman, a professor at Santa Clara University School of Law and a longtime Section 230 advocate, sees that distinction as unstable. “Social media services are content publishers,” he says. “Trying to distinguish between the content and the other publication decisions associated with their gathering, organizing and disseminating content is illusory in my mind.”

Goldman’s concern is structural. “If plaintiffs can focus on how a service is designed, rather than the content that’s delivered via that design, they will always do so,” he says, “and according to this court, that means that they will always get around Section 230—and as a result, Section 230 is essentially eviscerated.”

Whatever happens on appeal, the case puts a set of engineering choices under new scrutiny. Arturo Béjar, a former Facebook engineering leader who built safety tools at the company and later testified before the U.S. Senate in 2023, says the disputed features were built first to drive engagement. “Infinite scroll, autoplay were designed to increase the amount of time spent,” he says. “Notifications are chosen for the rate at which they bring people back into the app.”

He says those features “were not subject to any meaningful safety reviews. In particular, the safety question of ‘What is the harm that is intrinsic to the feature?’ was not asked or explored.” Safety protections, he says, got stripped through internal review. “Features that at conception would have provided meaningful safety got whittled down” through what the company called the minimum viable product process “so that the end result was ineffective at providing safety.”

The features under dispute—ranking systems optimized for retention, endless feeds, defaults that favor passive consumption and notifications—are product decisions engineered to hold our attention. Béjar, who worked at Facebook from 2009 to 2015 and was a consultant for Instagram from 2019 to 2021, offers examples of the trade-offs he says he saw from the inside. He recalls that Instagram once implemented a session-limit mechanism that displayed a “you’re all caught up” message. Later, suggested posts were introduced at the bottom of the feed, allowing people to keep scrolling. He offers a sense of scale: during his second stint at what is now Meta, the company employed approximately 30,000 engineers, but the portion of the well-being team focused on key teen issues—suicide, mental health—numbered fewer than 20 people.

In practice, safer design means more friction and less compulsion: defaults that are less aggressive, features that ask users to actively opt in and products that do not automatically assume the goal is to keep a person around for as long as possible.

Researchers at Carnegie Mellon University’s Human-Computer Interaction Institute, including Hank Lee and his Ph.D. adviser Sauvik Das, have tried to measure what happens when you undo some of those design choices. Their team built Purpose Mode, a browser extension that strips attention-capture elements—infinite scroll, autoplay, algorithmic recommendations—from social media platforms.

Lee says participants in a study of Purpose Mode felt less distracted, spent less time on the sites—about 21 fewer minutes per day on average—and, in some cases, liked the platforms more when these features were reduced. It was a small study, but it suggests that at least some of the mechanics now being litigated are changeable—and that dialing them back does not necessarily ruin the experience.

Some of the most familiar design choices suddenly appear less inevitable. Autoplay could be off by default. Notifications could become rarer and easier to disable. Recommendation systems could be less aggressive, especially for younger users. More of the product could be designed to help users take a break rather than to stop them from doing so.

None of that would be free; the same features that keep users scrolling are often the ones that boost engagement, ad inventory and return visits.

Goldman says that Meta and Google could challenge the verdict on several grounds: that product liability law was built for physical products and physical injuries, that causation was not proved cleanly in a case involving preexisting trauma, that the First Amendment protects editorial discretion, and that Section 230 should apply to design and dissemination alike because, in practice, the two are hard to separate.

Dickinson agrees that the appeal will be fought on legal, not factual, terrain. Appellate courts generally defer to juries on questions of evidence and causation, which means the plaintiff’s victory on the facts—that platform design caused her harm—will be difficult to overturn. “For the plaintiffs, that is the biggest advantage: they are now in a strong posture on the facts,” he says. “Their harder task on appeal will be the legal one”—persuading the appellate court that the design-versus-content distinction survives scrutiny under Section 230 and the First Amendment.

The verdict will not remake social media immediately. But it does weaken one defense of the modern feed: that endless scroll, autoplay and aggressive notifications are simply the benign background conditions of online life. They are choices—ones that can be changed and, now, judged. As Béjar puts it: “Can you please make products that are not addictive to children?”
