The censorship industry, the global network of “counter-disinformation” nonprofits and activists that work tirelessly to exert control over social media platforms, is banking on a new strategy to circumvent the First Amendment: frame tech companies as producers of harmful and addictive products, rather than platforms for speech.
This was evident at the latest Cambridge Disinformation Summit, an annual gathering of censorship industry professionals, policymakers, and academics at the University of Cambridge in the United Kingdom.
At the summit, numerous speakers, ranging from Gov. Spencer Cox of Utah to Sadiq Khan, the mayor of London, compared social media platforms to tobacco or drug companies — perhaps the most popular talking point in the censorship industry over the past few months.
Some of the remarks made at the summit:
- Sir Sadiq Khan, Mayor of London: “I think the way big tech is behaving is the way big tobacco behaved over the last six decades. I mean, it’s really serious.”
- Gov. Spencer Cox, Governor of Utah: “We’re treating this the way we treated the tobacco companies in the 1950s and 1960s in the United States. The way we’ve looked at the opioid companies in the nineties and the early 2010s.”
- Imran Ahmed, CCDH CEO: “These companies are no longer being treated as neutral conduits. They are treated instead as designers, as manufacturers, as actors responsible for what their systems produce.”
- Alan Jagolinzer, Professor, University of Cambridge: “If this is the correct framing, then tech platforms, in my opinion, should not escape the same level of scrutiny and accountability as pushers of tobacco or opiates.”
This is not a subtle rhetorical adjustment. It is a deliberate effort to move platform regulation out of the First Amendment’s jurisdiction and into a domain where liability, design standards, and duty of care dominate. Once speech becomes “harm,” platforms become not neutral intermediaries but defective products.
If platforms are “designers” and “manufacturers,” then the focus shifts naturally to algorithmic architecture—ranking systems, recommendation engines, and engagement optimization models. The claim is no longer that platforms merely host harmful content, but that their systems are engineered in ways that produce harm as an inherent outcome. In that frame, algorithmic design itself becomes the target of intervention.
By aligning social media with tobacco and opioids, advocates are gesturing toward a well-established litigation and regulatory playbook. Tobacco litigation in the United States and abroad succeeded not by arguing over speech, but by demonstrating that companies knowingly designed and marketed addictive products while obscuring harms. The opioid cases followed a similar trajectory, focusing on corporate knowledge, design, and distribution practices rather than debates over publishing and freedom of speech.
In short, the counter-disinformation industry hopes to move the debate over social media platforms onto terrain where American courts are less likely to dismiss cases on First Amendment grounds.
Sidestepping the First Amendment
As the Foundation for Freedom Online previously reported, counter-disinformation organizations like the Knight-Georgetown Institute (whose board is filled with individuals behind the first wave of censorship in 2018-2024) see litigation as the mechanism through which they can once again influence the internal mechanisms of social media platforms.
As their own documents show, the “harms” approach, framing tech platforms as producers of potentially harmful products rather than platforms for speech, is central to the litigation strategy.

Once the debate is anchored in public health, the threshold for intervention lowers. Governments routinely regulate products deemed harmful to consumers, often with little regard for First Amendment or free speech considerations. By recasting social media as an addictive product rather than a communications medium, advocates effectively sidestep the constraints that have historically limited direct state involvement in speech.
This helps explain why algorithmic design has become the focal point. If harm is the product, then redesign is the remedy. The question is no longer what users are allowed to say, but how platforms are permitted to shape what users see.