In a recent address to fellow disinformation researchers, Center for Countering Digital Hate (CCDH) co-founder and CEO Imran Ahmed delivered what amounted to a victory speech, telling the assembled audience that the long campaign to dismantle online speech protections has finally reached the enforcement phase.
Critics of the “disinformation industry” have long described a sprawling network of NGOs, regulators, academics, and journalists working in concert to impose centralized speech controls under the banner of safety. The remarks of Ahmed, whose stated goals, per leaked CCDH communications, include the aim to “kill Musk’s twitter,” confirm the picture of professional censors masquerading as disinterested academics.
After spending much of his speech bemoaning attacks on people in his field, including himself (his visa was cancelled by the State Department over his role in censoring Americans), he offered an extended description of the field’s role that confirms, in his own words, the charge that this community functions as a decentralized network of professional censors, producing the evidentiary inputs on which downstream regulatory, judicial, and enforcement action is built:
The work of this room does not merely describe reality. Increasingly, it is what forces reality to change. The evidence you produce is the foundation on which everything else depends. Without it, there is no legislation, litigation or enforcement. Without it, power remains deniable, harm remains dismissible. Without the evidence you produce, the status quo survives by insisting what is plain to ordinary people cannot yet be proven. You are not watching this fight from a distance. You are the reason it exists. Every dataset, every methodology, every finding that makes the invisible visible and the deniable undeniable, that is what gives lawmakers the courage to act, regulators the mandate to investigate, and courts the evidential basis on which to hold power to account.
Defenders of the disinformation research field typically describe it as a neutral, academic, or journalistic enterprise concerned with understanding online information environments. Ahmed’s framing tells a different story. In his own telling, the research is not the description of a problem but the production of a deliverable. Datasets, methodologies, and findings exist in order to give “lawmakers the courage to act,” regulators “the mandate to investigate,” and courts “the evidential basis” for adverse rulings against platforms. Without this output, in his words, “there is no legislation, litigation or enforcement.” The research community is, on its own most prominent founder’s account, the upstream supplier of justifications for downstream regulatory and judicial action against American technology companies.
Destroying Platform Neutrality
The concept of platform neutrality is a bedrock principle guarding freedom of speech online. Epitomized by statutes like Section 230 of the Communications Decency Act in the U.S., it is the notion that platforms for online speech should not be held legally liable for content posted by their users. This allows platforms to host billions of items of user-generated content without opening themselves up to countless lawsuits.
This is the principle that Ahmed and CCDH hope to destroy.
We allowed private platforms to shape public life while disclaiming any responsibility for the consequences. And that moral immunity was reified into statute by measures such as Section 230 in the United States and the e-commerce directive in Europe.
The central legal fiction of the social media age, that these are passive pipes with no agency and no duty, is beginning to collapse in courts under the weight of evidence and changing public sentiment. And finally, real regulation this time with teeth.
Ahmed rejects outright the notion that platforms should serve as conduits for free expression with content filtering left to individual users. His preferred model treats platforms as actors with a legal “duty of care,” a phrase imported from the UK Online Safety Act that converts every speech-hosting service into something closer to a regulated utility answerable to government bodies for what its users say.
Ahmed highlighted his own part in dismantling the neutrality principle, taking credit for CCDH’s influence on the UK’s Online Safety Act:
In 2021, I was the opening witness before a joint committee of Parliament drafting the Online Safety Act. And in 2023, the UK Online Safety Act and the EU Digital Services Act became law. For the first time anywhere in the world, platforms were being treated as accountable actors. And in December 2025, British and European authorities began using those powers in earnest. The EU fined X 120 million euros for violating the Digital Services Act.
SEE ALSO: How the UK’s Online Safety Act Impacts Americans
By Ahmed’s own account, CCDH was created in 2019 to exert external influence over social media content moderation decisions, including by encouraging the state to regulate tech companies. Within two years, its founder was the opening witness before the UK Parliament drafting what would become the Online Safety Act, the most expansive speech-regulation regime in the Anglosphere. Within four years, both that Act and the EU’s Digital Services Act were on the books. Within six, European regulators were fining American platforms hundreds of millions of dollars for hosting disfavored content.
The Harms Model
Ahmed next celebrated the parallel American front, in which trial lawyers, state attorneys general, and now municipal governments have begun reframing social media not as a publisher or a forum but as a defective consumer product, modeled on the litigation strategy used against the tobacco industry:
And then there’s litigation. Two weeks ago, a New Mexico jury ordered Meta to pay $375 million for knowingly harming children’s mental health and concealing what it knew about child sexual exploitation on its platforms. The very next day, a jury in Los Angeles found Meta and YouTube negligent for designing products that harmed children. That case is a bellwether for thousands more. Two verdicts, two days, two juries of ordinary Americans saying enough. And on the same day as the New Mexico verdict, Baltimore became the first major American city to sue xAI over harms tied to Grok, explicitly citing evidence produced by CCDH. Courts are now asking a question that would’ve been unthinkable five years ago. Did the design of this product cause this harm? And it means these companies are no longer being treated as neutral conduits. They are treated instead as designers, as manufacturers, as actors responsible for what their systems produce.
SEE ALSO: Social Media as Nicotine: ‘Harms’ Approach Takes Center Stage at Cambridge Disinfo Summit
This is the most operationally significant section of the speech. Federal legislators have, for the most part, declined to repeal Section 230. Federal courts have repeatedly struck down state-level efforts to regulate online speech directly on First Amendment grounds. The “consumer harm” strategy is the workaround. By recharacterizing the platform itself as a dangerous product whose design caused injury, rather than as a publisher of constitutionally protected speech, litigators can drive billion-dollar verdicts that produce the same chilling effect as direct regulation while sidestepping much of the constitutional scrutiny that would otherwise apply.