SUMMARY
- Established in 2024, the Knight-Georgetown Institute is laser-focused on pushing design changes to social media platforms to limit “toxicity,” “polarization,” and “disinformation.”
- Its steering council includes James Baker, the FBI’s Russiagate lawyer and formerly chief counsel for Twitter, as well as Alondra Nelson, who oversaw the Biden Administration’s whole-of-government initiative to suppress disfavored online speech.
- At a recent event in Washington D.C., KGI researchers detailed the results of ongoing experiments on how design changes to social media platforms can shift public attitudes and behaviors.
- One KGI scholar sketched out a goal of “rebuilding” the entire “information system environment.”
- KGI researchers also expressed optimism about recent court defeats for Meta, which held the company liable for adverse mental health outcomes tied to its design choices.
Since the end of 2016, a distributed network of NGOs, research institutes, and private companies has been working tirelessly to figure out how to reassert centralized influence over public attitudes by changing or censoring social media platforms.
One of the newest of these is the Knight-Georgetown Institute. It was established in 2024 at Georgetown University by the Knight Foundation — one of the biggest bankrollers of the censorship industry, having spent over $100 million since 2016 on “counter-disinformation” research.
Since its founding, KGI has focused squarely on a single priority: determining how changes to the design of social media platforms can influence public attitudes, and then influencing tech companies and public policy to achieve the desired design changes.
Its “about” page warns: “the technologies that once promised to usher in a new era of democracy are also being used to spread disinformation.”
KGI’s steering council includes some big names from the era of online censorship, including:
- James Baker, the FBI’s top lawyer during Russiagate, who later joined Twitter before being dismissed during Elon Musk’s takeover for allegedly suppressing the Twitter Files.
- Nabiha Syed, now at Mozilla, who helped defend the Steele Dossier in court.
- Alondra Nelson, Biden’s acting director of the White House Office of Science and Technology Policy, who oversaw a sprawling federal interagency misinformation crackdown spanning 26 agencies, 14 universities, and 20+ NGOs.
- John Sands, who directed over $100 million in Knight Foundation grants targeting “information and technology in the context of our democracy.”
Control the Algorithms, Control the Attitudes
KGI is constantly seeking to refine its understanding of how changes to the design of social media platforms can influence public attitudes and political outcomes. At a recent event on the Georgetown University campus in Washington D.C., KGI scholars elaborated on the results of their recent research.
SEE ALSO: From Narrative Bans to Design Tweaks: Shifting Tactics in the Censorship Industry
One Georgetown scholar, Nejla Ašimović, noted that small design tweaks to social media feeds can alter the behavior and speech patterns of both Democrats and Republicans:
In February, I had around 1,200 Democrats and Republicans having conversations about policy-relevant topics within different feeds. So one feed was pushing up content that reaches cross-group agreement between the groups. One made partisan identities more salient, and one was just chronological. And it’s really fascinating to see how behavior changes.
When you start pushing common ground… people also tend to use more nuance, more hedged language, they say more ‘I think,’ ‘it may be,’ and so on… Which is also to say that algorithms really do have a strong effect on how people speak, not just on what they see.
Another notable finding by Ašimović, one that in fact conflicts with some of the favored narratives of the counter-disinformation industry, is that removing people from social media entirely doesn’t necessarily moderate their viewpoints.
And what I found is that in Bosnia specifically, having people go off of social media actually led to worse attitudes, but specifically driven by those in ethnically homogenous offline communities. So they’re the ones driving the effect. And what’s the story there?
Ašimović concluded that control must be asserted in offline spaces as well:
We often imagine [offline] is this beautiful space, but also offline, there’s a lot of hostility, and it may have put them even further into these offline echo-chambers. Now, this is not an argument for letting platforms off the hook in any way. It is just pointing to what I think is a key variable: thinking about what the opportunities are for constructive intergroup contact in the offline spaces as well.
She reiterated KGI’s core goal: designing social media algorithms that shape attitudes and behavior in the direction KGI favors.
According to Ašimović, some of the newly refined algorithms are now ready to be “put into practice,” with the ultimate goal of “rebuilding” the entire “information system environment” through “small changes compounding over time.”
Engagement is optimizing for the time spent or for the number of likes, shares, and so on… because of course that is good for business, right? Having more people on these platforms, and having them clicking, is what they’re optimized for.
But we have a lot of other better designs and I think now we’re at this point where we can think about how do we actually [put] them into practice. And I know some of the platforms are already starting to think about that, and providing users opportunities to have these different platform designs, which I’m particularly excited about.
And I think the last point I want to make is that these are not quick fixes. Sometimes we are disappointed by the small coefficient size or just 1% of this scale and so on. But it took so long to build the current information system environment, and it is going to take a long time also to rebuild it. So if we think about these small changes compounding over time, I think that is a more hopeful thing.
Lawfare Against Big Tech
In the Q&A session, one of the attendees at KGI’s panel, Johns Hopkins computer scientist Tiziano Piccardi, expressed optimism about a recent landmark court ruling against Meta, which held the company liable for adverse mental health outcomes stemming from its product design.
Piccardi said the ruling may “force” Meta to reconsider its algorithm design.
Questioner: Hi, I’m an MA student at Georgetown. Yesterday there was this lawsuit that got finalized, and they found Meta and Google liable for having addictive algorithms that were catastrophic for teenagers’ mental health. So my question is, what do you think the implications of these are for the sociopolitical landscape in general online? Are we optimistic about this? Should we be optimistic about possible regulations that might come out of this? Or do you think this is still going to be pretty much an uphill climb with the appeals process and all?
Piccardi: It can be a force for some changes in the platform design, because obviously there is [inaudible] about what they recommend… So it may force them to reconsider their design.
The court judgment, which also affected Alphabet’s YouTube, validated a novel legal approach against tech companies — holding them liable for adverse health outcomes, the same playbook used against tobacco companies in the 20th century.
KGI sees this as a promising area of pressure — so much so that it has an initiative called “Litigating Platform Design,” with the goal of “ensur[ing] that litigation plays a transformative role in safeguarding the public from potential harms of digital platform design.” The goal, as explained by KGI on its website, is not just to influence litigation in the US but also around the world:
Through research, stakeholder interviews, interdisciplinary convenings, and the development of model resources, the initiative informs litigation strategies, expands access to data and evidence, facilitates empirically-grounded design reforms, and catalyzes sustained collaboration across disciplines. While Litigating Platform Design is focused on the litigation context in the US, lessons and learning are relevant for emerging litigation in other jurisdictions, including the UK, Europe, East Africa, and beyond.
As FFO has previously reported, KGI is seeking to establish itself as a source of expert opinion to state lawmakers on the regulation of recommender systems—the algorithmic feeds that decide what billions of people see every day. Last summer, it shopped a toolkit to state legislators to accomplish this goal. As its website states, “KGI also acts as a convener, building relationships between leading academics and key decisionmakers.”
As lawfare weakens the resolve of tech companies, KGI hopes to position itself as the leading authority on the social media design changes that may follow.