SUMMARY
- The Knight Foundation has a long history of bankrolling online censorship using the pretext of combating “disinformation,” including efforts targeted at swing states in 2024 to “fortify the election.”
- The Knight-Georgetown Institute has become a hub for big names in the disinformation industry, including James Baker, the FBI’s Russiagate lawyer and formerly chief counsel for Twitter.
- It also includes Alondra Nelson, who oversaw the Biden Administration’s whole-of-government initiative to suppress disfavored online speech.
- With federal influence unlikely in the near term, the Knight-Georgetown Institute is currently seeking to guide state lawmakers on regulating social media feeds, pushing design changes to suppress “toxicity” and “disinformation.”
- KGI reports on recommended changes to social media feeds include praise for NewsGuard and Google Jigsaw.
The Knight Foundation—long marketed as a philanthropic champion of journalism and democracy—has quietly become one of the most prolific funders of censorship infrastructure in America.
With over $107 million spent since 2016 on so-called “disinformation” research, tech regulation, and “trust and safety” issues, Knight has evolved from a grant-making institution into a central hub of the censorship-industrial complex. In 2024, it specifically targeted spending at swing states to “fortify the election.”
A recent project of the Foundation is the Knight-Georgetown Institute (KGI), co-founded with Georgetown University in 2024 to combat “toxic” online content. Its “about” page warns: “the technologies that once promised to usher in a new era of democracy are also being used to spread disinformation.”
With its anti-disinformation mission, KGI is seeking to establish itself as a source of expert opinion to state lawmakers on the regulation of recommender systems—the algorithmic feeds that decide what billions of people see every day. This summer, it shopped a toolkit to state legislators to accomplish this goal. As its website states, “KGI also acts as a convener, building relationships between leading academics and key decisionmakers.”
The Board: A Disinformation Industry Who’s Who
KGI’s board is stacked with figures whose résumés read like a ledger of the U.S. censorship apparatus:
- James Baker, the FBI’s top lawyer during Russiagate, who later joined Twitter before being dismissed during Elon Musk’s takeover for allegedly suppressing the Twitter Files.
- Nabiha Syed, now at Mozilla, who helped defend the Steele Dossier in court.
- Alondra Nelson, Biden’s acting director of the White House Office of Science and Technology Policy, who oversaw a sprawling federal interagency misinformation crackdown spanning 26 agencies, 14 universities, and 20+ NGOs.
- John Sands, who directed over $100 million in Knight Foundation grants targeting “information and technology in the context of our democracy.”
Algorithmic Social Engineering
In March 2025, KGI released Better Feeds, its flagship paper urging platforms to adopt three “better” algorithmic approaches:
- Bridging: prioritize content that fosters “positive dialogue,” i.e., engineered civility over authentic debate.
- Surveys: continuously nudge users by asking whether they want to see more or less of certain content, a form of feedback-driven content curation.
- Quality: elevate “quality” content while downgrading language flagged as “toxic.”
In the latter section, KGI describes straightforward AI-based shadow-banning technologies that would automatically downrank any content considered “toxic” or low-quality.
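The kind of score-based downranking KGI describes can be sketched in a few lines. The sketch below is purely illustrative: the `toxicity_score` function is a hypothetical stand-in for a trained classifier (a real system would call something like Perspective), and the word list and penalty weight are invented for demonstration.

```python
# Toy sketch of score-based downranking ("shadow-banning") of flagged content.
# toxicity_score is a hypothetical placeholder for a real classifier.

def toxicity_score(text: str) -> float:
    """Hypothetical toxicity classifier: returns a score in [0, 1]."""
    flagged_terms = {"idiot", "moron"}  # illustrative word list only
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 5)

def rerank(posts: list[str], penalty: float = 0.8) -> list[str]:
    """Downrank posts in proportion to their toxicity score."""
    scored = [(1.0 - penalty * toxicity_score(p), p) for p in posts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored]

feed = ["Great analysis of the bill", "You idiot, that's wrong"]
print(rerank(feed))  # the flagged post drops below the neutral one
```

Note that nothing in this pipeline requires notifying the author that their post was demoted, which is what makes the approach a form of shadow-banning.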

In the citations, KGI recommends two well-known censorship tools, NewsGuard and Google Jigsaw’s Perspective AI. NewsGuard is known for promoting the suppression and financial blacklisting of disfavored news sources; outlets it has previously blacklisted include prominent conservative publications such as Breitbart News, Newsmax, and the Federalist.

Perspective, meanwhile, is a project of Google Jigsaw, the tech giant’s in-house censorship research workshop. Originally pitched as an effort to combat Islamic extremism online, Google Jigsaw pivoted after the first Trump victory in 2016 to target the political right in America. Within days of the election, Google executives labeled Trump voters as “extremists” and discussed using Jigsaw as a response. Perspective AI is Google Jigsaw’s AI censorship superweapon, scanning online conversations to detect “toxicity” and “trolling.”
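Based on Jigsaw’s public documentation, a Perspective query is a small JSON payload POSTed to the `comments:analyze` endpoint, which returns per-attribute scores in [0, 1]. The sketch below only constructs the request body, makes no network call, and uses a placeholder for the API key:

```python
import json

# Sketch of a Perspective API toxicity request, per Jigsaw's public docs.
# Endpoint (not called here):
#   POST https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=API_KEY

def build_perspective_request(text: str) -> str:
    """Build the JSON body for a TOXICITY analysis request."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},  # scores returned in [0, 1]
    }
    return json.dumps(body)

payload = build_perspective_request("This is an example comment.")
print(payload)
```

A platform wiring this into its feed would typically threshold the returned TOXICITY probability and demote anything above the cutoff, exactly the downranking pattern described above.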
Combating ‘Misinformation’ and ‘Toxicity’
In a webinar promoting Better Feeds, KGI explains how social media platforms can use three alternative, “better” approaches to creating algorithms: bridging, surveys, and quality.
Bridging focuses on promoting productive dialogue or positive emotions rather than trying to eliminate conflict, noting that many current platform algorithms often prioritize high-conflict engagement content.
Matt Motyl, a senior adviser at the University of Southern California’s Neely Center for Ethical Leadership and Decision-Making, served on Meta’s Civic Integrity team in the lead-up to the 2020 presidential election and in Meta’s Social Responsibility unit, where he worked on coronavirus research shared with the White House Coronavirus Task Force.
Motyl, who helped craft the report on recommender systems in platform algorithms, said during the webinar that a bridging-based algorithm “is particularly effective at combating misinformation, toxicity, and a lot of the other harms that we see online that might kind of grow in these echo chambers, where people are communicating primarily with people that are similar to them and not reaching across different boundaries.”
The third alternative recommender system is a focus on “quality-based systems.” Motyl admits “quality” is a subjective term, but says content could be evaluated “by whether it contains curse words or toxic language.”
He added that a higher quality outlet could be if it has “award-winning journalists” or if other outlets link or share the outlet’s content.
Motyl went on to say that Google’s Perspective API has “done a great job.” He said that a platform could use large language models (LLMs) to evaluate content quality based on the usage of proper language, or on whether the outlet is “calling people names or using four-letter words.”
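The “quality” signals Motyl describes (curse-word counts, whether other outlets link to a source) reduce to simple arithmetic once chosen. The toy function below illustrates that mechanic; every word list, weight, and cap in it is invented for illustration and is not from the KGI report.

```python
# Toy illustration of the "quality" heuristics described in the webinar:
# penalize curse words, reward outlets that other outlets link to.
# All word lists, weights, and caps here are invented for illustration.

CURSE_WORDS = {"damn", "hell"}  # illustrative only

def quality_score(text: str, inbound_links: int) -> float:
    """Score text quality: start at 1.0, subtract for curse words,
    add a capped bonus for inbound links from other outlets."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    curse_penalty = sum(1 for w in words if w in CURSE_WORDS) * 0.2
    link_bonus = min(inbound_links, 10) * 0.05  # cap the linking reward
    return max(0.0, 1.0 - curse_penalty) + link_bonus

print(quality_score("A careful report on the budget.", inbound_links=4))
```

The point of the sketch is that each “quality” criterion is a subjective editorial judgment frozen into a constant: change the word list or the weights and the ranking changes with them.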
Webinar participants noted that in 2023 and 2024, 35 states introduced some 75 bills aiming to address algorithmic harms related to social media, and that more than 500 lawsuits have been filed alleging such harms.
The Knight Foundation’s Track Record
As of June 2024, the Knight Foundation said it had donated more than $107 million toward researching and fighting disinformation, algorithmic bias, antitrust enforcement, and related topics. This funding has helped the Knight Research Network produce more than 900 publications across 300 journals and media outlets, and its researchers have testified before Congress 15 times. Notable recipients of Knight Foundation grants include:
- The Global Disinformation Index (GDI). Knight Foundation said it backed the Index’s research on how “extremist groups use digital platforms to spread disinformation and violent rhetoric to finance their activities.” As has been widely documented, GDI actually targeted conservative news outlets.
- The McCain Institute, which has used its Invent2Prevent competition to target online speech.
- The Carnegie Endowment for International Peace, another well-known hub of censorship research.
- In 2019, Knight Foundation funded a new research center at Carnegie Mellon University, the Center for Informed Democracy and Social Cybersecurity (IDeaS), to “fight online disinformation.”
- Knight also funded George Washington University’s Institute for Data, Democracy, and Politics, which targeted vaccine skepticism.
Sample Knight Foundation-funded grants include:
- A grant to the University of Minnesota, “Trusted Messengers Can Leverage Connections to Combat Disinformation about Black Communities in Black Communities,” to develop a scalable model for addressing dis/misinformation in Black communities.
- A grant to Media Justice, “Resourcing communities of color to combat disinformation,” to develop and implement a train-the-trainer program to disrupt the spread of disinformation in communities of color.
In 2024, the Knight Foundation directly intervened in the presidential election, with just under $7 million in funding for newsrooms in swing states. Maribel Pérez Wadsworth, CEO of the Knight Foundation, insisted to The Chronicle of Philanthropy that Knight was not taking sides in the election.
She went on to use Russiagate as an excuse. Per the Chronicle, Wadsworth said that the grant focused on newsrooms in swing states due to U.S. intelligence reports that Russia continues to flood social media sites with false information, especially related to those states, in an attempt to secure the election of Donald Trump.
Partisan Sympathies
Leticia Bode is the research director for KGI. Her background includes CDC research on how to reduce coronavirus- and vaccine-related misinformation, as well as work on correcting “misperceptions” about genetically modified food. In 2024, Bode conducted misinformation research on the presidential election with the support of the Tech & Public Policy program at Georgetown’s McCourt School of Public Policy.
Bode does not conceal her partisan sympathies. In August, she shared a post on Bluesky stating “it’s been a shitty week on top of a shitty month on top of a shitty year,” a quote from a Substack post that claimed Trump and Republicans are destroying institutions and democratic norms that “can take decades to build up.” The Substack post she shared referred to the president’s actions as “Stalinist.”

Bode reposted a study on Bluesky from another user that discussed “correcting publicly to build social norms around responding to misinformation” and called for social media platforms to “promote corrections & take action against toxic behaviors.”
The KGI research director also cheered another user for attacking Meta CEO Mark Zuckerberg’s move away from fact-checking content on its platforms as a “craven political move.”
“I appreciated this take on how much blame the media should get in (not) covering Trump,” she wrote in September 2024, quoting another Substack post that stated, “The blame for Trump falls squarely on Trump himself, his eager allies, and a Republican Party that quit fighting against him eight-plus years ago.”
Bode released a video on how to “slow down misinformation” ahead of the 2024 election, urging online users to call out misinformation and to always “cite a credible source.”
In the associated article for the video, Bode cites a study claiming that misinformation relied on similar election tropes across 16 Latin American countries, as well as in the 2020 U.S. election. The study came from the LatamChequea coalition, a network of Latin American, Spanish, Portuguese, and American fact-checkers.
The Bode-cited study attacks former Brazilian leader Jair Bolsonaro for alleging that there was election fraud during the 2018 elections. It compares the allegation to the “Big Lie” promoted by President Donald Trump that the 2020 presidential election was stolen.