The Biden administration is set to dole out more than $550,000 in grants to develop an artificial intelligence model that can automatically detect and suppress microaggressions on social media, government spending records show.
The award, funded through President Joe Biden’s $1.9 trillion American Rescue Plan, went to researchers at the University of Washington in March to develop technologies that could be used to protect online users from discriminatory language. The researchers have already received $132,000 and expect total government funding to reach $550,436 over the next five years.
The researchers are developing machine-learning models that can analyze social media posts to detect implicit bias and microaggressions, commonly defined as slights that cause offense to members of marginalized groups. It’s a broad category, but past research conducted by the lead researcher on the University of Washington project suggests something as tame as praising meritocracy could be considered a microaggression.
The Biden administration's funding of the research comes as the White House faces growing accusations that it seeks to suppress free speech online. Biden last month suggested there should be an investigation into Tesla CEO Elon Musk’s acquisition of Twitter after the billionaire declared the social media app would pursue a "free speech" agenda. Internal Twitter communications Musk released this month also revealed a prolonged relationship between the FBI and Twitter employees, with the agency playing a regular role in the platform's content moderation.
Judicial Watch president Tom Fitton likened the Biden administration’s funding of the artificial intelligence research to the Chinese Communist Party’s efforts to "censor speech unapproved by the state." For the Biden administration, Fitton said, the research is a "project to make it easier for their leftist allies to censor speech."
A spokesman for the National Science Foundation, which issued the research grant, rejected criticism of the project, saying it "does not attempt to hamper free speech." The project, the spokesman said, creates "automated ways of identifying biases in speech" and addresses the biases of human content moderators.
The grant’s description does not give examples of comments that would qualify as microaggressions, though it acknowledges they can be unconscious and unintentional. The project is led by computer science professor Yulia Tsvetkov, who has authored studies suggesting the artificial intelligence model might identify and suppress language many would consider inoffensive, such as comments praising the concept of meritocracy.
Tsvetkov coauthored a 2019 study titled "Finding Microaggressions in the Wild," which categorized microaggressions into subcategories, one of which was the "myth" that "differences in treatment are due to one’s merit." Examples of microaggressions laid out in the paper included statements like "Your mom is white, so it’s not like you’re really black," and questions including "But where are you from, originally?"
Tsvetkov also coauthored a July article that analyzed the "prominence of positivity in #BlackLivesMatter tweets" during the June 2020 George Floyd riots. Tsvetkov and her colleagues determined positive emotions like "hope, pride, and optimism" were prevalent in pro-Black Lives Matter tweets, evidence they said contradicts narratives framing Black Lives Matter protesters as angry.
Conservative watchdog groups raised alarm over the Biden administration’s funding of the research, telling the Washington Free Beacon the project represents a White House effort to curb free speech online.
"It’s not the role of government to police speech that some might find either offensive or emotionally draining," said Dan Schneider, vice president of the Media Research Center’s free speech division. "Government is supposed to be protecting our rights, not suppressing our rights."
Tsvetkov did not respond to a request for comment regarding free speech advocates’ concerns about the research.
The research is the latest instance of the government assuming a role in online content moderation. The Biden Department of Homeland Security established a Disinformation Governance Board with the goal of "countering misinformation," only to scrap the controversial board after intense backlash.