Experts caution that a new tool allowing Twitter users to moderate content may be counterproductive and ripe for abuse.
Following on the heels of its ban of Donald Trump, Twitter this week unveiled Birdwatch, which lets any user "add notes with helpful context" to any tweet they see as misleading.
Will Duffield, a policy analyst at the Cato Institute, warned, however, that when the topic being fact-checked is already contentious, "crowd-based moderation could just replicate the conflict you're trying to resolve." Duffield said "brigading," in which groups of Twitter users gang up on others, could manipulate Birdwatch results.
Daniel Castro, vice president of the Information Technology and Innovation Foundation, pointed out that Birdwatch notes linking to third-party "reputable sources" are boosted in the algorithm, meaning familiar disputes over which sources count as reputable may simply resurface. "By decentralizing responsibility, you're not necessarily getting different outcomes," he said.
The move comes as Twitter faces widespread criticism over its handling of misinformation surrounding the 2020 election. Twitter hopes to neutralize the issue by moving toward a more community-driven approach to moderation.
Upon entering Birdwatch, users reporting misinformation are asked to choose from a multiple-choice dropdown describing why a tweet is misleading. Choices include "It is a joke or satire that might be misinterpreted as a fact" and "It is a misrepresentation or missing important context." Users can then append a note explaining their concerns. Vice President of Product Keith Coleman explained that "eventually we aim to make notes visible directly on Tweets for the global Twitter audience."
Birdwatch is currently only available to U.S. residents who have no "recent notice of Twitter Rules violations." Participants will be admitted in batches as Twitter gathers data on the program's effectiveness.
Social media giants are struggling to find approaches to misinformation that satisfy all parties. Facebook recently announced that its Oversight Board would take up the question of whether Donald Trump should be permanently banned from the platform, and said it would be bound by the board's ruling.
By outsourcing fact-checking, Twitter hopes to avoid the sort of backlash it received in 2020, when fact-check labels originally rolled out to handle coronavirus misinformation were used to tag Trump tweets about the upcoming election.
One of Twitter's trusted third-party organizations for coronavirus fact checks, the World Health Organization, had previously tweeted that there was "no clear evidence of human-to-human transmission" of the virus and that masks were unnecessary for individuals without symptoms.