Misinformation and Community Notes
Growing up in Taiwan, I’ve witnessed how political polarization and eroding trust in government have become critical issues. A prime example is the recent controversy surrounding former Taipei Mayor Ko Wen-je, who faces allegations over the Jinghua City project and political donations. A poll by the Taiwanese Public Opinion Foundation (TPOF) revealed sharp partisan divisions in how the case is perceived:
- Democratic Progressive Party (DPP) supporters: 73% satisfied, 12% dissatisfied.
- Kuomintang (KMT) supporters: 24% satisfied, 58% dissatisfied.
- Taiwan People’s Party (TPP) supporters: 10% satisfied, 85% dissatisfied.
- Neutral voters: 22% satisfied, 30% dissatisfied, 48% undecided.
This polarization underscores how trust in government institutions is heavily influenced by political alignment. A major driver of this divide appears to be the information people consume. Supporters of different parties often rely on vastly different news sources, many of which are rife with misinformation. Without a shared foundation of factual evidence, meaningful discussion and consensus become impossible. This erodes the very essence of democracy, which thrives on dialogue rooted in truth. The challenge of combating misinformation is at the heart of this issue.
Tackling Misinformation
In the face of rampant misinformation, Meta has partnered with professional organizations certified by the International Fact-Checking Network (IFCN) to flag and label false information. However, this approach has faced criticism for perceived bias and overreach, potentially undermining user trust. Mark Zuckerberg himself has acknowledged these flaws and recently shifted toward user-driven moderation, modeled on Community Notes on X (formerly Twitter).
Community Notes is a crowdsourced fact-checking system designed to add context to potentially misleading tweets. By leveraging contributions from diverse users, the system aims to counter misinformation more transparently and inclusively. Here’s how it works:
- Users join as contributors, initially rating existing notes to build their credibility before writing their own.
- Contributors write notes that label a tweet as misleading or not and add context; other contributors then rate each note’s helpfulness.
- A bridging algorithm marks a note as “Helpful” only when contributors who typically disagree both rate it highly (see the sketch after this list).
- All notes and ratings are public, and the scoring algorithm is open source.
- Only “Helpful” notes are displayed, maintaining quality control.
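The open-source scorer is built on matrix factorization: each rating is modeled as a global intercept, plus a user intercept and a note intercept, plus the product of one-dimensional user and note factors. The factor axis soaks up viewpoint-correlated agreement, so a note’s intercept, the signal that decides “Helpful” status, stays high only when raters on both sides of the axis endorse it. Here is a minimal sketch of that idea; the toy data, hyperparameters, and training loop are illustrative stand-ins of my own, not the production implementation.

```python
import numpy as np

# Model: rating ~ mu + user_intercept[u] + note_intercept[n] + user_factor[u] * note_factor[n]
# The 1-D factor term absorbs viewpoint-driven agreement, so a note earns a
# high *intercept* only when raters from both "camps" find it helpful.

rng = np.random.default_rng(0)

# Hypothetical ratings (user, note, rating): 1.0 = Helpful, 0.0 = Not Helpful.
# Users 0-2 form one camp, users 3-5 the other.
ratings = []
for u in range(6):
    ratings.append((u, 0, 1.0))                    # note 0: cross-camp consensus
    ratings.append((u, 1, 1.0 if u < 3 else 0.0))  # note 1: polarized along camp lines
    ratings.append((u, 2, 0.0 if u < 3 else 1.0))  # note 2: polarized the other way
n_users, n_notes = 6, 3

mu = 0.0
u_int, n_int = np.zeros(n_users), np.zeros(n_notes)
u_fac = rng.normal(0, 0.1, n_users)
n_fac = rng.normal(0, 0.1, n_notes)

# Illustrative hyperparameters; intercepts are regularized harder than factors,
# mirroring the spirit (not the exact values) of the published scorer.
lr, lam_int, lam_fac = 0.05, 0.15, 0.03
for _ in range(3000):
    for u, n, r in ratings:
        pred = mu + u_int[u] + n_int[n] + u_fac[u] * n_fac[n]
        err = r - pred
        mu += lr * err
        u_int[u] += lr * (err - lam_int * u_int[u])
        n_int[n] += lr * (err - lam_int * n_int[n])
        u_fac[u], n_fac[n] = (
            u_fac[u] + lr * (err * n_fac[n] - lam_fac * u_fac[u]),
            n_fac[n] + lr * (err * u_fac[u] - lam_fac * n_fac[n]),
        )

# The production scorer requires a note's intercept to clear an absolute bar
# (roughly 0.40 in the published code) before showing it as "Helpful".
for n in range(n_notes):
    print(f"note {n}: intercept={n_int[n]:+.2f}, factor={n_fac[n]:+.2f}")
```

In this toy run, the consensus note ends up with a clearly higher intercept than the two polarized notes, whose lopsided support gets explained away by the factor term. That is the core design choice: raw vote counts can be gamed by one side, while a bridging score rewards agreement across the divide.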
Measuring Community Notes
Community Notes shows promise in curbing misinformation, but its impact is uneven. Notes attached to inaccurate tweets can reduce retweets by up to 50% and increase the likelihood of tweet deletion by 80%. However, only 29% of notes on fact-checkable tweets end up rated helpful, and during key moments like the 2024 U.S. elections, fewer than 6% of notes met the bar for helpfulness.
Timing is another major flaw. Notes often take over seven hours to appear, and some linger in the pipeline for up to 70 hours. Since a tweet’s viral spread largely plays out within its first seven hours, this lag renders many notes ineffective at stemming the initial wave of misinformation.
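A back-of-the-envelope model makes the cost of that lag concrete. Assuming, purely for illustration, that a tweet’s impressions decay exponentially with a seven-hour half-life (my assumption, not a figure from the studies above), we can estimate how much of the audience a late note never reaches:

```python
import math

# Share of a tweet's lifetime impressions served *before* a note appears,
# under an assumed exponential decay of impressions with a 7-hour half-life.
# Real engagement curves vary; this is a rough illustration only.
HALF_LIFE_H = 7.0
decay = math.log(2) / HALF_LIFE_H

def impressions_before(note_delay_h: float) -> float:
    """Fraction of total impressions accrued before the note is shown."""
    return 1 - math.exp(-decay * note_delay_h)

for delay in (1, 7, 24, 70):
    print(f"note at {delay:>2}h -> {impressions_before(delay):.0%} of impressions already served")
```

Under that assumption, a note arriving at the seven-hour median lands after half the impressions have already been served, and one arriving at 70 hours after effectively all of them.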
Coverage is also sparse. Most misleading content flies under the radar, with around 69% of flagged tweets receiving no action. In the EU, the odds of a tweet receiving a Community Note are minuscule: just one in 500,000.
Looking Ahead
Community Notes is far from perfect, but it represents a step toward rebuilding trust and fostering productive dialogue in the digital age. I’m particularly interested in seeing how Meta integrates a similar system into Facebook and Threads, the platforms that dominate Taiwan’s political discourse, where misinformation thrives and debate often spirals into polarization. These are the platforms I use daily and where I see the greatest potential for change.