Meta Community Notes and Content Moderation in a Free Market

January 16, 2025

On January 7, 2025, Meta announced that it was ending its third-party fact-checking program on its social media platforms—Facebook, Instagram, and Threads—and moving to a “Community Notes” model, similar to X’s, in the United States. As more leading social media platforms embrace forms of community-driven content moderation, the United States should continue to foster an environment where platforms can experiment with these different models.

X, formerly Twitter, launched its Community Notes program in 2021 under the name Birdwatch. The initial pilot program allowed participants to identify potentially misleading posts, add notes that provide context, and rate the helpfulness of other participants’ contributions. In 2022, Twitter CEO Elon Musk expanded the program and renamed it Community Notes. The program has continued to evolve; in its current form, contributors must meet certain eligibility requirements, and only notes rated helpful by a diverse range of contributors appear on posts. Importantly, X does not write, rate, or moderate notes, and notes do not reflect the company’s viewpoints. All contributions to Community Notes are published daily, and the ranking algorithm is available for anyone to view.
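X publishes its ranking code, but the essential idea can be conveyed with a short illustration. The Python sketch below is a hypothetical, heavily simplified version of that “bridging” requirement: a note is surfaced only when raters from more than one viewpoint cluster independently rate it helpful. The cluster labels, thresholds, and status names here are illustrative assumptions, not X’s actual implementation, which is considerably more involved.

```python
from collections import defaultdict

def note_status(ratings, min_ratings=5, helpful_threshold=0.7):
    """ratings: list of (rater_cluster, is_helpful) pairs for a single note.

    Hypothetical, simplified sketch of bridging-based ranking: a note is
    surfaced only if raters from at least two distinct viewpoint clusters
    rate it helpful. This illustrates only the core idea, not X's algorithm.
    """
    if len(ratings) < min_ratings:
        return "NEEDS_MORE_RATINGS"

    # Group helpfulness votes by the cluster each rater belongs to.
    votes_by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        votes_by_cluster[cluster].append(is_helpful)

    # A cluster "agrees" if a large share of its raters found the note helpful.
    agreeing_clusters = [
        cluster for cluster, votes in votes_by_cluster.items()
        if sum(votes) / len(votes) >= helpful_threshold
    ]

    # Requiring agreement across clusters means no single like-minded group
    # can surface a note on its own.
    if len(agreeing_clusters) >= 2:
        return "CURRENTLY_RATED_HELPFUL"
    return "NOT_HELPFUL_ENOUGH"


if __name__ == "__main__":
    broad_support = [("left", True), ("left", True), ("right", True),
                     ("right", True), ("right", True)]
    one_sided = [("left", True), ("left", True), ("left", True),
                 ("left", True), ("right", False)]
    print(note_status(broad_support))  # CURRENTLY_RATED_HELPFUL
    print(note_status(one_sided))      # NOT_HELPFUL_ENOUGH
```

The design point the sketch captures is that cross-cluster agreement, rather than raw vote counts, determines whether a note appears, which is what distinguishes this model from a simple upvote system.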

Ahead of the incoming second Trump administration, in which the president-elect has indicated Musk will play a significant role, Meta has followed X’s lead. In an announcement written by Chief Global Affairs Officer Joel Kaplan, Meta said that its “increasingly complex” content moderation systems have “gone too far” and impeded online free expression. It announced that it would end its third-party fact-checking program, first launched in 2016, in the United States and begin a Community Notes program.

This type of community-driven content moderation—decentralized and democratized moderation generated by the cooperative efforts of a platform’s users—is not new. Since its inception in 2001, the online encyclopedia Wikipedia has relied on its community of volunteer editors to write, edit, and update its articles. The social media platform Reddit also relies on community-driven content moderation run by moderators, or “mods”—Reddit users who volunteer to moderate communities known as subreddits—who can create guidelines for their subreddits beyond the platform’s overall guidelines and mute or block users who violate subreddit rules. The trend toward community-driven content moderation reflects a broader global conversation about where to strike the balance between protecting free speech online and stopping the spread of potentially harmful content.

As with any form of content moderation, community-driven content moderation has strengths and weaknesses. Some research indicates that platforms that focus on community building and curation can strengthen a community enough to make intervention through traditional content moderation less necessary. However, platforms must contend with potential issues such as bias, manipulation, scalability, privacy, and participant burnout. It is important that platforms be able to experiment with different approaches to content moderation, as Meta and X have done, to learn what works best for each platform and its users.

The United States should maintain its free-market approach to online platforms. While there is room for regulation on certain digital policy issues, such as privacy and children’s safety, it is important that such regulation avoid mandating specific approaches to content moderation that would limit experimentation or force platforms into the role of arbiters of right and wrong, or fact and fiction, in ways that could limit free expression.
