No More Fact-Checking? Is It Time to Update the Communications Decency Act?

No more fact-checking? As digital communication evolves, so do the challenges posed by misinformation and false narratives. Social media platforms have transformed how information is disseminated, straining public discourse and the legal frameworks that govern online content.

On January 7, 2025, Meta, the parent company of Facebook, announced: “We are ending our third-party fact-checking program and moving to a Community Notes model.”

Could this mean it is time to revisit, and potentially update, the Communications Decency Act (CDA), particularly Section 230? Since its enactment in 1996, Section 230 has been a cornerstone of Internet law. The Florida Defamation of Character Attorneys at Parrish & Goodman address this new challenge.

Facebook and Fact-Checking

Facebook introduced its controversial fact-checking program in 2016, enlisting independent reviewers to add context to posts and debunk hoaxes and false information. The program was intended to help users better judge whether what they saw and read was true or false. Instead, the end result was greater suppression of political discourse, driven by the biases and perspectives of the so-called “fact-checkers.”

The program also produced a barrage of lawsuits against Meta for suppressing facts, targeting specific types of accounts or posts, and canceling some accounts altogether. Meta has since agreed to pay $25 million to settle a lawsuit filed by President Donald Trump and others who claimed their Facebook accounts were unlawfully shut down or blocked at the urging of the U.S. government.

Under the deal, Meta will pay $22 million to a fund supporting the construction of Trump’s presidential library, with the remaining $3 million going to four people who joined Trump in the lawsuit, which was filed in 2021. Facebook banned Trump shortly after the January 6, 2021, protest at the Capitol, while he was still in office. His accounts were restored in 2023.

Now, Meta is ending its third-party fact-checking program and moving to a Community Notes model like the one used on the X platform. The company says it will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and by focusing enforcement on illegal and high-severity violations. It will also take a more personalized approach to political content, so that people who want to see more of it in their feeds can.

The Current Landscape

In the age of instant information sharing, the consequences of misinformation can be dire. From the COVID-19 pandemic to political elections, the impact of false claims and misleading narratives has prompted widespread calls for more robust accountability measures for digital platforms. Critics argue that Section 230 has allowed platforms like Facebook, Twitter (now X), and YouTube to evade responsibility for the content they host, effectively making them “information gatekeepers” without any of the corresponding obligations.

Understanding Section 230

At the heart of this conversation is Section 230 of the CDA, which shields online platforms from liability for user-generated content. This provision has been hailed as a vital mechanism promoting free speech online, allowing platforms to host a wide range of opinions and discussions without fear of legal repercussions. However, it has also been criticized for enabling the spread of misinformation and harmful content without accountability.

What Would an Update Look Like?

Some propose creating stricter guidelines that require platforms to engage in active moderation and fact-checking to curb the spread of false information. Others suggest a tiered liability system, in which platforms that actively curate content could be held accountable for misinformation, while smaller, less-resourced platforms that merely host user comments or forums remain protected.

An updated version of the Communications Decency Act might include provisions to:

  • Encourage Fact-Checking: Require platforms to adhere to enhanced fact-checking policies, particularly for content that impacts public health, safety, and democratic processes.
  • Define Misinformation: Create clear definitions of what constitutes misinformation and harmful content, providing a framework for platforms to assess responsibility more effectively.
  • Promote Transparency: Mandate transparency about how algorithms promote content, enabling users to understand how information is curated and shared within these ecosystems.
  • Establish Clear Penalties: Introduce defined penalties for platforms that fail to comply with regulatory standards surrounding misinformation, creating a tangible incentive for responsible moderation practices.

Striking the Right Balance

While accountability and responsible communication are critical, striking a balance that preserves free speech is also essential. The challenge lies in ensuring that any reforms do not stifle legitimate discourse or censor diverse viewpoints. Achieving this equilibrium will require thoughtful dialogue among lawmakers, platform operators, and users to construct a legal framework that safeguards democracy while addressing misinformation.

Florida Defamation of Character Attorneys

We provide legal representation across Florida, with offices in Fort Myers and Naples. The Parrish & Goodman Law Firm has handled hundreds of slander and libel cases throughout the state, including complex cases involving Internet and email communications. To request a consultation with one of our experienced Defamation of Character Attorneys in Florida, please call 813-643-4529 or complete our contact form.