By Michael Beckerman
Published June 08, 2019
There’s a lot of chatter among members of Congress about changing an obscure yet critically important law that’s made the Internet as we know it possible: Section 230 of the Communications Decency Act (CDA).
As more lawmakers speak out about Section 230, it’s important to understand that this section of the law enables us all to have a voice online. Rolling back CDA 230 will suppress voices and do irreparable harm to the Internet – and society – as we know it.
If you want Internet Association member companies to take down content – ostensibly legal or even illegal – that no reasonable person would want online, then you should care about CDA 230.
So, what exactly is it and why does it matter?
Passed in 1996, the Communications Decency Act was one of the first major pieces of legislation regulating the Internet. Section 230 of this law specifies that online platforms have the right to moderate or remove illegal or unsavory content from their services, and says these platforms aren’t considered publishers of user-generated content.
In other words, if you or I were to post something libelous on a forum, then we’d be considered the publisher of that content – not the forum.
Importantly, the CDA doesn’t leave platforms completely off the hook. Like a media company, they are legally responsible for material that they create and intentionally publish. They can be sued or prosecuted for copyright infringement, violations of federal privacy laws, and other federal crimes.
Under existing law, there are strong incentives for good actors to remove illegal content. This is also not to say that the industry cannot and should not do more – CDA 230 is the law that lets them do more.
It’s difficult to overstate how critical this law has been to the success of the Internet sector. Imagine trying to convince investors to invest in the next big Internet company when every new user, piece of content, or app download has the potential to increase a company’s liability exposure. Growth itself becomes a risk, and any one user post could mean bankruptcy.
As lawmakers consider updates to Internet regulations, one of the biggest misconceptions circulating is that eliminating or weakening Section 230 will help protect or encourage more and better speech online.
The opposite is true.
Eliminating the ability of platforms to moderate content would mean a world full of 4chans and highly curated outlets – but nothing in between.
Section 230 is the law that allows companies to develop and enforce rules to moderate their services. Repealing 230 would make it harder – not easier – for online platforms to moderate both illegal and legal content that no reasonable person wants online – like threats, harassment, or hate speech.
Weakening CDA 230 is not a panacea that will somehow turn online platforms – especially well-intentioned ones – into perfect moderators of content.
As a conservative who worked in Republican politics for decades, I share the belief that free speech, online and elsewhere, needs to be vigorously protected. But our Constitution protects a lot of speech that we want online platforms to moderate. In other words, whether or not something is legal to say should not be the measure of whether or not it should be allowed on an online service.
Hate speech is a great example. Last year a white supremacist sued a social media platform after it banned him for violating its policies – and lost, thanks to Section 230.
Most Americans would agree that platforms – like any other private business – have the right to deny service to anyone who refuses to follow rules designed to prevent behavior that could harm others. Meddling with the law that enables the platforms to do that is simply the wrong approach.
Eliminating CDA 230 would mean that platforms would be incentivized either to pre-screen and filter every single controversial post to avoid litigation, or to stop proactively moderating any content, because moderation could lead to “knowledge” that illegal or tortious content was on the platform – and, with it, potential liability.
In other words, either they take a hands-off approach allowing all types of speech – legal or not – or they restrict huge swathes of legal speech to avoid frivolous litigation. Imagine if restaurant review sites could be sued for libel because of innocuous and factual negative reviews that are obviously legal speech.
Neither of those outcomes is good for America or the world.
There’s no doubt that online platforms, including Internet Association members, have work to do in order to rebuild the trust of Americans and policymakers in their services. And part of rebuilding that trust can and should involve new laws and regulatory authority to protect consumer privacy.
But ill-conceived or rushed legislation is more likely to undermine the ability of people to express themselves. If we want to use laws and regulations to make the Internet a better and safer place, we need to carefully consider how new rules might impact the parts of the Internet we value most and how these new rules might apply to the wide range of services that we rely on every day.