by Larry Magid
This post first appeared in the Mercury News

Section 230 of the 1996 Communications Decency Act is a great unifier. Members of Congress from both sides of the aisle agree that it should be repealed or substantially modified. And, based on their responses to 2½ hours of oral argument on Tuesday, Supreme Court justices nominated by presidents from both parties seemed united in their confusion over the arguments and in their reluctance to strike the law down.

The court heard arguments in Gonzalez v. Google, which focused mostly on whether tech companies should be held civilly liable for content promoted by their algorithms.

Section 230 says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That sets online services apart from newspapers, broadcasters and other media companies, whose content is mostly created and vetted by writers, producers and editors who work for the company. So, if a newspaper accuses you of a crime you didn’t commit, or publishes something demonstrably harmful, you can sue that publisher because it made the decision to publish. You might not win, but you have a clear target to sue. But if someone posts something on Facebook or YouTube that harms you, your beef is with the person who posted it, not with the company that allowed it to be posted.

Section 230 was written years before Facebook or even MySpace entered our world, when people went online using services like CompuServe and Prodigy. Both of those companies had forums. At the time, I was a columnist at CompuServe and a columnist and forum host at Prodigy. Prodigy’s forums were moderated: a human decided whether a post was appropriate. CompuServe’s forums were more of a free-for-all.

In a 1991 suit against CompuServe, a court found that the company could not be held responsible for content because it didn’t review posts and therefore couldn’t be expected to know whether they would be harmful. But in a 1995 case against Prodigy, the company was held responsible because it did have a moderation policy and therefore should have known. In other words, if you moderate any content, you can be held liable for all of it, including offensive content that slips through the cracks.

In many ways, Section 230 is like a Good Samaritan law that protects health care workers and others who render aid in an emergency. Without such a law, if you stop and help, you could wind up in court; if you just drive by and do nothing, you won’t get into trouble. What Section 230 did was allow companies to moderate content without the added risk of being sued if offensive content got through anyway.

Politicians from both parties have criticized 230 for opposite reasons. Some Democrats argue that it shields social media companies from any consequences for allowing things like hate speech, misinformation or defamatory comments. Some Republicans argue that 230 gives social media companies the power to suppress political speech.

While I don’t fully agree with the Democrats’ argument, I at least understand it; I’m baffled by the Republican one. If 230 weren’t in place, social media companies would face even more pressure to suppress speech that might lead to violence, misinformation about vaccines or other alleged harms, because they could be held liable for it. For example, in 2017, then-President Trump posted a series of tweets falsely implying that MSNBC talk-show host Joe Scarborough had culpability for the 2001 death of a former employee. Without Section 230, Scarborough could have sued Twitter. Whether he would have won is an open question, but Twitter would have had a very strong motivation to suppress tweets like those and might have been compelled to suspend Trump from the service years before it finally did in the aftermath of the Jan. 6 attack on the Capitol. Trump is among the many Republicans calling for an end to, or modification of, 230.

But the anti-230 stance of some Democrats could also have unintended consequences. Striking down 230 could discourage companies from moderating content at all. Go back to those two cases from the 1990s: CompuServe was exonerated for doing nothing, while Prodigy was penalized for at least attempting to moderate content.

It’s complicated

What is interesting about the Gonzalez case is the claim that the algorithms that promote and amplify content transform social media companies from mere conduits for what the public posts into publishers of that content. A case could be made that there is a distinction between merely carrying content and promoting it, but, as some of the justices pointed out, there has to be some way to organize content online. The mere presence of an algorithm doesn’t make someone a publisher, especially if that algorithm is content-neutral, as the companies say theirs are. They reportedly recommend whatever content a user is likely to want to see, regardless of what that content is, much the way Netflix recommends movies based on what you’ve previously watched.
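To make the idea of a “content neutral” algorithm concrete, here is a minimal sketch in Python. Everything in it (the catalog, the tags, the scoring rule) is a hypothetical illustration, not how YouTube or Netflix actually works; the point is only that the ranking logic looks at overlap with a user’s history, never at what a video actually says.

    # Hypothetical sketch of a "content-neutral" recommender: ranking depends
    # only on overlap with a user's viewing history, never on the message.
    # All names and data here are illustrative assumptions.
    from collections import Counter

    # Each video carries topic tags; the recommender only compares tags.
    CATALOG = {
        "video_a": {"cooking", "travel"},
        "video_b": {"politics", "news"},
        "video_c": {"travel", "vlog"},
    }

    def recommend(watch_history, catalog, top_n=2):
        """Score unwatched videos by how many tags they share with history."""
        seen_tags = Counter(tag for vid in watch_history for tag in catalog[vid])
        scores = {
            vid: sum(seen_tags[tag] for tag in tags)
            for vid, tags in catalog.items()
            if vid not in watch_history
        }
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend(["video_a"], CATALOG))  # ['video_c', 'video_b']

Under a scheme like this, an extremist video and a cooking video are promoted by exactly the same indifferent arithmetic, which is the crux of the plaintiffs’ complaint.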

It’s complicated, which is probably why Justice Elena Kagan commented, “We’re a court. We really don’t know about these things. These are not the nine greatest experts on the internet.” Justice Samuel Alito, whose political views are far to the right of Kagan’s, was equally perplexed, telling the plaintiffs’ attorney, “I admit I’m completely confused by whatever argument you’re making at the present time.”

In this case, the Gonzalez family’s attorneys argued that YouTube had created thumbnail images and web addresses for videos posted by supporters of ISIS, the terrorist group that killed their daughter in the 2015 Paris attacks, and was therefore a publisher of that content. Kagan questioned that assumption, pointing out that 230, crafted in the mid-1990s, was a “pre-algorithm statute.”

The Supreme Court’s role has always been to attempt to adjudicate complicated questions, and 230, like a lot of laws that impact speech, has nuances and mixed consequences.

230 helps prevent censorship

On balance, I think 230 has served us well, enabling any company that hosts user content to moderate it without being held responsible for everything that gets posted on its servers. Without 230, these companies would not only be burdened with the almost unimaginable task of reviewing billions of pieces of content a day, but they would also be in the position of having to take down (dare I say “censor”) any content that could conceivably trigger a lawsuit. If people on both sides of the political spectrum worry about companies suppressing content, they had better get ready for far more suppression if 230 is taken off the books.

Disclosure: Larry Magid is CEO of ConnectSafely, a non-profit internet safety organization that receives financial support from Google and other companies that could be affected by a change in Section 230.

