Why Social Media Companies Aren’t Liable for Abuse on Their Platforms

Posted by Arthur Chu on Mar 18, 2016 in Internet Policy, Online Harassment

Of everything I’ve written–and I’ve covered some pretty heavy, controversial topics–I don’t think I’ve ever gotten as much blowback as when I advocated the amendment or repeal of Section 230 of the Communications Decency Act.

For most people Section 230, sometimes called the Good Samaritan Clause, is an obscure piece of legislation, but for those of us who live much of our personal or professional lives online it’s one of the most significant laws on the books. Nearly every problem we have with finding solutions to online abuse can be traced back to this law or to the spirit that lies behind it.

Section 230 of the CDA is, essentially, a declaration of neutrality for platforms. It states that if a company does not actively participate in the creation of content–if all it does is provide a venue for someone to express themselves–then the company is not liable for that content. It doesn’t matter how actionable that content is–how clearly a given utterance constitutes libel, or harassment, or incitement to violence. You can sue the person who said it, if you can track them down, but the service provider–Facebook, or Twitter, or YouTube–bears no responsibility and has no duty to compensate the victims or to take anything down.

It’s pretty easy to see how that fact, by itself, makes abusive online behavior a nearly unsolvable problem. Tracking down and de-anonymizing individual abusers is difficult work; it’s likely to be unrewarding for any lawyer who takes on a case; it’s low priority for law enforcement. The very nature of crowdsourced harassment–thousands of people piling on a single person, each one contributing a snowflake to the avalanche of abuse–makes taking action against individuals a Sisyphean task.

We aren’t talking about edge cases here: We’re talking about cases of clear-cut widespread defamation that directly harmed people’s businesses and possibly put their safety in danger, from the libel campaign to smear Kenneth Zeran as a domestic terrorist in Zeran v. AOL, the case that defined the concept of Section 230 protection, to the long defamation career of notorious troll Joshua Goldberg, who was only caught because he made the mistake of impersonating an Islamic terrorist–one of the few groups our government actively spends resources looking for.


The argument is always the same–yes, terrible things happen online, and yes, Section 230 makes platforms’ responsibility to keep those things from happening entirely voluntary, but that’s the price of freedom. The clear lines we can draw connecting the profits made by social media companies from generating as much activity and “engagement” as possible and how those design choices feed into abusive behavior–that’s the price of having social media companies in the first place.

In this, Section 230 of the CDA resembles nothing so much as the 2005 Protection of Lawful Commerce in Arms Act, a law much welcomed by gun manufacturers that the normally stalwart progressive Bernie Sanders is now coming under fire for supporting. The PLCAA absolves gun manufacturers from any and all legal responsibility for the deaths and injuries caused by their product. Gun manufacturers have no incentive to try to limit the number of guns on the street or take measures to prevent their being sold to criminals. In fact, they have an incentive to do the opposite–an environment in which gun violence is common and the only realistic response available to people at risk of being shot is to buy their own gun to protect themselves is, for the gun sellers, ideal.

Such is the price of the Second Amendment and our freedom to bear arms. Note that we’re not even talking about making gun manufacturers pay full restitution for every single unnecessary death caused by firearms–though those of us who lack a religious devotion to the Second Amendment might ask why that would be so unreasonable.

The problem is PLCAA stands in the way of even more moderate solutions. Creating a legal “safe harbor” for gun manufacturers in return for following basic best practices–like having a vetting process to make sure guns are only sold to reputable dealers, like requiring guns to have safety locks, like having robust systems in place to track and trace firearms after they’re sold–is impossible as long as PLCAA protects manufacturers from all liability in the first place. There’s nothing to threaten the companies with, no incentive for them to voluntarily comply–and there’s plenty of incentive not to, given that by necessity any moves to curb gun violence will cost money and will reduce the total number of guns sold.

The issue of online abuse, defamation and harassment is not as immediately life-or-death as gun violence, true. But it’s a similar dynamic. People have brought up, over and over again, best practices that major social media platforms could adopt to limit the spread of defamatory information, to reduce the risk of large-scale harassment and to curb mob behavior.

But all of these would cost money in terms of hiring personnel or spending time working on software fixes. And all of them would, in the short term, reduce “engagement”–they would increase the effort necessary to get more eyeballs on more content as quickly as possible. Even if nasty incidents on a platform harm that platform’s PR and limit its long-term growth–just as increased gun violence and crime make gun manufacturers look bad–corporate executives will still readily sacrifice those long-term interests for short-term growth. That is, after all, how they get paid.

Even the dynamic where gun proliferation drives more proliferation as people enter an arms race to protect themselves has parallels online. As online communities become increasingly pathological, it becomes common for anyone remotely high-profile to join in the game of combatively calling out other people, discrediting them and siccing abuse on them in perceived self-defense. The platforms’ own moderation teams are understaffed, underpaid and inattentive, and legal threats are toothless and laughed off, so many people see no other option but to fight fire with fire–to make it clear that messing with them will cause an ugly backlash and therefore encourage harassers to find a softer target.

All of this sucks. It sucks because, as always seems to be the case in laissez-faire unregulated environments, attack is easier than defense–guns are cheaper and more effective at killing than bulletproof vests are at blocking bullets, salacious and malicious gossip about your enemies will go viral much faster than any defense of yourself–and so absent any external governing authority the environment becomes all attacking, all the time.

Neutral tools aren’t really neutral. Tools are force multipliers–absent any external intervention or bias, they amplify whatever power dynamic already exists. Tools that enable violent impulses–be they physical or social–empower those inclined to violence. Yes, guns don’t kill people, people kill people–but guns disproportionately empower people who are killers. Words can be used to bring truth to light or muddle it, to build up communities or tear them apart–but a media governed by nothing but people’s impulses and the logic of the market will always tear down before it builds up.

No policymaker is consciously on the side of the “trolls,” any more than anyone is consciously on the side of petty criminals or deranged mass shooters. But when policymakers decided to be “neutral” and let the logic of the market and human nature take its course, they inevitably empowered the worst actors at the expense of everyone else.

That’s the harsh truth that people who still have faith in neutrality and invisible hands and whatnot have to face: When you design a system, eventually you must take responsibility for the effects that system has. No one else can.

Voiceover artist. Stage actor. Freelance writer. Public speaker. Professional pot-stirrer and opinion-haver. Oh yeah, and an 11-time Jeopardy! champion.


Copyright Women’s Media Center, 2016. For reprint, contact permissions@womensmediacenter.com. The views expressed in this commentary are those of the author alone and do not represent WMC. WMC is a 501(c)(3) organization and does not endorse candidates.