Two Supreme Court Cases That Could Break the Internet


In February, the Supreme Court will hear two cases, Twitter v. Taamneh and Gonzalez v. Google, that could change internet regulation with potentially far-reaching consequences. Both cases implicate Section 230 of the Communications Decency Act of 1996, which gives Internet platforms legal immunity for content posted by their users. In each case, the plaintiffs allege that the platforms violated federal anti-terrorism laws by allowing terrorist content to remain online. (Section 230 contains a carve-out for content that violates federal law.) Meanwhile, the Justices are deciding whether to hear two other cases involving laws in Texas and Florida that restrict Internet platforms' ability to remove political content the platforms consider offensive or dangerous. Those laws arose out of allegations that the platforms were stifling conservative voices.

To talk about how these cases could change the Internet, I recently spoke by phone with Daphne Keller, who teaches at Stanford Law School and directs the platform-regulation program at the Stanford Cyber Policy Center. (Until 2015, she served as associate general counsel at Google.) During our conversation, which has been edited for length and clarity, we discussed what Section 230 actually does, the different approaches the Court could take in interpreting the law, and why every form of platform regulation carries unintended consequences.

How ready should people be for the Supreme Court to fundamentally change how the internet works?

We should be prepared for the Court to change a lot of things about how the Internet works, but I think they could go in so many different directions that it’s very difficult to predict the nature of the change or what anyone should do in anticipation of it.

Until now, Internet platforms have been able to let users share speech, good or bad, fairly freely, and they have been immune from liability for much of what their users say. That is the law colloquially known as Section 230, which is probably the most misunderstood, most misreported, and most disliked law on the Internet. It gives platforms immunity from certain kinds of liability claims based on their users' speech.

These two cases, Taamneh and Gonzalez, could both change that immunity in several ways. In Gonzalez, which deals squarely with Section 230, the plaintiffs are asking the Court to say that the immunity disappears once a platform makes recommendations or individually targets content at users. If the Court answers only the question in front of it, we could be looking at a world where platforms are suddenly liable for everything in a ranked news feed, like Facebook's or Twitter's, or for anything that is recommended, which is what YouTube does and is what the Gonzalez case is about.

If platforms were to lose the immunity they have for those features, we would suddenly see the most used parts of Internet platforms, the places where people actually go and see other users' speech, become heavily locked down or restricted to only the safest content. Maybe we wouldn't get things like the #MeToo movement. We might not see videos of police shootings get seen and spread like wildfire because people share them and they appear in ranked news feeds and as recommendations. We could see a huge shift in the kinds of online discourse that take place on what is effectively the front page of the Internet.

The flip side is that there is some really terrible, dangerous speech at issue in these cases. The cases involve plaintiffs whose family members were killed in ISIS attacks. They want this kind of content to disappear from feeds and recommendations. But a lot of other content would disappear along with it, in ways that affect speech rights and fall disproportionately on marginalized groups.

So the plaintiffs' argument boils down to the idea that Internet platforms or social-media companies don't just passively allow people to post things. They package them, they use algorithms, and they promote them in specific ways. And therefore they can't wash their hands of it and say that they bear no responsibility. Is that accurate?

Yeah, I mean, their argument has changed dramatically from one brief to the next, so it's a little hard to pin down, but that's close to what they're saying now. Both sets of plaintiffs lost family members in ISIS attacks. One case, Gonzalez, went up to the Supreme Court on the question of immunity under Section 230. The other, Taamneh, asks the Supreme Court whether the platforms would even be liable under the underlying law, the Anti-Terrorism Act, if the immunity did not exist.

It sounds as if you have some concerns about these companies being held liable for anything posted on their sites.

Absolutely. And also about them being liable for anything that is ranked and boosted or otherwise algorithmically shaped on the platform, because that's basically everything.

The consequences seem potentially harmful, but as a theoretical matter it doesn't seem crazy to me that these companies should be responsible for what's on their platforms. Do you feel that way, or do you think it's actually too simplistic to say that these companies are responsible?

I think it's reasonable to impose legal liability if it's something the companies can respond to well. If we think legal responsibility will cause them to accurately identify illegal content and take it down, then that is the point of placing the responsibility on them. There are some situations under US law where we do place that responsibility on platforms, and I think that's right. For example, Section 230 provides no immunity from federal criminal prosecutions, such as those involving child-sexual-abuse material. The idea is that this content is so incredibly harmful that we want to put the responsibility on the platforms, and it's highly identifiable; we aren't worried that platforms will accidentally take down a lot of other important speech. Similarly, we as a country have chosen to prioritize copyright as a harm the law responds to, but the law puts processes in place to keep platforms from inadvertently taking down speech, or taking it down just because someone has made an accusation.

So there are situations where we put responsibility on the platforms, but there's no good reason to think they would do a good job of identifying and removing terrorist content if this immunity went away. I think in that situation we would have every reason to expect that a lot of legitimate speech, about things like US military intervention in the Middle East or immigration policy for Syrians, would disappear because the platforms would worry that it could create liability. And the speech that disappears would come disproportionately from people who speak Arabic or who are talking about Islam. There is a very predictable set of problems with imposing this particular set of legal obligations on platforms, given the capabilities they have today. Maybe there is some future world with better technology, or better involvement of the courts in deciding what comes down, where there is less to worry about in terms of unintended consequences, and then we would want to put obligations on platforms. But we are not there now.

How has Europe dealt with these problems? It seems to be pressuring tech companies to be transparent.

Until recently, Europe had something close to the legal regime these plaintiffs are asking for. A major law regulating platform liability, the Electronic Commerce Directive, was passed in 2000, and it embodied the very broad idea that if platforms “knew” about illegal content, they had to take it down in order to preserve their immunity. Not surprisingly, that law led to a lot of bad-faith accusations from people trying to silence opponents or people they disagree with online, and it led platforms to take down a great deal of content simply to avoid risk and inconvenience. So European lawmakers have now passed a new law, the Digital Services Act, to get rid of, or at least try to get rid of, a system that tells platforms they can keep themselves safe by silencing their users.


