“Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) want to strangle generative artificial intelligence (A.I.) infants like ChatGPT and Bard in their cribs. How? By stripping them of the protection of Section 230 of the 1996 Communications Decency Act, which reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
“Section 230 embodies that principle that we should all be responsible for our own actions and statements online, but generally not those of others,” explains the Electronic Frontier Foundation. “The law prevents most civil suits against users or services that are based on what others say.” By protecting free speech, Section 230 enables the proliferation and growth of online platforms like Facebook, Google, Twitter, and Yelp and allows them to function as robust open forums for the exchange of information and for debate, both civil and not. Section 230 also protects other online services ranging from dating apps like Tinder and Grindr to service recommendation sites like Tripadvisor and Healthgrades.
Does Section 230 shield emerging A.I. services like ChatGPT from civil lawsuits in much the same way that it has protected other online services? Jess Miers, legal advocacy counsel at the tech trade group the Chamber of Progress, makes a persuasive case that it does. Over at Techdirt, she notes that ChatGPT qualifies as an interactive computer service and is not a publisher or speaker. “Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e., a user).”
One commenter at Techdirt asked what will happen “when ChatGPT designs buildings that fall down.” The proper answer: “The responsibility will be on the idiots who approved and built a faulty building designed by a chatbot.” That is roughly the situation of a couple of New York lawyers who recently filed a legal brief compiled by ChatGPT in which the language model “hallucinated” numerous nonexistent precedent cases. As he should, the presiding judge is holding them responsible and weighing what punishments they may deserve. (Their client might also be interested in pursuing a claim for legal malpractice.)”
“If Section 230 does not apply to how YouTube organizes third-party videos, petitioners and the government have no coherent theory that would save search recommendations and other basic software tools that organize an otherwise unnavigable flood of websites, videos, comments, messages, product …”
“Section 230 essentially functions as the internet’s First Amendment by protecting private companies from being held liable for most forms of user-generated content. This is the second time in very recent history that lawmakers have sought to sneak Section 230 changes into legislation that otherwise has nothing to do with Section 230.”
“Section 230 has attracted bipartisan enmity, although for completely different reasons: Republican critics say that online giants such as Facebook and Twitter are too heavy-handed with content moderation, at least when it comes to conservative speech, while their Democratic counterparts want platforms to scrub more hate speech and fake news. Section 230’s critics range from Sen. Josh Hawley (R–Mo.) to Vice President-elect Kamala Harris, though one wonders if either would be happy with the result of the rollback once the other party was in power.
McConnell’s bill would also create a committee to investigate election fraud and the impact of COVID-19 on voting practices, as Trump keeps pushing the conspiracy theory that President-elect Joe Biden stole the 2020 election.”
“Section 230, the law that is often credited as the reason why the internet as we know it exists, could be facing its greatest threat yet. A seemingly coordinated attack on the law is unfolding this week from the Trump administration and Republicans in Congress. It follows complaints that platforms such as Facebook, Twitter, and YouTube unfairly censor conservative speech. Though some are framing the efforts as a way to promote free speech, others say the result will be exactly the opposite.
Following President Trump’s executive order aimed at social media companies he thinks are censoring right-wing voices, the most direct actions taken against Section 230 arrived this week in the form of a new bill from Sen. Josh Hawley and a set of recommendations from Attorney General Bill Barr.
Hawley, a 40-year-old Republican from Missouri who has made no secret of his intentions regarding Section 230, is proposing a bill that would require large platforms to enforce their rules equally to stop a perceived targeting of conservatives and conservative commentary. Hawley is also rumored to be preparing another Section 230-related bill to add to his growing collection.
Meanwhile, Barr’s Department of Justice said it is calling for new legislation that, in certain cases, would remove the civil liability protections offered by Section 230. If platforms like Facebook, Google, and Twitter somehow encouraged content that violates federal law, these platforms would be treated as “bad samaritans” and would lose the immunity offered by Section 230. Like Hawley’s bill, the DOJ’s proposed rules would also force platforms to clearly define and equally enforce content rules.
Civil rights advocates say they’re concerned that some of these proposed measures may end up becoming law, leading to all sorts of unintended consequences and stifling speech — which will ultimately punish internet users far more than the websites.”
“Section 230 is part of the Communications Decency Act of 1996. It says internet platforms that host third-party content are not civilly liable for that content. There are a few exceptions, such as intellectual property or content related to sex trafficking, but otherwise the law allows platforms to be as hands-off as they want to be with user-generated content.
“If a Twitter user were to tweet something defamatory, the user could be sued for libel, but Twitter itself could not.”
“If these sites could be held responsible for the actions of their users, they would either have to strictly moderate everything those users produce — which is impossible at scale — or not host any third-party content at all. Either way, the demise of Section 230 could be the end of sites like Facebook, Twitter, Reddit, YouTube, Yelp, forums, message boards, and basically any platform that’s based on user-generated content.”
“The consequences of changing Section 230 will inevitably change the internet and what we’re allowed to do on it. Ruane, from the ACLU, points to the impact of FOSTA-SESTA, which she says “has been a complete and total disaster,” and its unintended consequences as a guide for what we can expect. Faced with the new law, online platforms didn’t seek to target specific content that might relate to or facilitate sex trafficking; they simply took down everything sex- or sex work-related to ensure they wouldn’t get in trouble.
“It was only supposed to apply to advertisements for sex trafficking. That is absolutely not what happened,” Ruane said. “All platforms adopted much broader content moderation policies that applied to a lot of LGBTQ-related speech, sex education-related speech, and … sites where [sex workers] built communities where they shared information to maintain safety.””
“[L]aws that force platforms to be “politically neutral” may not encourage more speech, as conservatives who favor those laws claim, but rather suppress it. Facebook has taken a similar stance, saying on Wednesday that changing Section 230’s liability protections would “mean less speech of all kinds appearing online.”
Section 230 won’t change tomorrow, if it changes at all. But a series of seemingly coordinated attacks from two of the three branches of government certainly shows some momentum toward the possibility of change.”