“Texas Gov. Greg Abbott, who…signed a bill that aims to restrict social media platforms’ editorial discretion, says the new law “protects Texans from wrongful censorship” and thereby upholds their “first amendment rights.” The law, H.B. 20, is scheduled to take effect on December 2, but that probably will not happen, because it is blatantly unconstitutional and inconsistent with federal law.
Abbott, a former Texas Supreme Court justice who served as his state’s attorney general from 2002 to 2015, presumably knows that. But whether he is sincerely mistaken or cynically catering to his party’s base, H.B. 20 reflects widespread confusion among conservatives about what the First Amendment requires and allows.”
“the First Amendment applies to the government and imposes no constraints on private parties.
To the contrary, the First Amendment guarantees a private publisher’s right to exercise editorial discretion. The Supreme Court emphasized that point in a 1974 case involving a political candidate’s demand that The Miami Herald publish his responses to editorials that criticized him.
The constitutional protection against compelled publication does not disappear when we move from print to the internet, or from a news outlet to a website that invites users to post their own opinions. As Justice Brett Kavanaugh noted when he was a judge on the U.S. Court of Appeals for the D.C. Circuit, “the Government may not…tell Twitter or YouTube what videos to post” or “tell Facebook or Google what content to favor.”
Yet that is what H.B. 20 purports to do. The law says “social media platforms” with more than 50 million active monthly users in the U.S. may not “censor” content based on the “viewpoint” it expresses. That edict covers any effort to “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.”
H.B. 20 makes a few exceptions, including “expression that directly incites criminal activity” and “specific threats of violence” that target people based on their membership in certain protected categories. But otherwise the rule’s reach is vast: As two trade organizations note in a federal lawsuit they filed last week, H.B. 20 “would unconstitutionally require platforms like YouTube and Facebook to disseminate, for example, pro-Nazi speech, terrorist propaganda, foreign government disinformation, and medical misinformation.””
“In response to Australian court decisions holding media companies legally liable for comments posted by users, CNN has blocked access to some of its Facebook pages from users in that country.
This is an inevitable outcome of a bad decision and a reminder of why it’s important not to force government-mandated moderation policies onto massive social media platforms: such mandates inevitably lead to either censorship or loss of access to information.”
“If you’re a troll online, you are most likely also a troll offline, at least with respect to political discussions, reports new research published in the American Political Science Review. In their study, Aarhus University researchers Alexander Bor and Michael Bang Petersen investigate what they call the “mismatch hypothesis.” Do mismatches between human psychology, evolved to navigate life in small social groups, and novel features of online environments, such as anonymity, rapid text-based responses, combined with the absence of moderating face-to-face social cues, change behavior for the worse in impersonal online political discussions?
No, conclude the authors. “Instead, hostile political discussions are the result of status-driven individuals who are drawn to politics and are equally hostile both online and offline,” they report. However, they also find that online political discussions may tend to feel more hostile because the greater connectivity and permanence of various Internet discussion platforms make trolls much more visible online than offline.”
LC: The article and study seem to use a broader definition for “trolling” than I use.
“Australia’s highest court has upheld a controversial and potentially destructive ruling that media outlets are legally liable for defamatory statements posted by online commenters on Facebook, a decision that could result in massive amounts of online censorship out of fear of lawsuits.
The case revolves around a television program from 2016 on Australia’s ABC TV (no relation to America’s ABC network) about the mistreatment of youths in Australia’s jail system. Footage of Dylan Voller in a restraining chair was part of the coverage. When media outlets covered this program and posted links to the coverage on Facebook, users made comments about Voller, and this prompted Voller to sue the media outlets. The comments were defamatory, Voller claimed, and he argued that the media outlets themselves were responsible for publishing them.
The media outlets countered that, no, they were not the publishers of third-party comments on Facebook and were not responsible for what those commenters said. The outlets have been appealing to the courts to toss out the lawsuits, and they’ve been losing.”
“The country’s top justices determined that media outlets in the country are, indeed, publishers of the comments that users post on Facebook under stories that they link.
The logic here is absolutely terrible and destructive. Facebook has control over the tools for managing comments on media pages. The media outlets themselves do not, and they can’t “turn off” commenting on their Facebook pages. They do have the power to delete comments after the fact or use filtering tools that target keywords (to stop people from making profane or obscene comments) and can block individual users from the page.
Using these tools to try to prevent defamatory comments requires constant monitoring of the media outlet’s Facebook page and would demand that moderators be so agile as to remove potentially defamatory content the moment it appears, before anybody else could see it. Nevertheless, the justices concluded that this is enough control over the comments for media outlets to be considered publishers. Two of the justices bluntly stated that simply participating on Facebook made Fairfax Media Publications a publisher of the comments.”
“It is easy to assume, as these other justices apparently have, that such a decision could not possibly cause a disastrous amount of online censorship because media outlets should know when a controversial story might lead to defamatory comments. The judges actually note this in the ruling. They seem to think that this is only an issue with certain types of stories and that the appearance of defamatory comments can be predicted in advance.
This is complete rubbish, and anybody with any experience on social media already knows it. Trolls, scammers, and spammers range far and wide (that’s the point of them), and it’s incredibly naive to think that a story with no controversial elements can’t end up with third parties posting defamatory nonsense under it.”
“it’s why Section 230 of the U.S. Communications Decency Act, which generally protects websites and social media platforms (and you) from liability for comments published by others, is so important. It’s not just to protect media outlets from being held liable for comments from trolls. It’s to allow social media participation to even happen at all. Some large media outlets or companies might be able to afford around-the-clock moderation to attempt to catch problems. But even if they could, let’s be clear that they’re going to avoid as much risk as possible and delete any comment that has a whiff of controversy. Why would they allow it to stand if it could get them sued?
But smaller companies and outlets—and there’s no reason to think this ruling applies only to media outlets—will either have to hope Facebook gives them better tools to control who posts on their page or just not have social media presences at all.”
“My sense is that social media in particular — as well as a broader range of internet technologies, including algorithmically driven search and click-based advertising — have changed the way that people get information and form opinions about the world.
And they seem to have done so in a manner that makes people particularly vulnerable to the spread of misinformation and disinformation.”
“What we’re concerned about is the fact that this information ecosystem has developed to optimize something orthogonal to things that we think are extremely important, like being concerned about the veracity of information or the effect of information on human well-being, on democracy, on health, on the ecosystem.”
“The printing press came out and upended history. We’re still recovering from the capacity that the printing press gave to Martin Luther. The printing press radically changed the political landscape in Europe. And, you know, depending on whose histories you go by, you had decades if not centuries of war [after it was introduced].”
““People’s habits do incline somewhat toward their preferred political positions, but a study of Web browser, survey, and consumer data from 2004 to 2009 found that people’s media diets online were modestly divided by ideology but far more diverse than, for instance, the networks of people with whom they talked about politics in person,” wrote Brendan Nyhan, a professor of government at Dartmouth College, in a review of the data for The Washington Post. “This finding of limited information polarization has been repeatedly replicated. Most recently, a new study found that mobile news consumption is even less segregated by ideology than desktop/laptop data used in previous research.”
Many people want to believe that social media, and Facebook in particular, makes everyone more racist, politically paranoid, addicted, and anxious. It’s a narrative that’s equally popular with very conservative Republicans (who somewhat bafflingly view Facebook as an enemy), progressive Democrats (who are ideologically predisposed to dislike large corporations), and the mainstream media (which views social media as a rival). But there is solid evidence undermining many of these claims, and it’s important to remember that taking away technology and shutting off conversations—even fraught and divisive conversations—often increases ignorance and prejudice.”
“On Sunday, July 11, thousands of Cubans in dozens of cities around the island nation took to the streets to protest the country’s communist dictatorship and persistent shortages in food, energy, and medicine, all of which have been made worse by the pandemic.
The demonstrations have been enabled by social media and the internet, which only came to Cuba in a big way in late 2018, when President Miguel Diaz-Canel allowed citizens access to the internet on their cellphones.”
““Defining ‘misinformation’ is a challenging task, and any definition has limitations,” Murthy concedes. “One key issue is whether there can be an objective benchmark for whether something qualifies as misinformation. Some researchers argue that for something to be considered misinformation, it has to go against ‘scientific consensus.’ Others consider misinformation to be information that is contrary to the ‘best available evidence.’ Both approaches recognize that what counts as misinformation can change over time with new evidence and scientific consensus. This Advisory prefers the ‘best available evidence’ benchmark since claims can be highly misleading and harmful even if the science on an issue isn’t yet settled.”
Who decides what the “best available evidence” indicates? Trusting government-appointed experts with that job seems risky, to say the least.”
“If those recommendations become commands, they would clearly impinge on the First Amendment rights of social media companies and people who use their platforms. But even if such regulations could pass constitutional muster, they would face the same basic problem as voluntary efforts to curb “misinformation”: Once you get beyond clear examples like warnings about vaccine-induced mass sterility, misinformation is in the eye of the beholder.”
“while some circumstantial evidence supports the lab leak theory, there is still no scientific consensus on whether COVID-19 emerged from a research facility, a wet market, or somewhere else.”
“Facebook made a quiet but dramatic reversal…: It no longer forbids users from touting the theory that COVID-19 came from a laboratory.
“In light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts, we will no longer remove the claim that COVID-19 is man-made or manufactured from our apps,” the social media platform declared in a statement.”
“the lab leak theory—the idea that COVID-19 inadvertently escaped from a laboratory, possibly the Wuhan Institute of Virology—has gained some public support among experts. In March, former Centers for Disease Control and Prevention (CDC) chief Robert Redfield said that he bought the theory. (His admission earned him death threats; most of them came from fellow scientists.) Nicholson Baker, writing in New York, and Nicholas Wade, formerly of The New York Times, both wrote articles that accepted the lab leak as equally if not more plausible than the idea that COVID-19 jumped from animals to humans in the wild (or at a wet market). Even Anthony Fauci, the White House’s coronavirus advisor and an early critic of the lab leak theory, now concedes it shouldn’t be ruled out as a possibility.
This has forced many in the media to eat crow. Matthew Yglesias, formerly of Vox, assailed mainstream journalism’s approach to the lab leak theory as a “fiasco.” The Post rewrote its February headline, which now refers to the lab leak as a “fringe theory that scientists have disputed” rather than as a debunked conspiracy theory. New York magazine’s Jonathan Chait noted that a few ardent opponents of the lab leak theory “with unusually robust social-media profiles” had used Twitter—the preferred medium of progressive politicos and journalists—to promote the idea that any dissent on this subject was both wrong and a sign of racial bias against Asian people.”
“Big Tech takes its cues from the mainstream media, making decisions about which articles to boost or suppress based on the prevailing wisdom coming from The New York Times, The Washington Post, and elite media fact-checkers. (That’s according to information I obtained from insiders at Facebook during research for my forthcoming book, Tech Panic.)”