““Social media has become a significant source of information for U.S. law enforcement and intelligence agencies,” the Brennan Center for Justice at NYU Law noted in a report released last week. “The Department of Homeland Security, the FBI, and the State Department are among the many federal agencies that routinely monitor social platforms, for purposes ranging from conducting investigations to identifying threats to screening travelers and immigrants.””
“Photos of beheadings, extremist propaganda and violent hate speech related to Islamic State and the Taliban were shared for months within Facebook groups over the past year despite the social networking giant’s claims it had increased efforts to remove such content.
The posts — some tagged as “insightful” and “engaging” via new Facebook tools to promote community interactions — championed the Islamic extremists’ violence in Iraq and Afghanistan, including videos of suicide bombings and calls to attack rivals across the region and in the West, according to a review of social media activity between April and December. At least one of the groups contained more than 100,000 members.
In several Facebook groups, competing Sunni and Shia militias trolled each other by posting pornographic images and other obscene photos into rival groups in the hope Facebook would remove those communities.
In others, Islamic State supporters openly shared links to websites with reams of online terrorist propaganda, while pro-Taliban Facebook users posted regular updates about how the group took over Afghanistan during much of 2021.”
…
“Facebook said it had invested heavily in artificial intelligence tools to automatically remove extremist content and hate speech in more than 50 languages. Since early 2021, the company told POLITICO it had added more Pashto and Dari speakers — the main languages spoken in Afghanistan — but declined to provide numbers of the staffing increases.
Yet the scores of Islamic State and Taliban content still on the platform show those efforts have failed to stop extremists from exploiting the platform.”
“Texas Gov. Greg Abbott, who…signed a bill that aims to restrict social media platforms’ editorial discretion, says the new law “protects Texans from wrongful censorship” and thereby upholds their “first amendment rights.” The law, H.B. 20, is scheduled to take effect on December 2, but that probably will not happen, because it is blatantly unconstitutional and inconsistent with federal law.
Abbott, a former Texas Supreme Court justice who served as his state’s attorney general from 2002 to 2015, presumably knows that. But whether he is sincerely mistaken or cynically catering to his party’s base, H.B. 20 reflects widespread confusion among conservatives about what the First Amendment requires and allows.”
…
“the First Amendment applies to the government and imposes no constraints on private parties.
To the contrary, the First Amendment guarantees a private publisher’s right to exercise editorial discretion. The Supreme Court emphasized that point in a 1974 case involving a political candidate’s demand that The Miami Herald publish his responses to editorials that criticized him.
The constitutional protection against compelled publication does not disappear when we move from print to the internet, or from a news outlet to a website that invites users to post their own opinions. As Justice Brett Kavanaugh noted when he was a judge on the U.S. Court of Appeals for the D.C. Circuit, “the Government may not…tell Twitter or YouTube what videos to post” or “tell Facebook or Google what content to favor.”
Yet that is what H.B. 20 purports to do. The law says “social media platforms” with more than 50 million active monthly users in the U.S. may not “censor” content based on the “viewpoint” it expresses. That edict covers any effort to “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.”
H.B. 20 makes a few exceptions, including “expression that directly incites criminal activity” and “specific threats of violence” that target people based on their membership in certain protected categories. But otherwise the rule’s reach is vast: As two trade organizations note in a federal lawsuit they filed last week, H.B. 20 “would unconstitutionally require platforms like YouTube and Facebook to disseminate, for example, pro-Nazi speech, terrorist propaganda, foreign government disinformation, and medical misinformation.””
“In response to Australian court decisions holding media companies legally liable for comments posted by their users, CNN has blocked access to some of its Facebook pages from users in that country.
This is an inevitable outcome of a bad decision and a reminder of why it’s important not to try to force government-mandated moderation policies onto massive social media platforms that will inevitably lead to either censorship or lack of access to information.”
“If you’re a troll online, you are most likely also a troll offline, at least with respect to political discussions, reports new research published in the American Political Science Review. In their study, Aarhus University researchers Alexander Bor and Michael Bang Petersen investigate what they call the “mismatch hypothesis.” Do mismatches between human psychology, evolved to navigate life in small social groups, and novel features of online environments, such as anonymity, rapid text-based responses, combined with the absence of moderating face-to-face social cues, change behavior for the worse in impersonal online political discussions?
No, conclude the authors. “Instead, hostile political discussions are the result of status-driven individuals who are drawn to politics and are equally hostile both online and offline,” they report. However, they also find that online political discussions may tend to feel more hostile because the greater connectivity and permanence of various Internet discussion platforms make trolls much more visible online than offline.”
LC: The article and study seem to use a broader definition of “trolling” than I use.
“Australia’s highest court has upheld a controversial and potentially destructive ruling that media outlets are legally liable for defamatory statements posted by online commenters on Facebook, a decision that could result in massive amounts of online censorship out of fear of lawsuits.
The case revolves around a television program from 2016 on Australia’s ABC TV (no relation to America’s ABC network) about the mistreatment of youths in Australia’s jail system. Footage of Dylan Voller in a restraining chair was part of the coverage. When media outlets covered this program and posted links to the coverage on Facebook, users made comments about Voller, and this prompted Voller to sue the media outlets. The comments were defamatory, Voller claimed, and he argued that the media outlets themselves were responsible for publishing them.
The media outlets countered that, no, they were not the publishers of third-party comments on Facebook and were not responsible for what they said. The outlets have been appealing to the courts to toss out the lawsuits, and they’ve been losing.”
…
“The country’s top justices determined that media outlets in the country are, indeed, publishers of the comments that users post on Facebook under stories that they link.
The logic here is absolutely terrible and destructive. Facebook has control over the tools for managing comments on media pages. The media outlets themselves do not, and they can’t “turn off” commenting on their Facebook pages. They do have the power to delete comments after the fact or use filtering tools that target keywords (to stop people from making profane or obscene comments) and can block individual users from the page.
Using these tools to try to prevent defamatory comments requires constant monitoring of the media outlet’s Facebook page and would demand that moderators be so agile as to remove potentially defamatory content the moment it appears before anybody else could see it. Nevertheless, the justices concluded that this is enough control over the comments for media outlets to be considered publishers. Two of the justices were very blunt that simply participating on Facebook made Fairfax Media Publications a publisher of the comments”
…
“It is easy to assume, as these other justices apparently have, that such a decision could not possibly cause a disastrous amount of online censorship because media outlets should know when a controversial story might lead to defamatory comments. The judges actually note this in the ruling. They seem to think that this is only an issue with certain types of stories and that the appearance of defamatory comments can be predicted in advance.
This is complete rubbish, and anybody with any experience on social media already knows this. Trolls, scammers, and spammers range far and wide (that’s the point of them), and it’s incredibly naive to think that a story that has no controversial elements can’t end up with third parties posting defamatory nonsense under it.”
…
“it’s why Section 230 of the U.S. Communications Decency Act, which generally protects websites and social media platforms (and you) from liability for comments published by others, is so important. It’s not just to protect media outlets from being held liable for comments from trolls. It’s to allow social media participation to even happen at all. Some large media outlets or companies might be able to afford around-the-clock moderation to attempt to catch problems. But even if they could, let’s be clear that they’re going to avoid as much risk as possible and delete any comment that has a whiff of controversy. Why would they allow it to stand if it could get them sued?
But smaller companies and outlets—and there’s no reason to think this ruling applies only to media outlets—will either have to hope Facebook gives them better tools to control who posts on their page or just not have social media presences at all.”
“My sense is that social media in particular — as well as a broader range of internet technologies, including algorithmically driven search and click-based advertising — have changed the way that people get information and form opinions about the world.
And they seem to have done so in a manner that makes people particularly vulnerable to the spread of misinformation and disinformation.”
“What we’re concerned about is the fact that this information ecosystem has developed to optimize something orthogonal to things that we think are extremely important, like being concerned about the veracity of information or the effect of information on human well-being, on democracy, on health, on the ecosystem.”
…
“The printing press came out and upended history. We’re still recovering from the capacity that the printing press gave to Martin Luther. The printing press radically changed the political landscape in Europe. And, you know, depending on whose histories you go by, you had decades if not centuries of war [after it was introduced].”
““People’s habits do incline somewhat toward their preferred political positions, but a study of Web browser, survey, and consumer data from 2004 to 2009 found that people’s media diets online were modestly divided by ideology but far more diverse than, for instance, the networks of people with whom they talked about politics in person,” wrote Brendan Nyhan, a professor of government at Dartmouth College, in a review of the data for The Washington Post. “This finding of limited information polarization has been repeatedly replicated. Most recently, a new study found that mobile news consumption is even less segregated by ideology than desktop/laptop data used in previous research.”
Many people want to believe that social media, and Facebook in particular, makes everyone more racist, politically paranoid, addicted, and anxious. It’s a narrative that’s equally popular with very conservative Republicans (who somewhat bafflingly view Facebook as an enemy), progressive Democrats (who are ideologically predisposed to dislike large corporations), and the mainstream media (which views social media as a rival). But there is solid evidence undermining many of these claims, and it’s important to remember that taking away technology and shutting off conversations—even fraught and divisive conversations—often increases ignorance and prejudice.”