“The Biden administration did pressure Meta, as well as its competitors, to crack down on Covid-19 misinformation throughout the pandemic. In 2021, Surgeon General Vivek Murthy called it “an urgent threat,” and Biden himself said that misinformation was “killing people,” a statement he later walked back. This pressure was also at the center of a recent Supreme Court case, in which justices ruled in favor of the Biden administration.
We also knew that Meta, then known simply as Facebook, pushed back against efforts to stop the spread of misinformation on its platforms. Not long after Biden’s “killing people” remark, leaked company documents revealed that Facebook knew that vaccine misinformation on its platforms was undermining its own goal of protecting the vaccine rollout and was causing harm. It even studied the broader problem and produced several internal reports on the spread of misinformation, but despite pressure from Congress, Facebook failed to share that research with lawmakers at the time.
We actually learned about the specific kind of pressure the White House put on Facebook a year ago, thanks to documents the company turned over to, you guessed it, Jim Jordan and the House Judiciary Committee.
The Biden administration issued a statement after Zuckerberg’s latest letter became public. It said, in part, “Our position has been clear and consistent: We believe tech companies and other private actors should take into account the effects their actions have on the American people, while making independent choices about the information they present.”
But the Zuckerberg letter didn’t stop with details of the well-known crackdown on Covid misinformation. It also reminded the public of the time, ahead of the 2020 election, when the FBI warned social media companies that a New York Post article about Hunter Biden’s laptop could be part of a Russian disinformation campaign. Without mentioning any direct pressure from the government, Zuckerberg says in the letter that his company demoted the laptop story while it conducted a fact-check. He told podcaster Joe Rogan something similar in a 2022 interview, when he mentioned that an FBI disinformation warning contributed to the decision to suppress the story. Twitter also suppressed the laptop story, and its executives denied that there was pressure from Democrats or law enforcement to do so.
Zuckerberg also addresses some donations he made to voting access efforts in the 2020 election through his family’s philanthropic foundation. “My goal is to be neutral and not play a role one way or another — or to even appear to be playing a role,” the billionaire said. “So I don’t plan on making a similar contribution this cycle.” The House Judiciary Committee responded in a tweet, “Mark Zuckerberg also tells the Judiciary Committee that he won’t spend money this election cycle. That’s right, no more Zuck-bucks.” Neither party mentioned that Zuckerberg also declined to make a contribution in the 2022 cycle for the same reasons.
The right is taking a victory lap over this Zuckerberg letter. Others are simply wondering why on earth, in an otherwise quiet week in August, Zuckerberg even bothered to remind us of all of these familiar facts.”
“So, on the same day that the Supreme Court appears to have established that a sitting president can commit the most horrible crimes imaginable against someone who dares to speak out against him, the same Court — with three justices joining both decisions — holds that the First Amendment still imposes some limits on the government’s ability to control what content appears online.
Chief Justice John Roberts and Justice Brett Kavanaugh joined both decisions in full. Justice Amy Coney Barrett joined the NetChoice opinion in full, plus nearly all of the Trump decision.”
…
“That’s such a sweeping restriction on content moderation that it would forbid companies like YouTube or Twitter from removing content that is abusive, that promotes violence, or that seeks to overthrow the United States government. Indeed, Kagan’s opinion includes a bullet-pointed list of eight subject matters that the Texas law would not permit the platforms to moderate, including posts that “support Nazi ideology” or that “encourage teenage suicide and self-injury.”
In any event, Kagan makes clear that this sort of government takeover of social media moderation is not allowed, and she repeatedly rebukes the far-right US Court of Appeals for the Fifth Circuit, which upheld the Texas law.
As Kagan writes, the First Amendment does not permit the government to force platforms “to carry and promote user speech that they would rather discard or downplay.” She also cites several previous Supreme Court decisions that support this proposition, including its “seminal” decision in Miami Herald Publishing Co. v. Tornillo (1974), which held that a newspaper has the right to final control over “the choice of material to go into” it.
Nothing in Kagan’s opinion breaks new legal ground — it is very well-established that the government cannot seize editorial control over the media, for reasons that should be obvious to anyone who cares the least bit about freedom of speech and of the press. But the Court’s reaffirmation of this ordinary and once uncontested legal principle is still jarring on the same day that the Court handed down a blueprint for a Trump dictatorship in its presidential immunity case.
It’s also worth noting that Kagan’s decision is technically a victory for Texas and Florida, although on such narrow grounds that this victory is unlikely to matter.”
“As Corn-Revere points out, “adopted in 1996, Section 230 was proposed as a way to counter efforts to censor internet speech.” Prior to its passage, online platforms were treated as publishers of material posted on their sites if they made any attempt at moderation. They were incentivized to allow free-for-alls, or else scrutinize all content for legal liability—or not allow third parties to post anything at all.
Included in the Communications Decency Act, Section 230’s important provisions survived the voiding of most of that law on constitutional grounds. It reads, in part: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Those are the 26 words that Jeff Kosseff’s 2019 book credits with creating the internet. They also take the blame for what so many politicians hate about the online world.”
…
“‘For the biggest players, more carefully policing content would probably mean bolstering the ranks of thousands of hired moderators and facing down far more lawsuits,’ added Shields and Brody. ‘For smaller players, the tech industry argues, it could prove ruinous.’”
…
“‘The law is not a shield for Big Tech,’ point out the Electronic Frontier Foundation’s (EFF) Aaron Mackey and Joe Mullin in defending Section 230. ‘Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech.’”
“Haidt cites 476 studies in his book that seem to represent an overwhelming case. But two-thirds of them were published before 2010, or before the period that Haidt focuses on in the book. Only 22 of them have data on either heavy social media use or serious mental issues among adolescents, and none have data on both.
There are a few good studies cited in the book. For example, one co-authored by psychologist Jean Twenge uses a large and carefully selected sample with a high response rate. It employs exploratory data analysis rather than cookbook statistical routines.
Unfortunately for Haidt, that study undercuts his claim. The authors did find that heavy television watchers, video game players, and computer and phone users were less happy. But the similar graphs for these four ways of spending time suggest that the specific activity didn’t matter. This study actually suggests that spending an excessive amount of time in front of any one type of screen is unhealthy—not that there’s anything uniquely dangerous about social media.”
“The constitutional law here appears straightforward: Congress can’t outright ban TikTok or any social media platform unless it can prove that it poses legitimate and serious privacy and national security concerns that can’t be addressed by any other means. The bar for such a justification is necessarily very high in order to protect Americans’ First Amendment rights, Krishnan said.”
…
“members of Congress have not provided concrete proof for their claims about Chinese digital espionage and seem to have little interest in offering any transparency: Before the committee voted to advance the bill Thursday, lawmakers had a closed-door classified briefing on national security concerns associated with TikTok.”
“The latest wave of fearmongering about TikTok involves a study purportedly showing that the app suppresses content unflattering to China. The study attracted a lot of coverage in the American media, with some declaring it all the more reason to ban the video-sharing app.”
…
“But there are serious flaws in the study design that undermine its conclusions and any panicky takeaways from them.
In the study, the Network Contagion Research Institute (NCRI) compared the use of specific hashtags on Instagram (owned by the U.S. company Meta) and on TikTok (owned by the Chinese company ByteDance). The analysis included hashtags related both to general subjects and to “China sensitive topics” such as Uyghurs, Tibet, and Tiananmen Square. “While ratios for non-sensitive topics (e.g., general political and pop-culture) generally followed user ratios (~2:1), ratios for topics sensitive to the Chinese Government were much higher (>10:1),” states the report, titled “A Tik-Tok-ing Timebomb: How TikTok’s Global Platform Anomalies Align with the Chinese Communist Party’s Geostrategic Objectives.”
The study concludes that there is “a strong possibility that TikTok systematically promotes or demotes content on the basis of whether it is aligned with or opposed to the interests of the Chinese Government.”
There are ample reasons to be skeptical of this conclusion. Paul Matzko pointed out some of these in a recent Cato Institute blog post, identifying “two remarkably basic errors that call into question the fundamental utility of the report.””
…
“the researchers fail to account for differences in how long the two social networks in question have been around. Instagram launched nearly 7 years before TikTok’s international launch (and nearly 6 years before TikTok existed at all) and introduced hashtags a few months thereafter (in January 2011). Yet the researchers’ data collection process does not seem to account for the different launch dates, nor does their report even mention this disparity. (Reason reached out to the study authors last week to ask about this but has not received a response.)
The researchers also fail to account for the fact that Instagram and TikTok users are not identical. This leads them “to miss the potential for generational cohort effects,” suggested Matzko. “In short, the median user of Instagram is older than the median user of TikTok. Compare the largest segment of users by age on each platform: 25% of TikTok users in the US are ages 10–19, while 27.4% of Instagram users are 25–34.”
It’s easy to imagine how differing launch dates and typical-user ages could lead to differences in content prevalence, with no nefarious meddling by the Chinese government or algorithmic fiddling by ByteDance needed.”
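The launch-date objection can be made concrete with a toy model. Everything below is an illustrative assumption, not data from the NCRI report: suppose both platforms accumulate hashtag posts at the exact same rate (i.e., with no suppression anywhere), and the only difference is how long each platform has existed. Topics whose activity began before TikTok’s international launch then show an inflated Instagram-to-TikTok ratio automatically.

```python
# Toy model: hashtag posts accumulate only while a platform exists.
# All rates and years here are illustrative assumptions, not NCRI data.

def accumulated_posts(posts_per_year, platform_launch, topic_start, topic_end):
    """Posts a platform accumulates for a topic active in [topic_start, topic_end)."""
    overlap = max(0, topic_end - max(platform_launch, topic_start))
    return posts_per_year * overlap

INSTAGRAM_LAUNCH = 2010   # hashtags arrived a few months later, in January 2011
TIKTOK_LAUNCH = 2017      # international launch

RATE = 1000  # identical posting rate on both platforms -- no moderation difference

# A topic whose activity began in 2012, long before TikTok existed.
ig_old = accumulated_posts(RATE, INSTAGRAM_LAUNCH, 2012, 2024)
tt_old = accumulated_posts(RATE, TIKTOK_LAUNCH, 2012, 2024)

# A topic whose activity began in 2020, after both platforms launched.
ig_new = accumulated_posts(RATE, INSTAGRAM_LAUNCH, 2020, 2024)
tt_new = accumulated_posts(RATE, TIKTOK_LAUNCH, 2020, 2024)

print(ig_old / tt_old)  # older topic looks "overrepresented" on Instagram
print(ig_new / tt_new)  # recent topic shows no skew at all
```

With these made-up numbers, the older topic yields a 12:7 Instagram-to-TikTok ratio and the newer one exactly 1:1, purely because of when each platform launched. A real comparison would need to normalize for platform age (and user demographics) before attributing any ratio gap to deliberate promotion or demotion.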
“The war in Gaza marks the end of that era. X CEO Elon Musk has so profoundly undermined the functions that made Twitter useful in an international crisis that it is now counterproductive to turn to it if you hope to understand what’s happening on the ground when news breaks around the world. Over the course of 10 years, X has evolved from indispensable to useless.
It’s fitting that along with his other changes, Musk changed the name of the company. Literally and figuratively, we’re witnessing the first post-Twitter major world conflict.
Disinformation and misinformation proliferated on the platform prior to Musk’s takeover. But the difference in degree is so significant now as to amount to a difference in kind. Almost every significant change Musk has made has reduced its value as a source of reliable information from and for people affected by a disaster, and every change, similarly, has increased its utility to malicious propagandists and scammers.
When Musk arrived, he dissolved Twitter’s Trust and Safety Council, the advisory group of some 100 independent civil, human rights and other organizations that helped the company to combat misinformation, and fired its full-time Trust and Safety employees, including all but one member of the AI ethics team that monitored algorithmic amplification.
He blew up the old verification system. A blue check once signified that the account belonged to someone whose identity had been confirmed and who fell under one of the following categories: government; companies, brands and organizations; news organizations and journalists; entertainment; sports and gaming; activists and organizers; content creators; and influential individuals.
The prevalence on Twitter of verified journalists, academics, researchers and government sources made it possible, in a crisis, to quickly find reliable people who were on the ground and who could probably be trusted to report what they were seeing in reasonably good faith. Now, the blue checkmark signifies only that the owner has paid for it. When you buy a blue check, your posts go to the top of the search results and replies, irrespective of others’ desire to see them.
X now pays users based on the number of views they receive, creating a massive incentive to post sensationalistic and inflammatory lies. On-the-ground witnesses — and worse, people who need help — can’t reach their audiences unless they have a costly blue check mark and a willingness to compete with the most outrageous promoted content.
Musk has stripped headlines and summaries off article previews. Images are no longer accompanied by context, making it that much easier to misunderstand them and that much less likely that users will read the article. Meanwhile, Musk promotes — directly, via his own tweets and algorithmically — conspiracy theorists, Russian war propagandists, hostile-state media, foreign and domestic extremists and engagement farmers who exploit pain and tragedy to gain followers.”
…
“A massive number of accounts posting images that purport to be from Gaza are in fact posting images from unrelated conflicts. These tweets have racked up millions of views and shares.”
…
“According to Cyabra, an Israeli analysis firm, pro-Hamas forces have launched a coordinated influence operations campaign involving tens of thousands of fake profiles. As a result, one in five social media accounts participating in the conversation about the war in Gaza is fake. One in four pro-Hamas profiles is fake. It’s not clear who is creating and using these fake profiles to spread disinformation, but it could be anyone from Russian internet trolls to antisemites to far-right hucksters who are eager to make a buck.
Accounts that were once clearly labeled as state-affiliated, such as that of Iran’s Press TV, are no longer distinguished from others. In September, an EU report found that the “reach and influence” of Kremlin-backed accounts on social media, and on X in particular, had increased in 2023.
In another study, the EU found that disinformation was more easily found on X than on any other social media platform. It also received more engagement than it did on any other platform.
That report found that X had the highest ratio of what the authors called “disinformation actors” to real posts. “The average engagement with mis/disinformation content found on X is 1.977 times as high as the average engagement with non-mis/disinformation,” the authors wrote. In other words, X users are nearly twice as likely to engage with lies as with the truth.”
…
“Meanwhile, good sources of information are leaving the platform. Many of the most useful voices are now gone. Reporters have fled, largely moving to Bluesky. But Bluesky can’t replace X yet; its network is too small. You can use it to talk to other journalists, not so much to find sources or promote your work to your readers.
To judge by the responses to the fake tweets, most people have no idea they’re fake. Musk certainly doesn’t. Recently, he recommended (in a since-deleted tweet) that his followers follow two well-known disinformation accounts — one of them, for example, provides such helpful analysis as, “The overwhelming majority of people in the media and banks are zi0nists.” When Musk suggests something like this, it is not just his 162 million followers who see it. You can mute him, but unless you do, everything Musk says is now forced into the timeline of every user of the platform, whether or not they follow him.”
…
“This state of affairs is massively deleterious to American national security. Members of Congress are as vulnerable to hostile disinformation as anyone else. One morning, I watched a number of Russian accounts, including that of former Russian President Dmitry Medvedev, begin simultaneously to push out the line that Israel had been attacked with weapons the U.S. sent to Ukraine, which Israelis immediately denied. By afternoon, U.S. Rep. Marjorie Taylor Greene (R-Ga.) was asserting this as fact.”
…
“The degeneration of the quality of information on X means that journalists who are still on the platform waste far more time looking for the signal in the noise. They waste more time running down rumors. They are at greater risk of sharing fake information. They are doubtless absorbing narratives and framings from X shaped by disinformation, even if they’re not sharing falsehoods.”
…
“At least in the short term, the market won’t be able to solve this problem because Twitter’s value to consumers was owed to its market dominance. Everyone used it. A number of competitors are now trying to fill the void, but because the microblogging market is now fractured, no company can play the central role Twitter played.”