What the evidence really says about social media’s impact on teens’ mental health


The Bad Science Behind Jonathan Haidt’s Call to Regulate Social Media

“Haidt cites 476 studies in his book that seem to represent an overwhelming case. But two-thirds of them were published before 2010, or before the period that Haidt focuses on in the book. Only 22 of them have data on either heavy social media use or serious mental issues among adolescents, and none have data on both.
There are a few good studies cited in the book. For example, one co-authored by psychologist Jean Twenge uses a large and carefully selected sample with a high response rate. It employs exploratory data analysis rather than cookbook statistical routines.

Unfortunately for Haidt, that study undercuts his claim. The authors did find that heavy television watchers, video game players, and computer and phone users were less happy. But the similar graphs for these four ways of spending time suggest that the specific activity didn’t matter. This study actually suggests that spending an excessive amount of time in front of any one type of screen is unhealthy—not that there’s anything uniquely dangerous about social media.”


Is the new push to ban TikTok for real?

“The constitutional law here appears straightforward: Congress can’t outright ban TikTok or any social media platform unless it can prove that it poses legitimate and serious privacy and national security concerns that can’t be addressed by any other means. The bar for such a justification is necessarily very high in order to protect Americans’ First Amendment rights, Krishnan said.”

“members of Congress have not provided concrete proof for their claims about Chinese digital espionage and seem to have little interest in offering any transparency: Before the committee voted to advance the bill Thursday, lawmakers had a closed-door classified briefing on national security concerns associated with TikTok.”


The Big Flaws in That Study Suggesting That China Manipulates TikTok Topics

“The latest wave of fearmongering about TikTok involves a study purportedly showing that the app suppresses content unflattering to China. The study attracted a lot of coverage in the American media, with some declaring it all the more reason to ban the video-sharing app.”

“But there are serious flaws in the study design that undermine its conclusions and any panicky takeaways from them.
In the study, the Network Contagion Research Institute (NCRI) compared the use of specific hashtags on Instagram (owned by the U.S. company Meta) and on TikTok (owned by the Chinese company ByteDance). The analysis included hashtags related both to general subjects and to “China sensitive topics” such as Uyghurs, Tibet, and Tiananmen Square. “While ratios for non-sensitive topics (e.g., general political and pop-culture) generally followed user ratios (~2:1), ratios for topics sensitive to the Chinese Government were much higher (>10:1),” states the report, titled “A Tik-Tok-ing Timebomb: How TikTok’s Global Platform Anomalies Align with the Chinese Communist Party’s Geostrategic Objectives.”

The study concludes that there is “a strong possibility that TikTok systematically promotes or demotes content on the basis of whether it is aligned with or opposed to the interests of the Chinese Government.”

There are ample reasons to be skeptical of this conclusion. Paul Matzko pointed out some of these in a recent Cato Institute blog post, identifying “two remarkably basic errors that call into question the fundamental utility of the report.””

“the researchers fail to account for differences in how long the two social networks in question have been around. Instagram launched nearly 7 years before TikTok’s international launch (and nearly 6 years before TikTok existed at all) and introduced hashtags a few months thereafter (in January 2011). Yet the researchers’ data collection process does not seem to account for the different launch dates, nor does their report even mention this disparity. (Reason reached out to the study authors last week to ask about this but has not received a response.)
The researchers also fail to account for the fact that Instagram and TikTok users are not identical. This leads them “to miss the potential for generational cohort effects,” suggested Matzko. “In short, the median user of Instagram is older than the median user of TikTok. Compare the largest segment of users by age on each platform: 25% of TikTok users in the US are ages 10–19, while 27.4% of Instagram users are 25–34.”

It’s easy to imagine how differing launch dates and typical-user ages could lead to differences in content prevalence, with no nefarious meddling by the Chinese government or algorithmic fiddling by ByteDance needed.”


Opinion | Twitter Gave Us an Indispensable Real-Time News Platform. X Took It Away.

“The war in Gaza marks the end of that era. X CEO Elon Musk has so profoundly undermined the functions that made Twitter useful in an international crisis that it is now counterproductive to turn to it if you hope to understand what’s happening on the ground when news breaks around the world. Over the course of 10 years, X has evolved from indispensable to useless.
It’s fitting that along with his other changes, Musk changed the name of the company. Literally and figuratively, we’re witnessing the first post-Twitter major world conflict.

Disinformation and misinformation proliferated on the platform prior to Musk’s takeover. But the difference in degree is so significant now as to amount to a difference in kind. Almost every significant change Musk has made has reduced its value as a source of reliable information from and for people affected by a disaster, and every change, similarly, has increased its utility to malicious propagandists and scammers.

When Musk arrived, he dissolved Twitter’s Trust and Safety Council, the advisory group of some 100 independent civil, human rights and other organizations that helped the company to combat misinformation, and fired its full-time Trust and Safety employees, including all but one member of the AI ethics team that monitored algorithmic amplification.

He blew up the old verification system. A blue check once signified that the account belonged to someone whose identity had been confirmed and who fell under one of the following categories: government; companies, brands and organizations; news organizations and journalists; entertainment; sports and gaming; activists and organizers; content creators; and influential individuals.

The prevalence on Twitter of verified journalists, academics, researchers and government sources made it possible, in a crisis, to quickly find reliable people who were on the ground and who could probably be trusted to report what they were seeing in reasonably good faith. Now, the blue checkmark signifies only that the owner has paid for it. When you buy a blue check, your posts go to the top of the search results and replies, irrespective of others’ desire to see them.

X now pays users based on the number of views they receive, creating a massive incentive to post sensationalistic and inflammatory lies. On-the-ground witnesses — and worse, people who need help — can’t reach their audiences unless they have a costly blue check mark and a willingness to compete with the most outrageous promoted content.

Musk has stripped headlines and summaries off article previews. Images are no longer accompanied by context, making it that much easier to misunderstand them and that much less likely that users will read the article. Meanwhile, Musk promotes — directly, via his own tweets and algorithmically — conspiracy theorists, Russian war propagandists, hostile-state media, foreign and domestic extremists and engagement farmers who exploit pain and tragedy to gain followers.”

“A massive number of accounts posting images that purport to be from Gaza are in fact posting images from unrelated conflicts. These tweets have racked up millions of views and shares.”

“According to Cyabra, an Israeli analysis firm, pro-Hamas forces have launched a coordinated influence operations campaign involving tens of thousands of fake profiles. As a result, one in five social media accounts participating in the conversation about the war in Gaza are fake. One in four pro-Hamas profiles are fake. It’s not clear who is creating and using these fake profiles to spread disinformation, but it could be anyone from Russian internet trolls to antisemites to far-right hucksters who are eager to make a buck.

Accounts that were once clearly labeled as state-affiliated, such as that of Iran’s Press TV, are no longer distinguished from others. In September, an EU report found that the “reach and influence” of Kremlin-backed accounts on social media, and on X in particular, had increased in 2023.

In another study, the EU found that disinformation was more easily found on X than on any other social media platform. It also received more engagement than it did on any other platform.

That report found that X had the highest ratio of what the authors called “disinformation actors” to real posts. “The average engagement with mis/disinformation content found on X is 1.977 times as high as the average engagement with non-mis/disinformation,” the authors wrote. In other words, X users are twice as likely to engage with lies as the truth.”

“Meanwhile, good sources of information are leaving the platform. Many of the most useful voices are now gone. Reporters have fled, largely moving to Bluesky. But Bluesky can’t replace X yet; its network is too small. You can use it to talk to other journalists, not so much to find sources or promote your work to your readers.

To judge by the responses to the fake tweets, most people have no idea they’re fake. Musk certainly doesn’t. Recently, he recommended (in a since-deleted tweet) that his followers follow two well-known disinformation accounts — one of them, for example, provides such helpful analysis as, “The overwhelming majority of people in the media and banks are zi0nists.” When Musk suggests something like this, it is not just his 162 million followers who see it. You can mute him, but unless you do, everything Musk says is now forced into the timeline of every user of the platform, whether or not they follow him.”

“This state of affairs is massively deleterious to American national security. Members of Congress are as vulnerable to hostile disinformation as anyone else. One morning, I watched a number of Russian accounts, including that of former Russian President Dmitry Medvedev, begin simultaneously to push out the line that Israel had been attacked with weapons the U.S. sent to Ukraine, which Israelis immediately denied. By afternoon, U.S. Rep. Marjorie Taylor Greene (R-Ga.) was asserting this as fact.”

“The degeneration of the quality of information on X means that journalists who are still on the platform waste far more time looking for the signal in the noise. They waste more time running down rumors. They are at greater risk of sharing fake information. They are doubtless absorbing narratives and framings from X shaped by disinformation, even if they’re not sharing falsehoods.”

“At least in the short term, the market won’t be able to solve this problem because Twitter’s value to consumers was owed to its market dominance. Everyone used it. A number of competitors are now trying to fill the void, but because the microblogging market is now fractured, no company can play the central role Twitter played.”


Elon Musk’s Israel disinformation investigation, explained

“In the hours after the Hamas attack on Israel began, users subscribed to X Premium — whose accounts show a verified check mark and get boosted engagement in exchange for a monthly fee — spread a number of particularly egregious pieces of misinformation. According to a running tracker by Media Matters, these accounts amplified a fake White House memo claiming the US government was about to send $8 billion in aid to Israel; circulated videos from other conflicts (and in some cases, footage from a video game) while claiming they showed the latest out of Israel and Gaza; falsely claimed that a church in Gaza had been bombed; and impersonated a news outlet. These posts were shared by X users with huge followings and viewed tens of millions of times. The Tech Transparency Project said on Thursday that it had identified X Premium accounts promoting Hamas propaganda videos, which were viewed hundreds of thousands of times.”


Propagandists are exploiting Syria’s suffering to win the information war in Gaza

“In one video, children cry amid rubble. In another, explosions rip through residential neighbourhoods. The images have gone viral on X (formerly Twitter), purporting to be from the ongoing chaos in Israel and Gaza. They actually originate from the war in Syria – including my family’s besieged hometown of Aleppo, where the Assad regime’s tanks once fired on my grandparents’ home while they were still inside.
They are not isolated examples, and the proliferation of misinformation on X is now so extreme that the European Commission began an official investigation last week. The past week has proved that the site is now unable to effectively tackle the spread of falsehoods in a time of crisis.”


Even Elon Musk can’t fully wreck Twitter’s one great superpower

“the TV ratings Nielsen reports have no correlation to the viewership numbers Twitter is reporting. For starters, as always, Nielsen is reporting the average viewership the debates generated, not the total number of views, which is what online outlets generally report. That’s not a new discrepancy, and at this point everyone in tech and media should know better but either doesn’t or pretends not to.

More important, under Musk, Twitter has moved to an even more fanciful description of “viewership,” where it’s not even pretending to count people who watch the video. Instead, it’s simply measuring the number of times someone has seen the tweet with the video scroll through their feed, as the Washington Post’s Will Oremus reports.”


Biden White House Pressured Facebook To Censor Lab Leak Posts

“President Joe Biden’s White House pushed Meta, the parent company of Facebook and Instagram, to censor contrarian COVID-19 content, including speculation about the virus having escaped from a lab, vaccine skepticism, and even jokes.
“Can someone quickly remind me why we were removing—rather than demoting/labeling—claims that Covid is man made,” asked Nick Clegg, president for global affairs at the company, in a July 2021 email to his coworkers.

A content moderator replied, “We were under pressure from the administration and others to do more. We shouldn’t have done it.””

“According to a trove of confidential documents obtained by Reason, health advisers at the CDC had significant input on pandemic-era social media policies at Facebook as well. They were consulted frequently, at times daily. They were actively involved in the affairs of content moderators, providing constant and ever-evolving guidance. They requested frequent updates about which topics were trending on the platforms, and they recommended what kinds of content should be deemed false or misleading. “Here are two issues we are seeing a great deal of misinfo on that we wanted to flag for you all,” reads one note from a CDC official. Another email with sample Facebook posts attached begins: “BOLO for a small but growing area of misinfo.””

“These Facebook Files show that the platform responded with incredible deference. Facebook routinely asked the government to vet specific claims, including whether the virus was “man-made” rather than zoonotic in origin. (The CDC responded that a man-made origin was “technically possible” but “extremely unlikely.”) In other emails, Facebook asked: “For each of the following claims, which we’ve recently identified on the platform, can you please tell us if: the claim is false; and, if believed, could this claim contribute to vaccine refusals?””