Google’s Brief to the Supreme Court Explains Why We Need Section 230

“If Section 230 does not apply to how YouTube organizes third-party videos, petitioners and the government have no coherent theory that would save search recommendations and other basic software tools that organize an otherwise unnavigable flood of websites, videos, comments, messages, product…”

In Defense of Algorithms

“When Facebook launched in 2004, it was a fairly static collection of profile pages. Facebook users could put lists of favorite media on their “walls” and use the “poke” button to give each other social-media nudges. To see what other people were posting, you had to intentionally visit their pages. There were no automatic notifications, no feeds to alert you to new information.
In 2006, Facebook introduced the News Feed, an individualized homepage for each user that showed friends’ posts in chronological order. The change seemed small at the time, but it turned out to be the start of a revolution. Instead of making an active choice to check in on other people’s pages, users got a running list of updates.

Users still controlled what information they saw by selecting which people and groups to follow. But now user updates, from new photos to shower thoughts, were delivered automatically, as a chronologically ordered stream of real-time information.

This created a problem. Facebook was growing fast, and users were spending more and more time on it, especially once Apple’s iPhone app store brought social media to smartphones. It wasn’t long before there were simply too many updates for many people to reasonably follow. Sorting the interesting from the irrelevant became a big task.

But what if there were a way for the system to sort through those updates for users, determining which posts might be most interesting, most relevant, most likely to generate a response?

In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm.

Instead of a simple time-ordered log of posts from friends and pages you followed, you saw whichever of these posts Facebook’s algorithms “decided” you should see, filtering content based on an array of factors designed to suss out which content users found more interesting. That algorithm not only changed Facebook; it changed the world, making Facebook specifically—and social media algorithms generally—the subject of intense cultural and political debate.”

“Algorithms…help solve problems of information abundance. They cut through the noise, making recommendations more relevant, helping people see what they’re most likely to want to see, and helping them avoid content they might find undesirable. They make our internet experience less chaotic, less random, less offensive, and more efficient.”

“As Facebook and other social media companies started using them to sort and prioritize vast troves of user-generated content, algorithms started determining what material people were most likely to see online. Mathematical assessment replaced bespoke human judgment, leaving some people upset at what they were missing, some annoyed at what they were shown, and many feeling manipulated.

The algorithms that sort content for Facebook and other social media megasites change constantly. The precise formulas they employ at any given moment aren’t publicly known. But one of the key metrics is engagement, such as how many people have commented on a post or what type of emoji reactions it’s received.
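The precise formulas aren’t public, but the general shape of an engagement-weighted ranker is easy to sketch. Everything below is an illustrative assumption: the `Post` fields, the weights, and the age-decay term are hypothetical stand-ins, not any platform’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float
    comments: int
    shares: int
    reactions: dict  # e.g. {"like": 5, "angry": 2}

# Illustrative weights; real platforms tune these constantly and do not publish them.
WEIGHTS = {"comments": 4.0, "shares": 8.0, "like": 1.0, "love": 2.0, "angry": 2.0}

def engagement_score(post: Post) -> float:
    """Weighted engagement, decayed by the post's age in hours."""
    raw = (WEIGHTS["comments"] * post.comments
           + WEIGHTS["shares"] * post.shares
           + sum(WEIGHTS.get(r, 1.0) * n for r, n in post.reactions.items()))
    return raw / (1.0 + post.age_hours)  # fresher posts rank higher

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement score instead of chronology."""
    return sorted(posts, key=engagement_score, reverse=True)
```

The key design point is the switch in `rank_feed` from sorting by timestamp to sorting by a tuned score, which is the change the 2013 News Feed shift represents.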

As social media platforms like Facebook and Twitter (which shifted its default from chronological to algorithmic feeds in 2016) became more dominant as sources of news and political debate, people began to fear that algorithms were taking control of America’s politics.

Then came the 2016 election. In the wake of Trump’s defeat of Hillary Clinton in the presidential race, reports started trickling out that Russia may have posted on U.S. social media in an attempt to influence election results. Eventually it emerged that employees of a Russian company called the Internet Research Agency had posed as American individuals and groups on Facebook, Instagram, Tumblr, Twitter, and YouTube. These accounts posted and paid for ads on inflammatory topics, criticized candidates (especially Clinton), and sometimes shared fake news. The Senate Select Committee on Intelligence opened an investigation, and Facebook, Google, and Twitter executives were called before Congress to testify.”

“Progressives continued to embrace this explanation with each new and upsetting political development. The alt-right? Blame algorithms! Conspiracy theories about Clinton and sex trafficking? Algorithms! Nice Aunt Sue becoming a cantankerous loon online? Algorithms, of course.

Conservatives learned to loathe the algorithm a little later. Under fire about Russian trolls and other liberal bugaboos, tech companies started cracking down on a widening array of content. Conservatives became convinced that different kinds of algorithms—the ones used to find and deal with hate speech, spam, and other kinds of offensive posts—were more likely to flag and punish conservative voices. They also suspected that algorithms determining what people did see were biased against conservatives.”

“A common thread in all this is the idea that algorithms are powerful engines of personal and political behavior, either deliberately engineered to push us to some predetermined outcome or negligently wielded in spite of clear dangers. Inevitably, this narrative produced legislative proposals.”

“It’s no secret that tech companies engineer their platforms to keep people coming back. But this isn’t some uniquely nefarious feature of social media businesses. Keeping people engaged and coming back is the crux of entertainment entities from TV networks to amusement parks.

Moreover, critics have the effect of algorithms precisely backward. A world without algorithms would mean kids (and everyone else) encountering more offensive or questionable content.

Without the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content,” Nick Clegg, Meta’s then–vice president of global affairs, told George Stephanopoulos last year. That’s because algorithms are used to “identify and deprecate and downgrade bad content.” After all, algorithms are just sorting tools. So Facebook uses them to sort and downgrade hateful content.
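Clegg’s point that the same sorting machinery is used to “deprecate and downgrade bad content” can be sketched as a penalty applied to a ranking score. The policy labels and penalty strengths here are purely illustrative assumptions, not Meta’s actual system:

```python
def moderated_score(base_score: float, flags: dict[str, float]) -> float:
    """Downgrade a post's ranking score using classifier-assigned flags.

    `flags` maps a policy label (e.g. "hate_speech") to a classifier
    confidence in [0, 1]; each flag multiplicatively demotes the post.
    """
    # Hypothetical penalty strengths; a real system tunes these per policy.
    penalties = {"hate_speech": 0.9, "misinformation": 0.7, "spam": 0.95}
    score = base_score
    for label, confidence in flags.items():
        score *= 1.0 - penalties.get(label, 0.0) * confidence
    return score
```

A post flagged as hate speech with full confidence keeps only a tenth of its score under these made-up numbers, so it sinks in the feed rather than disappearing, which is what “downgrade” means in this context.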

“Without [algorithms], you just get an undifferentiated mass of content, and that’s not very useful,” noted Techdirt editor Mike Masnick last March.”

“[S]everal studies suggest social media is actually biased toward conservatives. A paper published in Research & Politics in 2022 found that a Facebook algorithm change in 2018 benefited local Republicans more than local Democrats. In 2021, Twitter looked at how its algorithms amplify political content, examining millions of tweets sent by elected officials in seven countries, as well as “hundreds of millions” of tweets in which people shared links to articles. It found that “in six out of seven countries—all but Germany—Tweets posted by accounts from the political right receive more algorithmic amplification than the political left” and that right-leaning news outlets also “see greater algorithmic amplification.”

As for the Republican email algorithms bill, it would almost certainly backfire. Email services like Gmail use algorithms to sort out massive amounts of spam: If the GOP bill passed, it could mean email users would end up seeing a lot more spam in their inboxes as services strove to avoid liability.”
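Why a liability carve-out would let spam through can be shown with a toy score-based filter. The phrases, weights, threshold, and `exempt` flag below are all hypothetical illustrations; real spam filtering is far more sophisticated than this:

```python
# Hypothetical phrase weights: a message whose total spam score crosses the
# threshold is diverted to the spam folder, unless it carries an exempt label
# (as the GOP bill would effectively require for political campaign mail).
SPAM_SIGNALS = {"free money": 3.0, "act now": 2.0, "unsubscribe": 0.5}
THRESHOLD = 2.5

def route(message: str, exempt: bool = False) -> str:
    """Return 'inbox' or 'spam' for a message under the toy rules above."""
    score = sum(w for phrase, w in SPAM_SIGNALS.items()
                if phrase in message.lower())
    if exempt or score < THRESHOLD:
        return "inbox"
    return "spam"
```

Under these toy rules, the same message that scores well above the threshold still lands in the inbox once the `exempt` flag is set, which is the backfire the excerpt describes.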

“[I]t becomes clear why people might feel like algorithms have increased polarization. Life not long ago meant rarely engaging in political discussion with people outside one’s immediate community, where viewpoints tend to coalesce or are glossed over for the sake of propriety. For instance, before Facebook, my sister and an old family friend would likely never have gotten into clashes about Trump—it just wouldn’t have come up in the types of interactions they found themselves in. But that doesn’t mean they’re more politically divergent now; they just know more about it. Far from limiting one’s horizons, engaging with social media means greater exposure to opposing viewpoints, information that challenges one’s beliefs, and sometimes surprising perspectives from people around you.

The evidence used to support the social media/polarization hypothesis is often suspect. For instance, people often point to rising political polarization itself. But polarization seems to have begun its rise decades before Facebook and Twitter came along.”

“For the average person online, algorithms do a lot of good. They help us get recommendations tailored to our tastes, save time while shopping online, learn about films and music we might not otherwise be exposed to, avoid email spam, keep up with the biggest news from friends and family, and be exposed to opinions we might not otherwise hear.”

“If algorithms are driving political chaos, we don’t have to look at the deeper rot in our democratic systems. If algorithms are driving hate and paranoia, we don’t have to grapple with the fact that racism, misogyny, antisemitism, and false beliefs never faded as much as we thought they had. If the algorithms are causing our troubles, we can pass laws to fix the algorithms. If algorithms are the problem, we don’t have to fix ourselves.

Blaming algorithms allows us to avoid a harder truth. It’s not some mysterious machine mischief that’s doing all of this. It’s people, in all our messy human glory and misery. Algorithms sort for engagement, which means they sort for what moves us, what motivates us to act and react, what generates interest and attention. Algorithms reflect our passions and predilections back at us.”

Maybe Trump was right about TikTok

“TikTok is owned by ByteDance, which is based in China. It isn’t an arm of the Chinese Communist Party, but Chinese laws say it can be forced to assist the Chinese government. That could mean handing all the data its app has collected about American citizens to China. And TikTok collects a lot of data about its users.
“The Chinese government has established clear pathways to empower itself to surveil individuals, to gather data from corporations, and through the 2017 [National Intelligence] law, to aggregate that data on government servers,” said Aynne Kokas, director of the University of Virginia’s East Asia Center and author of the recently released book Trafficking Data: How China Is Winning the Battle for Digital Sovereignty. “The degree to which any of this is happening is difficult to know.”

TikTok has repeatedly said it isn’t happening and that it never will. It’s also tried to distance itself from its Chinese parent company. But those claims have been undermined by recent reports that say ByteDance has a great deal of control over TikTok and its direction, that China does have access to US data, and that ByteDance has tried to get location data from a few Americans through their TikTok accounts. (To these reports, TikTok has said that the app doesn’t collect precise location data and therefore couldn’t surveil US users this way, and that leaked conversations about Chinese employees having access to US data were about figuring out how to turn that access off.)”

“They also fear that TikTok, directed by the Chinese government, will push propaganda or disinformation, which wouldn’t be hard to do considering how much content TikTok feeds its users through its “For You” algorithm. It’s also not out of the realm of possibility that it would do this. A 2019 report showed that ByteDance had a list of banned content on TikTok, which included Tiananmen Square, Tibet, and Taiwan. And China has been caught using social media to spread disinformation or propaganda before (as have many other countries, including the United States). But that was through someone else’s platform. With TikTok, China could directly control what’s on the platform and how it’s distributed. It can’t do that with Facebook or Instagram.”

“there’s the fear that China will be able to use TikTok’s data to power its AI innovations. That’s an advantage the US won’t have, both because American social media apps are banned in China and because there are no US laws that would compel social media companies to hand over data just because the government wants it.”

“While some have come around to thinking Trump was right to want to ban TikTok, they don’t necessarily agree with how he tried to do it. Courts didn’t agree either, and blocked his August 2020 executive order that would have forced ByteDance to sell TikTok or be banned. But it never made it to an actual trial, as Biden took office and revoked the executive order.
Republican leaders have criticized President Biden for not being as tough as Trump on TikTok and appearing to support the platform by reaching out to some of its biggest influencers. But the Biden administration isn’t going easy on TikTok, either. Biden recently issued an executive order expanding the definition of national security for the purposes of CFIUS reviews to include data and technologies necessary to “protect United States technical leadership.” It doesn’t directly address TikTok, but it certainly includes it.

CFIUS, by the way, has been reviewing ByteDance’s acquisition of Musical.ly for several years now. CFIUS doesn’t comment on ongoing investigations, but TikTok said in a statement to Recode that “we will not comment on the specifics of confidential discussions with the US government, but we are confident that we are on a path to fully satisfy all reasonable US national security concerns.”

To that end, TikTok is currently trying to wall US data off from China to satisfy CFIUS’s concerns in an effort it’s dubbed “Project Texas.” That would keep what’s considered “protected” data on US users on US-based servers run by Oracle, with controls over who has access to it.”

“A deal between CFIUS and TikTok has reportedly been imminent for weeks now, but it hasn’t happened yet. There are doubts that anything short of forcing ByteDance to sell off TikTok would guarantee that China can’t access user data or do anything about concerns over pushing propaganda and disinformation.”

“These problems could be solved very quickly if ByteDance were to sell off TikTok, but that doesn’t seem to be an option. The Chinese government would have to approve such a move, and experts say that’s very unlikely.

“The Chinese government loves TikTok,” Lewis said, pointing out that it’s the only social media app from China that’s been successful outside of the country. “The Chinese government will protect it.””

Social Media Interaction Does Not Improve Political Knowledge, but It Does Polarize Us

“Two new political science studies investigate how all of this time spent on social media affects our politics. The first asks what, if anything, digital denizens learn about politics, while the second develops a model to explain how social media interactions spark culture wars by sorting people into antagonistic political tribes.
Published in the Journal of Communication, the first study finds that whatever else millions of social media devotees learn from their online activity—catching up with friends and finding new ones, checking out product reviews, and watching cat videos—one thing they do not do is learn about politics.

There are “no observable political knowledge gains from using Facebook, Twitter, or SNS [social network sites] in general; when measuring policy-specific, campaign-related, or general political knowledge; in election and routine periods; and when individuals use social media specifically for news or for more general purposes,” find Israeli communications researchers Eran Amsalem and Alon Zoizner. They reached this conclusion after parsing data involving more than 440,000 subjects in a pre-registered meta-analysis of 76 different studies on the effect of social media on political knowledge.

One minor exception to this sweeping conclusion is that research using experimental setups finds small but statistically significant knowledge gains from using social media. However, Amsalem and Zoizner note that in such experiments subjects are given no choice over what political information they see, so they may be artificially induced to pay greater attention to it than they ordinarily would scrolling out in the wild. The experiments also fail to account for information decay, since they test subjects’ political knowledge immediately after exposure to the information. “Considering these caveats, our conclusion remains that social media contribute little, if at all, to political knowledge,” write the authors.”

“Törnberg observes that accumulating data indicate that political polarization in the U.S. and other countries is not growing because people are increasingly isolating themselves with like-minded folks in social media echo chambers that confirm the righteousness of their views. Instead, Törnberg notes that the “empirical literature suggests that digitalization does not appear to lead to a reduction of interaction across [the] political divide, but quite the opposite: it confronts us with diverse individuals, perspectives, and viewpoints, often in contentious ways.” So how might social media contribute to the apparent increase in partisan rancor? Törnberg develops a model showing how political disputes, controversies, and debates on nonlocal social media can drive people to adopt increasingly extreme positions.

Back in the good old days, political and other differences of opinion were largely local and neighbors had many cross-cutting issues and commonalities that moderated their views of those who disagreed with them. “Social conflict is sustainable as long as there are multiple and non-overlapping lines of disagreement: we may differ on our views on one issue but agree on another; we may vote differently, but if we support the same football team or go to the same church, there remains space for interpersonal respect,” writes Törnberg. “The recent rise in polarization is thus expressive of a gradual breakdown of this cohesive glue, driven by a gradual alignment of social, economic, geographic, and ideological differences and conflicts.””

“Based on the results of his model, Törnberg argues that we need to rethink “digital media as not merely arenas for rational deliberation and political debate but as spaces for social identity formation and for symbolic displays of solidarity with allies and difference from outgroups. Digital media do not isolate us from opposing ideas; au contraire, they throw us into a national political war, in which we are forced to take sides.””

The Government Can’t Fix Social Media Moderation and Should Not Try

“Despite their increasingly bitter differences, Democrats and Republicans generally agree that content moderation by social media companies is haphazard at best. But while Democrats tend to think the main problem is too much speech of the wrong sort, Republicans complain that platforms like Facebook, Twitter, and YouTube are biased against them.
The government cannot resolve this dispute and should not try. Siding with the critics who complain about online “misinformation” poses an obvious threat to free inquiry and open debate. And while attempting to mandate evenhandedness might seem more consistent with those values, it undermines the freedoms guaranteed by the First Amendment in a more subtle but equally troubling way.”