TikTok Is Too Popular To Ban

“Investigative journalists have tried hard to find evidence that TikTok is leaking data to Chinese authorities, but to no avail.
“I haven’t found any evidence” of “the company handing over data to Chinese authorities, or security risks associated with its connection to the Chinese state,” writes Chris Stokel-Walker—who has done ample critical reporting about the company—at BuzzFeed this week:

I’ve been trying for years to find any links to the Chinese state. I’ve spoken to scores of TikTok employees, past and present, in pursuit of such a connection. But I haven’t discovered it….”

“TikTok adamantly denies allegations about data sharing with the Chinese government. “TikTok has never shared, or received a request to share, U.S. user data with the Chinese government,” said its CEO, Shou Zi Chew, in prepared testimony released ahead of today’s House hearing. “Nor would TikTok honor such a request if one were ever made.”
“Let me state this unequivocally: ByteDance is not an agent of China or any other country,” the testimony continues. “Bans are only appropriate when there are no alternatives. But we do have an alternative.”

The company has been cooperating with U.S. regulators to develop protocols around user data that will help mollify security and privacy concerns. “TikTok has formed a special-purpose subsidiary, TikTok U.S. Data Security (USDS), that currently has nearly 1,500 full-time employees and contracted with Oracle to store TikTok’s U.S. user data,” notes Reuters. According to Chew’s testimony, “Oracle has already begun inspecting TikTok’s source code and will have unprecedented access to the related algorithms and data models.”

Facebook Says Noting the CDC’s Scientific Misrepresentations ‘Could Mislead People’

“Laboratory experiments provide good reason to believe that masks, especially N95s, can reduce the risk that someone will be infected or infect other people. But those experiments are conducted in idealized conditions that may not resemble the real world, where people often choose low-quality cloth masks and do not necessarily wear masks properly or consistently.

Observational studies, which look at infection rates among voluntary mask wearers or people subject to mask mandates, can provide additional evidence that general mask wearing reduces infection. But such studies do not fully account for confounding variables.
If people who voluntarily wear masks or live in jurisdictions that require them to do so differ from the comparison groups in ways that independently affect disease transmission, the estimates derived from observational studies will be misleading. Those studies can also be subject to other pitfalls, such as skewed sampling and recall bias, that make it difficult to reach firm conclusions.

Despite those uncertainties, the CDC touted an observational study that supposedly proved “wearing a mask lowered the odds of testing positive” by as much as 83 percent. It said even cloth masks reduced infection risk by 56 percent, although that result was not statistically significant and the study’s basic design, combined with grave methodological weaknesses, made it impossible to draw causal inferences.”
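To see where a figure like “83 percent lower odds” comes from, here is a minimal sketch of how a case-control study yields an odds ratio. All counts below are hypothetical, invented purely for illustration; they are not the figures from the CDC study discussed above.

```python
# Minimal sketch of an odds ratio from a 2x2 case-control table.
# All counts are hypothetical, for illustration only.

def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Odds ratio: odds of the outcome among the exposed,
    divided by odds of the outcome among the unexposed."""
    odds_exposed = exposed_cases / exposed_controls
    odds_unexposed = unexposed_cases / unexposed_controls
    return odds_exposed / odds_unexposed

# Hypothetical table: mask wearers vs. non-wearers,
# test-positives (cases) vs. test-negatives (controls).
or_ = odds_ratio(17, 100, 100, 100)
print(f"odds ratio = {or_:.2f}")             # 0.17
print(f"reduction in odds = {1 - or_:.0%}")  # 83%
```

Note that an odds ratio is not the same as a reduction in infection risk: the two only approximate each other when the outcome is rare, which is one reason headline translations of such studies can overstate the effect.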

“If wearing a mask had the dramatic impact that the CDC claimed, you would expect to see some evidence of that in RCTs. Yet the Cochrane review found essentially no relationship between mask wearing and disease rates, whether measured by reported symptoms or by laboratory tests. Nor did it confirm the expectation that N95s would prove superior to surgical masks in the field. The existing RCT evidence, the authors said, “demonstrates no differences in clinical effectiveness.”

“Does the Cochrane review prove that masks are worthless in protecting people from COVID-19? No. But it does show that the Centers for Disease Control and Prevention (CDC) misled the public about the strength of the evidence supporting mask mandates.”

Google’s Brief to the Supreme Court Explains Why We Need Section 230

“If Section 230 does not apply to how YouTube organizes third-party videos, petitioners and the government have no coherent theory that would save search recommendations and other basic software tools that organize an otherwise unnavigable flood of websites, videos, comments, messages, product…”

In Defense of Algorithms

“When Facebook launched in 2004, it was a fairly static collection of profile pages. Facebook users could put lists of favorite media on their “walls” and use the “poke” button to give each other social-media nudges. To see what other people were posting, you had to intentionally visit their pages. There were no automatic notifications, no feeds to alert you to new information.
In 2006, Facebook introduced the News Feed, an individualized homepage for each user that showed friends’ posts in chronological order. The change seemed small at the time, but it turned out to be the start of a revolution. Instead of making an active choice to check in on other people’s pages, users got a running list of updates.

Users still controlled what information they saw by selecting which people and groups to follow. But now user updates, from new photos to shower thoughts, were delivered automatically, as a chronologically ordered stream of real-time information.

This created a problem. Facebook was growing fast, and users were spending more and more time on it, especially once Apple’s iPhone app store brought social media to smartphones. It wasn’t long before there were simply too many updates for many people to reasonably follow. Sorting the interesting from the irrelevant became a big task.

But what if there were a way for the system to sort through those updates for users, determining which posts might be most interesting, most relevant, most likely to generate a response?

In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm.

Instead of a simple time-ordered log of posts from friends and pages you followed, you saw whichever of these posts Facebook’s algorithms “decided” you should see, filtering content based on an array of factors designed to suss out which content users found more interesting. That algorithm not only changed Facebook; it changed the world, making Facebook specifically—and social media algorithms generally—the subject of intense cultural and political debate.”

“Algorithms…help solve problems of information abundance. They cut through the noise, making recommendations more relevant, helping people see what they’re most likely to want to see, and helping them avoid content they might find undesirable. They make our internet experience less chaotic, less random, less offensive, and more efficient.”

“As Facebook and other social media companies started using them to sort and prioritize vast troves of user-generated content, algorithms started determining what material people were most likely to see online. Mathematical assessment replaced bespoke human judgment, leaving some people upset at what they were missing, some annoyed at what they were shown, and many feeling manipulated.

The algorithms that sort content for Facebook and other social media megasites change constantly. The precise formulas they employ at any given moment aren’t publicly known. But one of the key metrics is engagement, such as how many people have commented on a post or what type of emoji reactions it’s received.
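The engagement-based sorting described above can be sketched as a simple weighted score. The field names and weights below are invented for illustration; as the passage notes, the real formulas are not public and combine far more signals.

```python
# Toy sketch of engagement-based feed ranking.
# Field names and weights are hypothetical, for illustration only.

posts = [
    {"id": 1, "comments": 2,  "reactions": 5,   "shares": 0},
    {"id": 2, "comments": 40, "reactions": 120, "shares": 9},
    {"id": 3, "comments": 0,  "reactions": 1,   "shares": 0},
]

def engagement_score(post):
    # Comments and shares take more effort than a one-tap reaction,
    # so weight them more heavily (the weights here are arbitrary).
    return 3 * post["comments"] + 1 * post["reactions"] + 5 * post["shares"]

# Instead of chronological order, show the most-engaged posts first.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # [2, 1, 3]
```

The contrast with the pre-2013 design is just the sort key: a chronological feed sorts on a timestamp, an algorithmic one sorts on a score like this.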

As social media platforms like Facebook and Twitter (which shifted its default from chronological to algorithmic feeds in 2016) became more dominant as sources of news and political debate, people began to fear that algorithms were taking control of America’s politics.

Then came the 2016 election. In the wake of Trump’s defeat of Hillary Clinton in the presidential race, reports started trickling out that Russia may have posted on U.S. social media in an attempt to influence election results. Eventually it emerged that employees of a Russian company called the Internet Research Agency had posed as American individuals and groups on Facebook, Instagram, Tumblr, Twitter, and YouTube. These accounts posted and paid for ads on inflammatory topics, criticized candidates (especially Clinton), and sometimes shared fake news. The Senate Select Committee on Intelligence opened an investigation, and Facebook, Google, and Twitter executives were called before Congress to testify.”

“Progressives continued to embrace this explanation with each new and upsetting political development. The alt-right? Blame algorithms! Conspiracy theories about Clinton and sex trafficking? Algorithms! Nice Aunt Sue becoming a cantankerous loon online? Algorithms, of course.

Conservatives learned to loathe the algorithm a little later. Under fire about Russian trolls and other liberal bugaboos, tech companies started cracking down on a widening array of content. Conservatives became convinced that different kinds of algorithms—the ones used to find and deal with hate speech, spam, and other kinds of offensive posts—were more likely to flag and punish conservative voices. They also suspected that algorithms determining what people did see were biased against conservatives.”

“A common thread in all this is the idea that algorithms are powerful engines of personal and political behavior, either deliberately engineered to push us to some predetermined outcome or negligently wielded in spite of clear dangers. Inevitably, this narrative produced legislative proposals”

“It’s no secret that tech companies engineer their platforms to keep people coming back. But this isn’t some uniquely nefarious feature of social media businesses. Keeping people engaged and coming back is the crux of entertainment entities from TV networks to amusement parks.

Moreover, critics have the effect of algorithms precisely backward. A world without algorithms would mean kids (and everyone else) encountering more offensive or questionable content.

Without the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content,” Nick Clegg, Meta’s then–vice president of global affairs, told George Stephanopoulos last year. That’s because algorithms are used to “identify and deprecate and downgrade bad content.” After all, algorithms are just sorting tools. So Facebook uses them to sort and downgrade hateful content.
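Clegg’s point that the same sorting machinery is used to “deprecate and downgrade bad content” can be sketched as a penalty applied to a ranking score. The classifier probability, threshold, and penalty factor below are all invented stand-ins for the unpublished models platforms actually use.

```python
# Sketch of downranking: shrink a post's ranking score when a
# (hypothetical) classifier flags it as likely policy-violating.
# Scores, threshold, and the 0.1 penalty are invented for illustration.

def ranked(posts, penalty=0.1):
    def adjusted(post):
        score = post["engagement"]
        if post["violation_prob"] > 0.8:  # classifier confidence threshold
            score *= penalty              # downgrade rather than remove
        return score
    return sorted(posts, key=adjusted, reverse=True)

posts = [
    {"id": "a", "engagement": 900, "violation_prob": 0.95},  # viral but flagged
    {"id": "b", "engagement": 300, "violation_prob": 0.01},
]
print([p["id"] for p in ranked(posts)])  # flagged post sinks: ['b', 'a']
```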

“Without [algorithms], you just get an undifferentiated mass of content, and that’s not very useful,” noted Techdirt editor Mike Masnick last March.”

“several studies suggest social media is actually biased toward conservatives. A paper published in Research & Politics in 2022 found that a Facebook algorithm change in 2018 benefitted local Republicans more than local Democrats. In 2021, Twitter looked at how its algorithms amplify political content, examining millions of tweets sent by elected officials in seven countries, as well as “hundreds of millions” of tweets in which people shared links to articles. It found that “in six out of seven countries—all but Germany—Tweets posted by accounts from the political right receive more algorithmic amplification than the political left” and that right-leaning news outlets also “see greater algorithmic amplification.”

As for the Republican email algorithms bill, it would almost certainly backfire. Email services like Gmail use algorithms to sort out massive amounts of spam: If the GOP bill passed, it could mean email users would end up seeing a lot more spam in their inboxes as services strove to avoid liability.”
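The spam sorting the passage refers to can be illustrated with a bare-bones keyword scorer. Real services like Gmail use trained statistical models, not a hand-written word list; the signals and threshold below are invented for illustration.

```python
# Extremely simplified spam scoring, to illustrate the kind of
# sorting email services perform. Keywords, weights, and the
# threshold are all hypothetical.

SPAM_SIGNALS = {"winner": 2.0, "free": 1.0, "urgent": 1.5, "prize": 2.0}

def spam_score(message):
    words = message.lower().split()
    return sum(SPAM_SIGNALS.get(w, 0.0) for w in words)

def is_spam(message, threshold=3.0):
    return spam_score(message) >= threshold

print(is_spam("urgent you are a winner claim your free prize"))  # True
print(is_spam("lunch tomorrow at noon"))                         # False
```

A liability rule that punished misclassifying political mail would push services toward a higher threshold, letting more spam through, which is the backfire the passage predicts.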

“it becomes clear why people might feel like algorithms have increased polarization. Life not long ago meant rarely engaging in political discussion with people outside one’s immediate community, where viewpoints tend to coalesce or are glossed over for the sake of propriety. For instance, before Facebook, my sister and an old family friend would likely never have gotten into clashes about Trump—it just wouldn’t have come up in the types of interactions they found themselves in. But that doesn’t mean they’re more politically divergent now; they just know more about it. Far from limiting one’s horizons, engaging with social media means greater exposure to opposing viewpoints, information that challenges one’s beliefs, and sometimes surprising perspectives from people around you.

The evidence used to support the social media/polarization hypothesis is often suspect. For instance, people often point to the rise in political polarization itself. But polarization seems to have started its rise decades before Facebook and Twitter came along.”

“For the average person online, algorithms do a lot of good. They help us get recommendations tailored to our tastes, save time while shopping online, learn about films and music we might not otherwise be exposed to, avoid email spam, keep up with the biggest news from friends and family, and be exposed to opinions we might not otherwise hear.”

“If algorithms are driving political chaos, we don’t have to look at the deeper rot in our democratic systems. If algorithms are driving hate and paranoia, we don’t have to grapple with the fact that racism, misogyny, antisemitism, and false beliefs never faded as much as we thought they had. If the algorithms are causing our troubles, we can pass laws to fix the algorithms. If algorithms are the problem, we don’t have to fix ourselves.

Blaming algorithms allows us to avoid a harder truth. It’s not some mysterious machine mischief that’s doing all of this. It’s people, in all our messy human glory and misery. Algorithms sort for engagement, which means they sort for what moves us, what motivates us to act and react, what generates interest and attention. Algorithms reflect our passions and predilections back at us.”

Maybe Trump was right about TikTok

“TikTok is owned by ByteDance, which is based in China. It isn’t an arm of the Chinese Communist Party, but Chinese laws say it can be forced to assist the Chinese government. That could mean handing all the data its app has collected about American citizens to China. And TikTok collects a lot of data about its users.
“The Chinese government has established clear pathways to empower itself to surveil individuals, to gather data from corporations, and through the 2017 [National Intelligence] law, to aggregate that data on government servers,” said Aynne Kokas, director of the University of Virginia’s East Asia Center and author of the recently released book Trafficking Data: How China Is Winning the Battle for Digital Sovereignty. “The degree to which any of this is happening is difficult to know.”

TikTok has repeatedly said it isn’t happening and that it never will. It’s also tried to distance itself from its Chinese parent company. But those claims have been undermined by recent reports that say ByteDance has a great deal of control over TikTok and its direction, that China does have access to US data, and that ByteDance has tried to get location data from a few Americans through their TikTok accounts. (To these reports, TikTok has said that the app doesn’t collect precise location data and therefore couldn’t surveil US users this way, and that leaked conversations about Chinese employees having access to US data were in regard to figuring out how to turn that access off.)”

“They also fear that TikTok, directed by the Chinese government, will push propaganda or disinformation, which wouldn’t be hard to do considering how TikTok feeds its users so much content with its “For You” algorithm. It’s also not out of the realm of possibility that it would do this. A 2019 report showed that ByteDance had a list of banned content on TikTok, which included Tiananmen Square, Tibet, and Taiwan. And China has been caught using social media to spread disinformation or propaganda before (as have many other countries, including the United States). But that was through someone else’s platform. With TikTok, China could directly control what’s on the platform and how it’s distributed. It can’t do that with Facebook or Instagram.”

“there’s the fear that China will be able to use TikTok’s data to power its AI innovations. That’s an advantage the US won’t have because its social media apps are banned in China and because there aren’t laws that would compel social media companies to hand over data just because the government wants it.”

“While some have come around to thinking Trump was right to want to ban TikTok, they don’t necessarily agree with how he tried to do it. Courts didn’t agree either, and blocked his August 2020 executive order that would have forced ByteDance to sell TikTok or be banned. But it never made it to an actual trial, as Biden took office and revoked the executive order.
Republican leaders have criticized President Biden for not being as tough as Trump on TikTok and appearing to support the platform by reaching out to some of its biggest influencers. But the Biden administration isn’t going easy on TikTok, either. Biden recently issued an executive order expanding the definition of national security for the purposes of CFIUS reviews to include data and technologies necessary to “protect United States technical leadership.” It doesn’t directly address TikTok, but it certainly includes it.

CFIUS, by the way, has been reviewing ByteDance’s acquisition of Musical.ly for several years now. CFIUS doesn’t comment on ongoing investigations, but TikTok said in a statement to Recode that “we will not comment on the specifics of confidential discussions with the US government, but we are confident that we are on a path to fully satisfy all reasonable US national security concerns.”

To that end, TikTok is currently trying to wall US data off from China to satisfy CFIUS’s concerns in an effort it’s dubbed “Project Texas.” That would keep what’s considered “protected” data on US users on US-based servers run by Oracle, with controls over who has access to it.”

“A deal between CFIUS and TikTok has reportedly been imminent for weeks now, but it hasn’t happened yet. There are doubts that anything short of forcing ByteDance to sell off TikTok would guarantee that China can’t access user data or do anything about concerns over pushing propaganda and disinformation.”

“These problems could be solved very quickly if ByteDance were to sell off TikTok, but that doesn’t seem to be an option. The Chinese government would have to approve such a move, and experts say that’s very unlikely.

“The Chinese government loves TikTok,” Lewis said, pointing out that it’s the only social media app from China that’s been successful outside of the country. “The Chinese government will protect it.””