Why the Twitter Files actually matter

“One clue is in a message by Trust and Safety chief Yoel Roth, who alludes to “the SEVERE risks here and lessons of 2016.” In 2016, there was an effort by the Russian government to interfere with the general election in a way that would hurt Hillary Clinton and Democrats’ prospects. As later documented in the Mueller report, this effort involved both a “troll farm” of Russian accounts masquerading as Americans to spread false or inflammatory information, and the “hack-and-leak” campaign in which leading Democrats’ emails were stolen and provided to WikiLeaks.
After Trump won, many leading figures in politics, tech, media, and law enforcement concluded that major social media platforms like Twitter and Facebook should have done more to stop this Russian interference effort and the spread of “misinformation” more generally (with some arguing that this was a problem regardless of electoral impact, and others claiming that this helped or even caused Trump’s victory). Law enforcement officials argued the Russian campaign was illegal and indicted about two dozen Russians believed to be involved in it. Social media companies began to take a more aggressive approach to curbing what they saw as misinformation, and as the 2020 election approached, they met regularly with FBI and other government officials to discuss the dangers of potential new foreign interference campaigns.

But several issues are being conflated here. Misinformation is (in theory) false information. Foreign propaganda is not necessarily false, but it is being spread by a foreign government with malicious intent (for example, to inflame America’s divisions). Hacked material, though, is trickier in part because it often isn’t misinformation — its power comes from its accuracy. Now, it is theoretically possible that false information could be mixed in with true information as part of a hacked document dump, so it’s important to authenticate it to the extent possible. And even authentic information can often be ripped out of context to appear more damning than it really is. Still, Twitter was putting itself in the awkward position of resolving to suppress information that could well be accurate, for the greater good of preventing foreign interference in an election.

More broadly, a blanket ban on hacked material doesn’t seem particularly well thought through, since a fair amount of journalism is based on material that is illicitly obtained in some way (such as the Pentagon Papers). Every major media source wrote about the DNC and Podesta email leaks, as well as the leaked State Department cables, while entertainment journalists wrote about the Sony hack. Should all those stories be banned like the Post’s was? A standard that Twitter won’t host any sexual images of someone posted without their consent, or any personal information like someone’s address, is a neutral one. Beyond that, determining what stolen or hacked information is newsworthy is inherently subjective. Should that judgment be left to social media companies?

Then there’s the problem that Twitter jumped to the conclusion that this was a hack in the first place. I can see why they did — recent high-profile examples of mass personal info dumps like this were generally hacks. So if you had been anticipating a chance to “do over” 2016’s hack scandal, here it seemed to be. But it was still jumping to a conclusion. Additionally, some employees apparently believed that proactively censoring the story, until more was known about whether it involved hacked material, was a way of exercising “caution” — a dubious framing, since fully banning a link to a media outlet from the platform was a sweeping measure.

So to me this seems a pretty clear case of overreach by Twitter. This wasn’t a “rigging” of the election (again, the ban was only in place for a little over a day). But the decision — born out of a blinkered focus on avoiding a repeat of 2016, rather than taking speech or press freedom or the different details of this situation into account — was the wrong call, in my view.”

“it should be noted that the phenomenon of controversial Twitter bannings occurring at top executives’ whims has not been solved under the Musk regime. Musk has already decided to suspend Kanye West’s account, keep a preexisting ban on Infowars host Alex Jones in place, and ban an account tracking flight information for Musk’s private jet (even though he said … his “commitment to free speech” was so strong he would allow that account to keep posting).”

“When the Covid-19 pandemic broke out, Twitter again grappled with the topic of “misinformation.” As with Trump (and with hate speech), Twitter executives likely believed lives could well hinge on their decisions. So by May 2020, the company announced it would remove or label tweets that “directly pose a risk to someone’s health or well-being,” such as tweets encouraging people to disregard social distancing guidelines.

But the company essentially defined “misinformation” as whatever went against the public health establishment’s current conventional wisdom. And as time passed, Covid quickly became another issue where conservatives and some journalists came to deeply distrust that establishment, viewing it as making mistakes and giving politically slanted guidance.

The situation took another turn when President Biden took office. By the summer of 2021, his administration was trying to encourage widespread vaccine adoption in the hope the pandemic could be ended entirely. (The omicron variant, which sufficiently evaded vaccines to end that hope, was not yet circulating.) Toward that end, administration officials publicly demanded that social media companies do more to fight misinformation, and privately pressured the companies to delete certain specific accounts.

One of those accounts belonged to commentator Alex Berenson, who “has mischaracterized just about every detail regarding the vaccines to make the dubious case that most people would be better off avoiding them,” according to the Atlantic’s Derek Thompson. After Berenson was eventually banned, he sued and obtained records showing the White House had specifically asked Twitter why he hadn’t been kicked off the platform yet. Another lawsuit against the administration, from Republican state attorneys general and other people who believed their speech was suppressed (including Stanford epidemiologist Jay Bhattacharya), is also pending.

All that is to say that there is a thorny question here about whether the government should be trying to get individual people who have violated no laws banned from social media. And from the standpoint of 2022, when the US has adopted a return-to-normal policy without universal vaccination or the virus being suppressed, and when there’s increased attention to whether school closures harmed children, some reflection may be called for about what constitutes misinformation and what constitutes opinions people may have about policy in a free society.”

In Defense of Algorithms

“When Facebook launched in 2004, it was a fairly static collection of profile pages. Facebook users could put lists of favorite media on their “walls” and use the “poke” button to give each other social-media nudges. To see what other people were posting, you had to intentionally visit their pages. There were no automatic notifications, no feeds to alert you to new information.
In 2006, Facebook introduced the News Feed, an individualized homepage for each user that showed friends’ posts in chronological order. The change seemed small at the time, but it turned out to be the start of a revolution. Instead of making an active choice to check in on other people’s pages, users got a running list of updates.

Users still controlled what information they saw by selecting which people and groups to follow. But now user updates, from new photos to shower thoughts, were delivered automatically, as a chronologically ordered stream of real-time information.
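
To make the mechanics concrete: that original model is just a filter plus a sort. Below is a minimal sketch, in Python, of the 2006-era feed; the Post record and function names are invented for illustration and are not Facebook's actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # Unix time at which the post was created
    text: str

def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
    """The 2006-era News Feed model: every post from every followed
    account, ordered strictly newest-first -- no ranking, no filtering
    beyond the user's own follow choices."""
    visible = [p for p in posts if p.author in following]
    return sorted(visible, key=lambda p: p.timestamp, reverse=True)
```

Everything a followed account posts appears; the only "algorithm" is a sort key.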

This created a problem. Facebook was growing fast, and users were spending more and more time on it, especially once Apple’s iPhone app store brought social media to smartphones. It wasn’t long before there were simply too many updates for many people to reasonably follow. Sorting the interesting from the irrelevant became a big task.

But what if there were a way for the system to sort through those updates for users, determining which posts might be most interesting, most relevant, most likely to generate a response?

In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm.

Instead of a simple time-ordered log of posts from friends and pages you followed, you saw whichever of these posts Facebook’s algorithms “decided” you should see, filtering content based on an array of factors designed to suss out which content users found more interesting. That algorithm not only changed Facebook; it changed the world, making Facebook specifically—and social media algorithms generally—the subject of intense cultural and political debate.”

“Algorithms … help solve problems of information abundance. They cut through the noise, making recommendations more relevant, helping people see what they’re most likely to want to see, and helping them avoid content they might find undesirable. They make our internet experience less chaotic, less random, less offensive, and more efficient.”

“As Facebook and other social media companies started using them to sort and prioritize vast troves of user-generated content, algorithms started determining what material people were most likely to see online. Mathematical assessment replaced bespoke human judgment, leaving some people upset at what they were missing, some annoyed at what they were shown, and many feeling manipulated.

The algorithms that sort content for Facebook and other social media megasites change constantly. The precise formulas they employ at any given moment aren’t publicly known. But one of the key metrics is engagement, such as how many people have commented on a post or what type of emoji reactions it’s received.
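
Since the precise formulas aren't public, any concrete example can only be a hedged guess at the general shape: score each post from engagement signals like comments and reactions, discount the score by age, and sort by score instead of by time. Every weight and field name below is invented for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    timestamp: float
    text: str
    comments: int = 0
    reactions: dict[str, int] = field(default_factory=dict)  # e.g. {"like": 12, "angry": 3}

# Invented weights -- real platforms tune thousands of signals.
REACTION_WEIGHTS = {"like": 1.0, "love": 1.5, "sad": 1.5, "angry": 2.0}
COMMENT_WEIGHT = 3.0
HALF_LIFE_HOURS = 8.0  # a post's score halves with every 8 hours of age

def engagement_score(post: Post, now: float) -> float:
    raw = COMMENT_WEIGHT * post.comments
    raw += sum(REACTION_WEIGHTS.get(kind, 1.0) * count
               for kind, count in post.reactions.items())
    age_hours = max(now - post.timestamp, 0.0) / 3600.0
    return raw * 0.5 ** (age_hours / HALF_LIFE_HOURS)

def ranked_feed(posts: list[Post]) -> list[Post]:
    now = time.time()
    return sorted(posts, key=lambda p: engagement_score(p, now), reverse=True)
```

Swap the timestamp sort of the earlier sketch for this score and you have, in miniature, the 2013 change described above.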

As social media platforms like Facebook and Twitter (the latter shifted its default from a chronological to an algorithmic feed in 2016) became more dominant as sources of news and political debate, people began to fear that algorithms were taking control of America’s politics.

Then came the 2016 election. In the wake of Trump’s defeat of Hillary Clinton in the presidential race, reports started trickling out that Russia may have posted on U.S. social media in an attempt to influence election results. Eventually it emerged that employees of a Russian company called the Internet Research Agency had posed as American individuals and groups on Facebook, Instagram, Tumblr, Twitter, and YouTube. These accounts posted and paid for ads on inflammatory topics, criticized candidates (especially Clinton), and sometimes shared fake news. The Senate Select Committee on Intelligence opened an investigation, and Facebook, Google, and Twitter executives were called before Congress to testify.”

“Progressives continued to embrace this explanation with each new and upsetting political development. The alt-right? Blame algorithms! Conspiracy theories about Clinton and sex trafficking? Algorithms! Nice Aunt Sue becoming a cantankerous loon online? Algorithms, of course.

Conservatives learned to loathe the algorithm a little later. Under fire about Russian trolls and other liberal bugaboos, tech companies started cracking down on a widening array of content. Conservatives became convinced that different kinds of algorithms—the ones used to find and deal with hate speech, spam, and other kinds of offensive posts—were more likely to flag and punish conservative voices. They also suspected that algorithms determining what people did see were biased against conservatives.”

“A common thread in all this is the idea that algorithms are powerful engines of personal and political behavior, either deliberately engineered to push us to some predetermined outcome or negligently wielded in spite of clear dangers. Inevitably, this narrative produced legislative proposals.”

“It’s no secret that tech companies engineer their platforms to keep people coming back. But this isn’t some uniquely nefarious feature of social media businesses. Keeping people engaged and coming back is the crux of entertainment entities from TV networks to amusement parks.

Moreover, critics have the effect of algorithms precisely backward. A world without algorithms would mean kids (and everyone else) encountering more offensive or questionable content.

Without the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content,” Nick Clegg, Meta’s then–vice president of global affairs, told George Stephanopoulos last year. That’s because algorithms are used to “identify and deprecate and downgrade bad content.” After all, algorithms are just sorting tools. So Facebook uses them to sort and downgrade hateful content.

“Without [algorithms], you just get an undifferentiated mass of content, and that’s not very useful,” noted Techdirt editor Mike Masnick last March.”
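
Clegg's and Masnick's point is easier to evaluate once you see how small the mechanical difference is: the same score that ranks a post up can be multiplied down when a classifier flags it. A sketch under invented assumptions (the classifier and the penalty factor are stand-ins, not any platform's real system):

```python
def moderated_score(base_score: float, flagged: bool, penalty: float = 0.1) -> float:
    """Downranking instead of removal: a flagged post stays on the platform,
    but its ranking score is multiplied by a small penalty so far fewer
    feeds surface it. The 0.1 penalty is an invented, illustrative number."""
    return base_score * penalty if flagged else base_score

def moderated_feed(posts, score_fn, classify_fn):
    # score_fn: any base ranking function (e.g., an engagement score).
    # classify_fn: a hypothetical model returning True for likely-violating
    # content (hate speech, spam, and so on).
    return sorted(
        posts,
        key=lambda p: moderated_score(score_fn(p), classify_fn(p)),
        reverse=True,
    )
```

Remove the sorting layer and both effects disappear at once: nothing is promoted, but nothing is demoted either, which is the "more, not less" outcome Clegg describes.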

“several studies suggest social media is actually biased toward conservatives. A paper published in Research & Politics in 2022 found that a Facebook algorithm change in 2018 benefited local Republicans more than local Democrats. In 2021, Twitter looked at how its algorithms amplify political content, examining millions of tweets sent by elected officials in seven countries, as well as “hundreds of millions” of tweets in which people shared links to articles. It found that “in six out of seven countries—all but Germany—Tweets posted by accounts from the political right receive more algorithmic amplification than the political left” and that right-leaning news outlets also “see greater algorithmic amplification.”

As for the Republican email algorithms bill, it would almost certainly backfire. Email services like Gmail use algorithms to sort out massive amounts of spam: If the GOP bill passed, it could mean email users would end up seeing a lot more spam in their inboxes as services strove to avoid liability.”
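
To see what's at stake mechanically, here is a toy version of the kind of statistical filter email services have used since the early 2000s: a stripped-down model in the naive Bayes family, purely for illustration (Gmail's actual system is far more elaborate and not public).

```python
import math
from collections import Counter

class TinySpamFilter:
    """A toy statistical spam filter: learn word frequencies from labeled
    mail, then score new messages by which class their words favor."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_total = 0
        self.ham_total = 0

    def train(self, text: str, is_spam: bool):
        words = text.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def is_spam(self, text: str, threshold: float = 0.0) -> bool:
        # Sum of log-likelihood ratios with crude smoothing; a positive total
        # means the words look more like past spam than past legitimate mail.
        score = 0.0
        for w in text.lower().split():
            p_spam = (self.spam_words[w] + 1) / (self.spam_total + 2)
            p_ham = (self.ham_words[w] + 1) / (self.ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score > threshold

# A filter like this is what a liability rule would force providers to
# loosen: raising the threshold means more spam reaches the inbox.
f = TinySpamFilter()
f.train("win free money now", is_spam=True)
f.train("meeting notes attached", is_spam=False)
print(f.is_spam("free money"))              # True
print(f.is_spam("notes from the meeting"))  # False
```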

“it becomes clear why people might feel like algorithms have increased polarization. Life not long ago meant rarely engaging in political discussion with people outside one’s immediate community, where viewpoints tend to coalesce or are glossed over for the sake of propriety. For instance, before Facebook, my sister and an old family friend would likely never have gotten into clashes about Trump—it just wouldn’t have come up in the types of interactions they found themselves in. But that doesn’t mean they’re more politically divergent now; they just know more about it. Far from limiting one’s horizons, engaging with social media means greater exposure to opposing viewpoints, information that challenges one’s beliefs, and sometimes surprising perspectives from people around you.

The evidence used to support the social media/polarization hypothesis is often suspect. For instance, people often point to rising political polarization itself. But polarization seems to have started its rise decades before Facebook and Twitter came along.

“For the average person online, algorithms do a lot of good. They help us get recommendations tailored to our tastes, save time while shopping online, learn about films and music we might not otherwise be exposed to, avoid email spam, keep up with the biggest news from friends and family, and be exposed to opinions we might not otherwise hear.”

“If algorithms are driving political chaos, we don’t have to look at the deeper rot in our democratic systems. If algorithms are driving hate and paranoia, we don’t have to grapple with the fact that racism, misogyny, antisemitism, and false beliefs never faded as much as we thought they had. If the algorithms are causing our troubles, we can pass laws to fix the algorithms. If algorithms are the problem, we don’t have to fix ourselves.

Blaming algorithms allows us to avoid a harder truth. It’s not some mysterious machine mischief that’s doing all of this. It’s people, in all our messy human glory and misery. Algorithms sort for engagement, which means they sort for what moves us, what motivates us to act and react, what generates interest and attention. Algorithms reflect our passions and predilections back at us.”

What the Twitter files don’t tell us

“journalists Bari Weiss and Matt Taibbi shared details of some of the documents and their own analysis in two long Twitter threads. The revelations are ongoing, with plans to post more in the coming days. Their central accusation so far is that Twitter has long silenced conservative or contrarian voices, and they reference internal emails, Slack messages, and content moderation systems to show how Twitter limited the reach of popular right-wing accounts like those of Dan Bongino, Charlie Kirk, and Libs of TikTok.
But these claims and the internal documents lack crucial context.

We don’t have a full explanation, for example, of why Twitter limited the reach of these accounts — i.e., whether they were violating the platform’s rules on hate speech, health misinformation, or violent content. Without this information, we don’t know whether these rules were applied fairly or not. Twitter has long acknowledged that it sometimes downranks content that violates its rules instead of banning it outright. It’s a strategy that Musk himself has advocated for by arguing that people should have “freedom of speech, but not freedom of reach” on the platform.

And while Weiss has surfaced specific examples of Twitter limiting the reach of conservative accounts known for spreading hateful content about the LGBTQ+ community or sharing the “big lie” about the US presidential election, we don’t know if Twitter did the same for some far-left accounts that have also been known for pushing boundaries, such as some former Occupy movement leaders who have complained about Twitter’s content moderation in the past.

Musk, Weiss, and Taibbi are also assuming these decisions were made with explicit political motivation. Historically, most Twitter employees — like the rest of Big Tech — lean liberal. Twitter’s conservative critics argue that this presents an inherent bias in the company’s content moderation decisions. Former Twitter employees Recode spoke with this week insisted that content moderation teams operate in good faith to execute on Twitter’s policy rules, regardless of personal politics. And research shows that Twitter’s recommendation algorithms actually have an inherent bias in favor of right-wing news. What’s been shared so far in the Twitter files doesn’t offer clear proof that anyone at Twitter made decisions about specific accounts or tweets because of their political affiliation. We need more context and information to clarify what’s really going on here.

But to right-wing politicians, influencers, and their supporters, none of this nuance ultimately matters.”

Elon Musk Enforces Twitter’s Ban on ‘Hateful Conduct’ As Critics Predict a Flood of Bigotry

“There is obviously a tension between Twitter’s commitment to “free expression” and its prohibition of hate speech. But while even the vilest expressions of bigotry are protected by the First Amendment, that does not mean a private company is obligated to allow them in a forum it owns. Twitter has made a business judgment that the cost of letting people talk about how awful Jews are, in terms of alienating users and advertisers, outweighs any benefit from allowing users to “express their opinions and beliefs” without restriction.
Other social media platforms strike a different balance and advertise lighter moderation as a virtue. But all of them have some sort of ground rules, because a completely unfiltered experience is appealing only in theory.

Parler, for example, describes itself as “the premier global free speech app,” a refuge for people frustrated by heavy-handed moderation on other platforms. It nevertheless promises to remove “threatening or inciting content.” Parler also offers the option of a “‘trolling’ filter” (mandatory in the Apple version of the app) that is designed to block “personal attacks based on immutable or otherwise irrelevant characteristics such as race, sex, sexual orientation, or religion.” Such content, it explains, “often doesn’t contribute to a productive conversation, and so we wanted to provide our users with a way to minimize it in their feeds, should they choose to do so.””

“Musk has invited back inflammatory political figures such as Donald Trump and Marjorie Taylor Greene. He has rescinded Twitter’s ban on “COVID-19 misinformation,” a fuzzy category that ranged from demonstrably false assertions of fact to arguably or verifiably true statements that were deemed “misleading” or contrary to government advice. At the same time, however, Musk has interpreted Twitter’s rule against impersonation as requiring that parody accounts be clearly labeled as such, and he evidently thinks enforcing the ban on “hateful conduct” is important enough to justify banishing a celebrity with a huge following.”

Elon Musk and Matt Taibbi Reveal Why Twitter Censored the Hunter Biden Laptop Story

“The thread contains fascinating screenshots of conversations between various content moderators and company executives as the laptop story debacle was unfolding. But given how massively Musk hyped the revelations, the results are a tad disappointing, and mostly confirm what the public already assumed: A (still unidentified) employee or process flagged the story as “unsafe” and suppressed its spread, and then Twitter moderators devised a retroactive justification—violation of a “hacked materials” policy—for having taken such an extraordinary step. Then-CEO Jack Dorsey was largely absent from these conversations; Vijaya Gadde, Twitter’s former head of trust and safety, played “a key role.” None of this material is groundbreaking; it’s already well-known.
To be clear, it’s useful to see some of these internal messages. They confirm that Twitter’s various departments—communications, moderation, senior management—horrendously mismanaged the entire affair. They were not all on the same page: Vice President of Global Communications Brandon Borrman, for example, was immediately unconvinced by the “hacked materials” justification.”

“The most interesting revelation in Taibbi’s thread is that Twitter’s top executives were warned, over and over again, that this decision was going to create a backlash like nothing they had ever seen before. Rep. Ro Khanna (D–Calif.), a progressive lawmaker, repeatedly emailed a Twitter communications staffer to complain that the firm was violating “1st Amendment principles.” (He raised some very valid points in his communications with the company, though strictly speaking the First Amendment does not apply in this situation.) NetChoice, a tech industry trade association, explicitly told Twitter that this would be the company’s “Access Hollywood moment.” (Unlike Twitter, both Khanna and NetChoice come off looking pretty good in all this.)”