“President Joe Biden’s White House pushed Meta, the parent company of Facebook and Instagram, to censor contrarian COVID-19 content, including speculation about the virus having escaped from a lab, vaccine skepticism, and even jokes.
“Can someone quickly remind me why we were removing—rather than demoting/labeling—claims that Covid is man made,” asked Nick Clegg, president for global affairs at the company, in a July 2021 email to his coworkers.
A content moderator replied, “We were under pressure from the administration and others to do more. We shouldn’t have done it.””
“According to a trove of confidential documents obtained by Reason, health advisers at the CDC had significant input on pandemic-era social media policies at Facebook as well. They were consulted frequently, at times daily. They were actively involved in the affairs of content moderators, providing constant and ever-evolving guidance. They requested frequent updates about which topics were trending on the platforms, and they recommended what kinds of content should be deemed false or misleading. “Here are two issues we are seeing a great deal of misinfo on that we wanted to flag for you all,” reads one note from a CDC official. Another email with sample Facebook posts attached begins: “BOLO for a small but growing area of misinfo.”
These Facebook Files show that the platform responded with incredible deference. Facebook routinely asked the government to vet specific claims, including whether the virus was “man-made” rather than zoonotic in origin. (The CDC responded that a man-made origin was “technically possible” but “extremely unlikely.”) In other emails, Facebook asked: “For each of the following claims, which we’ve recently identified on the platform, can you please tell us if: the claim is false; and, if believed, could this claim contribute to vaccine refusals?””
“The government legislation that both companies are protesting is called the Online News Act, or C-18. The intention is to give the long-suffering journalism industry a little cash boost, likely at the expense of two companies that are partially responsible for its woes. It accomplishes this by compelling them to pay Canadian news outlets if they host links to their content. (Fenlon’s employer, which is a public broadcaster, officially supports the Online News Act.) That’s why Meta and Google are threatening to remove news links for all Canadian users, permanently, if the law applies to them when it takes effect, likely by the end of this year.”
“The new Canadian law is modeled on a controversial Australian law, the News Media and Digital Platforms Mandatory Bargaining Code, which went into effect in 2021. Google and Meta’s responses to that law were similar threats to pull links, but both companies ended up making payments to some news organizations. The Australian government estimates that news outlets got AU$200 million, although it doesn’t know that for sure — nor does it know how that money was distributed — because the companies were allowed to keep those figures private. Even so, other countries, like Canada, likely assumed they’d get similar results with similar laws and were less apt to take Google and Meta’s threats seriously.
If you’re Google and Meta, this may not seem fair. Links are meant to drive people to websites, right? News sites get traffic through those links that they might not otherwise have gotten, and the platform loses eyeballs when people click away from it. Meta contends that it doesn’t even post the links in the first place; its users, including the outlets themselves, do that. In the eyes of Google and Meta, they’re doing news sites a favor. And, Meta has said, news content is a very small draw for its users. If the companies don’t really need news links to attract users, why should they be forced to pay for them and be subject to government regulation, something they want to avoid at all costs?”
“In the eyes of the law’s supporters, however, Google and Meta’s business models have taken a lot away from journalism, and this “link tax” is the least they can do to pay some of that back. And, yes, the internet has decimated the journalism industry. One way is digital ad revenue: it’s a fraction of what news outlets commanded for their print and broadcast products, and that already smaller sum is reduced even further because online advertising companies — an industry dominated by Meta and Google — take a cut of it for themselves. One oft-cited statistic has Google and Meta getting 80 percent of online advertising revenue in the country. While Google and Meta have programs that pay news companies, including in Canada, they’re not legally required to do it, they can pick and choose who and what to support (and, by extension, who and what not to support), and they can change the terms whenever they want. Meta, for example, ended an emerging journalists fellowship program in Canada in response to C-18’s passage. The Online News Act is meant to ensure that even the smallest publications get something and that the digital news intermediaries (DNIs) have to pay at all. The Canadian government estimates the law will generate about CA$330 million a year for its news outlets.
But that’s all if there are links to Canadian news outlets on those platforms in the first place, which brings us to the current game of chicken between the Canadian government and Big Tech — and the yawning gaps on the news feeds of people like Fenlon and Krichel.”
“Laboratory experiments provide good reason to believe that masks, especially N95s, can reduce the risk that someone will be infected or infect other people. But those experiments are conducted in idealized conditions that may not resemble the real world, where people often choose low-quality cloth masks and do not necessarily wear masks properly or consistently.
Observational studies, which look at infection rates among voluntary mask wearers or people subject to mask mandates, can provide additional evidence that general mask wearing reduces infection. But such studies do not fully account for confounding variables.
If people who voluntarily wear masks or live in jurisdictions that require them to do so differ from the comparison groups in ways that independently affect disease transmission, the estimates derived from observational studies will be misleading. Those studies can also be subject to other pitfalls, such as skewed sampling and recall bias, that make it difficult to reach firm conclusions.
Despite those uncertainties, the CDC touted an observational study that supposedly proved “wearing a mask lowered the odds of testing positive” by as much as 83 percent. It said even cloth masks reduced infection risk by 56 percent, although that result was not statistically significant and the study’s basic design, combined with grave methodological weaknesses, made it impossible to draw causal inferences.”
“If wearing a mask had the dramatic impact that the CDC claimed, you would expect to see some evidence of that in RCTs. Yet the Cochrane review found essentially no relationship between mask wearing and disease rates, whether measured by reported symptoms or by laboratory tests. Nor did it confirm the expectation that N95s would prove superior to surgical masks in the field. The existing RCT evidence, the authors said, “demonstrates no differences in clinical effectiveness.””
“Does the Cochrane review prove that masks are worthless in protecting people from COVID-19? No. But it does show that the Centers for Disease Control and Prevention (CDC) misled the public about the strength of the evidence supporting mask mandates.”
“When Facebook launched in 2004, it was a fairly static collection of profile pages. Facebook users could put lists of favorite media on their “walls” and use the “poke” button to give each other social-media nudges. To see what other people were posting, you had to intentionally visit their pages. There were no automatic notifications, no feeds to alert you to new information.
In 2006, Facebook introduced the News Feed, an individualized homepage for each user that showed friends’ posts in chronological order. The change seemed small at the time, but it turned out to be the start of a revolution. Instead of making an active choice to check in on other people’s pages, users got a running list of updates.
Users still controlled what information they saw by selecting which people and groups to follow. But now user updates, from new photos to shower thoughts, were delivered automatically, as a chronologically ordered stream of real-time information.
This created a problem. Facebook was growing fast, and users were spending more and more time on it, especially once Apple’s iPhone app store brought social media to smartphones. It wasn’t long before there were simply too many updates for many people to reasonably follow. Sorting the interesting from the irrelevant became a big task.
But what if there were a way for the system to sort through those updates for users, determining which posts might be most interesting, most relevant, most likely to generate a response?
In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm.
Instead of a simple time-ordered log of posts from friends and pages you followed, you saw whichever of these posts Facebook’s algorithms “decided” you should see, filtering content based on an array of factors designed to suss out which content users found more interesting. That algorithm not only changed Facebook; it changed the world, making Facebook specifically—and social media algorithms generally—the subject of intense cultural and political debate.”
“Algorithms … help solve problems of information abundance. They cut through the noise, making recommendations more relevant, helping people see what they’re most likely to want to see, and helping them avoid content they might find undesirable. They make our internet experience less chaotic, less random, less offensive, and more efficient.”
“As Facebook and other social media companies started using them to sort and prioritize vast troves of user-generated content, algorithms started determining what material people were most likely to see online. Mathematical assessment replaced bespoke human judgment, leaving some people upset at what they were missing, some annoyed at what they were shown, and many feeling manipulated.
The algorithms that sort content for Facebook and other social media megasites change constantly. The precise formulas they employ at any given moment aren’t publicly known. But one of the key metrics is engagement, such as how many people have commented on a post or what type of emoji reactions it’s received.
As social media platforms like Facebook and Twitter (which shifted its default from chronological to algorithmic feeds in 2016) became more dominant as sources of news and political debate, people began to fear that algorithms were taking control of America’s politics.
Then came the 2016 election. In the wake of Trump’s defeat of Hillary Clinton in the presidential race, reports started trickling out that Russia may have posted on U.S. social media in an attempt to influence election results. Eventually it emerged that employees of a Russian company called the Internet Research Agency had posed as American individuals and groups on Facebook, Instagram, Tumblr, Twitter, and YouTube. These accounts posted and paid for ads on inflammatory topics, criticized candidates (especially Clinton), and sometimes shared fake news. The Senate Select Committee on Intelligence opened an investigation, and Facebook, Google, and Twitter executives were called before Congress to testify.”
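The engagement-based sorting described in the excerpt above can be illustrated with a minimal sketch. The weights here are invented for illustration — as the passage notes, the actual formulas platforms use are not public — but the mechanism is the same: rank by a score built from comments and reactions instead of by timestamp.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    timestamp: int                 # seconds since epoch
    comments: int = 0
    reactions: dict = field(default_factory=dict)  # emoji name -> count

# Hypothetical weights -- real platform formulas are proprietary.
REACTION_WEIGHTS = {"like": 1, "love": 2, "haha": 2, "angry": 3}

def engagement_score(post: Post) -> float:
    """Score a post by comment count plus weighted emoji reactions."""
    reacts = sum(REACTION_WEIGHTS.get(name, 1) * count
                 for name, count in post.reactions.items())
    return 2.0 * post.comments + reacts

def chronological_feed(posts):
    """The pre-2013 model: newest first, no judgment about relevance."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts):
    """The algorithmic model: highest engagement first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is that the two feeds can order the same posts very differently: a newer but ignored post tops the chronological feed, while an older post with many comments and angry reactions tops the ranked one.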
“Progressives continued to embrace this explanation with each new and upsetting political development. The alt-right? Blame algorithms! Conspiracy theories about Clinton and sex trafficking? Algorithms! Nice Aunt Sue becoming a cantankerous loon online? Algorithms, of course.
Conservatives learned to loathe the algorithm a little later. Under fire about Russian trolls and other liberal bugaboos, tech companies started cracking down on a widening array of content. Conservatives became convinced that different kinds of algorithms—the ones used to find and deal with hate speech, spam, and other kinds of offensive posts—were more likely to flag and punish conservative voices. They also suspected that algorithms determining what people did see were biased against conservatives.”
“A common thread in all this is the idea that algorithms are powerful engines of personal and political behavior, either deliberately engineered to push us to some predetermined outcome or negligently wielded in spite of clear dangers. Inevitably, this narrative produced legislative proposals.”
“It’s no secret that tech companies engineer their platforms to keep people coming back. But this isn’t some uniquely nefarious feature of social media businesses. Keeping people engaged and coming back is the crux of entertainment entities from TV networks to amusement parks.
Moreover, critics have the effect of algorithms precisely backward. A world without algorithms would mean kids (and everyone else) encountering more offensive or questionable content.
Without the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content,” Nick Clegg, Meta’s then–vice president of global affairs, told George Stephanopoulos last year. That’s because algorithms are used to “identify and deprecate and downgrade bad content.” After all, algorithms are just sorting tools. So Facebook uses them to sort and downgrade hateful content.
“Without [algorithms], you just get an undifferentiated mass of content, and that’s not very useful,” noted Techdirt editor Mike Masnick last March.”
“[S]everal studies suggest social media is actually biased toward conservatives. A paper published in Research & Politics in 2022 found that a Facebook algorithm change in 2018 benefited local Republicans more than local Democrats. In 2021, Twitter looked at how its algorithms amplify political content, examining millions of tweets sent by elected officials in seven countries, as well as “hundreds of millions” of tweets in which people shared links to articles. It found that “in six out of seven countries—all but Germany—Tweets posted by accounts from the political right receive more algorithmic amplification than the political left” and that right-leaning news outlets also “see greater algorithmic amplification.”
As for the Republican email algorithms bill, it would almost certainly backfire. Email services like Gmail use algorithms to sort out massive amounts of spam: If the GOP bill passed, it could mean email users would end up seeing a lot more spam in their inboxes as services strove to avoid liability.”
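Spam filtering is the same sorting mechanism pointed at a different problem. This toy keyword scorer is only a stand-in for the far more sophisticated models services like Gmail actually use (which are not public), but it shows why disabling the filter — the outcome critics predicted from the bill — means everything lands in the inbox.

```python
# Hypothetical spam signals and weights, invented for illustration.
SPAM_SIGNALS = {"winner": 2.0, "free": 1.0, "act now": 2.5, "wire transfer": 3.0}
THRESHOLD = 3.0

def spam_score(subject: str) -> float:
    """Sum the weights of known spammy phrases found in the subject."""
    text = subject.lower()
    return sum(weight for phrase, weight in SPAM_SIGNALS.items()
               if phrase in text)

def route(subject: str, filtering_enabled: bool = True) -> str:
    """Return 'spam' or 'inbox'. With filtering disabled,
    every message is delivered to the inbox regardless of score."""
    if filtering_enabled and spam_score(subject) >= THRESHOLD:
        return "spam"
    return "inbox"
```

With filtering on, “You are a WINNER - act now!” is routed to spam; with the filter switched off to avoid liability, the same message reaches the inbox.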
“[I]t becomes clear why people might feel like algorithms have increased polarization. Life not long ago meant rarely engaging in political discussion with people outside one’s immediate community, where viewpoints tend to coalesce or are glossed over for the sake of propriety. For instance, before Facebook, my sister and an old family friend would likely never have gotten into clashes about Trump—it just wouldn’t have come up in the types of interactions they found themselves in. But that doesn’t mean they’re more politically divergent now; they just know more about it. Far from limiting one’s horizons, engaging with social media means greater exposure to opposing viewpoints, information that challenges one’s beliefs, and sometimes surprising perspectives from people around you.
The evidence used to support the social media/polarization hypothesis is often suspect. For instance, people often point to rising political polarization itself. But polarization seems to have started its rise decades before Facebook and Twitter came along.”
“For the average person online, algorithms do a lot of good. They help us get recommendations tailored to our tastes, save time while shopping online, learn about films and music we might not otherwise be exposed to, avoid email spam, keep up with the biggest news from friends and family, and be exposed to opinions we might not otherwise hear.”
“If algorithms are driving political chaos, we don’t have to look at the deeper rot in our democratic systems. If algorithms are driving hate and paranoia, we don’t have to grapple with the fact that racism, misogyny, antisemitism, and false beliefs never faded as much as we thought they had. If the algorithms are causing our troubles, we can pass laws to fix the algorithms. If algorithms are the problem, we don’t have to fix ourselves.
Blaming algorithms allows us to avoid a harder truth. It’s not some mysterious machine mischief that’s doing all of this. It’s people, in all our messy human glory and misery. Algorithms sort for engagement, which means they sort for what moves us, what motivates us to act and react, what generates interest and attention. Algorithms reflect our passions and predilections back at us.”
“Facebook says changes Apple made that affect how ads work on iOS apps — namely, that it’s now much harder for app-makers and advertisers to track user behavior — will cost it $10 billion in revenue this year.
For context: Facebook is still making an enormous amount of money from advertising — analyst Michael Nathanson estimates the company will generate $129 billion in ad revenue in 2022. But that would mean its ad business will grow only about 12 percent this year, compared to a 36 percent increase the previous year. Wall Street has prized Facebook for its ability to grow at a rocket velocity, and now that rocket may be sputtering.”