“Facebook says changes Apple made that affect how ads work on iOS apps — namely, that it’s now much harder for app-makers and advertisers to track user behavior — will cost it $10 billion in revenue this year.
For context: Facebook is still making an enormous amount of money from advertising — analyst Michael Nathanson estimates the company will generate $129 billion in ad revenue in 2022. But that would mean its ad business will only grow about 12 percent this year, compared to a 36 percent increase the previous year. Wall Street has prized Facebook for its ability to grow at rocket velocity, and now that rocket may be sputtering.”
“Facebook is still a behemoth, and it has a long way to fall before that will cease to be true (if it ever is). I’m not suggesting we start writing eulogies yet. But the U.S. (and European Union) antitrust push against Facebook and other big tech companies assumes—and often explicitly argues—that Facebook’s power is permanent and its market share irreversible. Recent developments and ancient history show that’s very obviously not the case.”
““Social media has become a significant source of information for U.S. law enforcement and intelligence agencies,” the Brennan Center for Justice at NYU Law noted in a report released last week. “The Department of Homeland Security, the FBI, and the State Department are among the many federal agencies that routinely monitor social platforms, for purposes ranging from conducting investigations to identifying threats to screening travelers and immigrants.””
“Photos of beheadings, extremist propaganda and violent hate speech related to Islamic State and the Taliban were shared for months within Facebook groups over the past year despite the social networking giant’s claims it had increased efforts to remove such content.
The posts — some tagged as “insightful” and “engaging” via new Facebook tools to promote community interactions — championed the Islamic extremists’ violence in Iraq and Afghanistan, including videos of suicide bombings and calls to attack rivals across the region and in the West, according to a review of social media activity between April and December. At least one of the groups contained more than 100,000 members.
In several Facebook groups, competing Sunni and Shia militias trolled each other by posting pornographic and other obscene images into rival groups in the hope that Facebook would remove those communities.
In others, Islamic State supporters openly shared links to websites with reams of online terrorist propaganda, while pro-Taliban Facebook users posted regular updates about how the group took over Afghanistan during much of 2021.”
“Facebook said it had invested heavily in artificial intelligence tools to automatically remove extremist content and hate speech in more than 50 languages. Since early 2021, the company told POLITICO it had added more Pashto and Dari speakers — the main languages spoken in Afghanistan — but declined to provide numbers of the staffing increases.
Yet the volume of Islamic State and Taliban content still on the platform shows those efforts have failed to stop extremists from exploiting it.”
“As when Haugen first came forward—providing information that formed the basis of a series of Wall Street Journal reports—the real takeaway is that Facebook has been struggling to attract the young users it wants, faces robust competition, and generates apoplectic denunciation from mainstream journalists mostly because they resent the social media giant for shaking up the news industry.
There are, to be clear, some decent reasons in here to criticize Facebook CEO Mark Zuckerberg. The Washington Post reports that he was intimately involved with the company’s decision to comply with the Vietnamese government’s demand for greater censorship of political dissidents. Though even then, it’s debatable what Zuckerberg should do when authoritarian governments demand content moderation. Should Facebook pull out of Vietnam, depriving the country of the site entirely? Is a censored version of Facebook worse than no Facebook at all?”
“In response to Australian court decisions holding media companies legally liable for comments posted by users, CNN has blocked access to some of its Facebook pages for users in that country.
This is an inevitable outcome of a bad decision and a reminder of why it’s important not to force government-mandated moderation policies onto massive social media platforms; such mandates will inevitably lead to either censorship or lost access to information.”
“Apple and Google shut down a voting app meant to help opposition parties organize against the Kremlin in a parliamentary election in Russia that’s taking place over the weekend. The companies removed the app from their app stores on Friday after the Russian government accused them of interfering in the country’s internal affairs, a clear attempt by President Vladimir Putin to obstruct free elections and stay in power.
The Smart Voting app was designed to identify candidates most likely to beat members of the government-backed party, United Russia, as part of a broader strategy organized by supporters of the imprisoned Russian activist Alexei Navalny to bring together voters who oppose Putin. In a bid to clamp down on the opposition effort, the Russian government told Google and Apple that the app was illegal, and reportedly threatened to arrest employees of both companies in the country.
The move also comes amid a broader crackdown on Big Tech in Russia. Earlier this week, a Russian court fined Facebook and Twitter for not removing “illegal” content, and the country is reportedly blocking people’s access to Google Docs, which Navalny supporters had been using to share lists of preferred candidates.”
“Australia’s highest court has upheld a controversial and potentially destructive ruling that media outlets are legally liable for defamatory statements posted by online commenters on Facebook, a decision that could result in massive amounts of online censorship out of fear of lawsuits.
The case revolves around a television program from 2016 on Australia’s ABC TV (no relation to America’s ABC network) about the mistreatment of youths in Australia’s jail system. Footage of Dylan Voller in a restraining chair was part of the coverage. When media outlets covered this program and posted links to the coverage on Facebook, users made comments about Voller, and this prompted Voller to sue the media outlets. The comments were defamatory, Voller claimed, and he argued that the media outlets themselves were responsible for publishing them.
The media outlets countered that, no, they were not the publishers of third-party comments on Facebook and were not responsible for what they said. The outlets have been appealing to the courts to toss out the lawsuits, and they’ve been losing.”
“The country’s top justices determined that media outlets are, indeed, publishers of the comments that users post on Facebook under the stories they link to.
The logic here is absolutely terrible and destructive. Facebook has control over the tools for managing comments on media pages. The media outlets themselves do not, and they can’t “turn off” commenting on their Facebook pages. They do have the power to delete comments after the fact or use filtering tools that target keywords (to stop people from making profane or obscene comments) and can block individual users from the page.
Using these tools to try to prevent defamatory comments requires constant monitoring of the media outlet’s Facebook page and would demand that moderators be so agile as to remove potentially defamatory content the moment it appears, before anybody else could see it. Nevertheless, the justices concluded that this is enough control over the comments for media outlets to be considered publishers. Two of the justices were very blunt that simply participating on Facebook made Fairfax Media Publications a publisher of the comments.”
“It is easy to assume, as these other justices apparently have, that such a decision could not possibly cause a disastrous amount of online censorship because media outlets should know when a controversial story might lead to defamatory comments. The judges actually note this in the ruling. They seem to think that this is only an issue with certain types of stories and that the appearance of defamatory comments can be predicted in advance.
This is complete rubbish, and anybody with any experience on social media already knows it. Trolls, scammers, and spammers range far and wide (that’s the point of them), and it’s incredibly naive to think that a story with no controversial elements can’t end up with third parties posting defamatory nonsense under it.”
“It’s why Section 230 of the U.S. Communications Decency Act, which generally protects websites and social media platforms (and you) from liability for comments published by others, is so important. It’s not just to protect media outlets from being held liable for comments from trolls. It’s to allow social media participation to happen at all. Some large media outlets or companies might be able to afford around-the-clock moderation to attempt to catch problems. But even if they could, let’s be clear: they’re going to avoid as much risk as possible and delete any comment that has a whiff of controversy. Why would they allow it to stand if it could get them sued?
But smaller companies and outlets—and there’s no reason to think this ruling applies only to media outlets—will either have to hope Facebook gives them better tools to control who posts on their page or just not have social media presences at all.”