
A Different Perspective on the ‘Design Choices’ Social Media Company Verdicts

By: Nick Heer

Mike Masnick, of Techdirt, unsurprisingly opposes this week’s verdicts finding Meta and Google liable for how their products affect children’s safety. I think his perspective is worth reading. Unlike the Wall Street Journal, Masnick respects your intelligence and brings actual substance. Still, I have some disagreements.

Masnick, on the “design choices” argument:

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

This sounds like a reasonable retort until you think about it for three more seconds and realize the entire point is that these design decisions do not produce neutral outcomes. Users post all kinds of stuff on social media platforms, and those posts can be delivered in all kinds of different ways, as Masnick also writes. They can be shown in reverse-chronological order in a lengthy scroll, or they can be shown one at a time, as with Stories. The source of the posts someone sees might be limited to just accounts a user has opted into, or it can be broadened to any account from anyone in the world. Twitter used to have a public “firehose” feed.

But many of the biggest and most popular platforms have coalesced around a feed of material users did not ask for. This is not like television, where each show has been produced and vetted by human beings, and there are expectations for what is on at different times of the day. This is automated and users have virtually no control within the platforms themselves. If you do not like what Instagram is serving you on your main feed, your choice is to stop using Instagram entirely — even if you like and use other features.

Platforms know people will post objectionable and graphic material if they are given a text box or an upload button. We know it is “impossible” to moderate a platform well at scale. But we are supposed to believe they have basically no responsibility for what users post and what their systems surface in users’ feeds? Pick one.

Masnick, on the risks of legal accountability for smaller platforms:

And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

I am going to need a citation for the claim that TikTok and Snap caved because they could not afford to continue fighting. It seems just as plausible they could see which way the winds were blowing, given what I have read so far in the evidence that has been released.

Masnick:

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

This is undeniably a worrisome precedent. I will note Raúl Torrez, New Mexico’s Attorney General and the man who brought this case against Meta, says he wants to do so for minors only. How that would be implemented is an obvious question, though admittedly one that mandated age-gating would make straightforward.

Meta cited low usage when it announced earlier this month that it would be turning off end-to-end encryption in Instagram. If it is a question of safety or liability, it is one Meta would probably find difficult to articulate, given that end-to-end encryption remains available and enabled by default in Messenger and WhatsApp. An executive raised concerns about the feature when it was being planned, drawing a distinction between it and WhatsApp because the latter “does not make it easy to make social connections, meaning making Messenger e2ee will be far, far worse”.

I think Masnick makes some good arguments in this piece and raises some good questions. It is very possible or even likely this all gets unwound on appeal. I, too, expect the ripple effects of these cases to create some chaos. But I do not think the correct response to a lack of corporate accountability — or, frankly, standards — is, in Masnick’s words, “actually funding mental health care for young people”. That is not to say mental health care should not be funded, only that it is a red herring response. In the U.S., total spending on children’s mental health care rose by 50% between 2011 and 2017, and it continued to rise through the pandemic. Perhaps that is not enough. But it is also extraordinary to think we should allow companies to do knowingly harmful things and expect everyone else to correct for the predictable outcomes.

⌥ Permalink

Meta Loses Two Landmark Cases Regarding Product Safety and Children’s Use; Google Loses One

By: Nick Heer

Morgan Lee, Associated Press:

A New Mexico jury found Tuesday that social media conglomerate Meta is harmful to children’s mental health and in violation of state consumer protection law.

The landmark decision comes after a nearly seven-week trial. Jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety. The jury determined Meta violated parts of the state’s Unfair Practices Act on accusations the company hid what it knew [about] the dangers of child sexual exploitation on its platforms and impacts on child mental health.

Meta communications jackass Andy Stone noted on X his company’s delight to be liable for “a fraction of what the State sought”. The company says it will appeal the verdict.

Stephen Morris and Hannah Murphy, Financial Times:

Meta and Google were found liable in a landmark legal case that social media platforms are designed to be addictive to children, opening up the tech giants to penalties in thousands of similar claims filed around the US.

A jury in the Los Angeles trial on Wednesday returned a verdict after nine days of deliberation, finding Meta’s platforms such as Instagram and Google’s YouTube were harmful to children and teenagers and that the companies failed to warn users of the dangers.

Dara Kerr, the Guardian:

To come to its liability decision, the jury was asked whether the companies’ negligence was a substantial factor in causing harm to KGM [the plaintiff] and if the tech firms knew the design of their products was dangerous. The 12-person panel of jurors returned a 10-2 split answering in favor of the plaintiff on every single question.

Meta says it will also appeal this verdict.

Sonja Sharp, Los Angeles Times:

Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.

That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use them.

For its part, the Wall Street Journal editorial board is standing up for beleaguered social media companies in an editorial today criticizing everything about these verdicts, including this specific means of liability, which it calls a “dodge” around Section 230.

But it is not. The principles described by Section 230 are a good foundation for the internet. This law, while U.S.-centric, has enabled the web around the world to flourish. Making companies legally liable for the things users post would not fix the mess we are in; it would only cause great damage.

Product design, though, is a different question. It would be a mistake, I think, to read Section 230 as a blanket allowance for any way platforms wish to use or display users’ posts. (Update: In part, that is because it is a free speech question.) From my entirely layman perspective, it has never struck me as entirely reasonable that the recommendations systems of these platforms should have no duty or expectation of care.

The Journal’s editorial board largely exists to produce rage bait and defend the interests of the powerful, so I am loath to give it too much attention, but I thought this paragraph was pretty rich:

Trial lawyers and juries may figure that Big Tech companies can afford to pay, but extorting companies is certain to have downstream consequences. Meta and Google are spending hundreds of billions of dollars on artificial intelligence this year, which could have positive social impacts such as accelerating treatments for cancer.

Do not sue tech companies because they could be finding cancer treatments — why should I take this editorial board seriously if its members are writing jokes like these? They think you are stupid.

As for the two cases, I am curious about how these conclusions actually play out. I imagine other people who feel their lives have been eroded by the specific way these platforms are designed will be able to test their claims in court, too, though I expect that will be complicated by the inevitably lengthy appeals and relitigation process.

I am admittedly a little irritated by both decisions being reached by a jury instead of a judge; I would have preferred to see reasoning instead of overwhelming agreement among random people. However, it sends a strong signal to big social media platforms that people saw and heard evidence about how these products are designed, and they agreed it was damaging. This is true of all users, not just children. Meta tunes its feeds (PDF) to maximize engagement across the board, and it surely is not the only one. There are a staggering number of partially redacted exhibits released today to go through, if one is so inclined.

If these big social platforms are listening, the signals are out there: people may be spending a lot of time with these products, but that is not a good proxy for their enjoyment or satisfaction. Research indicates a moderate amount of use is correlated with neutral or even positive outcomes among children, yet there are too many incentives in these apps to push past self-control mechanisms. These products should be designed differently.

⌥ Permalink

Polarization in the United States Has Become the World’s Side Hustle

By: Nick Heer

Marina Dunbar, the Guardian:

Many of the most influential personalities in the “Make America great again” (Maga) movement on X are based outside of the US, including Russia, Nigeria and India, a new transparency feature on the social media site has revealed.

The new tool, called “about this account”, became available on Friday to users of the Elon Musk-owned platform. It allows anyone to see where an account is located, when it joined the platform, how often its username has been changed, and how the X app was downloaded.

This is a similar approach to adding labels or notes to tweets containing misinformation in that it adds more speech and context. It is more automatic, but the function and intent are comparable, which means Musk’s hobbyist P.R. team must be all worked up. But I checked, and none seem particularly bothered. Maybe they actually care about trust and safety now, or maybe they are lying hacks.

Mike Masnick, Techdirt:

For years, Matt Taibbi, Michael Shellenberger, and their allies have insisted that anyone working on these [trust and safety] problems was part of a “censorship industrial complex” designed to silence political speech. Politicians like Ted Cruz and Jim Jordan repeated these lies. They treated trust & safety work as a threat to democracy itself.

Then Musk rolled out one basic feature, and within hours proved exactly why trust & safety work existed in the first place.

Jason Koebler, 404 Media, has been covering the monetization of social media:

This has created an ecosystem of side hustlers trying to gain access to these programs and YouTube and Instagram creators teaching people how to gain access to them. It is possible to find these guide videos easily if you search for things like “monetized X account” on YouTube. Translating that phrase and searching in other languages (such as Hindi, Portuguese, Vietnamese, etc) will bring up guides in those languages. Within seconds, I was able to find a handful of YouTubers explaining in Hindi how to create monetized X accounts; other videos on the creators’ pages explain how to fill these accounts with AI-generated content. These guides also exist in English, and it is increasingly popular to sell guides to make “AI influencers,” and AI newsletters, Reels accounts, and TikTok accounts regardless of the country that you’re from.

[…]

Americans are being targeted because advertisers pay higher ad rates to reach American internet users, who are among the wealthiest in the world. In turn, social media companies pay more money if the people engaging with the content are American. This has created a system where it makes financial sense for people from the entire world to specifically target Americans with highly engaging, divisive content. It pays more.

The U.S. market is a larger audience, too. But those of us in rich countries outside the U.S. should not get too comfortable; I found plenty of guides similar to the ones shown by Koebler for targeting Australia, Canada, Germany, New Zealand, and more. Worrisome — especially if you, say, are somewhere with an electorate trying to drive the place you live off a cliff.

Update: Several X accounts purporting to be Albertans supporting separatism appear to be from outside Canada, including a “Concerned 🍁 Mum”, “Samantha”, “Canada the Illusion”, and this “Albertan” all from the United States, and a smaller account from Laos. I tried to check more, but X’s fragile servers are aggressively rate-limited.

I do not think people from outside a country are forbidden from offering an opinion on what is happening within it. I would be a pretty staggering hypocrite if I thought that. Nor do I think we should automatically assume people who are stoking hostile politics on social media are necessarily external or bots. It is more like a reflection of who we are now, and how easily that can be exploited.

⌥ Permalink

Podcasting’s Pivot to Video

By: Nick Heer

Joseph Bernstein, New York Times:

Indeed, according to an April survey by Cumulus Media and the media research firm Signal Hill Insights, nearly three-quarters of podcast consumers play podcast videos, even if they minimize them, compared with about a quarter who listen only to the audio. Paul Riismandel, the president of Signal Hill, said that this split holds across age groups — it’s not simply driven by Gen Z and that younger generation’s supposed great appetite for video.

[…]

Still, this leaves everyone else — more than half of YouTube podcast consumers, who say they are actively watching videos. Here, it gets even trickier. YouTube, the most popular platform for podcasts, defines “views” in a variety of ways, among them a user who clicks “play” on a video and watches for at least 30 seconds: far from five hours. And the April survey data did not distinguish between people who were watching, say, four hours of Lex Fridman interviewing Marc Andreessen from people who were viewing the much shorter clips of these podcasts that are ubiquitous on TikTok, Instagram Reels, X and YouTube itself.

Thirty seconds is an awfully short time to be counted as a single view of these very long videos. At the very least, I think views should be weighted by the fraction of a specific video’s length that was actually watched.
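As a rough illustration of what that length-weighted counting could look like — a hypothetical sketch, not anything YouTube actually computes, with function names and the cap entirely my own invention:

```python
def weighted_views(plays_seconds: list[float], duration_seconds: float) -> float:
    """Sum each play as the fraction of the video watched, capped at one view per play."""
    return sum(min(watched / duration_seconds, 1.0) for watched in plays_seconds)

# Under a flat 30-second rule, three brief plays of a five-hour podcast
# count as three full views; weighted by length, they barely register.
partial = weighted_views([30, 30, 30], duration_seconds=5 * 3600)  # 0.005 of one view
full = weighted_views([5 * 3600], duration_seconds=5 * 3600)       # exactly 1.0
```

The cap matters for rewatches: someone who loops a short clip should arguably not count as multiple views of a long episode.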

This report (PDF) has a few things of note, anyhow, like this from the fifth page:

YouTube is not a walled garden of podcasts: 72% of weekly podcast consumers who have consumed podcasts on YouTube say they would switch platforms from YouTube if a podcast were to become available only on another platform. 51% of YouTube podcast consumers say they already have listened to the same podcasts they consume on YouTube in another place.

There is not another YouTube, so this indicates to me the video component is not actually important to many people, and that YouTube is not a great podcast client. It is, however, a great place for discovery — a centralized platform in the largely decentralized world of podcasting.

Bernstein:

Now, the size of the market for video podcasts is too large to ignore, and many ad deals require podcasters to have a video component. The platforms where these video podcasts live, predominantly YouTube and Spotify, are creating new kinds of podcast consumers, who expect video.

The advertising model of podcasts has long been a tough nut to crack. It is harder to participate in the same surveillance model as the rest of the web, even with the development of dynamic ad insertion. There is simply less tracking and less data available to advertisers and data brokers. This is a good thing. YouTube, being a Google platform, offers advertisers more of what they are used to.

⌥ Permalink

Judge Dismisses 2021 Rumble Antitrust Suit Against Google on Statute of Limitations Grounds

By: Nick Heer

Mike Scarcella, Reuters:

Alphabet’s Google has persuaded a federal judge in California to reject a lawsuit from video platform Rumble accusing the technology giant of illegally monopolizing the online video-sharing market.

In a ruling on Wednesday, U.S. District Judge Haywood Gilliam Jr said Rumble’s 2021 lawsuit seeking more than $2 billion in damages was untimely filed outside the four-year statute of limitations for antitrust claims.

Rumble is dishonest and irritating, but I thought its case arguing that Google engages in self-preferencing could be interesting. Google does seem to rank YouTube videos more highly than videos from other sources. This can be explained by YouTube’s overwhelming popularity — it consistently ranks in the top ten web services according to Cloudflare — yet I can understand anyone’s discomfort in taking Google’s word for it, since it has misrepresented its ranking criteria before.

This is an unsatisfying outcome, but it seems Rumble has another suit it is still litigating.

⌥ Permalink

A Lot of People Apparently Watch Podcasts on YouTube Now

By: Nick Heer

Ben Cohen, Wall Street Journal:

Only four years ago, when it was less popular for podcasts than both Spotify and Apple, YouTube becoming a podcasting colossus sounded about as realistic as Martin Scorsese releasing his next movie on TikTok.

But this year, YouTube passed the competition and became the most popular service for podcasts in the U.S., with 31% of weekly podcast listeners saying it’s now the platform they use the most, according to Edison Research.

This is notable, but Cohen omits key context for why YouTube is suddenly a key podcast platform: Google Podcasts was shut down this year with users and podcasters alike instructed to move to YouTube. According to Buzzsprout’s 2023 analytics, Google Podcasts was used by only 2.5% of global listeners. YouTube is not listed in their report, perhaps because it exists in its own bubble instead of being part of the broader RSS-feed-reading podcast client ecosystem.

But where Google was previously bifurcating its market share, it aligned its users behind a single client. And, it would seem, that audience responded favourably.

John Herrman, New York magazine:

Then, just as the 2010s podcasting bubble was about to peak, TikTok arrived. Here was a video-first platform that was basically only a recommendation engine, minus the pretense and/or burden of sociality — a machine for automating and allocating virality. Its rapid growth drove older, less vibrant social-media platforms wild with envy and/or panic. They all immediately copied it, refashioning themselves as algorithmic short-video apps almost overnight. Suddenly, on every social-media platform — including YouTube, which plugged vertical video “Shorts” into its interface and rewarded creators who published them with followers, attention, and money — there was a major new opportunity for rapid, viral growth. TikTok’s success (and imitation by existing megaplatforms) triggered a formal explosion in video content as millions of users figured out what sorts of short videos worked in this new context: Vine-like comedy sketches; dances; product recommendations; rapid-fire confessionals. The list expanded quickly and widely, but one surprising category broke through: podcast clips.

Of the top twenty podcasts according to Edison Research, fifteen have what I would deem meaningful and regular video components. I excluded those with either a still piece of artwork or illustrated talking heads, and those which only occasionally have video.

Dave Winer:

[…] We’re losing the word “podcast” very quickly. It’s coming to mean video interviews on YouTube mostly. Our only hope is upgrading the open platform in a way that stimulates the imagination of creators, and there’s no time to waste. If you make a podcast client, it’s time to start collaborating with competitors and people who create RSS-based podcasts to take advantage of the open platforms, otherwise having a podcast will mean getting approved by Google, Apple, Spotify, Amazon etc. […]

I hope this is not the case. Luckily, YouTube seems to be an additional place for podcasters so far. I found every show in the top twenty available for download through Overcast in an audio-only format. YouTube channels do have RSS feeds, though they are not very useful in an audio-only client like Overcast. And Google’s commitment to RSS is about as good as the company’s commitment to anything.

⌥ Permalink
