
A Different Perspective on the ‘Design Choices’ Social Media Company Verdicts

By: Nick Heer

Mike Masnick, of Techdirt, unsurprisingly opposes the verdicts earlier this week finding Meta and Google liable for how their products impact children’s safety. I think it is a perspective worth reading. Unlike the Wall Street Journal, Masnick respects your intelligence and brings actual substance. Still, I have some disagreements.

Masnick, on the “design choices” argument:

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

This sounds like a reasonable retort until you think about it for three more seconds and realize that the lack of neutrality in the outcomes of these decisions is the entire point. Users post all kinds of stuff on social media platforms, and those posts can be delivered in all kinds of different ways, as Masnick also writes. They can be shown in reverse-chronological order in a lengthy scroll, or they can be shown one at a time like with Stories. The source of the posts someone sees might be limited to just accounts a user has opted into, or it can be broadened to any account from anyone in the world. Twitter used to have a public “firehose” feed.

But many of the biggest and most popular platforms have coalesced around a feed of material users did not ask for. This is not like television, where each show has been produced and vetted by human beings, and there are expectations for what is on at different times of the day. This is automated and users have virtually no control within the platforms themselves. If you do not like what Instagram is serving you on your main feed, your choice is to stop using Instagram entirely — even if you like and use other features.

Platforms know people will post objectionable and graphic material if they are given a text box or an upload button. We know it is “impossible” to moderate a platform well at scale. But we are supposed to believe they have basically no responsibility for what users post and what their systems surface in users’ feeds? Pick one.

Masnick, on the risks of legal accountability for smaller platforms:

And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

I am going to need a citation that TikTok and Snap caved because they could not afford to keep fighting. It seems just as plausible they could see which way the winds were blowing, given what I have read so far in the evidence that has been released.

Masnick:

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

This is undeniably a worrisome precedent. I will note Raúl Torrez, New Mexico’s Attorney General and the man who brought this case against Meta, says he wants to do so for minors only. How that would be implemented is an obvious question, though admittedly one that mandated age-gating would make straightforward.

Meta cited low usage when it announced earlier this month that it would be turning off end-to-end encryption in Instagram. If it is a question of safety or liability, it is one Meta would probably find difficult to articulate given end-to-end encryption remains available and enabled by default in Messenger and WhatsApp. An executive raised concerns about the feature when it was being planned, drawing a distinction between it and WhatsApp because the latter “does not make it easy to make social connections, meaning making Messenger e2ee will be far, far worse”.

I think Masnick makes some good arguments in this piece and raises some good questions. It is very possible or even likely this all gets unwound when it is appealed. I, too, expect the ripple effects of these cases to create some chaos. But I do not think the correct response to a lack of corporate accountability — or, frankly, standards — is, in Masnick’s words, “actually funding mental health care for young people”. That is not to say mental health should not be funded, only that it is a red herring response. In the U.S., total spending on children’s mental health care rose by 50% between 2011 and 2017; it continued to rise through the pandemic, of course. Perhaps that is not enough. But, also, it is extraordinary to think that we should allow companies to do knowingly harmful things and expect everyone else to correct for the predictable outcomes.


Meta Loses Two Landmark Cases Regarding Product Safety and Children’s Use; Google Loses One

By: Nick Heer

Morgan Lee, Associated Press:

A New Mexico jury found Tuesday that social media conglomerate Meta is harmful to children’s mental health and in violation of state consumer protection law.

The landmark decision comes after a nearly seven-week trial. Jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety. The jury determined Meta violated parts of the state’s Unfair Practices Act on accusations the company hid what it knew [about] the dangers of child sexual exploitation on its platforms and impacts on child mental health.

Meta communications jackass Andy Stone noted on X his company’s delight to be liable for “a fraction of what the State sought”. The company says it will appeal the verdict.

Stephen Morris and Hannah Murphy, Financial Times:

Meta and Google were found liable in a landmark legal case that social media platforms are designed to be addictive to children, opening up the tech giants to penalties in thousands of similar claims filed around the US.

A jury in the Los Angeles trial on Wednesday returned a verdict after nine days of deliberation, finding Meta’s platforms such as Instagram and Google’s YouTube were harmful to children and teenagers and that the companies failed to warn users of the dangers.

Dara Kerr, the Guardian:

To come to its liability decision, the jury was asked whether the companies’ negligence was a substantial factor in causing harm to KGM [the plaintiff] and if the tech firms knew the design of their products was dangerous. The 12-person panel of jurors returned a 10-2 split answering in favor of the plaintiff on every single question.

Meta says it will also appeal this verdict.

Sonja Sharp, Los Angeles Times:

Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.

That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use it.

For its part, the Wall Street Journal editorial board is standing up for beleaguered social media companies in an editorial today criticizing everything about these verdicts, including this specific means of liability, which it calls a “dodge” around Section 230.

But it is not. The principles described by Section 230 are a good foundation for the internet. This law, while U.S.-centric, has enabled the web around the world to flourish. Making companies legally liable for the things users post would not fix the mess we are in, and it would cause great damage if enacted.

Product design, though, is a different question. It would be a mistake, I think, to read Section 230 as a blanket allowance for any way platforms wish to use or display users’ posts. (Update: In part, that is because it is a free speech question.) From my entirely layman perspective, it has never struck me as entirely reasonable that the recommendations systems of these platforms should have no duty or expectation of care.

The Journal’s editorial board largely exists to produce rage bait and defend the interests of the powerful, so I am loath to give it too much attention, but I thought this paragraph was pretty rich:

Trial lawyers and juries may figure that Big Tech companies can afford to pay, but extorting companies is certain to have downstream consequences. Meta and Google are spending hundreds of billions of dollars on artificial intelligence this year, which could have positive social impacts such as accelerating treatments for cancer.

Do not sue tech companies because they could be finding cancer treatments — why should I take this editorial board seriously if its members are writing jokes like these? They think you are stupid.

As for the two cases, I am curious about how these conclusions actually play out. I imagine other people who feel their lives have been eroded by the specific way these platforms are designed will be able to test their claims in court, too, and that it will be complicated by the inevitably lengthy appeals and relitigation process.

I am admittedly a little irritated by both decisions being reached by jury instead of a judge; I would have preferred to see reasoning instead of overwhelming agreement among random people. However, it sends a strong signal to big social media platforms that people saw and heard evidence about how these products are designed, and they agreed it was damaging. This is true of all users, not just children. Meta tunes its feeds (PDF) for maximizing engagement across the board, and it surely is not the only one. There are a staggering number of partially redacted exhibits released today to go through, if one is so inclined.

If these big social platforms are listening, the signals are out there: people may be spending a lot of time with these products, but that is not a good proxy for their enjoyment or satisfaction. Research indicates a moderate amount of use is correlated with neutral or even positive outcomes among children, yet there are too many incentives in these apps to push past self-control mechanisms. These products should be designed differently.


Threads Continues to Reward Rage Bait

By: Nick Heer

Hank Green was not getting a lot of traction on a promotional post on Threads about a sale on his store. He got just over thirty likes, which does not sound awful, until you learn that was over the span of seven hours and across Green’s following of 806,000 accounts on Threads.

So he tried replying to rage bait with basically the same post, and that was far more successful. But, also, it has some pretty crappy implications:

That’s the signal that Threads is taking from this: Threads is like oh, there’s a discussion going on.

It’s 2025! Meta knows that “lots of discussion” is not a surrogate for “good things happening”!

I assume the home feed ranking systems are similar for Threads and Instagram — though they might not be — and I cannot tell you how many times my feed is packed with posts from many days to a week prior. So many businesses I frequent use it as a promotional tool for time-bound things I learn about only afterward. The same thing is true of Stories, since they are sorted based on how frequently you interact with an account.

Everyone is allowed one conspiracy theory, right? Mine is that a primary reason Meta is hostile to reverse-chronological feeds is because it requires businesses to buy advertising. I have no proof to support this, but it seems entirely plausible.


The Blurry Future of Sora

By: Nick Heer

Jason Parham, Wired:

The uptick in artificial social networks, [Rudy] Fraser tells me, is being driven by the same tech egoists who have eroded public trust and inflamed social isolation through “divisive” algorithms. “[They] are now profiting on that isolation by creating spaces where folks can surround themselves with sycophantic bots.”

I saw this quote circulating on Bluesky over the weekend and it has been rattling around my head since. It cuts to the heart of one reason why A.I.-based “social” networks like Sora and Meta’s Vibes feel so uncomfortable.

Unfortunately, I found the very next paragraph from Parham uncompelling:

In the many conversations I had with experts, similar patterns of thought emerged. The current era of content production prioritizes aesthetics over substance. We are a culture hooked on optimization and exposure; we crave to be seen. We live on our phones and through our screens. We’re endlessly watching and being watched, submerged in a state of looking. With a sort of all-consuming greed, we are transforming into a visual-first society — an infinite form of entertainment for one another to consume, share, fight over, and find meaning through.

Of course our media reflects aesthetic trends and tastes; it always has. I do not know that there was a halcyon era of substance-over-style media, nor do I believe there was a time since celebrity was a feasible achievement in which at least some people did not desire it. In a 1948 British survey of children 10–15 years old, one-sixth to one-third of respondents aspired to “‘romantic’ [career] choices like film acting, sport, and the arts”. An article published in Scouting Magazine in 2000 noted children leaned toward high-profile careers — not necessarily celebrity, but jobs “every child is exposed to”. We love this stuff because we have always loved this stuff.

Among the bits I quibble with in the above, however, this stood out as a new and different thing: “[w]e’re endlessly watching and being watched”. That, I think, is the kind of big change Fraser is quoted as speaking about, and something I think is concerning. We already worried about echo chambers, and platforms like YouTube responded by adjusting recommendations to less frequently send users to dark places. Let us learn something, please.

Cal Newport:

A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn’t be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn’t be entertaining the idea, as [Sam] Altman did last week, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy AI-generated “erotica.”

To me, these are the acts of a company that poured tens of billions of investment dollars into creating what they hoped would be the most consequential invention in modern history, only to finally realize that what they wrought, although very cool and powerful, isn’t powerful enough on its own to deliver a new world all at once.

I do not think Sora smells of desperation, but I do think it is the product of a company that views unprecedented scale as its primary driver. I think OpenAI wants to be everywhere — and not in the same way that a consumer electronics company wants its smartphones to be the category’s most popular, or anything like that. I wonder if Ben Thompson’s view of OpenAI as “the Windows of A.I.” is sufficient. I think OpenAI is hoping to be a ubiquitous layer in our digital world; or, at least, it is behaving that way.


Meta Says Threads Has Over 400 Million Monthly Active Users

By: Nick Heer

Emily Price, Fast Company:

Meta’s Threads is on a roll.

The social networking app is now home to more than 400 million monthly active users, Meta shared with Fast Company on Tuesday. That’s 50 million more than just a few months ago, and a long way from the 175 million it had around its first birthday last summer.

What is even more amazing about this statistic is how non-essential Threads seems to be. I might be in a bubble, but I cannot recall the last time someone sent me a link to a Threads post or mentioned they saw something worthwhile there. I see plenty of screenshots of posts from Bluesky, X, and even Mastodon circulating in various other social networks, but I cannot remember a single one from Threads.

As if to illustrate Threads’ invisibility, Andy Stone, Meta’s communications guy, rebutted a Wall Street Journal story with a couple of posts on X. He has a Threads account, of course, but he posts there only a few times per month.


You Are Just a Guest on Meta’s A.I.-Filled Platforms

By: Nick Heer

Jason Koebler, 404 Media:

The best way to think of the slop and spam that generative AI enables is as a brute force attack on the algorithms that control the internet and which govern how a large segment of the public interprets the nature of reality. It is not just that people making AI slop are spamming the internet, it’s that the intended “audience” of AI slop is social media and search algorithms, not human beings.

[…]

“Brute force” is not just what I have noticed while reporting on the spammers who flood Facebook, Instagram, TikTok, YouTube, and Google with AI-generated spam. It is the stated strategy of the people getting rich off of AI slop.

Regardless of whether you have been following Koebler’s A.I. slop beat, you owe it to yourself to at least read this article. The goal, Koebler surmises, is for Meta to target slop and ads at users in more-or-less the same way and, because this slop is cheap and fast to produce, it is a bottomless cup of engagement metrics.

Koebler, in a follow-up article:

As I wrote last week, the strategy with these types of posts is to make a human linger on them long enough to say to themselves “what the fuck,” or to be so horrified as to comment “what the fuck,” or send it to a friend saying “what the fuck,” all of which are signals to the algorithm that it should boost this type of content but are decidedly not signals that the average person actually wants to see this type of thing. The type of content that I am seeing right now makes “Elsagate,” the YouTube scandal in which disturbing videos were targeted to kids and resulted in various YouTube reforms, look quaint.

Matt Growcoot, PetaPixel:

Meta is testing an Instagram feature that suggests AI-generated comments for users to post beneath other users’ photos and videos.

Meta is going to make so much money before it completely disintegrates on account of nobody wanting to spend this much time around a thin veneer over robots.


Bluesky Changes How Replies Are Sorted by Default

By: Nick Heer

From the official Bluesky account:

With this release, you can now display replies by “hotness,” which weights liked replies that are more recent more heavily.

I believe this replaces the previous default of sorting replies from oldest to newest. People seem worried this can be gamed, but there is good news: you can just change it. There are options for oldest replies, newest replies, most-liked, and one that is completely randomized. Also, you can still set it to prioritize people you follow.
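Bluesky has not published the exact formula, but ranking schemes described this way typically divide a reply’s like count by some power of its age, so recency decays the weight of likes. A minimal sketch of that idea, where the decay exponent and the two-hour offset are my own illustrative guesses, not Bluesky’s actual parameters:

```python
import time


def hotness(likes: int, posted_at: float, now: float, gravity: float = 1.8) -> float:
    """Toy 'hotness' score: likes raise the score, age lowers it.

    `gravity` and the +2 hour offset are assumed values for illustration.
    """
    age_hours = max(0.0, (now - posted_at) / 3600)
    return (likes + 1) / (age_hours + 2) ** gravity


# A newer reply with fewer likes can outrank an older, better-liked one.
now = time.time()
fresh = hotness(likes=5, posted_at=now - 1 * 3600, now=now)    # one hour old
stale = hotness(likes=20, posted_at=now - 24 * 3600, now=now)  # a day old
assert fresh > stale
```

The appeal of a scheme like this is that a thread’s top replies keep turning over as the conversation ages, which is also exactly the property people worry can be gamed by burst-liking.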

Imagine that: options for viewing social media that give control back to users. Threads is experimenting, but Meta still fundamentally distrusts users to make decisions like these.


Meta’s Big Squeeze

By: Nick Heer

Ashley Belanger, reporting for Ars Technica in July 2022 in what I will call “foreshadowing”:

Despite all the negative feedback [over then-recent Instagram changes], Meta revealed on an earnings call that it plans to more than double the number of AI-recommended Reels that users see. The company estimates that in 2023, about a third of Instagram and Facebook feeds will be recommended content.

Ed Zitron:

In this document [leaked to Zitron], they discuss the term “meaningful interactions,” the underlying metric which (allegedly) guides Facebook today. In January 2018, Adam Mosseri, then Head of News Feed, would post that an update to the News Feed would now “prioritize posts that spark conversations and meaningful interactions between people,” which may explain the chaos (and rot) in the News Feed thereafter.

To be clear, metrics around time spent hung around at the company, especially with regard to video, and Facebook has repeatedly and intentionally made changes to manipulate its users to satisfy them. In his book “Broken Code,” Jeff Horwitz notes that Facebook “changed its News Feed design to encourage people to click on the reshare button or follow a page when they viewed a post,” with “engineers altering the Facebook algorithm to increase how often users saw content reshared from people they didn’t know.”

Zitron, again:

When you look at Instagram or Facebook, I want you to try and think of them less as social networks, and more as a form of anthropological experiment. Every single thing you see on either platform is built or selected to make you spend more time on the app and see more things that Meta wants you to see, be they ads, sponsored content, or suggested groups that you can interact with, thus increasing the amount of your “time spent” on the app, and increasing the amount of “meaningful interactions” you have with content.

Zitron is a little too eager, for my tastes, to treat Meta’s suggestions of objectionable and controversial posts as deliberate. It seems much more likely the company simply sucks at moderating this stuff at scale and is throwing in the towel.

Kurt Wagner, Bloomberg:

In late 2021, TikTok was on the rise, Facebook interactions were declining after a pandemic boom and young people were leaving the social network in droves. Chief Executive Officer Mark Zuckerberg assembled a handful of veterans who’d built their careers on the Big Blue app to figure out how to stop the bleeding, including head of product Chris Cox, Instagram boss Adam Mosseri, WhatsApp lead Will Cathcart and head of Facebook, Tom Alison.

During discussions that spanned several meetings, a private WhatsApp group, and an eventual presentation at Zuckerberg’s house in Palo Alto, California, the group came to a decision: The best way to revive Facebook’s status as an online destination for young people was to start serving up more content from outside a person’s network of friends and family.

Jason Koebler, 404 Media:

At first, previously viral (but real) images were being run through image-to-image AI generators to create a variety of different but plausibly believable AI images. These images repeatedly went viral, and seemingly tricked real people into believing they were real. I was able to identify a handful of the “source” or “seed” images that formed the basis for this type of content. Over time, however, most AI images on Facebook have gotten a lot easier to identify as AI and a lot more bizarre. This is presumably happening because people will interact with the images anyway, or the people running these pages have realized they don’t need actual human interaction to go viral on Facebook.

Sarah Perez, TechCrunch:

Instagram confirmed it’s testing unskippable ads after screenshots of the feature began circulating across social media. These new ad breaks will display a countdown timer that stops users from being able to browse through more content on the app until they view the ad, according to informational text displayed in the Instagram app.

These pieces each seem like they are circling a theme of a company finding the upper bound of its user base, and then squeezing it for activity, revenue, and promising numbers to report to investors. Unlike Zitron, I am not convinced we are watching Facebook die. I think Koebler is closer to the truth: we are watching its zombification.

