A Different Perspective on the ‘Design Choices’ Social Media Company Verdicts

By: Nick Heer
26 March 2026 at 21:56

Mike Masnick, of Techdirt, unsurprisingly opposes the verdicts earlier this week finding Meta and Google liable for how their products impact children’s safety. I think it is a perspective worth reading. Unlike the Wall Street Journal, Masnick respects your intelligence and brings actual substance. Still, I have some disagreements.

Masnick, on the “design choices” argument:

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

This sounds like a reasonable retort until you think about it for three more seconds and realize that the lack of neutrality in the outcomes of these decisions is the entire point. Users post all kinds of stuff on social media platforms, and those posts can be delivered in all kinds of different ways, as Masnick also writes. They can be shown in reverse-chronological order in a lengthy scroll, or they can be shown one at a time like with Stories. The source of the posts someone sees might be limited to just accounts a user has opted into, or it can be broadened to any account from anyone in the world. Twitter used to have a public “firehose” feed.

But many of the biggest and most popular platforms have coalesced around a feed of material users did not ask for. This is not like television, where each show has been produced and vetted by human beings, and there are expectations for what is on at different times of the day. This is automated and users have virtually no control within the platforms themselves. If you do not like what Instagram is serving you on your main feed, your choice is to stop using Instagram entirely — even if you like and use other features.

Platforms know people will post objectionable and graphic material if they are given a text box or an upload button. We know it is “impossible” to moderate a platform well at scale. But we are supposed to believe they have basically no responsibility for what users post and what their systems surface in users’ feeds? Pick one.

Masnick, on the risks of legal accountability for smaller platforms:

And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

I am going to need a citation for the claim that TikTok and Snap caved because they could not afford to keep fighting. It seems just as plausible they could see which way the wind was blowing, given what I have read so far in the evidence that has been released.

Masnick:

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

This is undeniably a worrisome precedent. I will note that Raúl Torrez, New Mexico’s Attorney General and the man who brought this case against Meta, says he wants these restrictions to apply to minors only. How that would be implemented is an obvious question, though one that mandated age-gating would admittedly make straightforward.

Meta cited low usage when it announced earlier this month that it would be turning off end-to-end encryption in Instagram. If it is instead a question of safety or liability, that is an argument Meta would probably find difficult to make, given end-to-end encryption remains available and enabled by default in Messenger and WhatsApp. An executive raised concerns about the feature when it was being planned, drawing a distinction between Messenger and WhatsApp because the latter “does not make it easy to make social connections, meaning making Messenger e2ee will be far, far worse”.

I think Masnick makes some good arguments in this piece and raises some good questions. It is very possible or even likely this all gets unwound when it is appealed. I, too, expect the ripple effects of these cases to create some chaos. But I do not think the correct response to a lack of corporate accountability — or, frankly, standards — is, in Masnick’s words, “actually funding mental health care for young people”. That is not to say mental health should not be funded, only that it is a red herring response. In the U.S., total spending on children’s mental health care rose by 50% between 2011 and 2017; it continued to rise through the pandemic, of course. Perhaps that is not enough. But, also, it is extraordinary to think that we should allow companies to do knowingly harmful things and expect everyone else to correct for the predictable outcomes.


Meta Loses Two Landmark Cases Regarding Product Safety and Children’s Use; Google Loses One

By: Nick Heer
26 March 2026 at 05:00

Morgan Lee, Associated Press:

A New Mexico jury found Tuesday that social media conglomerate Meta is harmful to children’s mental health and in violation of state consumer protection law.

The landmark decision comes after a nearly seven-week trial. Jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety. The jury determined Meta violated parts of the state’s Unfair Practices Act on accusations the company hid what it knew [about] the dangers of child sexual exploitation on its platforms and impacts on child mental health.

Meta communications jackass Andy Stone noted on X his company’s delight to be liable for “a fraction of what the State sought”. The company says it will appeal the verdict.

Stephen Morris and Hannah Murphy, Financial Times:

Meta and Google were found liable in a landmark legal case that social media platforms are designed to be addictive to children, opening up the tech giants to penalties in thousands of similar claims filed around the US.

A jury in the Los Angeles trial on Wednesday returned a verdict after nine days of deliberation, finding Meta’s platforms such as Instagram and Google’s YouTube were harmful to children and teenagers and that the companies failed to warn users of the dangers.

Dara Kerr, the Guardian:

To come to its liability decision, the jury was asked whether the companies’ negligence was a substantial factor in causing harm to KGM [the plaintiff] and if the tech firms knew the design of their products was dangerous. The 12-person panel of jurors returned a 10-2 split answering in favor of the plaintiff on every single question.

Meta says it will also appeal this verdict.

Sonja Sharp, Los Angeles Times:

Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.

That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use it.

For its part, the Wall Street Journal editorial board is standing up for beleaguered social media companies in an editorial today criticizing everything about these verdicts, including this specific means of liability, which it calls a “dodge” around Section 230.

But it is not. The principles described by Section 230 are a good foundation for the internet. This law, while U.S.-centric, has enabled the web around the world to flourish. Making companies legally liable for the things users post will not fix the mess we are in, but it would cause great damage if enacted.

Product design, though, is a different question. It would be a mistake, I think, to read Section 230 as a blanket allowance for any way platforms wish to use or display users’ posts. (Update: In part, that is because it is a free speech question.) From my entirely layman perspective, it has never struck me as entirely reasonable that the recommendations systems of these platforms should have no duty or expectation of care.

The Journal’s editorial board largely exists to produce rage bait and defend the interests of the powerful, so I am loath to give it too much attention, but I thought this paragraph was pretty rich:

Trial lawyers and juries may figure that Big Tech companies can afford to pay, but extorting companies is certain to have downstream consequences. Meta and Google are spending hundreds of billions of dollars on artificial intelligence this year, which could have positive social impacts such as accelerating treatments for cancer.

Do not sue tech companies because they could be finding cancer treatments — why should I take this editorial board seriously if its members are writing jokes like these? They think you are stupid.

As for the two cases, I am curious about how these conclusions actually play out. I imagine other people who feel their lives have been eroded by the specific way these platforms are designed will be able to test their claims in court, too, and that it will be complicated by the inevitably lengthy appeals and relitigation process.

I am admittedly a little irritated by both decisions being reached by jury instead of a judge; I would have preferred to see reasoning instead of overwhelming agreement among random people. However, it sends a strong signal to big social media platforms that people saw and heard evidence about how these products are designed, and they agreed it was damaging. This is true of all users, not just children. Meta tunes its feeds (PDF) for maximizing engagement across the board, and it surely is not the only one. There are a staggering number of partially redacted exhibits released today to go through, if one is so inclined.

If these big social platforms are listening, the signals are out there: people may be spending a lot of time with these products, but that is not a good proxy for their enjoyment or satisfaction. Research indicates a moderate amount of use is correlated with neutral or even positive outcomes among children, yet there are too many incentives in these apps to push past self-control mechanisms. These products should be designed differently.


Meta Laid Off Several Hundred People Today

By: Nick Heer
26 March 2026 at 02:58

Ashley Capoot and Jonathan Vanian, CNBC:

Meta is laying off several hundred employees on Wednesday, CNBC confirmed.

The cuts are happening across several different organizations within the company, including Facebook, global operations, recruiting, sales and its virtual reality division Reality Labs, according to a source familiar with the company’s plans who asked not to be named because they are confidential.

Some impacted employees are being offered new roles within the company, the person said. In some cases, those new positions will require relocation.

“Several hundred” employees is a long way off from the numbers reported earlier this month. Perhaps Reuters got it all wrong but, more worryingly for employees, perhaps those figures were correct and this is only the beginning.


OpenAI to Discontinue Sora App, Video Platform

By: Nick Heer
24 March 2026 at 23:13

Berber Jin, Wall Street Journal:

CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.

OpenAI is not shutting this down because it has ethical qualms with what it has created, despite good reasons to do just that. It is because it is expensive without any clear reason for it to exist other than because OpenAI wants to be everywhere.

If you are desperate for a completely synthetic social media feed, Meta’s Vibes is apparently still around. Users are readily abusing it, of course, because that is what happens if you give people a text input box.

Update: In a tweet, OpenAI has confirmed it is shutting down Sora. But, while it originally announced “We’re saying goodbye to Sora”, it changed that about an hour later to read “We’re saying goodbye to the Sora app”, emphasis mine. The Journal has not changed its report to retract claims about shutting down the platform altogether, though, while OpenAI continues to promote Sora API pricing.


Lobbying Firms Funded by Apple and Meta Are Duelling on Age Verification

By: Nick Heer
20 March 2026 at 04:33

Emily Birnbaum, writing for Bloomberg in July:

Meta is also helping to fund the Digital Childhood Alliance, a coalition of conservative groups leading efforts to pass app-store age verification, according to three people familiar with the funding.

The App Store Accountability Act is based on model legislation written by the Digital Childhood Alliance. The lobbying group also publishes marketing pieces, including one (PDF) that calls Apple’s age verification frameworks “ineffective”. Specifically, it points to the lack of parental consent required “for kids to enter into complex contracts”, with “no way to verify that parental consent has been obtained”.

Meta, for its part, requires users to self-report their birthday and click a button that says “I agree” to create an Instagram account. In fairness, the title of that page says “read and agree to our terms” and, on the terms page, Meta does say you need to be 13 years old. This is pretty standard stuff but, if Meta actually cared about this, it could voluntarily implement the stricter controls at sign-up without a legislative incentive.

Though this article was published last year, I am linking to it now because something called the TBOTE Project recently resurfaced these findings and added some of its own in an open source investigation. Unlike similar investigations from sources like Bellingcat, it does not appear that the person or people behind TBOTE have editors or fact-checkers to verify their interpretation of this information. That does not mean it is useless; it is simply worth exercising some caution. Regardless, their findings show a massive amount of lobbyist spending on Meta’s part to try and get these laws passed.

Birnbaum continues:

The App Association, a group backed by Apple, has been running ads in Texas, Alabama, Louisiana and Ohio arguing that the app store age verification bills are backed by porn websites and companies. The adult entertainment industry’s main lobby said it is not pushing for the bills; pornography is mostly banned from app stores.

This is obviously bad faith, but also flawed in the opposite direction: the porn industry wants device-level verification.


Meta Realizes Horizon Worlds on Quest Never Had Legs, Will Shut It Down in June

By: Nick Heer
18 March 2026 at 04:44

A few weeks ago, Meta published an update from Samantha Ryan, of Reality Labs, announcing a “renewed focus” and a “doubling down” on virtual reality. It planned to achieve this by “almost exclusively” betting its future on the smartphone Horizon Worlds app.

In an announcement today, Meta shifted its definition of “almost exclusively” to simply “exclusively”:

Earlier this year, we shared an update on our renewed focus for VR and Horizon. We are separating the two platforms so each can grow with greater focus, and the Horizon Worlds platform will become a mobile-only experience. This separation will extend across our ecosystem, including our mobile app. To support this vision, we are making the following changes to streamline your Quest experience throughout 2026.

This opening paragraph is opaque and, though the announcement goes on to explain exactly what is happening, it is not nearly as clear as the email sent to Horizon Worlds users. I really think Meta is looking to exit from its pure V.R. efforts, especially with the sales success of the perv glasses.

As I write this, the Horizon app for iOS is the sixty-ninth most popular free game in the Canadian App Store, just behind Wordscapes and ahead of Perfect Makeover Cleaning ASMR. Nice?


Annotators in Kenya Describe How They Review Sensitive Data Captured by Meta’s Ray-Bans

By: Nick Heer
4 March 2026 at 05:05

Naipanoi Lepapa, Ahmed Abdigadir, and Julia Lindblom, Svenska Dagbladet:

The workers in Kenya say that it feels uncomfortable to go to work. They tell us about deeply private video clips, which appear to come straight out of Western homes, from people who use the glasses in their everyday lives.

Several describe video material showing bathroom visits, sex and other intimate moments.

Another worker talks about people coming out of bathrooms.

It is appalling that massively rich corporations like Meta continue to offload critical tasks like these onto people who receive little support or pay. I recently finished “Ghost Work” by Mary L. Gray and Siddharth Suri and, while it is not my favourite book nor one that surfaces anything conceptually new, it is worth your time. Meta can and should be doing far better, but it can avoid association with labour atrocities more easily than, say, Nike in the 1990s, in part because I doubt most people think much about the human intervention behind artificial intelligence. Meta does not celebrate the hard work of its contract labour in Kenya; it does not even acknowledge them.

Speaking of not acknowledging the human labour involved, this story is the obvious nightmare you would expect. Some of these incidents of sensitive video recordings appear to be accidental, while others are seemingly deliberate. Without excusing the people who seem to be recording creepy videos on purpose, I assume few would have believed their recordings would be seen by someone at a company they had probably never heard of.

At first glance, it appears that we have significant control over our data. It states that voice recordings may only be saved and used for improvement or training of other Meta products if the user actively agrees.

But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off.

This is the kind of thing I would expect would be bundled into the additional diagnostic information Meta asks if you would like to opt into sharing. But Meta says this “does not include the photos and videos captured by your glasses”. That is, as this investigation found, part of the mandatory data collection.

This is offensive on behalf of users who might be less likely to consent if they had this full information. But it is also offensive to their romantic partners, friends, acquaintances, and passers-by, none of whom agreed to have their image or conversations adjudicated by these contractors.


It Sure Looks to Me Like Meta Is Winding Down Its V.R. Efforts

By: Nick Heer
24 February 2026 at 04:55

Samantha Ryan, “VP of Content” at Meta’s Reality Labs:

We’ve recently made some pretty big changes, including right-sizing our Reality Labs investment to ensure that our efforts remain sustainable over time. We’ve been in this space for over a decade, and we aren’t going anywhere. We’re in it for the long haul.

By “right-sizing”, Ryan means laying off ten percent of the Reality Labs workforce, and pouring money into the Ray-Ban partnership instead of metaverse initiatives. By “in it for the long haul”, Ryan means shifting the definition of the “metaverse” to meet Mark Zuckerberg’s latest obsession. They did not whiff by renaming the entire company around a crappy update to Second Life; you just are not getting it.

Ryan:

Our goal remains constant: to empower developers and creators as they build long-term, sustainable businesses. We used to have a pretty well-defined audience for VR, but as we’ve grown, we’ve attracted new audiences — who want different things — and the onus is on us to make sure that each of these distinct groups can find the apps and games that appeal to them.

That’s why we’re changing our roadmaps to increase your chances for success. We’re explicitly separating our Quest VR platform from our Worlds platform in order to create more space for both products to grow. We’re doubling down on the VR developer ecosystem while shifting the focus of Worlds to be almost exclusively mobile. By breaking things down into two distinct platforms, we’ll be better able to clearly focus on each.

Meta can say it is “doubling down on the V.R. developer ecosystem” all it wants, but it announced in January it would be shutting down its work-focused V.R. app with only a month’s notice, and it has cancelled third-party headsets. Now, it is saying Horizon Worlds is basically a phone app. Last February, Andrew Bosworth wrote in a memo about the importance of this very strategy:

[…] And Horizon Worlds on mobile absolutely has to break out for our long term plans to have a chance. […]

As I write this, Meta Horizon is the fifty-seventh most popular free game in the Canadian App Store, just two spots behind Hole.io, “the most addictive black hole game”. Maybe people do not, in general, want to wear a computer on their entire head — not for the thousands of dollars Apple is charging, and not for the hundreds Meta is.


Meta Plans Deep Cuts to Metaverse Efforts

By: Nick Heer
5 December 2025 at 23:52

Kurt Wagner, Bloomberg:

Meta Platforms Inc.’s Mark Zuckerberg is expected to meaningfully cut resources for building the so-called metaverse, an effort that he once framed as the future of the company and the reason for changing its name from Facebook Inc.

Executives are considering potential budget cuts as high as 30% for the metaverse group next year, which includes the virtual worlds product Meta Horizon Worlds and its Quest virtual reality unit, according to people familiar with the talks, who asked not to be named while discussing private company plans. Cuts that high would most likely include layoffs as early as January, according to the people, though a final decision has not yet been made.

Wagner’s reporting was independently confirmed by Mike Isaac, of the New York Times, and Meghan Bobrowsky and Georgia Wells, of the Wall Street Journal, albeit in slightly different ways. While Wagner wrote it “would most likely include layoffs as early as January”, Isaac apparently confirmed the budget cuts are likely large-scale personnel cuts, which makes sense:

The cuts could come as soon as next month and amount to 10 to 30 percent of employees in the Metaverse unit, which works on virtual reality headsets and a V.R.-based social network, the people said. The numbers of potential layoffs are still in flux, they said. Other parts of the Reality Labs division develop smart glasses, wristbands and other wearable devices. The total number of employees in Reality Labs could not be learned.

Alan Dye is just about to join Reality Labs. I wonder if this news comes as a fun surprise for him.

At Meta Connect a few months ago, the company spent basically the entire time on augmented reality glasses, but it swore up and down it was all related to its metaverse initiatives:

We’re hard at work advancing the state of the art in augmented and virtual reality, too, and where those technologies meet AI — that’s where you’ll find the metaverse.

The metaverse is whatever Meta needs it to be in order to justify its 2021 rebrand.

Our vision for the future is a world where anyone anywhere can imagine a character, a scene, or an entire world and create it from scratch. There’s still a lot of work to do, but we’re making progress. In fact, we’re not far off from being able to create compelling 3D content as easily as you can ask Meta AI a question today. And that stands to transform not just the imagery and videos we see on platforms like Instagram and Facebook, but also the possibilities of VR and AR, too.

You know, whenever I am unwinding and chatting with friends after a long day at work, I always get this sudden urge to create compelling 3D content.


Threads Continues to Reward Rage Bait

By: Nick Heer
2 December 2025 at 01:07

Hank Green was not getting a lot of traction on a promotional post on Threads about a sale on his store. He got just over thirty likes, which does not sound awful, until you learn that was over the span of seven hours and across Green’s following of 806,000 accounts on Threads.

So he tried replying to rage bait with basically the same post, and that was far more successful. But, also, it has some pretty crappy implications:

That’s the signal that Threads is taking from this: Threads is like oh, there’s a discussion going on.

It’s 2025! Meta knows that “lots of discussion” is not a surrogate for “good things happening”!

I assume the home feed ranking systems are similar for Threads and Instagram — though they might not be — and I cannot tell you how many times my feed is packed with posts from many days to a week prior. So many businesses I frequent use it as a promotional tool for time-bound things I learn about only afterward. The same thing is true of Stories, since they are sorted based on how frequently you interact with an account.
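To make concrete why time-bound posts surface days late: an engagement-ranked feed scores posts by predicted interaction rather than recency, so an old high-engagement post can outrank yesterday’s sale announcement. A toy comparison — the scoring formula and all data here are entirely invented for illustration, not Meta’s actual system:

```python
from datetime import datetime, timedelta

now = datetime(2025, 12, 1, 12, 0)

# (post, posted_at, predicted_engagement) -- all values invented
posts = [
    ("local business: weekend sale!", now - timedelta(hours=5), 0.02),
    ("rage bait thread",              now - timedelta(days=6),  0.90),
    ("friend's photo",                now - timedelta(days=2),  0.40),
]

def reverse_chronological(posts):
    # newest first: the time-bound sale post leads
    return sorted(posts, key=lambda p: p[1], reverse=True)

def engagement_ranked(posts):
    # toy score: predicted engagement dominates, recency barely matters
    def score(p):
        age_days = (now - p[1]).total_seconds() / 86400
        return p[2] - 0.01 * age_days
    return sorted(posts, key=score, reverse=True)

print(reverse_chronological(posts)[0][0])  # the sale, on time
print(engagement_ranked(posts)[0][0])      # six-day-old rage bait
```

Under the invented weights, the sale post wins a chronological feed but finishes last in the engagement-ranked one, which is the pattern described above.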

Everyone is allowed one conspiracy theory, right? Mine is that a primary reason Meta is hostile to reverse-chronological feeds is because it requires businesses to buy advertising. I have no proof to support this, but it seems entirely plausible.


Meta’s Accounting of Its Louisiana Data Centre ‘Strains Credibility’

By: Nick Heer
25 November 2025 at 03:54

Jonathan Weil, Wall Street Journal:

It seems like a marvel of financial engineering: Meta Platforms is building a $27 billion data center in Louisiana, financed with debt, and neither the data center nor the debt will be on its own balance sheet.

That outcome looks too good to be true, and it probably is.

The phrase “marvel of financial engineering” does not seem like a compliment. In addition to the evidence from Weil’s article, Meta is taking advantage of a tax exemption created by Louisiana’s state legislature, even as it argues it is merely a user of this data centre.

Also, colour me skeptical this data centre will truly be “the size of Manhattan” before the bubble bursts, despite the disruption to life in the area.

Update: Paris Martineau points to Weil’s bio noting he was “the first reporter to challenge Enron’s accounting practices”.


Meta’s Steak Sauce Demo Should Have Been Dumber

By: Nick Heer
20 September 2025 at 18:50

John Walker, Kotaku:

Rather than because of wifi, the reason this happened is because these so-called AIs are just regurgitating information that has been parsed from scanning the internet. It will have been trained on recipes written by professional chefs, home cooks and cookery sites, then combined this information to create something that sounds a lot like a recipe for a Korean sauce. But it, not being an intelligence, doesn’t know what Korean sauce is, nor what recipes are, because it doesn’t know anything. So it can only make noises that sound like the way real humans have described things. Hence it having no way of knowing that ingredients haven’t already been mixed — just the ability to mimic recipe-like noises. The recipes it will have been trained on will say “after you’ve combined the ingredients…” so it does too.

I would love to know how this demo was supposed to go. In an ideal world, is it supposed to walk you through the preparation ingredient-by-ingredient? If Jack Mancuso had picked up the soy sauce, would it have guided him to pour the recipe-suggested amount? That would be impressive, if it had worked. The New York Times’ tech reporters got to try the glasses for about thirty minutes and, while they shared no details, said it was “as spotty as Mr. Zuckerberg’s demonstration”.

I think Walker is too hard on the faux off-the-cuff remarks, though they are mock-worthy in the context of the failed demo. But I think the diagnosis of this is entirely correct: what we think of as “A.I.” is kind of overkill for this situation. I can see some utility. For example, I could not find a written recipe that exactly matched the ingredients on Mancuso’s bench, but perhaps Meta’s A.I. software can identify the ingredients, and assume the lemons are substituting for rice vinegar. Sure. After that, what would actually be useful is a straightforward recitation of a specific recipe: measure out a quarter-cup of soy sauce and pour it into a bowl; next, stir in one tablespoon of honey — that kind of thing. This is pretty basic text-to-speech stuff, though it would be cool if it can respond to questions like how much ginger?, and did I already add the honey?, too.
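The “basic text-to-speech stuff” described above — reciting a fixed recipe step by step and answering simple state questions — needs no generative model at all, just a position counter over a known recipe. A minimal sketch; the class, the questions it handles, and the abridged sauce recipe are all hypothetical, for illustration only:

```python
class RecipeWalkthrough:
    """Steps through a fixed recipe and answers simple questions
    about the current state -- no generative model required."""

    def __init__(self, steps):
        self.steps = steps      # ordered list of (ingredient, instruction)
        self.position = 0       # index of the next step to read out

    def next_step(self):
        """Return the next instruction, or a finished message."""
        if self.position >= len(self.steps):
            return "All done - the sauce is ready."
        ingredient, instruction = self.steps[self.position]
        self.position += 1
        return instruction

    def how_much(self, ingredient):
        """Answer 'how much ginger?' by looking up the recipe."""
        for name, instruction in self.steps:
            if name == ingredient:
                return instruction
        return "This recipe does not call for " + ingredient + "."

    def already_added(self, ingredient):
        """Answer 'did I already add the honey?' from tracked state."""
        done = [name for name, _ in self.steps[:self.position]]
        return ingredient in done


# A hypothetical sauce, abridged:
sauce = RecipeWalkthrough([
    ("soy sauce", "Measure a quarter-cup of soy sauce into a bowl."),
    ("honey", "Stir in one tablespoon of honey."),
    ("ginger", "Grate in one teaspoon of fresh ginger."),
])

print(sauce.next_step())              # the soy sauce step
print(sauce.already_added("honey"))   # False - not added yet
print(sauce.next_step())              # the honey step
print(sauce.how_much("ginger"))       # looks up the ginger amount
```

The hard part, as noted, is identifying the ingredients on the bench and matching them to a real, credited recipe; once that is done, the walkthrough itself is deterministic bookkeeping.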

Also, I would want to know which recipe it was following. A.I. has a terrible problem with not crediting its sources of information in general, and it is no different here.

Also — and this probably goes without saying — even if these glasses worked as well as Meta suggests they should, there is no way I would buy a pair. You mean to tell me that I should strap a legacy of twenty years of privacy violations and user hostility to my face? Oh, please.


Meta Whiffed Its Live Demos at Connect

By: Nick Heer
18 September 2025 at 20:12

Rani Molla, Sherwood News:

While the prerecorded videos of the products in use were slick and highly produced, some of the live demos simply failed.

“Glasses are the ideal form factor for personal superintelligence because they let you stay present in the moment while getting access to all of these AI capabilities to make you smarter, help you communicate better, improve your memory, improve your senses,” CEO Mark Zuckerberg reiterated at the start of the event, but the ensuing bloopers certainly didn’t make it feel that way.

I like that Meta took a chance with live demos but, in addition to the bloopers, Connect felt like another showcase of an inspiration-bereft business. The opening was a more grounded — figuratively and literally — version of the Google Glass skydive from 2012. Then, beginning at about 52 minutes, Zuckerberg introduced the wrist-based control system, saying “every new computing platform has a new way to interact with it”, summarizing a piece of the Macworld 2007 iPhone introduction. It is not that I am offended by Meta cribbing others’ marketing. What I find amusing, more than anything, is Zuckerberg’s clear desire to be thought of as an inventor and futurist, despite having seemingly few original ideas.


Meta Says Threads Has Over 400 Million Monthly Active Users

By: Nick Heer
21 August 2025 at 04:06

Emily Price, Fast Company:

Meta’s Threads is on a roll.

The social networking app is now home to more than 400 million monthly active users, Meta shared with Fast Company on Tuesday. That’s 50 million more than just a few months ago, and a long way from the 175 million it had around its first birthday last summer.

What is even more amazing about this statistic is how non-essential Threads seems to be. I might be in a bubble, but I cannot recall the last time someone sent me a link to a Threads post or mentioned they saw something worthwhile there. I see plenty of screenshots of posts from Bluesky, X, and even Mastodon circulating in various other social networks, but I cannot remember a single one from Threads.

As if to illustrate Threads’ invisibility, Andy Stone, Meta’s communications guy, rebutted a Wall Street Journal story with a couple of posts on X. He has a Threads account, of course, but he posts there only a few times per month.

⌥ Permalink

Meta Adds ‘Friends’ Tab to Facebook to Show Posts From Users’ Friends

By: Nick Heer
28 March 2025 at 04:18

Meta:

Formerly a place to view friend requests and People You May Know, the Friends tab will now show your friends’ stories, reels, posts, birthdays and friend requests.

You know, I think this concept of showing people things they say they want to see might just work.

Meta says this is just one of “several ‘O.G.’ Facebook experiences [coming] throughout the year” — a truly embarrassing sentence. But Mark Zuckerberg said in an autumn earnings call that Facebook would “add a whole new category of content which is A.I. generated or A.I. summarized content, or existing content pulled together by A.I. in some way”. This plan is going just great. I think the way these things can be reconciled is exactly how Facebook is doing it: your friends go in a “Friends” tab, but you will see all the other stuff it wants to push on you by default. Just look how Meta has done effectively the same thing in Instagram and Threads.

⌥ Permalink

Facebook to Stop Targeting Ads at U.K. Woman After Legal Fight

By: Nick Heer
25 March 2025 at 03:05

Grace Dean, BBC News:

Ms O’Carroll’s lawsuit argued that Facebook’s targeted advertising system was covered by the UK’s definition of direct marketing, giving individuals the right to object.

Meta said that adverts on its platform could only be targeted to groups of a minimum size of 100 people, rather than individuals, so did not count as direct marketing. But the Information Commissioner’s Office (ICO) disagreed.

“Organisations must respect people’s choices about how their data is used,” a spokesperson for the ICO said. “This means giving users a clear way to opt out of their data being used in this way.”

Meta, in response, says “no business can be mandated to give away its services for free”, a completely dishonest way to interpret the ICO’s decision. There is an obvious difference between advertising and personalized advertising. To pretend otherwise is nonsense. Sure, personalized advertising makes Meta more money than non-personalized advertising, but that is an entirely different problem. Meta can figure it out. Or it can be a big soggy whiner about it.

⌥ Permalink

Mark Zuckerberg Stays On Script

By: Nick Heer
31 July 2024 at 15:52

Karissa Bell, Engadget:

Zuckerberg then launched into a lengthy rant about his frustrations with “closed” ecosystems like Apple’s App Store. None of that is particularly new, as the Meta founder has been feuding with Apple for years. But then Zuckerberg, who is usually quite controlled in his public appearances, revealed just how frustrated he is, telling Huang that his reaction to being told “no” is “fuck that.”

It all has a whiff of the image consultant, with notes of Musk.

Everybody knows a corporate executive wearing boring business clothes and answering questions with defined talking points is playing a role. This costume Zuckerberg is wearing is just as much of a front. The billionaire CEO of a publicly traded social media company cannot be a rebel in any meaningful sense.

⌥ Permalink

Meta’s Big Squeeze

By: Nick Heer
4 June 2024 at 02:49

Ashley Belanger, reporting for Ars Technica in July 2022 in what I will call “foreshadowing”:

Despite all the negative feedback [over then-recent Instagram changes], Meta revealed on an earnings call that it plans to more than double the number of AI-recommended Reels that users see. The company estimates that in 2023, about a third of Instagram and Facebook feeds will be recommended content.

Ed Zitron:

In this document [leaked to Zitron], they discuss the term “meaningful interactions,” the underlying metric which (allegedly) guides Facebook today. In January 2018, Adam Mosseri, then Head of News Feed, would post that an update to the News Feed would now “prioritize posts that spark conversations and meaningful interactions between people,” which may explain the chaos (and rot) in the News Feed thereafter.

To be clear, metrics around time spent hung around at the company, especially with regard to video, and Facebook has repeatedly and intentionally made changes to manipulate its users to satisfy them. In his book “Broken Code,” Jeff Horwitz notes that Facebook “changed its News Feed design to encourage people to click on the reshare button or follow a page when they viewed a post,” with “engineers altering the Facebook algorithm to increase how often users saw content reshared from people they didn’t know.”

Zitron, again:

When you look at Instagram or Facebook, I want you to try and think of them less as social networks, and more as a form of anthropological experiment. Every single thing you see on either platform is built or selected to make you spend more time on the app and see more things that Meta wants you to see, be they ads, sponsored content, or suggested groups that you can interact with, thus increasing the amount of your “time spent” on the app, and increasing the amount of “meaningful interactions” you have with content.

Zitron is a little too eager, for my tastes, to treat Meta’s suggestions of objectionable and controversial posts as deliberate. It seems much more likely the company simply sucks at moderating this stuff at scale and is throwing in the towel.

Kurt Wagner, Bloomberg:

In late 2021, TikTok was on the rise, Facebook interactions were declining after a pandemic boom and young people were leaving the social network in droves. Chief Executive Officer Mark Zuckerberg assembled a handful of veterans who’d built their careers on the Big Blue app to figure out how to stop the bleeding, including head of product Chris Cox, Instagram boss Adam Mosseri, WhatsApp lead Will Cathcart and head of Facebook, Tom Alison.

During discussions that spanned several meetings, a private WhatsApp group, and an eventual presentation at Zuckerberg’s house in Palo Alto, California, the group came to a decision: The best way to revive Facebook’s status as an online destination for young people was to start serving up more content from outside a person’s network of friends and family.

Jason Koebler, 404 Media:

At first, previously viral (but real) images were being run through image-to-image AI generators to create a variety of different but plausibly believable AI images. These images repeatedly went viral, and seemingly tricked real people into believing they were real. I was able to identify a handful of the “source” or “seed” images that formed the basis for this type of content. Over time, however, most AI images on Facebook have gotten a lot easier to identify as AI and a lot more bizarre. This is presumably happening because people will interact with the images anyway, or the people running these pages have realized they don’t need actual human interaction to go viral on Facebook.

Sarah Perez, TechCrunch:

Instagram confirmed it’s testing unskippable ads after screenshots of the feature began circulating across social media. These new ad breaks will display a countdown timer that stops users from being able to browse through more content on the app until they view the ad, according to informational text displayed in the Instagram app.

These pieces each seem like they are circling a theme of a company finding the upper bound of its user base, and then squeezing it for activity, revenue, and promising numbers to report to investors. Unlike Zitron, I am not convinced we are watching Facebook die. I think Koebler is closer to the truth: we are watching its zombification.

⌥ Permalink
