You Are Just a Guest on Meta’s A.I.-Filled Platforms

By: Nick Heer

Jason Koebler, 404 Media:

The best way to think of the slop and spam that generative AI enables is as a brute force attack on the algorithms that control the internet and which govern how a large segment of the public interprets the nature of reality. It is not just that people making AI slop are spamming the internet, it’s that the intended “audience” of AI slop is social media and search algorithms, not human beings.

[…]

“Brute force” is not just what I have noticed while reporting on the spammers who flood Facebook, Instagram, TikTok, YouTube, and Google with AI-generated spam. It is the stated strategy of the people getting rich off of AI slop.

Regardless of whether you have been following Koebler’s A.I. slop beat, you owe it to yourself to read at least this article. The goal, Koebler surmises, is for Meta to target slop and ads at users in more-or-less the same way and, because this slop is cheap and fast to produce, it is a bottomless cup of engagement metrics.

Koebler, in a follow-up article:

As I wrote last week, the strategy with these types of posts is to make a human linger on them long enough to say to themselves “what the fuck,” or to be so horrified as to comment “what the fuck,” or send it to a friend saying “what the fuck,” all of which are signals to the algorithm that it should boost this type of content but are decidedly not signals that the average person actually wants to see this type of thing. The type of content that I am seeing right now makes “Elsagate,” the YouTube scandal in which disturbing videos were targeted to kids and resulted in various YouTube reforms, look quaint.

Matt Growcoot, PetaPixel:

Meta is testing an Instagram feature that suggests AI-generated comments for users to post beneath other users’ photos and videos.

Meta is going to make so much money before it completely disintegrates on account of nobody wanting to spend this much time around a thin veneer over robots.

⌥ Permalink

Perplexity Is a Bullshit Machine

By: Nick Heer

Dhruv Mehrotra and Tim Marchman, of Wired, were able to confirm Robb Knight’s finding that Perplexity ignores the very instructions it gives website owners to opt out of scraping. And there is more:

The WIRED analysis also demonstrates that despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.

I had not played around with Perplexity very much, but I tried asking it “what is the bullshit web?”. Its summaries in response to prompts with and without a question mark are slightly different, but there is one constant: it does not cite my original article, only a bunch of (nice) websites which linked to or reblogged it.

⌥ Permalink

Google’s A.I. Answers Said to Put Glue in Pizza, So Katie Notopoulos Made Some Pizza

By: Nick Heer

Jason Koebler, 404 Media:

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

Katie Notopoulos, putting the “business” in Business Insider:

I knew my assignment: I had to make the Google glue pizza. (Don’t try this at home! I risked myself for the sake of the story, but you shouldn’t!)

My timeline on three entirely separate social networks — Bluesky, Mastodon, and Threads — has been chock full of examples of Google’s A.I. answers absolutely eating dirt — or, in one case, rocks — in the face of obvious satire and shitposting. Well, obvious to us. Computers, it seems, have not figured out glue and gasoline are bad for food.

The A.I. answers from Google are not all yucks and chuckles, unfortunately.

Nic Lake:

Yesterday (Part 1) I saw that mushrooms post, and knew something like that was going to get people hurt. I didn’t really think that (CONTENT WARNING) asking how best to deal with depression was going to be next on the “shit I didn’t want to see” Bingo card.

The organizations know. They know that these tools are not ready. They call it a “beta” and feed it to you anyway.

Google is manually removing A.I. results where appropriate, and it is claiming some of the screenshots which have been circulating were faked in some way, without specifying which ones.

To quote week-ago me:

Given the sliding quality of Google’s results, it seems quite bold for the company to be confident users worldwide will trust its generated answers.

Quite bold, indeed.

I do not expect perfection, but it is downright embarrassing that Google rolled out a product so unreliable and occasionally dangerous that it continues to tarnish an already-suffering reputation. Google’s Featured Snippets were bad enough. Now it is in the process of rolling out a whole new level of overconfident nonsense to the entire world, fixing it as everyone tests its limits.

⌥ Permalink