
Apple’s Missteps in A.I. Are Partly the Fault of A.I.

By: Nick Heer
28 March 2025 at 23:59

Allison Morrow, CNN:

Tech columnists such as the New York Times’ Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around.

“Apple is not meeting the moment in AI,” Roose said on his podcast, Hard Fork, earlier this month. “I just think that when you’re building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.”

To which I would counter, respectfully: Absolutely not.

Via Dan Moren, of Six Colors:

The thesis of the piece is not about excusing Apple’s AI missteps, but zooming out to take a look at the bigger picture of why AI is everywhere, and make the argument that maybe Apple is well-served by not necessarily being on the cutting edge of these developments.

If that is what this piece is arguing, I do not think Apple makes a good case for it. When it launched Apple Intelligence, it could have said it was being more methodical, framing a modest but reliable feature set as a picture of responsibility. This would be a thin layer of marketing speak covering the truth, of course, but that would at least set expectations. Instead, what we got was a modest and often unreliable feature set with mediocre implementation, and the promise of a significantly more ambitious future that has been kicked down the road.

These things do not carry the Apple promise, as articulated by Morrow, of “design[ing] things that are accessible out of the box”, products for which “[y]ou will almost never need a user manual filled with tiny print”. It all feels flaky and not particularly nice to use. Even the toggle to turn it off is broken.

⌥ Permalink

⌥ Apple Could Build Great Platforms for Third-Party A.I. If It Wanted To

By: Nick Heer
22 March 2025 at 04:16

There is a long line of articles questioning Apple’s ability to deliver on artificial intelligence because of its position on data privacy. Today, we got another in the form of a newsletter.

Reed Albergotti, Semafor:

Meanwhile, Apple was focused on vertically integrating, designing its own chips, modems, and other components to improve iPhone margins. It was using machine learning on small-scale projects, like improving its camera algorithms.

[…]

Without their ads businesses, companies like Google and Meta wouldn’t have built the ecosystems and cultures required to make them AI powerhouses, and that environment changed the way their CEOs saw the world.

Again, I will emphasize this is a newsletter. It may seem like an article from a prestige publisher that prides itself on “separat[ing] the facts from our views”, but you might notice how, aside from citing some quotes and linking to ads, none of Albergotti’s substantive claims are sourced. This is just riffing.

I remain skeptical. Albergotti frames this as both a mindset shift and a necessity for advertising companies like Google and Meta. But the company synonymous with the A.I. boom, OpenAI, does not have the same business model. Besides, Apple behaves like other A.I. firms by scraping the web and training models on massive amounts of data. The evidence for this theory seems pretty thin to me.

But perhaps a reluctance to be invasive and creepy is one reason why personalized Siri features have been delayed. I hope Apple does not begin to mimic its peers in this regard; privacy should not be sacrificed. I think it is silly to be dependent on corporate choices rather than legislation to determine this, but that is the world some of us live in.

Let us concede the point anyhow, since it suggests a role Apple could fill by providing an architecture for third-party A.I. on its products. It does not need to deliver everything to end users; it can focus on building a great platform. Albergotti might sneeze at “designing its own chips […] to improve iPhone margins”, which I am sure was one goal, but it has paid off in ridiculously powerful Macs perfect for A.I. workflows. And, besides, it has already built some kind of plugin architecture into Apple Intelligence because it has integrated ChatGPT. There is no way for other providers to add their own extension — not yet, anyhow — but the system is there.

Gus Mueller:

The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.

Via Federico Viticci, MacStories:

Which brings me to my second point. The other feature that I could see Apple market for a “ChatGPT/Claude via Apple Intelligence” developer package is privacy and data retention policies. I hear from so many developers these days who, beyond pricing alone, are hesitant toward integrating third-party AI providers into their apps because they don’t trust their data and privacy policies, or perhaps are not at ease with U.S.-based servers powering the popular AI companies these days. It’s a legitimate concern that results in lots of potentially good app ideas being left on the table.

One of Apple’s specialties is in improving the experience of using many of the same technologies as everyone else. I would like to see that in A.I., too, but I have been disappointed by its lacklustre efforts so far. Even long-running projects where it has had time to learn and grow have not paid off, as anyone can see in Siri’s legacy.

What if you could replace these features? What if Apple’s operating systems were great platforms by which users could try third-party A.I. services and find the ones that fit them best? What if Apple could provide certain privacy promises, too? I bet users would want to try alternatives in a heartbeat. Apple ought to welcome the challenge.

Apple Head Computer, Apple Intelligence, and Apple Computer Heads

By: Nick Heer
20 March 2025 at 22:26

Benedict Evans:

That takes us to xR, and to AI. These are fields where the tech is fundamental, and where there are real, important Apple kinds of questions, where Apple really should be able to do something different. And yet, with the Vision Pro Apple stumbled, and then with AI it’s fallen flat on its face. This is a concern.

The Vision Pro shipped as promised and works as advertised. But it’s also both too heavy and bulky and far too expensive to be a viable mass-market consumer product. Hugo Barra called it an over-engineered developer kit — you could also call it an experiment, or a preview or a concept. […]

The main problem, I think, with the reception of the Vision Pro is that it was passed through the same marketing lens as Apple uses to frame all its products. I have no idea if Apple considers the sales of this experiment acceptable, the tepid developer adoption predictable, or the skeptical press understandable. However, if you believe the math on display production and estimated sales figures, they more-or-less match.

Of course, as Evans points out, Apple does not ship experiments:

The new Siri that’s been delayed this week is the mirror image of this. […]

However, it clearly is a problem that the Apple execution machine broke badly enough for Apple to spend an hour at WWDC and a bunch of TV commercials talking about vapourware that it didn’t appear to understand was vapourware. The decision to launch the Vision Pro looks like a related failure. It’s a big problem that this is late, but it’s an equally big problem that Apple thought it was almost ready.

Unlike the Siri feature delay, I do not think the Vision Pro’s launch affects the company’s credibility at all. It can keep pushing that thing and trying to turn it into something more mass-market. This Siri stuff is going to make me look at WWDC in a whole different light this year.

Mark Gurman, Bloomberg:

Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development, so he’s moving over another top executive to help: Vision Pro creator Mike Rockwell. In a new role, Rockwell will be in charge of the Siri virtual assistant, according to the people, who asked not to be identified because the moves haven’t been announced.

[…]

Rockwell is known as the brains behind the Vision Pro, which is considered a technical marvel but not a commercial hit. Getting the headset to market required a number of technical breakthroughs, some of which leveraged forms of artificial intelligence. He is now moving away from the Vision Pro at a time when that unit is struggling to plot a future for the product.

If you had no context for this decision, it looks like Rockwell is being moved off Apple’s hot new product and onto a piece of software that perennially disappoints. It looks like a demotion. That is how badly Siri needs a shakeup.

Giannandrea will remain at the company, even with Rockwell taking over Siri. An abrupt departure would signal publicly that the AI efforts have been tumultuous — something Apple is reluctant to acknowledge. Giannandrea’s other responsibilities include oversight of research, testing and technologies related to AI. The company also has a team reporting to Giannandrea investigating robotics.

I figured as much. Gurman does not clarify in this article how much of Apple Intelligence falls under Giannandrea’s rubric, and how much is part of the “Siri” stuff that is being transferred to Rockwell. It does not sound as though Giannandrea will have no further Apple Intelligence responsibilities — yet — but the high-profile public-facing stuff is now overseen by Rockwell and, ultimately, Craig Federighi.

⌥ Permalink

Siri Invented a Calendar Event and Then Hallucinated a Helpful Suggestion

By: Nick Heer
5 December 2024 at 17:01

Go figure — just one day after writing about how Apple’s ambiguous descriptions of supposedly clever features have the potential to rob users of their trust, my phone has become haunted.

I saw a suggestion from Siri that I turn on Do Not Disturb until the end of an event in my calendar — a reservation at a restaurant from 8:30 until 10:00 this morning. No such matching event was in Fantastical. It was, however, shown in the Calendar app as a Siri Suggestion.

What I think happened is that I was looking at that restaurant on OpenTable at perhaps 8:00 this morning. I was doing so in my web browser on my Mac, and I was not logged into OpenTable. My Mac and iPhone are both running operating system beta builds with Apple Intelligence enabled. Siri must have interpreted this mere browsing as me making a reservation, and then added it to my calendar without my asking, and then made a suggestion based on that fictional event.

This was not helpful. It was, in fact, perplexing and creepy. I do not know how all of these things were able to work together to produce this result, but I do not like it at all. It is obvious how this would make anyone question whether they can trust Apple Intelligence, A.I. systems generally, Siri, and their personal privacy. Truly bizarre.

⌥ Permalink

⌥ Ambiguity and Trust in Apple Intelligence

By: Nick Heer
5 December 2024 at 04:59

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history of changing preferences and of dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it was the one that introduced Apple Intelligence, which creates new ambiguity between what is happening on-device and what goes to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position necessarily must understand what is being stored and processed in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. In launching these features in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions are taking place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. This is related to Siri Suggestions which appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.
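
For developers, these suggestions are driven by actions that apps declare to the system through the App Intents framework. As a minimal sketch of the mechanism only — the intent name and behaviour below are hypothetical, not taken from any shipping app:

    import AppIntents

    // Hypothetical example: an app exposes an action the system can learn from
    // and surface as a Siri Suggestion. Nothing here is specific to Apple
    // Intelligence or to off-device processing.
    struct ReturnMissedCallIntent: AppIntent {
        static var title: LocalizedStringResource = "Return Missed Call"
        static var description = IntentDescription("Calls back the most recent missed caller.")

        func perform() async throws -> some IntentResult {
            // The app's own calling logic would go here.
            return .result()
        }
    }

As I understand it, the system decides on-device if and when to surface actions like this; the “Learn from this App” toggle Ackerman saw governs that per-app learning.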

Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. As with any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors’ and so it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.

Keep the crap going

By: VM
6 December 2024 at 09:16

Have you seen the new ads for Google Gemini?

In one version, just as a young employee is grabbing her fast-food lunch, she notices her snooty boss get on an elevator. So she drops her sandwich, rushes to meet her just as the doors are about to close, and submits her proposal in the form of a thick dossier. The boss asks her for a 500-word summary to consume during her minute-long elevator ride. The employee turns to Google Gemini, which digests the report and spits out the gist, and which the employee regurgitates to the boss’s approval. The end.


Isn’t this unsettling? Google isn’t alone either. In May this year, Apple released a tactless ad for its new iPad Pro. From Variety:

The “Crush!” ad shows various creative and cultural objects — including a TV, record player, piano, trumpet, guitar, cameras, a typewriter, books, paint cans and tubes, and an arcade game machine — getting demolished in an industrial press. At the end of the spot, the new iPad Pro pops out, shiny and new, with a voiceover that says, “The most powerful iPad ever is also the thinnest.”

After the backlash, Apple backtracked and apologised — and then produced two ads in November for its Apple Intelligence product showcasing how it could help thoughtless people continue to be thoughtless.



The second video is additionally weird because it seems to suggest that reaching all the way for an AI tool makes more sense than setting a reminder in the calendar app that comes with every smartphone these days.

And they are now joined in spirit by Google, because bosses can now expect their subordinates to Geminify their way through work that would otherwise be tedious, or simply impossible on punishingly short deadlines, without the bosses having to reconsider whether what they ask of their teammates is reasonable. (This includes a dossier of details that ultimately won’t be read.)

If AI is going to absorb the shock that comes of someone being crappy to you, will we continue to notice that crappiness and demand they change or — as Apple and Google now suggest — will we blame ourselves for not using AI to become crappy ourselves? To quote from a previous post:

When machines make decisions, the opportunity to consider the emotional input goes away. This is a recurring concern I’m hearing about from people working with or responding to AI in some way. … This is Anna Mae Duane, director of the University of Connecticut Humanities Institute, in The Conversation: “I fear how humans will be damaged by the moral vacuum created when their primary social contacts are designed solely to serve the emotional needs of the ‘user’.”

The applications of these AI tools have really blossomed and millions of people around the world are using them for all sorts of tasks. But even if the ads don’t pigeonhole these tools, they reveal how their makers — Apple and Google — are thinking about what the tools bring to the table and what these tech companies believe to be their value. To Google’s credit at least, its other ads in the same series are much better (see here and here for examples), but they do need to actively cut down on supporting or promoting the idea that crappy behaviour is okay.

Apple Intelligence-Related Instructions

By: Nick Heer
6 August 2024 at 03:22

Reddit user devanxd2000:

I was digging into the system files for the update and I found a bunch of json files containing what appears to be prompts given to the AI in the background. I found it interesting and thought I’d share.

You can find them here: /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels

There’ll be a bunch of folders, some of them will have metadata.json files like this.

Wes Davis, the Verge:

Files I browsed through refer to the model as “ajax,” which some Verge readers might recall as the rumored internal name for Apple’s LLM last year.

It is unclear to me if these directly represent the instructions which interpret and produce the results users see. These could be something else, like a file involved in the development process but not related to how it functions on a user’s device; we just do not know.

But, assuming — quite fairly, I might add — that these instructions are what underpin features like message summaries and custom Memories in Photos, it is kind of interesting to see them written in plain English. They advise the model to “only output valid [JSON] and nothing else”, and warn it “do not hallucinate” and “do not make up factual information”. The latter two are just good rules for life. I am not sure what I expected, but I guess it was not instructions this plainly visible. Then again, it would make sense for all of this to feed through what I presume is the same system underpinning the revised version of Siri, which needs to interpret plain English commands. After all, programming is just a specific version of a language.
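
For anyone who wants to poke at these files themselves, they are ordinary JSON sitting on disk. Here is a minimal sketch for a Mac running the relevant beta; the directory layout and the structure of each metadata.json file are assumptions on my part, so this simply dumps whatever it finds:

    import Foundation

    // Walk the generative models asset directory and print any metadata.json
    // files it contains. Requires a machine with the beta and these assets installed.
    let root = "/System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels"
    let fm = FileManager.default

    if let enumerator = fm.enumerator(atPath: root) {
        for case let relativePath as String in enumerator where relativePath.hasSuffix("metadata.json") {
            let url = URL(fileURLWithPath: root).appendingPathComponent(relativePath)
            guard let data = try? Data(contentsOf: url),
                  let json = try? JSONSerialization.jsonObject(with: data, options: []) else { continue }
            print(relativePath)
            print(json)
        }
    }

Run as a plain command-line script, it just prints the raw contents; whether those match what ships on a user’s device is, as noted above, unknown.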

⌥ Permalink

Apple Says It Will Prevent E.U. Users From Accessing Select New Features, Including Apple Intelligence, Until It Has Achieved DMA Compliance

By: Nick Heer
21 June 2024 at 20:15

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly not-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned a benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. It is much newer than antitrust laws in both the U.S. and E.U. and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

⌥ Permalink
