
⌥ A Questionable A.I. Plateau

By: Nick Heer
2 December 2025 at 05:33

The Economist:

On November 20th American statisticians released the results of a survey. Buried in the data is a trend with implications for trillions of dollars of spending. Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11% (see chart 1). Adoption has fallen sharply at the largest businesses, those employing over 250 people. Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.

[…]

Even unofficial surveys point to stagnating corporate adoption. Jon Hartley of Stanford University and colleagues found that in September 37% of Americans used generative AI at work, down from 46% in June. A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.

I am skeptical of the metrics used by the Economist to produce this summary, in part because they are all over the place, and also because they are mostly surveys. I am not sure people always know they are using a generative A.I. product, especially when those features are increasingly just part of the modern office software stack.

While the Economist has an unfortunate allergy to linking to its sources, I wanted to track them down because a fuller context is sometimes more revealing. I believe the U.S. Census data is the Business Trends and Outlook Survey, though I am not certain because its charts are just plain, non-interactive images. In any case, it is the Economist’s own estimate of falling — not stalling — adoption by workers, not an estimate produced by the Census Bureau, which is curious given that two of its other sources indicate more of a plateau than a decline.

The Hartley et al. survey is available here and contains some fascinating results beyond the specific figures highlighted by the Economist — in particular, that the construction industry has the fourth-highest adoption of generative A.I., that Gemini is shown in Figure 9 as more popular than ChatGPT even though the text on page 7 indicates the opposite, and that the word “Microsoft” does not appear once in the entire document. I have some admittedly uninformed and amateur questions about its validity. At any rate, this is the only source the Economist cites that indicates a decline.

The data point attributed to the tracker operated by the Federal Reserve Bank of St. Louis is curious. The Economist notes “in August 2024, 12.1% of working-age adults used generative A.I. every day at work. A year later 12.6% did”, but I am looking at the dashboard right now, and it says the share using generative A.I. daily at work is 13.8%, not 12.6%. In the same time period, the share of people using it “at least once last week” jumped from 36.1% to 46.9%. I have no idea where that 12.6% number came from.

Finally, Ramp’s data is easy enough to find. Again, I have to wonder about the Economist’s selective presentation. If you switch the chart from an overall view to a sector-based view, you can see adoption of paid subscriptions has more than doubled in many industries compared to October last year. This is true even in “accommodation and food services”, where I have to imagine use cases are few and far between.

Tracking down the actual sources of the Economist’s data has left me skeptical of the article’s premise. However, plateauing interest — at least for now — makes sense to me on a gut level. There is a ceiling to the work one can entrust to interns or entry-level employees, and many of today’s A.I. tools share roughly the same ceiling. There are also sector-level limits. Consider Ramp’s data showing high adoption in the tech and finance industries, with considerably less in sectors like healthcare and food services. (Curiously, Ramp says only 29% of the U.S. construction industry has a subscription to generative A.I. products, while Hartley et al. say over 40% of the construction industry is using it.)

I commend any attempt to figure out how useful generative A.I. is in the real world. One of the problems with this industry right now is that its biggest purveyors are not public companies and, therefore, have fewer disclosure requirements. Like any company, they are incentivized to inflate their importance, but we have little understanding of how much they are exaggerating. If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview — something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.

The biggest public A.I. firm is arguably Microsoft. It has rolled out Copilot to Windows and Office users around the world. Again, however, its press releases leave much to be desired. Levi Strauss employees, Microsoft says, “report the devices and operating system have led to significant improvements in speed, reliability and data handling, with features like the Copilot key helping reduce the time employees spend searching and free up more time for creating”. Sure. In another case study, Microsoft and Pantone brag about the integration of a colour palette generator that you can use with words instead of your eyes.

Microsoft has every incentive to pretend Copilot is a revolutionary technology. For people actually doing the work, however, its ever-nagging presence might be one of many nuisances getting in the way of the job that person actually knows how to do. A few months ago, the company replaced the familiar Office portal with a Copilot prompt box. It is still little more than a thing I need to bypass to get to my work.

All the stats and apparent enthusiasm about A.I. in the workplace are, as far as I can tell, a giant mess. A problem with this technology is that the ways in which it is revolutionary are often not very useful, its practical application in a work context is a mixed bag that depends on industry and role, and its hype encourages otherwise respectable organizations to suggest their proximity to its promised future.

The Economist being what it is, much of this article revolves around the insufficiently realized efficiency and productivity gains, and that is certainly something for business-minded people to think about. But there are more fundamental issues with generative A.I. to struggle with. It is a technology built on a shaky foundation. It shrinks the already-scant field of entry-level jobs. Its results are unpredictable and can validate harm. The list goes on, yet it is being loudly inserted into our SaaS-dominated world as a top-down mandate.

It turns out A.I. is not magic dust you can sprinkle on a workforce to double their productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.

A.I. Mania Looks and Feels Bigger Than the .Com Bubble

By: Nick Heer
25 November 2025 at 03:41

Fred Vogelstein, Crazy Stupid Tech — which, again, is a compliment:

We’re not only in a bubble but one that is arguably the biggest technology mania any of us have ever witnessed. We’re even back reinventing time. Back in 1999 we talked about internet time, where every year in the new economy was like a dog year – equivalent to seven years in the old.

Now VCs, investors and executives are talking about AI dog years – let’s just call them mouse years – which is internet time divided by five? Or is it by 11? Or 12? Sure, things move way faster than they did a generation ago. But by that math one year today now equals 35 years in 1995. Really?

A sobering piece that, unfortunately, is somewhat undercut because it lacks a single mention of layoffs, jobs, employment, or any other indication that this bubble will wreck the lives of people far outside its immediate orbit. In fairness, few of the related articles linked at the bottom mention that, either. Articles in Stratechery, the Brookings Institution, and the New York Times want you to think a bubble is just a sign of building something new and wonderful. A Bloomberg newsletter mentions layoffs only in the context of changing odds in prediction markets — I chuckled — while M.G. Siegler notes all the people who are being laid off while new A.I. hires get multimillion-dollar employment packages. Maybe all the pain and suffering likely to result from the implosion of this massive sector is too obvious for the MBA and finance types to mention. I think it is worth stating, though, not least because doing so acknowledges other people are worth caring about at least as much as innovation and growth and all that stuff.



Zoom CEO Eric Yuan Lies About A.I. Leading to Shorter Work Weeks

By: Nick Heer
28 October 2025 at 23:43

Sarah Perez, TechCrunch:

Zoom CEO Eric Yuan says AI will shorten our workweek

[…]

“Today, I need to manually focus on all those products to get work done. Eventually, AI will help,” Yuan said.

“By doing that, we do not need to work five days a week anymore, right? … Five years out, three days or four days [a week]. That’s a goal,” he said.

So far, technological advancements have not — in general — produced a shorter work week; that was a product of collective labour action. We have been promised a shorter week before. We do not need to carry water for people who peddle obvious lies. We will always end up being squeezed for greater output.


OpenAI Documents Reveal Punitive Tactics Toward Former Employees

By: Nick Heer
23 May 2024 at 02:16

Kelsey Piper, Vox:

Questions arose immediately [over the resignations of key OpenAI staff]: Were they forced out? Is this delayed fallout of Altman’s brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

Sam Altman, [sic]:

we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.

Piper, again, in a Vox follow-up story:

In two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars — a tight timeline for a decision of that magnitude, and one that left little time to find outside counsel.

[…]

Most ex-employees folded under the pressure. For those who persisted, the company pulled out another tool in what one former employee called the “legal retaliation toolbox” he encountered on leaving the company. When he declined to sign the first termination agreement sent to him and sought legal counsel, the company changed tactics. Rather than saying they could cancel his equity if he refused to sign the agreement, they said he could be prevented from selling his equity.

For its part, OpenAI says in a statement quoted by Piper that it is updating its documentation and releasing former employees from the more egregious obligations of their termination agreements.

This next part is totally inside baseball and, unless you care about big media company CMS migrations, it is probably uninteresting. Anyway. In reading Piper’s second story, I noticed an updated design, which launched yesterday. Left unmentioned in that announcement is that it is, as far as I can tell, the first of Vox’s Chorus-powered sites to be migrated to WordPress. The CMS resides on a platform subdomain, which is not itself important. But it did indicate to me that the Verge may be next — platform.theverge.com resolves to a WordPress login page — and, based on its DNS records, Polygon could follow shortly thereafter.
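This sort of DNS sleuthing is easy to reproduce at home. Here is a minimal sketch of the idea, assuming a POSIX shell; the `looks_like_wordpress` helper and the WordPress VIP hostname patterns it matches (`wpvip.com`, `go-vip.net`, `wordpress.com`) are my own illustrative assumptions, not anything from the post:

```shell
# Hypothetical helper: guess whether a DNS CNAME target points at a managed
# WordPress host of the kind large publishers migrate to. The hostname
# patterns below are assumptions for illustration only.
looks_like_wordpress() {
  case "$1" in
    *wpvip.com|*go-vip.net|*wordpress.com) echo yes ;;
    *) echo no ;;
  esac
}

# In practice you would feed it live DNS data, for example:
#   looks_like_wordpress "$(dig +short CNAME platform.theverge.com | sed 's/\.$//')"
looks_like_wordpress "example.go-vip.net"   # prints "yes"
```

Results would, of course, change as a migration progresses, and a CNAME is only a hint, not proof.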

