
Conservapedia Still Exists

By: Nick Heer
29 October 2025 at 19:22

I am not sure it is worth writing at length about Grokipedia, the Elon Musk-funded effort to quite literally rewrite history from the perspective of a robot taught to avoid facts upsetting to the U.S. far right. Perhaps it will be an unfortunate success — the Fox News of encyclopedias, giving ideologues comfortable information as they further isolate themselves.

It is less a Wikipedia competitor than it is a machine-generated alternative to Conservapedia. Founded by Andy Schlafly, an attorney and son of Phyllis Schlafly, Conservapedia was an attempt to make an online encyclopedia from a decidedly U.S. conservative, American-exceptionalist perspective. Seventeen years ago, Schlafly’s effort was briefly profiled by Canadian television and, somehow, the site is still running. Perhaps that is the fate of Grokipedia: a brief curiosity, followed by traffic coming only from a self-selecting mix of weirdos and YouTubers needing material.

⌥ Permalink

A Profile of Setlist.fm

By: Nick Heer
29 October 2025 at 00:01

Marc Hogan, New York Times (gift link):

Enter Setlist.fm. The wikilike site, where users document what songs artists play each night on tour, has grown into a vast archive, updated in real time but also reaching back into the historical annals. From the era of Mozart (seriously!) to last night’s Chappell Roan show, Setlist.fm offers reams of statistics — which songs artists play most often, when they last broke out a particular tune. In recent years, the site has begun posting data about average concert start times and set lengths.

Good profile. I had no idea it was owned by Live Nation.

I try to avoid Setlist.fm ahead of a show, but I check it immediately when I get home and for the days following. I might be less familiar with an artist’s catalogue, and this is particularly true of an opener, so it lets me track down particular songs that were played. It is one of the internet’s great resources.

⌥ Permalink

Zoom CEO Eric Yuan Lies About A.I. Leading to Shorter Work Weeks

By: Nick Heer
28 October 2025 at 23:43

Sarah Perez, TechCrunch:

Zoom CEO Eric Yuan says AI will shorten our workweek

[…]

“Today, I need to manually focus on all those products to get work done. Eventually, AI will help,” Yuan said.

“By doing that, we do not need to work five days a week anymore, right? … Five years out, three days or four days [a week]. That’s a goal,” he said.

So far, technological advancements have not — in general — produced a shorter work week; that was a product of collective labour action. We have been promised a shorter week before. We do not need to carry water for people who peddle obvious lies. We will always end up being squeezed for greater output.

⌥ Permalink

Colorado Police Officer Caught on Doorbell Camera Talking About Surveillance Powers

By: Nick Heer
28 October 2025 at 18:34

Andrew Kenney, Denverite:

It was Sgt. Jamie Milliman [at the door], a police officer with the Columbine Valley Police Department who covers the town of Bow Mar, which begins just south of [Chrisanna] Elser’s home.

[…]

“You know we have cameras in that jurisdiction and you can’t get a breath of fresh air, in or out of that place, without us knowing, correct?” he said.

“OK?” Elser, a financial planner in her 40s, responded in a video captured by her smart doorbell and viewed by Denverite.

“Just as an example,” the sergeant told her, she had “driven through 20 times the last month.”

This story is a civil liberties rollercoaster. Milliman was relying on a nearby town’s use of Flock license plate cameras and Ring doorbells — which may also be connected to the Flock network — to accuse Elser of theft and issue a summons. Elser was able to get the summons dropped by compiling evidence from, in part, the cameras and GPS system on her truck. Milliman’s threats were recorded by a doorbell camera, too. The whole thing is creepy, and all over a $25 package stolen off a doorstep.

I have also had things stolen from me, and I wish the police officers I spoke to had a better answer for me than shrugging their shoulders and saying, in effect, this is not worth our time. But this situation is like a parallel universe ad for Amazon and its Ring subsidiary. Is this the path toward “very close to zero[ing] out crime”? It is not worth it.

⌥ Permalink

Apple’s Tedious and Expensive Procedure for Replacing the Battery in the New MacBook Pro

By: Nick Heer
28 October 2025 at 03:37

Carsten Frauenheim and Elizabeth Chamberlain, iFixit:

Apple’s official replacement process requires swapping the entire top case, keyboard and all, just to replace this single consumable component. And it has for a long time. That’s a massive and unreasonable job, requiring complete disassembly and reassembly of the entire device. We’re talking screws, shields, logic board, display, Touch ID, trackpad, everything. In fact, the only thing that doesn’t get transferred are the keyboard and speakers. The keyboard is more or less permanently affixed to this top aluminum, and the speakers are glued in — which, I guess, according to Apple means that the repair is out of the scope of DIY (we disagree).

At least one does not need to send in their laptop for a mere battery replacement. Still, I do not understand why this — the most predictable repair — is so difficult and expensive.

I hate to be that guy, but the battery for a mid-2007 15-inch MacBook Pro used to cost around $150 (about $220 inflation-adjusted) and could be swapped with two fingers. The official DIY solution for replacing the one in my M1 MacBook Pro is over $700, though there is a $124 credit for returning the replaced part. The old battery was, of course, a little bit worse: 60 watt-hours compared to 70 watt-hours in the one I am writing this with. I do not even mind the built-in-ness of this battery. But it should not cost an extra $500 and require swapping the rest of the top case parts.

[…] But for now, this tedious and insanely expensive process is the only offering they make for changing out a dead battery. Is it just a byproduct of this nearly half-a-decade-old chassis design, something that won’t change until the next rethink? We don’t know.

“Nearly half-a-decade-old” is a strange way of writing “four years”, almost like it is attempting to emphasize the age of this design. Four years old does not seem particularly ancient to me. I thought iFixit’s whole vibe was motivating people to avoid the consumerist churn encouraged by rapid redesigns.

⌥ Permalink

Reddit Sues Perplexity and Three Data Scraping Companies Because They Crawled Google

By: Nick Heer
25 October 2025 at 05:48

Matt O’Brien, Associated Press:

Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

[…]

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

Mike Masnick, Techdirt:

Most reporting on this is not actually explaining the nuances, which require a deeper understanding of the law, but fundamentally, Reddit is NOT arguing that these companies are illegally scraping Reddit, but rather that they are illegally scraping… Google (which is not a party to the lawsuit) and in doing so violating the DMCA’s anti-circumvention clause, over content Reddit holds no copyright over. And, then, Perplexity is effectively being sued for linking to Reddit.

This is… bonkers on so many levels. And, incredibly, within their lawsuit, Reddit defends its arguments by claiming it’s filing this lawsuit to protect the open internet. It is not. It is doing the exact opposite.

I am glad Masnick wrote about this despite my disagreement with his views on how much control a website owner ought to have over scraping. This is a necessary dissection of the suit, though I would appreciate views on it from actual intellectual property lawyers. They might be able to explain whether a positive outcome of this case for Reddit would produce clear rules delineating this conduct from the ways in which artificial intelligence companies have so far benefitted from a generous reading of fair use and terms of service documents.

⌥ Permalink

Apple Threatens to Withdraw App Tracking Transparency in Europe

By: Nick Heer
25 October 2025 at 03:45

Andrej Sokolow, Deutsche Presse Agentur:

Apple could switch off a function that prevents users’ apps from tracking their behaviour across various services and websites for advertising purposes in Germany and other European countries.

The iPhone manufacturer on Wednesday complained that it has experienced constant headwinds from the tracking industry.

“Intense lobbying efforts in Germany, Italy and other countries in Europe may force us to withdraw this feature to the detriment of European consumers,” Apple said in a statement.

It is a little rich for Apple to be claiming victimhood in the face of “intense lobbying efforts” by advertising companies when it is the seventh highest spender on lobbying in the European Union. Admittedly, it spends about one-third as much as Meta in Germany, but that is not because Apple cannot afford to spend more. Apple’s argument is weak.

In any case, this is another case where Apple believes it should have a quasi-regulatory role. As I wrote last month:

[…] Apple seems to believe it is its responsibility to implement technical controls to fulfill its definition of privacy and, if that impacts competition and compatibility, too bad. E.U. regulators seem to believe it has policy protections for user privacy, and that users should get to decide how their private data is shared.

I believe there are people within Apple who care deeply about privacy. However, when Apple also gets to define privacy and tracking, it is no coincidence it found an explanation allowing it to use platform activity and in-app purchases for ad targeting. This is hardly as sensitive as the tracking performed by Google and Meta, and Apple does not use third-party data for targeting.

But why would it? Apple owns the platform and, if it wanted, could exploit far more user information without it being considered “tracking” since it is all first-party data. That it does not is a positive reflection of self-policing and, ideally, something it will not change. But it could.

What E.U. authorities are concerned about is this self-serving definition of privacy and the self-policing that results, conflicting with the role of European regulators and privacy laws, and its effects on competition. I think those are reasonable grounds for questioning the validity of App Tracking Transparency. Furthermore, the consequences emanating from violations of privacy law are documented; Meta was penalized €1.2 billion as a result of GDPR violations. Potential violations of App Store policy, on the other hand, are handled differently. If Meta has, as a former employee alleges, circumvented App Tracking Transparency, would the penalties be handled by similar regulatory bodies, or would it — like Uber before — be dealt with privately and rather quietly?

The consequences of previous decisions have been frustrating. They result in poorer on-device privacy controls for users in part because Apple is a self-interested party. It would be able to make its case more convincingly if it walked away from the advertising business altogether.

Sokolow:

Apple argues that it has proposed various solutions to the competition authorities, but has not yet been able to dispel their concerns.

The company wants to continue to offer ATT to European users. However, it argued that the competition authorities have proposed complex solutions that would effectively undermine the function from Apple’s point of view.

Specificity would be nice. It would be better if these kinds of conversations could be had in public instead of in vague statements provided on background to select publications.

⌥ Permalink

The Verge Delivers a Bad Article About Amazon’s Ring

By: Nick Heer
24 October 2025 at 04:27

Jennifer Pattison Tuohy, of the Verge, interviewed Ring founder Jamie Siminoff about a new book — which Tuohy has not read — written with Andrew Postman about the success of the company. During this conversation, Tuohy stumbled into Siminoff making a pretty outrageous claim:

While research suggests that today’s video doorbells do little to prevent crime, Siminoff believes that with enough cameras and with AI, Ring could eliminate most of it. Not all crime — “you’ll never stop crime a hundred percent … there’s crimes that are impossible to stop,” he concedes — but close.

“I think that in most normal, average neighborhoods, with the right amount of technology — not too crazy — and with AI, that we can get very close to zero out crime. Get much closer to the mission than I ever thought,” he says. “By the way, I don’t think it’s 10 years away. That’s in 12 to 24 months … maybe even within a year.”

If this sounds ridiculous to you, congratulations, you are thinking harder than whoever wrote the headline on this article:

Ring’s CEO says his cameras can almost ‘zero out crime’ within the next 12 months

The word “almost” and the phrase “very close” are working very hard to keep the core of Siminoff’s claim intact. What he says is that, by this time next year, “normal” communities with enough Ring cameras and a magic dusting of A.I. will have virtually no crime. The caveats are there to imply more nuance, but they are merely an escape hatch for when someone revisits this next year.

The near-complete elimination of crime in “normal” areas — whatever that means — will very obviously not happen. Tuohy cites a 2023 Scientific American story which, in turn, points to articles in MIT Technology Review and CNet. The first debunks a study Ring likes to promote claiming its devices drove a 55% decline in burglaries in Wilshire Park, Los Angeles in 2015, with cameras on about forty homes. Not only does the public data not support this dramatic reduction, but:

Even if the doorbells had a positive effect, it seemed not to last. In 2017, Wilshire Park suffered more burglaries than in any of the previous seven years.

The CNet article collects a series of reports from other police departments indicating Ring cameras have questionable efficacy at deterring crime on a city-wide level.

This is also something we can know instinctually, since we already have plenty of surveillance cameras. A 2019 meta-analysis (PDF) by Eric Piza, et al., found CCTV adoption decreased crime by about 13%. That is not nothing, but it is also a long way from nearly 100%. One could counter that these tests did not factor in Ring’s A.I. features, like summaries of what the camera saw — we have spent so much energy creating summary-making machines — and finding lost dogs.

The counterargument to all of this, however, is that Ring’s vision is a police state enforced by private enterprise. A 2022 paper (PDF) by Dan Calacci, et al., found race was, unsurprisingly, a motivating factor in reports of suspicious behaviour, and that reports within Ring’s Neighbors app were not correlated with the actual frequency of those crimes. Ring recently partnered with Flock, adding a further layer of creepiness.

I will allow that perhaps an article about Siminoff’s book is not the correct place to litigate these claims. By the very same logic, however, the Verge should be more cautious in publishing them, and should not have promoted them in a headline.

⌥ Permalink

App Store Restrictions Face Scrutiny in China, U.K.

By: Nick Heer
23 October 2025 at 04:05

Liam Mo and Brenda Goh, Reuters:

A group of 55 Chinese iPhone and iPad users filed a complaint with China’s market regulator on Monday, a lawyer representing the group said, alleging that Apple abuses its market dominance by restricting app distribution and payments to its own platforms while charging high commissions.

[…]

This marks the second complaint against Apple led by Wang. A similar case filed in 2021 was dismissed by a Shanghai court last year.

Imran Rahman-Jones, BBC News:

But the Competition and Markets Authority (CMA) has designated both Apple and Google as having “strategic market status” – effectively saying they have a lot of power over mobile platforms.

The ruling has drawn fury from the tech giants, with Apple saying it risked harming consumers through “weaker privacy” and “delayed access to new features”, while Google called the decision “disappointing, disproportionate and unwarranted”.

The CMA said the two companies “may be limiting innovation and competition”.

Pretty soon it may be easier to list the significant markets in which Apple is still able to exercise complete control over iOS app distribution.

⌥ Permalink

OpenAI Launches ChatGPT Atlas

By: Nick Heer
22 October 2025 at 04:53

Maxwell Zeff, TechCrunch:

OpenAI announced Tuesday the launch of its AI-powered browser, ChatGPT Atlas, a major step in the company’s quest to unseat Google as the main way people find information online.

The company says Atlas will first roll out on macOS, with support for Windows, iOS, and Android coming soon. OpenAI says the product will be available to all free users at launch.

Atlas, like Perplexity’s Comet, is a Chromium-based browser. You cannot use it without signing in to ChatGPT. As I was completing the first launch experience, shimmering colours radiated from the setup window and — no joke — it looked like my computer’s screen was failing.

OpenAI:

As you use Atlas, ChatGPT can get smarter and more helpful, too. Browser memories let ChatGPT remember context from the sites you visit and bring that context back when you need it. This means you can ask ChatGPT questions like: “Find all the job postings I was looking at last week and create a summary of industry trends so I can prepare for interviews.” Browser memories in Atlas are completely optional, and you’re always in control: you can view or archive them at any time in settings, and deleting browsing history deletes any associated browser memories.

I love the idea of this. So often, I need to track down something I remember reading, but have only the haziest recollection of what, exactly, it is. I want this in my life. Yet I have zero indication I can trust OpenAI with retaining and synthesizing useful information from my browsing history.

The company says it only retains pages until they have been summarized, and I am sure it thinks it is taking privacy as seriously as it can. But what about down the road? What could it do with all of this data it does retain — information that is tied to your ChatGPT account? OpenAI wants to be everywhere, and it wants to know everything about you to an even greater extent than Google or Meta have been able to accomplish. Why should I trust it? What makes the future of OpenAI look different than the trajectories of the information-hungry businesses before it?

⌥ Permalink

Federico Viticci’s M5 iPad Pro Review

By: Nick Heer
21 October 2025 at 22:35

Even if you are not interested in the iPad or Apple product news generally, I recommend making time for Federico Viticci’s review, at MacStories, of the new iPad Pro. Apple claims 3.5× performance gains with A.I. models, so Viticci attempted to verify that number. Unfortunately, he ran into some problems.

Viticci (emphasis his):

This is the paradox of the M5. Theoretically speaking, the new Neural Accelerator architecture should lead to notable gains in token generation and prefill time that may be appreciated on macOS by developers and AI enthusiasts thanks to MLX (more on this below). However, all these improvements amount to very little on iPadOS today because there is no serious app ecosystem for local AI development and tinkering on iPad. That ecosystem absolutely exists on the Mac. On the iPad, we’re left with a handful of non-MLX apps from the App Store, no Terminal, and the untapped potential of the M5.

In case it’s not clear, I’m coming at this from a perspective of disappointment, not anger. […]

Viticci’s frustration with the state of A.I. models on the iPad Pro is palpable. Ideally and hopefully, it is a future-friendly system, but that is not usually the promise of Apple’s products. It usually likes to tell a complete story with the potential for sequels. To get even a glimpse of what that story looks like, Viticci had to go to great lengths, as documented in his review.

In the case of this iPad Pro, Apple is marketing leaps-and-bounds boosts in A.I. performance — though those claims appear to be optimistic — while still playing catch-up on last year’s Apple Intelligence announcements, and offering little news for a user who wants to explore A.I. models directly on their iPad. It feels like a classic iPad story: incredible hardware, restricted by Apple’s software decisions.

Update: I missed a followup post from Viticci in which he points to a review from Max Weinbach of Creative Strategies. Weinbach found the M5 MacBook Pro does, indeed, post A.I. performance gains closer to Apple’s claims.

As an aside, I think it is curious for Apple to be supplying review units to Creative Strategies. It is nominally a research and analysis firm, not a media outlet. While there are concerns about the impartiality of reviewers granted access to prerelease devices, it feels to me like an entirely different thing for a broad-ranging research organization to be doing, for reasons I cannot quite identify.

⌥ Permalink

Long Lines for Election Day in Alberta

By: Nick Heer
21 October 2025 at 04:41

Ken MacGillivray and Karen Bartko, Global News:

“All electors are legislatively required to complete a Statement of Eligibility form (Form 13) at the voting station. This form is a declaration by an elector that they meet the required legislated criteria to receive and cast ballots,” Elections Edmonton said.

[…]

Those casting ballots say confirming voters are on the register or completing the necessary paperwork takes three to five minutes per voter.

I was lucky to be in and out of my polling place in about fifteen minutes, but the longest part was waiting for the person to diligently copy my name, address, and date-of-birth from my driver’s license to a triplicate form, immediately after confirming the same information on the printed voter roll. It is a silly requirement coming down as part of a larger unwanted package from our provincial government for no clear reason. The same legislation also prohibits electronic tabulation, so all the ballots are slowly being counted by hand. These are the kinds of measures that only begin to make sense if you assume someone with influence in our provincial government watches too much Fox News.

I wonder if our Minister of Red Tape Reduction has heard about all the new rules and restrictions implemented by his colleagues.

⌥ Permalink

The Blurry Future of Sora

By: Nick Heer
21 October 2025 at 04:07

Jason Parham, Wired:

The uptick in artificial social networks, [Rudy] Fraser tells me, is being driven by the same tech egoists who have eroded public trust and inflamed social isolation through “divisive” algorithms. “[They] are now profiting on that isolation by creating spaces where folks can surround themselves with sycophantic bots.”

I saw this quote circulating on Bluesky over the weekend and it has been rattling around my head since. It cuts to the heart of one reason why A.I.-based “social” networks like Sora and Meta’s Vibes feel so uncomfortable.

Unfortunately, I found the very next paragraph from Parham uncompelling:

In the many conversations I had with experts, similar patterns of thought emerged. The current era of content production prioritizes aesthetics over substance. We are a culture hooked on optimization and exposure; we crave to be seen. We live on our phones and through our screens. We’re endlessly watching and being watched, submerged in a state of looking. With a sort of all-consuming greed, we are transforming into a visual-first society — an infinite form of entertainment for one another to consume, share, fight over, and find meaning through.

Of course our media reflects aesthetic trends and tastes; it always has. I do not know that there was a halcyon era of substance-over-style media, nor do I believe there has been a time, since celebrity became a feasible achievement, in which at least some people did not desire it. In a 1948 British survey of children 10–15 years old, one-sixth to one-third of respondents aspired to “‘romantic’ [career] choices like film acting, sport, and the arts”. An article published in Scouting Magazine in 2000 noted children leaned toward high-profile careers — not necessarily celebrity, but jobs “every child is exposed to”. We love this stuff because we have always loved this stuff.

Among the bits I quibble with in the above, however, this stood out as a new and different thing: “[w]e’re endlessly watching and being watched”. That, I think, is the kind of big change Fraser is quoted as speaking about, and something I think is concerning. We already worried about echo chambers, and platforms like YouTube responded by adjusting recommendations to less frequently send users to dark places. Let us learn something, please.

Cal Newport:

A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn’t be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn’t be entertaining the idea, ​as [Sam] Altman did last week​, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy AI-generated “erotica.”

To me, these are the acts of a company that poured tens of billions of investment dollars into creating what they hoped would be the most consequential invention in modern history, only to finally realize that what they wrought, although very cool and powerful, isn’t powerful enough on its own to deliver a new world all at once.

I do not think Sora smells of desperation, but I do think it is the product of a company that views unprecedented scale as its primary driver. I think OpenAI wants to be everywhere — and not in the same way that a consumer electronics company wants its smartphones to be the category’s most popular, or anything like that. I wonder if Ben Thompson’s view of OpenAI as “the Windows of A.I.” is sufficient. I think OpenAI is hoping to be a ubiquitous layer in our digital world; or, at least, it is behaving that way.

⌥ Permalink

I Bet Normal Users Will Figure Out Which Power Adapter to Buy

By: Nick Heer
21 October 2025 at 03:16

John Gruber, responding to my exploration of the MacBook Pro A.C. adapter non-issue:

The problem I see with the MacBook power adapter situation in Europe is that while power users — like the sort of people who read Daring Fireball and Pixel Envy — will have no problem buying exactly the sort of power adapter they want, or simply re-using a good one they already own, normal users have no idea what makes a “good” power adapter. I suspect there are going to be a lot of Europeans who buy a new M5 MacBook Pro and wind up charging it with inexpensive low-watt power adapters meant for things like phones, and wind up with a shitty, slow charging experience.

Maybe. I think it is fair to be concerned about this being another thing people have to think about when buying a laptop. But, in my experience, less technically adept people still believe they need specific cables and chargers, even when they do not.

When I was in college, a friend forgot to bring the extension cable for their MacBook charger. There was an unused printer in the studio, though, so I was able to use the power cable from that because it is an interchangeable standard plug. I see this kind of thing all the time among friends, family members, and colleagues. It makes sense in a world frequently populated by proprietary adapters.

Maybe some people will end up with underpowered USB-C chargers. I bet a lot of people will just go to the Apple Store and buy the one recommended by staff, though.

⌥ Permalink

Latest Beta of Apple’s Operating Systems Adds Another Translucency Control

By: Nick Heer
20 October 2025 at 19:46

Chance Miller, 9to5Mac:

You can find the new option [in 26.1 beta 4] on iPhone and iPad by going to the Settings app and navigating to the Display & Brightness menu. On the Mac, it’s available in the “Appearance” menu in System Settings. Here, you’ll see a new Liquid Glass menu with “Clear” and “Tinted” options.

“Choose your preferred look for Liquid Glass. Clear is more transparent, revealing the content beneath. Tinted increases opacity and adds more contrast,” Apple explains.

After Apple made the menu bar translucent in Mac OS X Leopard, it added a preference to make the bar solid after much pushback. When it refreshed the design of Mac OS X in Yosemite with more frosted glass effects, it added controls to Reduce Transparency and Increase Contrast, which replaced the menu bar-specific setting.

Here we are with yet another theme built around translucency, and more complaints about legibility and contrast — Miller writes “Apple says it heard from users throughout the iOS 26 beta testing period that they’d like a setting to manage the opaqueness of the Liquid Glass design”. Now, as has become traditional, there is another way to moderate the excesses of Apple’s new visual language. I am sure there are some who will claim this undermines the entire premise of Liquid Glass, and I do not know that they are entirely wrong. Some might call it greater personalization and customization, too. I think it feels unfocused. Apple keeps revisiting translucency and finding it needs to add more controls to compensate.

⌥ Permalink

NSO Group Banned From Using or Supplying WhatsApp Exploits

By: Nick Heer
18 October 2025 at 03:46

Carly Nairn, Courthouse News Service:

U.S. District Judge Phyllis Hamilton said in a 25-page ruling that there was evidence NSO Group’s flagship spyware could still infiltrate WhatsApp users’ devices and granted Meta’s request for a permanent injunction.

However, Hamilton, a Bill Clinton appointee, also determined that any damages would need to follow a ratioed amount of compensation based on a legal framework designed to proportion damages. She ordered that the jury-based award of $167 million should be reduced to a little over $4 million.

Once again, I am mystified by Apple’s decision to drop its suit against NSO Group. What Meta won is protection from WhatsApp being used as an installation vector for NSO’s spyware; importantly, high-value WhatsApp users won a modicum of protection from NSO’s customers. And, as John Scott-Railton of Citizen Lab points out, NSO has “an absolute TON of their business splashed all over the court records”. There are several depositions from which an enterprising journalist could develop a better understanding of this creepy spyware company.

Last week, NSO Group confirmed it had been acquired by U.S. investors. However, according to its spokesperson, its “headquarters and core operations remain in Israel [and] continues to be fully supervised and regulated by the relevant Israeli authorities”.

Lorenzo Franceschi-Bicchierai, TechCrunch:

NSO has long claimed that its spyware is designed to not target U.S. phone numbers, likely to avoid hurting its chances to enter the U.S. market. But the company was caught in 2021 targeting about a dozen U.S. government officials abroad.

Soon after, the U.S. Commerce Department banned American companies from trading with NSO by putting the spyware maker on the U.S. Entities List. Since then, NSO has tried to get off the U.S. government’s blocklist, as recently as May 2025, with the help of a lobbying firm tied to the Trump administration.

I have as many questions about what this change in ownership could mean for its U.S. relationship as I do about how it affects possible targets.

⌥ Permalink

Sponsor: Magic Lasso Adblock: Incredibly Private and Secure Safari Web Browsing

By: Nick Heer
17 October 2025 at 18:00

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

With over 5,000 five star reviews, Magic Lasso Adblock is simply the best ad blocker for your iPhone, iPad, and Mac.

Designed from the ground up to protect your privacy, Magic Lasso blocks all intrusive ads, trackers, and annoyances. It stops you from being followed by ads around the web and, with App Ad Blocking, it stops your app usage being harvested by ad networks.

So, join over 350,000 users and download Magic Lasso Adblock today.

⌥ Permalink

The New MacBook Pro Is €35 Less Expensive in E.U. Countries, Ships Without a Charger

By: Nick Heer
17 October 2025 at 02:38

Are you outraged? Have you not heard? Apple updated its entry-level MacBook Pro with a new M5 chip, and across Europe, it does not ship with an A.C. adapter in the box as standard any more. It still comes with a USB-C to MagSafe cable, and you can add an adapter at checkout, but those meddling E.U. regulators have forced Apple to do something stupid and customer-unfriendly again. Right?

William Gallagher, of AppleInsider, gets it wrong:

Don’t blame Apple this time — if you’re in the European Union or the UK, your new M5 14-inch MacBook Pro or iPad Pro may cost you $70 extra because Apple isn’t allowed to bundle a charger.

First of all, the dollar is not the currency in any of these countries. Second, the charger in European countries is €65, which is more like $76 right now. Third, Apple is allowed to bundle an A.C. adapter, it just needs to offer an option to not include it. Fourth, and most important, is that the new MacBook Pro is less expensive in nearly every region in which the A.C. adapter is now a configure-to-order option — even after adding the adapter.

In Ireland, the MacBook Pro used to start at €1,949; it now starts at €1,849; in France, it was €1,899, and it is now €1,799. As mentioned, the adapter is €65, making these new Macs €35 less with a comparable configuration. The same is true in each Euro-currency country I checked: Germany, Italy, and Spain all received a €100 price cut if you do not want an A.C. adapter, and a €35 price cut if you do.

It is not just countries that use the Euro receiving cuts. In Norway, the new MacBook Pro starts at 2,000 kroner less than the one it replaces, and a charger is 849 kroner. In Hungary, it is 50,000 forint less, with a charger costing about 30,000 forint. There are some exceptions, too. In Switzerland, the new models are 50 francs less, but a charger is 59 francs. And in the U.K., there is no price adjustment, even though the charger is a configure-to-order option there, too.

Countries with a charger in the box, on the other hand, see no such price adjustment, at least for the ones I have checked. The new M5 model starts at the same price as the M4 it replaces in Canada, Japan, Singapore, and the United States. (For the sake of brevity and because not all of these pages have been recently crawled by the Internet Archive, I have not included links to each comparison. I welcome checking my work, however, and would appreciate an email if I missed an interesting price change.)

Maybe Apple was already planning a €100 price cut for these new models. The M4 was €100 less expensive than the M3 it replaced, for example, so it is plausible. That is something we simply cannot know. What we do know for certain is that these new MacBook Pros might not come with an A.C. adapter, but even if someone adds one at checkout, it still costs less in most places with this option.

Gallagher:

It doesn’t appear that Apple has cut prices of the MacBook Pro or iPad Pro to match, either. That can’t be proven, though, because at least with the UK, Apple generally does currency conversion just by swapping symbols.

It can be proven if you bother to put in thirty minutes’ work.

Joe Rossignol, of MacRumors, also gets it a little wrong:

According to the European Union law database, Apple could have let customers in Europe decide whether they wanted to have a charger included in the box or not, but the company has ultimately decided to not include one whatsoever: […]

A customer can, in fact, choose to add an A.C. adapter when they order their Mac.

⌥ Permalink

OpenAI and Nvidia Are at the Centre of a Trillion-Dollar Circular Investment Economy

By: Nick Heer
17 October 2025 at 01:29

Tabby Kinder in New York and George Hammond, Financial Times:

OpenAI has signed about $1tn in deals this year for computing power to run its artificial intelligence models, commitments that dwarf its revenue and raise questions about how it can fund them.

Emily Forgash and Agnee Ghosh, Bloomberg:

For much of the AI boom, there have been whispers about Nvidia’s frenzied dealmaking. The chipmaker bolstered the market by pumping money into dozens of AI startups, many of which rely on Nvidia’s graphics processing units to develop and run their models. OpenAI, to a lesser degree, also invested in startups, some of which built services on top of its AI models. But as tech firms have entered a more costly phase of AI development, the scale of the deals involving these two companies has grown substantially, making it harder to ignore.

The day after Nvidia and OpenAI announced their $100 billion investment agreement, OpenAI confirmed it had struck a separate $300 billion deal with Oracle to build out data centers in the US. Oracle, in turn, is spending billions on Nvidia chips for those facilities, sending money back to Nvidia, a company that is emerging as one of OpenAI’s most prominent backers.

I possess none of the skills most useful to understand what all of this means. I am not an economist; I did not have a secret life as an investment banker. As a layperson, however, it is not comforting to read from some People With Specialized Knowledge that this is similar to historically good circular investments, just at an unprecedented scale, while other People With Specialized Knowledge say this has been the force preventing the U.S. from entering a recession. These articles might be like one of those prescient papers from before the Great Recession. Not a great feeling.

⌥ Permalink

The New ‘Foreign Influence’ Scare

By: Nick Heer
16 October 2025 at 03:38

Emmanuel Maiberg, 404 Media:

Democratic U.S. Senators Richard Blumenthal and Elizabeth Warren sent letters to the Department of Treasury Secretary Scott Bessent and Electronic Arts CEO Andrew Wilson, raising concerns about the $55 billion acquisition of the giant American video game company in part by Saudi Arabia’s Public Investment Fund (PIF).

Specifically, the Senators worry that EA, which just released Battlefield 6 last week and also publishes The Sims, Madden, and EA Sports FC, “would cease exercising editorial and operational independence under the control of Saudi Arabia’s private majority ownership.”

“The proposed transaction poses a number of significant foreign influence and national security risks, beginning with the PIF’s reputation as a strategic arm of the Saudi government,” the Senators wrote in their letter. […]

In the late 1990s and early 2000s, the assumption was that it would be democratic nations successfully using the web for global influence. But I think the 2016 U.S. presidential election, during which Russian operatives worked to sway voters’ intentions, was a reality check. Fears of foreign influence were then used by U.S. lawmakers to justify banning TikTok, and to strongarm TikTok into allowing Oracle to oversee its U.S. operations. Now, it is Saudi Arabian investment in Electronic Arts raising concerns. Like TikTok, it is not the next election that is, per se, at risk, but the general thoughts and opinions of people in the United States.

U.S. politicians even passed a law intended to address “foreign influence” concerns. However, Saudi Arabia is not one of the four “covered nations” restricted by PAFACA.

Aside from xenophobia, I worry “foreign influence” is becoming a new standard excuse for digital barriers. We usually associate restrictive internet policies with oppressive and authoritarian regimes that do not trust their citizens to be able to think for themselves. This is not to say foreign influence is not a reasonable concern, nor that Saudi Arabia has no red flags, nor still that these worries are a purely U.S. phenomenon. Canadian officials are similarly worried about adversarial government actors covertly manipulating our policies and public opinion. But I think we need to do better if we want to support a vibrant World Wide Web. U.S. adversaries are allowed to have big, successful digital products, too.

⌥ Permalink

My flailing around with Firefox's Multi-Account Containers

By: cks
30 October 2025 at 02:43

I have two separate Firefox environments. One of them is quite locked down so that it blocks JavaScript by default, doesn't accept cookies, and so on. Naturally this breaks a lot of things, so I have a second "just make it work" environment that runs all the JavaScript, accepts all the cookies, and so on (although of course I use uBlock Origin, I'm not crazy). This second environment is pretty risky in the sense that it's going to be heavily contaminated with tracking cookies and so on, so to mitigate the risk (and make it a better environment to test things in), I have this Firefox set to discard cookies, caches, local storage, history, and so on when it shuts down.

In theory how I use this Firefox is that I start it when I need to use some annoying site I want to just work, use the site briefly, and then close it down, flushing away all of the cookies and so on. In practice I've drifted into having a number of websites more or less constantly active in this "accept everything" Firefox, which means that I often keep it running all day (or longer at home) and all of those cookies stick around. This is less than ideal, and is a big reason why I wish Firefox had an 'open this site in a specific profile' feature. Yesterday, spurred on by Ben Zanin's Fediverse comment, I decided to make my "accept everything" Firefox environment more complicated in the pursuit of doing better (ie, throwing away at least some cookies more often).

First, I set up a combination of Multi-Account Containers for the basic multi-container support and FoxyTab to assign wildcarded domains to specific containers. My reason to use Multi-Account Containers and to confine specific domains to specific containers is that both M-A C itself and my standard Cookie Quick Manager add-on can purge all of the cookies and so on for a specific container. In theory this lets me manually purge undesired cookies, or all cookies except desired ones (for example, my active Fediverse login). Of course I'm not likely to routinely manually delete cookies, so I also installed Cookie AutoDelete with a relatively long timeout and with its container awareness turned on, and exemptions configured for the (container-confined) sites that I'm going to want to retain cookies from even when I've closed their tab.

(It would be great if Cookie AutoDelete supported different cookie timeouts for different containers. I suspect it's technically possible, along with other container-aware cookie deletion, since Cookie AutoDelete applies different retention policies in different containers.)

In FoxyTab, I've set a number of my containers to 'Limit to Designated Sites'; for example, my 'Fediverse' container is set this way. The intention is that when I click on an external link in a post while reading my Fediverse feed, any cookies that external site sets don't wind up in the Fediverse container; instead they go either in the default 'no container' environment or in any specific container I've set up for them. As part of this I've created a 'Cookie Dump' container that I've assigned as the container for various news sites and so on where I actively want a convenient way to discard all their cookies and data (which is available through Multi-Account Containers).

Of course if you look carefully, much of this doesn't really require Multi-Account Containers and FoxyTab (or containers at all). Instead I could get almost all of this just by using Cookie AutoDelete to clean out cookies from closed sites after a suitable delay. Containers do give me a bit more isolation between the different things I'm using my "just make it work" Firefox for, and maybe that's important enough to justify the complexity.

(I still have this Firefox set to discard everything when it exits. This means that I have to re-log-in every so often even for the sites where I have Cookie AutoDelete keep cookies, but that's fine.)

I wish Firefox Profiles supported assigning websites to profiles

By: cks
29 October 2025 at 03:23

One of the things that Firefox is working on these days is improving Firefox's profiles feature so that it's easier to use them. Firefox also has an existing feature that is similar to profiles, in containers and the Multi-Account Containers extension. The reason Firefox is tuning up profiles is that containers only separate some things, while profiles separate pretty much everything. A profile has a separate set of about:config settings, add-ons, add-on settings, memorized logins, and so on. I deliberately use profiles to create two separate and rather different Firefox environments. I'd like to have at least two or three more profiles, but one reason I've been lazy is that the more profiles I have, the more complex getting URLs into the right profile is (even with tooling to help).

This leads me to my wish for profiles, which is for profiles to support the kind of 'assign website to profile' and 'open website in profile' features that you currently have with containers, especially with the Multi-Account Containers extension. Actually I would like a somewhat better version than Multi-Account Containers currently offers, because as far as I can see you can't currently say 'all subdomains under this domain should open in container X' and that's a feature I very much want for one of my use cases.

(Multi-Account Containers may be able to do wildcarded subdomains with an additional add-on, but on the other hand apparently it may have been neglected or abandoned by Mozilla.)

Another way to get much of what I want would be for some of my normal add-ons to be (more) container aware. I could get a lot of the benefit of profiles (although not all of them) by using Multi-Account Containers with container aware cookie management in, say, Cookie AutoDelete (which I believe does support that, although I haven't experimented). Using containers also has the advantage that I wouldn't have to maintain N identical copies of my configuration for core extensions and bookmarklets and so on.

(I'm not sure what you can copy from one profile to a new one, and you currently don't seem to get any assistance from Firefox for it, at least in the old profile interface. This is another reason I haven't gone wild on making new Firefox profiles.)

Modern Linux filesystem mounts are rather complex things

By: cks
28 October 2025 at 03:04

Once upon a time, Unix filesystem mounts worked by putting one inode on top of another, and this was also how they worked in very early Linux. It wasn't wrong to say that mounts were really about inodes, with the names only being used to find the inodes. This is no longer how things work in Linux (and perhaps other Unixes, but Linux is what I'm most familiar with for this). Today, I believe that filesystem mounts in Linux are best understood as namespace operations.

Each separate (unmounted) filesystem is a tree of names (a namespace). At a broad level, filesystem mounts in Linux take some name from that filesystem tree and project it on top of something in an existing namespace, generally with some properties attached to the projection. A regular conventional mount takes the root name of the new filesystem and puts the whole tree somewhere, but for a long time Linux's bind mounts took some other name in the filesystem as their starting point (what we could call the root inode of the mount). In modern Linux, there can also be multiple mount namespaces in existence at one time, with different contents and properties. A filesystem mount does not necessarily appear in all of them, and different things can be mounted at the same spot in the tree of names in different mount namespaces.

(Some mount properties are still global to the filesystem as a whole, while other mount properties are specific to a particular mount. See mount(2) for a discussion of general mount properties. I don't know if there's a mechanism to handle filesystem specific mount properties on a per mount basis.)

This can't really be implemented with an inode-based view of mounts. You can somewhat implement traditional Linux bind mounts with an inode based approach, but mount namespaces have to be separate from the underlying inodes. At a minimum a mount point must be a pair of 'this inode in this namespace has something on top of it', instead of just 'this inode has something on top of it'.

(A pure inode based approach has problems going up the directory tree even in old bind mounts, because the parent directory of a particular directory depends on how you got to the directory. If /usr/share is part of /usr and you bind mounted /usr/share to /a/b, the value of '..' depends on if you're looking at '/usr/share/..' or '/a/b/..', even though /usr/share and /a/b are the same inode in the /usr filesystem.)
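As a concrete illustration of the namespace view, here is a minimal Go sketch that uses the raw unshare(2) and mount(2) interfaces to make the /usr/share to /a/b bind mount from the example above, but only inside a private mount namespace. This is an illustration under assumptions rather than anything from the entry itself: it assumes Linux, root privileges, and that /usr/share and /a/b both already exist.

package main

import (
	"log"
	"syscall"
)

func main() {
	// Give this process its own mount namespace; mounts made from here
	// on are not visible in the original namespace.
	if err := syscall.Unshare(syscall.CLONE_NEWNS); err != nil {
		log.Fatalf("unshare(CLONE_NEWNS): %v", err)
	}

	// Mark the entire tree private so our mounts do not propagate back
	// to the parent namespace (the usual step after unsharing).
	if err := syscall.Mount("", "/", "", syscall.MS_REC|syscall.MS_PRIVATE, ""); err != nil {
		log.Fatalf("remounting / as private: %v", err)
	}

	// The bind mount itself: project the /usr/share portion of the /usr
	// filesystem's tree of names onto /a/b. No new filesystem is involved;
	// this is purely a namespace operation on names.
	if err := syscall.Mount("/usr/share", "/a/b", "", syscall.MS_BIND, ""); err != nil {
		log.Fatalf("bind mounting /usr/share on /a/b: %v", err)
	}

	log.Println("/usr/share is now also visible at /a/b, but only in this mount namespace")
}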

If I'm reading manual pages correctly, Linux still normally requires the initial mount of any particular filesystem be of its root name (its true root inode). Only after that initial mount is made can you make bind mounts to pull out some subset of its tree of names and then unmount the original full filesystem mount. I believe that a particular filesystem can provide ways to sidestep this with a filesystem specific mount option, such as btrfs's subvol= mount option that's covered in the btrfs(5) manual page (or 'btrfs subvolume set-default').

You can add arbitrary zones to NSD (without any glue records)

By: cks
27 October 2025 at 03:29

Suppose, not hypothetically, that you have a very small DNS server for a captive network situation, where the DNS server exists only to give clients answers for a small set of hosts. One of the ways you can implement this is with an authoritative DNS server, such as NSD, that simply has an extremely minimal set of DNS data. If you're using NSD for this, you might be curious how minimal you can be and how much you need to mimic ordinary DNS structure.

Here, by 'mimic ordinary DNS structure', I mean inserting various levels of NS records so there is a more or less conventional path of NS delegations from the DNS root ('.') down to your name. If you're providing DNS clients with 'dog.example.org', you might conventionally have a NS record for '.', a NS record for 'org.', and a NS record for 'example.org.', mimicking what you'd see in global DNS. Of course all of your NS records are going to point to your little DNS server, but they're present if anything looks.

Perhaps unsurprisingly, NSD doesn't require this and DNS clients normally don't either. If you say:

zone:
  name: example.org
  zonefile: example-stub

and don't have any other DNS data, NSD won't object and it will answer queries for 'dog.example.org' with your minimal stub data. This works for any zone, including completely made up ones:

zone:
  name: beyond.internal
  zonefile: beyond-stub

The actual NSD stub zone files can be quite minimal. An older OpenBSD NSD appears to be happy with zone files that have only a $ORIGIN, a $TTL, a '@ IN SOA' record, and what records you care about in the zone.
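As an illustration of how little that is, a minimal 'example-stub' zone file along those lines might look something like the following (the SOA name, contact, timer values, and the 192.0.2.10 address for 'dog' are made-up example values, not from any real setup):

$ORIGIN example.org.
$TTL 3600
@    IN    SOA    ns.example.org. hostmaster.example.org. (
                  1       ; serial
                  3600    ; refresh
                  900     ; retry
                  86400   ; expire
                  300 )   ; negative answer TTL
dog  IN    A      192.0.2.10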

Once I thought about it, I realized I should have expected this. An authoritative DNS server normally only holds data for a small subset of zones and it has to be willing to answer queries about the data it holds. Some authoritative DNS servers (such as Bind) can also be used as resolving name servers so they'd sort of like to have information about at least the root nameservers, but NSD is a pure authoritative server so there's no reason for it to care.

As for clients, they don't normally do DNS resolution starting from the root downward. Instead, they expect to operate by sending the entire query to whatever their configured DNS resolver is, which is going to be your little NSD setup. In a number of configurations, clients either can't talk directly to outside DNS or shouldn't try to do DNS resolution that way because it won't work; they need to send everything to their configured DNS resolver so it can do, for example, "split horizon" DNS.

(Yes, the modern vogue for DNS over HTTPS puts a monkey wrench into split horizon DNS setups. That's DoH's problem, not ours.)

Since this works for a .net zone, you can use it to try to disable DNS over HTTPS resolvers in your stub DNS environment by providing a .net zone with 'use-application-dns CNAME .' or the like, to trigger at least Firefox's canary domain detection.
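A sketch of what that could look like, in the same minimal style as above (whether it actually works depends on the client honouring the canary domain, and this record placement is only my guess at one way to do it):

zone:
  name: net
  zonefile: net-stub

where net-stub has the usual $ORIGIN net., $TTL, and '@ IN SOA' boilerplate plus a line like:

use-application-dns    IN    CNAME    .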

(I'm not going to address whether you should have such a minimal stub DNS environment or instead count on your firewall to block traffic and have a normal DNS environment, possibly with split horizon or response policy zones to introduce your special names.)

Some of the things that ZFS scrubs will detect

By: cks
26 October 2025 at 02:41

Recently I saw a discussion of my entry on how ZFS scrubs don't really check the filesystem structure where someone thought that ZFS scrubs only protected you from the disk corrupting data at rest, for example due to sectors starting to fail (here). While ZFS scrubs have their limits, they do manage to check somewhat more than this.

To start with, ZFS scrubs check the end to end hardware path for reading all your data (and implicitly for writing it). There are a variety of ways that things in the hardware path can be unreliable; for example, you might have slowly failing drive cables that are marginal and sometimes give you errors on data reads (or worse, data writes). A ZFS scrub has some chance to detect this; if a ZFS scrub passes, you know that as of that point in time you can reliably read all your data from all your disks and that all the data was reliably written.

If a scrub passes, you also know that the disks haven't done anything obviously bad with your data. This can be important if you're doing operations that you consider somewhat exotic, such as telling SSDs to discard unused sectors. If you have ZFS send TRIM commands to a SSD and then your scrub passes, you know that the SSD didn't incorrectly discard some sectors that were actually used.

Related to this, if you do a ZFS level TRIM and then the scrub passes, you know that ZFS itself didn't send TRIM commands that told the SSD to discard sectors that were actually used. In general, if ZFS has a serious problem where it writes the wrong thing to the wrong place, a scrub will detect it (although the scrub can't fix it). Similarly, a scrub will detect if a disk itself corrupted the destination of a write (or a read), or if things were corrupted somewhere in the lower level software and hardware path of the write.

There are a variety of ZFS level bugs that could theoretically write the wrong thing to the wrong place, or do something that works out to the same effect. ZFS could have a bug in free space handling (so that it incorrectly thinks some in use sectors are free and overwrites them), or it could write too much or too little, or it could correctly allocate and write data but record the location of the data incorrectly in higher level data structures, or it could accidentally not do a write (for example, if it's supposed to write a duplicate copy of some data but forgets to actually issue the IO). ZFS scrubs can detect all of these issues under the right circumstances.

(To a limited extent a ZFS scrub also checks the high level metadata of filesystems and snapshots, since it has to traverse that metadata to find the object set for each dataset and similar things. Since a scrub just verifies checksums, this won't cross check dataset level metadata like information on how much data was written in each snapshot, or the space usage.)

What little I want out of web "passkeys" in my environment

By: cks
25 October 2025 at 03:19

WebAuthn is yet another attempt to do an API for web authentication that doesn't involve passwords but that instead allows browsers, hardware tokens, and so on to do things more securely. "Passkeys" (also) is the marketing term for a "WebAuthn credential", and an increasing number of websites really, really want you to use a passkey for authentication instead of any other form of multi-factor authentication (they may or may not still require your password).

Most everyone that wants you to use passkeys also wants you to specifically use highly secure ones. The theoretically most secure are physical hardware security keys, followed by passkeys that are stored and protected in secure enclaves in various ways by the operating system (provided that the necessary special purpose hardware is available). Of course the flipside of 'secure' is 'locked in', whether locked in to your specific hardware key (or keys, generally you'd better have backups) or locked in to a particular vendor's ecosystem because their devices are the only ones that can possibly use your encrypted passkey vault.

(WebAuthn neither requires nor standardizes passkey export and import operations, and obviously security keys are built to not let anyone export the cryptographic material from them, that's the point.)

I'm extremely not interested in the security versus availability tradeoff that passkeys make in favour of security. I care far more about preserving availability of access to my variety of online accounts than about nominal high security. So if I'm going to use passkeys at all, I have some requirements:

Linux people: is there a passkeys implementation that does not use physical hardware tokens (software only), is open source, works with Firefox, and allows credentials to be backed up and copied to other devices by hand, without going through some cloud service?

I don't think I'm asking for much, but this is what I consider the minimum for me actually using passkeys. I want to be 100% sure of never losing them because I have multiple backups and can use them on multiple machines.

Apparently KeePassXC more or less does what I want (when combined with its Firefox extension), and it can even export passkeys in a plain text format (well, JSON). However, I don't know if anything else can ingest those plain text passkeys, and I don't know if KeePassXC can be told to only do passkeys with the browser and not try to take over passwords.

(But at least a plain text JSON backup of your passkeys can be imported into another KeePassXC instance without having to try to move, copy, or synchronize a KeePassXC database.)

Normally I would ignore passkeys entirely, but an increasing number of websites are clearly going to require me to use some form of multi-factor authentication, no matter how stupid this is (cf), and some of them will probably require passkeys or at least make any non-passkey option very painful. And it's possible that reasonably integrated passkeys will be a better experience than TOTP MFA with my janky minimal setup.

(Of course KeePassXC also supports TOTP, and TOTP has an extremely obvious import process that everyone supports, and I believe KeePassXC will export TOTP secrets if you ask nicely.)

While KeePassXC is okay, what I would really like is for Firefox to support 'memorized passkeys' right along with its memorized passwords (and support some kind of export and import along with it). Should people use them? Perhaps not. But it would put that choice firmly in the hands of the people using Firefox, who could decide on how much security they did or didn't want, not in the hands of websites who want to force everyone to face a real risk of losing their account so that the website can conduct security theater.

(Firefox will never support passkeys this way for an assortment of reasons. At most it may someday directly use passkeys through whatever operating system services expose them, and maybe Linux will get a generic service that works the way I want it to. Nor is Firefox ever going to support 'memorized TOTP codes'.)

Two reasons why Unix traditionally requires mount points to exist

By: cks
24 October 2025 at 02:29

Recently on the Fediverse, argv minus one asked a good question:

Why does #Linux require #mount points to exist?

And are there any circumstances where a mount can be done without a pre-existing mount point (i.e. a mount point appears out of thin air)?

I think there is one answer for why this is a good idea in general and otherwise complex to do, although you can argue about it, and then a second historical answer based on how mount points were initially implemented.

The general problem is directory listings. We obviously want and need mount points to appear in readdir() results, but in the kernel, directory listings are historically the responsibility of filesystems and are generated and returned in pieces on the fly (which is clearly necessary if you have a giant directory; the kernel doesn't read the entire thing into memory and then start giving your program slices out of it as you ask). If mount points never appear in the underlying directory, then they must be inserted at some point in this process. If mount points can sometimes exist and sometimes not, it's worse; you need to somehow keep track of which ones actually exist and then add the ones that don't at the end of the directory listing. The simplest way to make sure that mount points always exist in directory listings is to require them to have an existence in the underlying filesystem.

(This was my initial answer.)

The historical answer is that in early versions of Unix, filesystems were actually mounted on top of inodes, not directories (or filesystem objects). When you passed a (directory) path to the mount(2) system call, all it was used for was getting the corresponding inode, which was then flagged as '(this) inode is mounted on' and linked (sort of) to the new mounted filesystem on top of it. All of the things that dealt with mount points and mounted filesystems did so by inode and inode number, with no further use of the paths; the root inode of the mounted filesystem was quietly substituted for the mounted-on inode. All of the mechanics of this needed the inode and directory entry for the name to actually exist (and V7 required the name to be a directory).

I don't think modern kernels (Linux or otherwise) still use this approach to handling mounts, but I believe it lingered on for quite a while. And it's a sufficiently obvious and attractive implementation choice that early versions of Linux also used it (see the Linux 0.96c version of iget() in fs/inode.c).

Sidebar: The details of how mounts worked in V7

When you passed a path to the mount(2) system call (called 'smount()' in sys/sys3.c), it used the name to get the inode and then set the IMOUNT flag from sys/h/inode.h on it (and put the mount details in a fixed size array of mounts, which wasn't very big). When iget() in sys/iget.c was fetching inodes for you and you'd asked for an IMOUNT inode, it gave you the root inode of the filesystem instead, which worked in cooperation with name lookup in a directory (the name lookup in the directory would find the underlying inode number, and then iget() would turn it into the mounted filesystem's root inode). This gave Research Unix a simple, low code approach to finding and checking for mount points, at the cost of pinning a few more inodes into memory (not necessarily a small thing when even a big V7 system only had at most 200 inodes in memory at once, but then a big V7 system was limited to 8 mounts, see h/param.h).

We can't really do progressive rollouts of disruptive things

By: cks
23 October 2025 at 02:49

In a comment on my entry on how we reboot our machines right after updating their kernels, Jukka asked a good question:

While I do not know how many machines there are in your fleet, I wonder whether you do incremental rolling, using a small snapshot for verification before rolling out to the whole fleet?

We do this to some extent but we can't really do it very much. The core problem is that the state of almost all of our machines is directly visible and exposed to people. This is because we mostly operate an old fashioned Unix login server environment, where people specifically use particular servers (either directly by logging in to them or implicitly because their home directory is on a particular NFS fileserver). About the only genuinely generic machines we have are the nodes in our SLURM cluster, where we can take specific unused nodes out of service temporarily without anyone noticing.
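
As a sketch of the SLURM side of this, which is the easy part, draining a node and putting it back looks roughly like the following, with 'node01' as a placeholder node name (this is the generic SLURM mechanism, not our exact procedure):

scontrol update NodeName=node01 State=DRAIN Reason="kernel update"
# ... update the kernel and reboot the node ...
scontrol update NodeName=node01 State=RESUME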

(Some of these login servers are in use all of the time; others we might find idle if we're extremely lucky. But it's hard to predict when someone will show up to try to use a currently empty server.)

This means that progressively rolling out a kernel update (and rebooting things) to our important, visible core servers requires multiple people-visible reboots of machines, instead of one big downtime when everything is rebooted. Generally we feel that repeated disruptions are much more annoying and disruptive overall to people; it's better to get the pain of reboot disruptions over all at once. It's also much easier to explain to people, and we don't have to annoy them with repeated notifications that yet another subset of our servers and services will be down for a bit.

(To make an incremental deployment more painful for us, these will normally have to be after-hours downtimes, which means that we'll be repeatedly staying late, perhaps once a week for three or four weeks as we progressively work through a rollout.)

In addition to the nodes of our SLURM cluster, there are a number of servers that can be rebooted in the background to some degree without people noticing much. We will often try the kernel update out on a few of them in advance, and then update others of them earlier in the day (or the day before) both as a final check and to reduce the number of systems we have to cover at the actual out of hours downtime. But a lot of our servers cannot really be tested much in advance, such as our fileservers or our web server (which is under constant load for reasons outside the scope of this entry). We can (and do) update a test fileserver or a test web server, but neither will see a production load and it's under production loads that problems are most likely to surface.

This is a specific example of how the 'cattle' model doesn't fit all situations. To have a transparent rolling update that involves reboots (or anything else that's disruptive on a single machine), you need to be able to transparently move people off of machines and then back on to them. This is hard to get in any environment where people have long term usage of specific machines, where they have login sessions and running compute jobs and so on, and where you have non-redundant resources on a single machine (such as NFS fileservers without transparent failover from server to server).

We don't update kernels without immediately rebooting the machine

By: cks
22 October 2025 at 03:07

I've mentioned this before in passing (cf, also) but today I feel like saying it explicitly: our habit with all of our machines is to never apply a kernel update without immediately rebooting the machine into the new kernel. On our Ubuntu machines this is done by holding the relevant kernel packages; on my Fedora desktops I normally run 'dnf update --exclude "kernel*"' unless I'm willing to reboot on the spot.
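
To illustrate the Ubuntu side, holding the kernel meta-packages is enough to keep routine package upgrades from pulling in a new kernel until we're ready. The package names here are representative, not necessarily the exact set we hold (they vary by kernel flavour):

apt-mark hold linux-generic linux-image-generic linux-headers-generic
# later, when ready to update and immediately reboot:
apt-mark unhold linux-generic linux-image-generic linux-headers-generic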

The obvious reason for this is that we want to switch to the new kernel under controlled, attended conditions when we'll be able to take immediate action if something is wrong, rather than possibly have the new kernel activate at some random time without us present and paying attention if there's a power failure, a kernel panic, or whatever. This is especially acute on my desktops, where I use ZFS by building my own OpenZFS packages and kernel modules. If something goes wrong and the kernel modules don't load or don't work right, an unattended reboot can leave my desktops completely unusable and off the network until I can get to them. I'd rather avoid that if possible (sometimes it isn't).

(In general I prefer to reboot my Fedora machines with me present because weird things happen from time to time and sometimes I make mistakes, also.)

The less obvious reason is that when you reboot a machine right after applying a kernel update, it's clear in your mind that the machine has switched to a new kernel. If there are system problems in the days immediately after the update, you're relatively likely to remember this and at least consider the possibility that the new kernel is involved. If you apply a kernel update, walk away without rebooting, and the machine reboots a week and a half later for some unrelated reason, you may not remember that one of the things the reboot did was switch to a new kernel.

(Kernels aren't the only thing that this can happen with, since not all system updates and changes take effect immediately when made or applied. Perhaps one should reboot after making them, too.)

I'm assuming here that your Linux distribution's package management system is sensible, so there's no risk of losing old kernels (especially the one you're currently running) merely because you installed some new ones but didn't reboot into them. This is how Debian and Ubuntu behave (if you don't 'apt autoremove' kernels), but not quite how Fedora's dnf does it (as far as I know). Fedora dnf keeps the N most recent kernels around and probably doesn't let you remove the currently running kernel even if it's more than N kernels old, but I don't believe it tracks whether or not you've rebooted into those N kernels and stretches the N out if you haven't (or removes more recent installed kernels that you've never rebooted into, instead of older kernels that you did use at one point).

PS: Of course if kernel updates were perfect this wouldn't matter. However this isn't something you can assume for the Linux kernel (especially as patched by your distribution), as we've sometimes seen. Although big issues like that are relatively uncommon.

We (I) need a long range calendar reminder system

By: cks
21 October 2025 at 03:05

About four years ago I wrote an entry about how your SMART drive database of attribute meanings needs regular updates. That entry was written on the occasion of updating the database we use locally on our Ubuntu servers, and at the time we were using a mix of Ubuntu 18.04 and Ubuntu 20.04 servers, both of which had older drive databases that probably dated from early 2018 and early 2020 respectively. It is now late 2025 and we use a mix of Ubuntu 24.04 and 22.04 servers, both of which have drive databases that are from after October of 2021.

Experienced system administrators know where this one is going: today I updated our SMART drive database again, to a version of the SMART database that was more recent than the one shipped with 24.04 instead of older than it.

It's a fact of life that people forget things. People especially forget things that are a long way away, even if they make little notes in their worklog message when recording something that they did (as I did four years ago). It's definitely useful to plan ahead in your documentation and write these notes, but without an external thing to push you or something to explicitly remind you, there's no guarantee that you'll remember.

All of which leads me to the view that it would be useful for us to have a long range calendar reminder system, something that could be used to set reminders for more than a year into the future and ideally allow us to write significant email messages to our future selves to cover all of the details (although there are hacks around that, such as putting the details on a web page and having the calendar mail us a link). Right now the best calendar reminder system we have is the venerable calendar, which we can arrange to have email one-line notes to our general address that reaches all sysadmins, but calendar doesn't let you include the year in the reminder date.

(For SMART drive database updates, we could get away with mailing ourselves once a year in, say, mid-June. It doesn't hurt to update the drive database more often than once every Ubuntu LTS release. But there are situations where a reminder several years in the future is what we want.)

PS: Of course it's not particularly difficult to build an ad-hoc script system to do this, with various levels of features. But every local ad-hoc script that we write is another little bit of overhead, and I'd like to avoid that kind of thing if at all possible in favour of a standard solution (that isn't a shared cloud provider calendar).
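
With that said, here's a minimal sketch of the sort of ad-hoc script I mean, just to show the scale involved. It assumes a file of 'YYYY-MM-DD some reminder text' lines and a daily cron job; the file path and email address are placeholders:

#!/bin/sh
# Mail any reminders whose date is today; run daily from cron.
REMINDERS=/var/local/reminders
TO=sysadmins@example.org
today=$(date +%Y-%m-%d)
matches=$(grep "^$today " "$REMINDERS")
if [ -n "$matches" ]; then
    echo "$matches" | mail -s "Reminders for $today" "$TO"
fi

Something like this handles reminders years in the future just fine; what it doesn't give you is the polish of a real calendar system (recurring reminders, editing tools, and so on), and it's exactly the kind of little local overhead I'd rather avoid.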

We need to start doing web blocking for non-technical reasons

By: cks
20 October 2025 at 03:37

My sense is that for a long time, technical people (system administrators, programmers, and so on) have seen the web as something that should be open by default and by extension, a place where we should only block things for 'technical' reasons. Common technical reasons are a harmful volume of requests or clear evidence of malign intentions, such as probing for known vulnerabilities. Otherwise, if it wasn't harming your website and wasn't showing any intention to do so, you should let it pass. I've come to think that in the modern web this is a mistake, and we need to be willing to use blocking and other measures for 'non-technical' reasons.

The core problem is that the modern web seems to be fragile and is kept going in large part by a social consensus, not technical things such as capable software and powerful servers. However, if we only react to technical problems, there's very little that preserves and reinforces this social consensus, as we're busy seeing. With little to no consequences for violating the social consensus, bad actors are incentivized to skate right up to and even over the line of causing technical problems. When we react by taking only narrow technical measures, we tacitly reward the bad actors for their actions; they can always find another technical way. They have no incentive to be nice or to even vaguely respect the social consensus, because we don't punish them for it.

So I've come to feel that if something like the current web is to be preserved, we need to take action not merely when technical problems arise but also when the social consensus is violated. We need to start blocking things for what I called editorial reasons. When software or people do things that merely shows bad manners and doesn't yet cause us technical problems, we should still block it, either soft (temporarily, perhaps with HTTP 429 Too Many Requests) or hard (permanently). We need to take action to create the web that we want to see, or we aren't going to get it or keep it.

To put it another way, if we want to see good, well behaved browsers, feed readers, URL fetchers, crawlers, and so on, we have to create disincentives for ones that are merely bad (as opposed to actively damaging). In its own way, this is another example of the refutation of Postel's Law. If we accept random crap to be friendly, we get random crap (and the quality level will probably trend down over time).

To answer one potential criticism, it's true that in some sense, blocking and so on for social reasons is not good and is in some theoretical sense arguably harmful for the overall web ecology. On the other hand, the current unchecked situation itself is also deeply harmful for the overall web ecology and it's only going to get worse if we do nothing, with more and more things effectively driven off the open web. We only get to pick the poison here.

I wish SSDs gave you CPU performance style metrics about their activity

By: cks
19 October 2025 at 02:54

Modern CPUs have an impressive collection of performance counters for detailed, low level information on things like cache misses, branch mispredictions, various sorts of stalls, and so on; on Linux you can use 'perf list' to see them all. Modern SSDs (NVMe, SATA, and SAS) are all internally quite complex, and their behavior under load depends on a lot of internal state. It would be nice to have CPU performance counter style metrics to expose some of those details. For a relevant example that's on my mind (cf), it certainly would be interesting to know how often flash writes had to stall while blocks were hastily erased, or the current erase rate.

Having written this, I checked some of our SSDs (the ones I'm most interested in at the moment) and I see that our SATA SSDs do expose some of this information as (vendor specific) SMART attributes, with things like 'block erase count' and 'NAND GB written' to TLC or SLC (as well as the host write volume and so on stuff you'd expect). NVMe does this in a different way that doesn't have the sort of easy flexibility that SMART attributes do, so a random one of ours that I checked doesn't seem to provide this sort of lower level information.
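
(For the curious, these vendor attributes show up in ordinary smartctl output; the attribute names vary by vendor and the device path here is a placeholder:

smartctl -A /dev/sda | grep -i -E 'erase|nand|slc|tlc'

On an NVMe drive, 'smartctl -A /dev/nvme0' reports the standardized SMART/health log instead, which is exactly where this kind of vendor-specific flexibility is missing.)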

It's understandable that SSD vendors don't necessarily want to expose this sort of information, but it's quite relevant if you're trying to understand unusual drive performance. For example, for your workload do you need to TRIM your drives more often, or do they have enough pre-erased space available when you need it? Since TRIM has an overhead, you may not want to blindly do it on a frequent basis (and its full effects aren't entirely predictable since they depend on how much the drive decides to actually erase in advance).

(Having looked at SMART 'block erase count' information on one of our servers, it's definitely doing something when the server is under heavy fsync() load, but I need to cross-compare the numbers from it to other systems in order to get a better sense of what's exceptional and what's not.)

I'm currently more focused on write related metrics, but there's probably important information that could be exposed for reads and for other operations. I'd also like it if SSDs provided counters for how many of various sorts of operations they saw, because while your operating system can in theory provide this, it often doesn't (or doesn't provide them at the granularity of, say, how many writes with 'Force Unit Access' or how many 'Flush' operations were done).

(In Linux, I think I'd have to extract this low level operation information in an ad-hoc way with eBPF tracing.)

A (filesystem) journal can be a serialization point for durable writes

By: cks
18 October 2025 at 02:57

Suppose that you have a filesystem that uses some form of a journal to provide durability (as many do these days) and you have a bunch of people (or processes) writing and updating things all over the filesystem that they want to be durable, so these processes are all fsync()'ing their work on a regular basis (or the equivalent system call or synchronous write operation). In a number of filesystem designs, this creates a serialization point on the filesystem's journal.

This is related to the traditional journal fsync() problem, but that one is a bit different. In the traditional problem you have a bunch of changes from a bunch of processes, some of which one process wants to fsync() and most of which it doesn't; this can be handled by only flushing necessary things. Here we have a bunch of processes making a bunch of relatively independent changes but approximately all of the processes want to fsync() their changes.

The simple way to get durability (and possibly integrity) for fsync() is to put everything that gets fsync()'d into the journal (either directly or indirectly) and then force the journal to be durably committed to disk. If the filesystem's journal is a linear log, as is usually the case, this means that multiple processes mostly can't be separately writing and flushing journal entries at the same time. Each durable commit of the journal is a bottleneck for anyone who shows up 'too late' to get their change included in the current commit; they have to wait for the current commit to be flushed to disk before they can start adding more entries to the journal (but then everyone can be bundled into the next commit).

In some filesystems, processes can readily make durable writes outside of the journal (for example, overwriting something in place); such processes can avoid serializing on a linear journal. Even if they have to put something in the journal, you can perhaps minimize the direct linear journal contents by having them (durably) write things to various blocks independently, then put only compact pointers to those out of line blocks into the linear journal with its serializing, linear commits. The goal is to avoid having someone show up wanting to write megabytes 'to the journal' and forcing everyone to wait for their fsync(); instead people serialize only on writing a small bit of data at the end, and writing the actual data happens in parallel (assuming the disk allows that).

(I may have made this sound simple but the details are likely fiendishly complex.)

If you have a filesystem in this situation, and I believe one of them is ZFS, you may find you care a bunch about the latency of disks flushing writes to media. Of course you need the workload too, but there are certain sorts of workloads that are prone to this (for example, traditional Unix mail spools).

I believe that you can also see this sort of thing with databases, although they may be more heavily optimized for concurrent durable updates.

Sidebar: Disk handling of durable writes can also be a serialization point

Modern disks (such as NVMe SSDs) broadly have two mechanisms to force things to durable storage. You can issue specific writes of specific blocks with 'Force Unit Access' (FUA) set, which causes the disk to write those blocks (and not necessarily any others) to media, or you can issue a general 'Flush' command to the disk and it will write anything it currently has in its write cache to media.

If you issue FUA writes, you don't have to wait for anything else other than your blocks to be written to media. If you issue 'Flush', you get to wait for everyone's blocks to be written out. This means that for speed you want to issue FUA writes when you want things on media, but on the other hand you may have already issued non-FUA writes for some of the blocks before you found out that you wanted them on media (for example, if someone writes a lot of data, so much that you start writeback, and then they issue a fsync()). And in general, the block IO programming model inside your operating system may favour issuing a bunch of regular writes and then inserting a 'force everything before this point to media' fencing operation into the IO stream.

NVMe SSDs and the question of how fast they can flush writes to flash

By: cks
17 October 2025 at 03:17

Over on the Fediverse, I had a question I've been wondering about:

Disk drive people, sysadmins, etc: would you expect NVMe SSDs to be appreciably faster than SATA SSDs for a relatively low bandwidth fsync() workload (eg 40 Mbytes/sec + lots of fsyncs)?

My naive thinking is that AFAIK the slow bit is writing to the flash chips to make things actually durable when you ask, and it's basically the same underlying flash chips, so I'd expect NVMe to not be much faster than SATA SSDs on this narrow workload.

This is probably at least somewhat wrong. This 2025 SSD hierarchy article doesn't explicitly cover forced writes to flash (the fsync() case), but it does cover writing 50 GBytes of data in 30,000 files, which is probably enough to run any reasonable consumer NVMe SSD out of fast write buffer storage (either RAM or fast flash). The write speeds they get on this test from good NVMe drives are well over the maximum SATA data rates, so there's clearly a sustained write advantage to NVMe SSDs over SATA SSDs.
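
If you want to measure the fsync() case directly instead of inferring it from general write benchmarks, something like fio can approximate it. This is a sketch with arbitrary parameters and a placeholder test directory, not a calibrated reproduction of our workload:

fio --name=fsync-test --directory=/some/test/dir --ioengine=psync \
    --rw=write --bs=4k --size=1g --fsync=1

The '--fsync=1' makes fio call fsync() after every write, which is roughly the 'lots of fsyncs at modest bandwidth' pattern I'm wondering about.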

In replies on the Fediverse, several people pointed out that NVMe SSDs are likely using newer controllers than SATA SSDs and these newer controllers may well be better at handling writes. This isn't surprising when I thought about it, especially in light of NVMe perhaps overtaking SATA for SSDs, although apparently 'enterprise' SATA/SAS SSDs are still out there and probably seeing improvements (unlike consumer SATA SSDs where price is the name of the game).

Also, apparently the real bottleneck in writing to the actual flash is finding erased blocks or, if you're unlucky, having to wait for blocks to be erased. Actual writes to the flash chips may be able to go at something close to the PCIe 3.0 (or better) bandwidth, which would help explain the Tom's Hardware large write figures (cf).

(If this is the case, then explicitly telling SSDs about discarded blocks is especially important for any write workload that will be limited by flash write speeds, including fsync() heavy workloads.)

PS: The reason I'm interested in this is that we have a SATA SSD based system that seems to have periodic performance issues related to enough write IO combined with fsync()s (possibly due to write buffering interactions), and I've been wondering how much moving it to be NVMe based might help. Since this machine uses ZFS, perhaps one thing we should consider is manually doing some ZFS 'TRIM' operations.
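
(For reference, the manual version of this is just 'zpool trim' on the pool, and you can watch how it goes; 'tank' is a placeholder pool name:

zpool trim tank
zpool status -t tank

There's also the pool property 'autotrim=on' for continuous background TRIM, although whether that or periodic manual TRIMs is the better tradeoff for a given workload is a separate question.)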

The strange case of 'mouse action traps' in GNU Emacs with (slower) remote X

By: cks
16 October 2025 at 02:18

Some time back over on the Fediverse, I groused about GNU Emacs tooltips. That grouse was a little imprecise; the situation I usually see problems with is specifically running GNU Emacs in SSH-forwarded X from home, which has a somewhat high latency. This high latency caused me to change how I opened URLs from GNU Emacs, and it seems to be the root of the issues I'm seeing.

The direct experience I was having with tooltips was that being in a situation where Emacs might want to show a GUI tooltip would cause Emacs to stop responding to my keystrokes for a while. If the tooltip was posted and visible it would stay visible, but the stall could happen without that. However, it doesn't seem to be tooltips as such that cause this problem, because even with tooltips disabled as far as I can tell (and certainly not appearing), the cursor and my interaction with Emacs can get 'stuck' in places where there's mouse actions available.

(I tried both setting the tooltip delay times to very large numbers and setting tooltip-functions to do nothing.)

This is especially visible to me because my use of MH-E is prone to this in two cases. First, when composing email, flyspell mode will attach a 'correct word' button-2 popup menu to misspelled words, which can then stall things if I move the cursor to them (especially if I use a mouse click to do so, perhaps because I want to make the word into an X selection). Second, when displaying email that has links in it, these links can be clicked on (and have hover tooltips to display what the destination URL is); what I frequently experience is that after I click on a link, when I come back to the GNU Emacs (X) window I can't immediately switch to the next message, scroll the text of the current message, or otherwise do things.

This 'trapping' and stall doesn't usually happen when I'm in the office, which is still using remote X but over a much faster and lower latency 1G network connection. Disabling tooltips themselves isn't ideal because it means I no longer get to see where links go, and anyway it's relatively pointless if it doesn't fix the real problem.

When I thought this was an issue specific to tooltips, it made sense to me because I could imagine that GNU Emacs needed to do a bunch of relatively synchronous X operations to show or clear a tooltip, and those operations could take a while over my home link. Certainly displaying regular GNU Emacs (X) menus isn't particularly fast. Without tooltips displaying it's more mysterious, but it's still possible that Emacs is doing a bunch of X operations when it thinks a mouse or tooltip target is 'active', or perhaps there's something else going on.

(I'm generally happy with GNU Emacs but that doesn't mean it's perfect or that I don't have periodic learning experiences.)

PS: In theory there are tools that can monitor and report on the flow of X events (by interposing themselves into it). In practice it's been a long time since I used any of them, and anyway there's probably nothing I can do about it if GNU Emacs is doing a lot of X operations. Plus it's probably partly the GTK toolkit at work, not GNU Emacs itself.

PPS: Having taken a brief look at the MH-E code, I'm pretty sure that it doesn't even begin to work with GNU Emacs' TRAMP (also) system for working with remote files. TRAMP has some support for running commands remotely, but MH-E has its own low-level command execution and assumes that it can run commands rapidly, whenever it feels like, and then read various results out of the filesystem. Probably the most viable approach would be to use sshfs to mount your entire ~/Mail locally, have a local install of (N)MH, and then put shims in for the very few MH commands that have to run remotely (such as inc and the low level post command that actually sends out messages you've written). I don't know if this would work very well, but it would almost certainly be better than trying to run all those MH commands remotely.

Staring at code can change what I see (a story from long ago)

By: cks
15 October 2025 at 03:14

I recently read Hillel Wayne's Sapir-Whorf does not apply to Programming Languages (via), which I will characterize as being about how programming can change how you see things even though the Sapir-Whorf hypothesis doesn't apply (Hillel Wayne points to the Tetris Effect). As it happens, long ago I experienced a particular form of this that still sticks in my memory.

Many years ago, I was recruited to be a TA for the university's upper year Operating Systems course, despite being an undergraduate at the time. One of the jobs of TAs was to mark assignments, which we did entirely by hand back in those days; any sort of automated testing was far in the future, and for these assignments I don't think we even ran the programs by hand. Instead, marking was mostly done by having students hand in printouts of their modifications to the course's toy operating system and we three TAs collectively scoured the result to see if they'd made the necessary changes and spot errors.

Since this was an OS course, some assignments required dealing with concurrency, which meant that students had to properly guard and insulate their changes (in, for example, memory handling) from various concurrency problems. Failure to completely do so would cost marks, so the TAs were on the lookout for such problems. Over the course of the course, I got very good at spotting these concurrency problems entirely by eye in the printed out code. I didn't really have to think about it, I'd be reading the code (or scanning it) and the problem would jump out at me. In the process I formed a firm view that concurrency is very hard for people to deal with, because so many students made so many mistakes (whether obvious or subtle).

(Since students were modifying the toy OS to add or change features, there was no set form that their changes had to follow; people implemented the new features in various different ways. This meant that their concurrency bugs had common patterns but not specific common forms.)

I could have thought that I was spotting these problems because I was a better programmer than these other undergraduate students (some of whom were literally my peers, it was just that I'd taken the OS course a year earlier than they had because it was one of my interests). However, one of the most interesting parts of the whole experience was getting pretty definitive proof that I wasn't, and it was my focused experience that made the difference. One of the people taking this course was a fellow undergraduate who I knew and I knew was a better programmer than I was, but when I was marking his version of one assignment I spotted what I viewed at the time as a reasonably obvious concurrency issue. So I wasn't seeing these issues when the undergraduates doing the assignment missed them because I was a better programmer, since here I wasn't: I was seeing the bugs because I was more immersed in this than they were.

(This also strongly influenced my view of how hard and tricky concurrency is. Here was a very smart programmer, one with at least some familiarity with the whole area, and they'd still made a mistake.)

Uses for DNS server delegation

By: cks
14 October 2025 at 03:52

A commentator on my entry on systemd-resolved's new DNS server delegation feature asked:

My memory might fail me here, but: wasn't something like this a feature introduced in ISC's BIND 8, and then considered to be a bad mistake and dropped again in BIND 9 ?

I don't know about Bind, but what I do know is that this feature is present in other DNS resolvers (such as Unbound) and that it has a variety of uses. Some of those uses can be substituted with other features and some can't be, at least not as-is.

The quick version of 'DNS server delegation' is that you can send all queries under some DNS zone name off to some DNS server (or servers) of your choice, rather than have DNS resolution follow any standard NS delegation chain that may or may not exist in global DNS. In Unbound, this is done through, for example, Forward Zones.
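
As a concrete illustration, an Unbound forward zone is only a few lines of unbound.conf; the zone name and server address here are placeholders:

forward-zone:
    name: "corp.example.org."
    forward-addr: 192.0.2.53

Every query for names at or below the zone name is sent to the listed server (or servers) instead of being resolved through the normal delegation chain.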

DNS server delegation has at least three uses that I know of. First, you can use it to insert entire internal TLD zones into the view that clients have. People use various top level names for these zones, such as .internal, .kvm, .sandbox (our choice), and so on. In all cases you have some authoritative servers for these zones and you need to direct queries to these servers instead of having your queries go to the root nameservers and be rejected.

(Obviously you will be sad if IANA ever assigns your internal TLD to something, but honestly if IANA allows, say, '.internal', we'll have good reason to question their sanity. The usual 'standard DNS environment' replacement for this is to move your internal TLD to be under your organizational domain and then implement split horizon DNS.)

Second, you can use it to splice in internal zones that don't exist in external DNS without going to the full overkill of split horizon authoritative data. If all of your machines live in 'corp.example.org' and you don't expose this to the outside world, you can have your public example.org servers with your public data and your corp.example.org authoritative servers, and you splice in what is effectively a fake set of NS records through DNS server delegation. Related to this, if you want you can override public DNS simply by having an internal and an external DNS server, without split horizon DNS; you use DNS server delegation to point to the internal DNS server for certain zones.

(This can be replaced with split horizon DNS, although maintaining split horizon DNS is its own set of headaches.)

Finally, you can use this to short-cut global DNS resolution for reliability in cases where you might lose external connectivity. For example, there are within-university ('on-campus' in our jargon) authoritative DNS servers for .utoronto.ca and .toronto.edu. We can use DNS server delegation to point these zones at these servers to be sure we can resolve university names even if the university's external Internet connection goes down. We can similarly point our own sub-zone at our authoritative servers, so even if our link to the university backbone goes down we can resolve our own names.

(This isn't how we actually implement this; we have a more complex split horizon DNS setup that causes our resolving DNS servers to have a complete copy of the inside view of our zones, acting as caching secondaries.)

The early Unix history of chown() being restricted to root

By: cks
13 October 2025 at 03:37

A few years ago I wrote about the divide in chown() about who got to give away files, where BSD and V7 were on one side, restricting it to root, while System III and System V were on the other, allowing the owner to give them away too. At the time I quoted the V7 chown(2) explanation of this:

[...] Only the super-user may execute this call, because if users were able to give files away, they could defeat the (nonexistent) file-space accounting procedures.

Recently, for reasons, chown(2) and its history were on my mind and so I wondered if the early Research Unixes had always had this, or if a restriction was added at some point.

The answer is that the restriction was added in V6, where the V6 chown(2) manual page has the same wording as V7. In Research Unix V5 and earlier, people can chown(2) away their own files; this is documented in the V4 chown(2) manual page and is what the V5 kernel code for chown() does. This behavior runs all the way back to the V1 chown() manual page, with an extra restriction that you can't chown() setuid files.

(Since I looked it up, the restriction on chown()'ing setuid files was lifted in V4. In V4 and later, a setuid file has its setuid bit removed on chown; in V3 you still can't give away such a file, according to the V3 chown(2) manual page.)

At this point you might wonder where the System III and System V unrestricted chown came from. The surprising to me answer seems to be that System III partly descends from PWB/UNIX, and PWB/UNIX 1.0, although it was theoretically based on V6, has pre-V6 chown(2) behavior (kernel source, manual page). I suspect that there's a story both to why V6 made chown() more restricted and also why PWB/UNIX specifically didn't take that change from V6, but I don't know if it's been documented anywhere (a casual Internet search didn't turn up anything).

(The System III chown(2) manual page says more or less the same thing as the PWB/UNIX manual page, just more formally, and the kernel code is very similar.)

Maybe why OverlayFS had its readdir() inode number issue

By: cks
12 October 2025 at 02:53

A while back I wrote about readdir()'s inode numbers versus OverlayFS, which discussed an issue where for efficiency reasons, OverlayFS sometimes returned different inode numbers in readdir() than in stat(). This is not POSIX legal unless you do some pretty perverse interpretations (as covered in my entry), but lots of filesystems deviate from POSIX semantics every so often. A more interesting question is why, and I suspect the answer is related to another issue that's come up, the problem of NFS exports of NFS mounts.

What's common in both cases is that NFS servers and OverlayFS both must create an 'identity' for a file (a NFS filehandle and an inode number, respectively). In the case of NFS servers, this identity has some strict requirements; OverlayFS has a somewhat easier life, but in general it still has to create and track some amount of information. Based on reading the OverlayFS article, I believe that OverlayFS considers this expensive enough to only want to do it when it has to.

OverlayFS definitely needs to go to this effort when people call stat(), because various programs will directly use the inode number (the POSIX 'file serial number') to tell files on the same filesystem apart. POSIX technically requires OverlayFS to do this for readdir(), but in practice almost everyone that uses readdir() isn't going to look at the inode number; they look at the file name and perhaps the d_type field to spot directories without needing to stat() everything.

If there was a special 'not a valid inode number' signal value, OverlayFS might use that, but there isn't one (in either POSIX or Linux, which is actually a problem). Since OverlayFS needs to provide some sort of arguably valid inode number, and since it's reading directories from the underlying filesystems, passing through their inode numbers from their d_ino fields is the simple answer.

(This entry was inspired by Kevin Lyda's comment on my earlier entry.)

Sidebar: Why there should be a 'not a valid inode number' signal value

Because both standards and common Unix usage include a d_ino field in the structure readdir() returns, they embed the idea that the stat()-visible inode number can easily be recovered or generated by filesystems purely by reading directories, without needing to perform additional IO. This is true in traditional Unix filesystems, but it's not obvious that you would do that all of the time in all filesystems. The on disk format of directories might only have some sort of object identifier for each name that's not easily mapped to a relatively small 'inode number' (which is required to be some C integer type), and instead the 'inode number' is an attribute you get by reading file metadata based on that object identifier (which you'll do for stat() but would like to avoid for reading directories).

But in practice if you want to design a Unix filesystem that performs decently well and doesn't just make up inode numbers in readdir(), you must store a potentially duplicate copy of your 'inode numbers' in directory entries.

Keeping notes is for myself too, illustrated (once again)

By: cks
11 October 2025 at 03:18

Yesterday I wrote about restarting or redoing something after a systemd service restarts. The non-hypothetical situation that caused me to look into this was that after we applied a package update to one system, systemd-networkd on it restarted and wiped out some critical policy based routing rules. Since I vaguely remembered this happening before, I sighed and arranged to have our rules automatically reapplied on both systems with policy based routing rules, following the pattern I worked out.

Wait, two systems? And one of them didn't seem to have problems after the systemd-networkd restart? Yesterday I ignored that and forged ahead, but really it should have set off alarm bells. The reason the other system wasn't affected was that I'd already solved the problem the right way back in March of 2024, when we first hit this networkd behavior and I wrote an entry about it.

However, I hadn't left myself (or my co-workers) any notes about that March 2024 fix; I'd put it into place on the first machine (then the only machine we had that did policy based routing) and forgotten about it. My only theory is that I wanted to wait and be sure it actually fixed the problem before documenting it as 'the fix', but if so, I made a mistake by not leaving myself any notes that I had a fix in testing. When I recently built the second machine with policy based routing I copied things from the first machine, but I didn't copy the true networkd fix because I'd forgotten about it.

(It turns out to have been really useful that I wrote that March 2024 entry because it's the only documentation I have, and I'd probably have missed the real fix if not for it. I rediscovered it in the process of writing yesterday's entry.)

I know (and knew) that keeping notes is good, and that my memory is fallible. And I still let this slip through the cracks for whatever reason. Hopefully the valuable lesson I've learned from this will stick a bit so I don't stub my toe again.

(One obvious lesson is that I should make a note to myself any time I'm testing something that I'm not sure will actually work. Since it may not work I may want to formally document it in our normal system for this, but a personal note will keep me from completely losing track of it. You can see the persistence of things 'in testing' as another example of the aphorism that there's nothing as permanent as a temporary fix.)

Restarting or redoing something after a systemd service restarts

By: cks
10 October 2025 at 03:21

Suppose, not hypothetically, that your system is running some systemd based service or daemon that resets or erases your carefully cultivated state when it restarts. One example is systemd-networkd, although you can turn that off (or parts of it off, at least), but there are likely others. To clean up after this happens, you'd like to automatically restart or redo something after a systemd unit is restarted. Systemd supports this, but I found it slightly unclear how you want to do this and today I poked at it, so it's time for notes.

(This is somewhat different from triggering one unit when another unit becomes active, which I think is still not possible in general.)

First, you need to put whatever you want to do into a script and a .service unit that will run the script. The traditional way to run a script through a .service unit is:

[Unit]
....

[Service]
Type=oneshot
RemainAfterExit=True
ExecStart=/your/script/here

[Install]
WantedBy=multi-user.target

(The 'RemainAfterExit' is load-bearing, also.)

To get this unit to run after another unit is started or restarted, what you need is PartOf=, which causes your unit to be stopped and started when the other unit is, along with 'After=' so that your unit starts after the other unit instead of racing it (which could be counterproductive when what you want to do is fix up something from the other unit). So you add:

[Unit]
...
PartOf=systemd-networkd.service
After=systemd-networkd.service

(This is what works for me in light testing. This assumes that the unit you want to re-run after is normally always running, as systemd-networkd is.)

In testing, you don't need to have your unit specifically enabled by itself, although you may want it to be for clarity and other reasons. Even if your unit isn't specifically enabled, systemd will start it after the other unit because of the PartOf=. If the other unit is started all of the time (as is usually the case for systemd-networkd), this effectively makes your unit enabled, although not in an obvious way (which is why I think you should specifically 'systemctl enable' it, to make it obvious). I think you can have your .service unit enabled and active without having the other unit enabled, or even present.
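
In concrete terms, testing this looks something like the following, where 'netfix.service' is a hypothetical name for your fix-up unit:

systemctl enable --now netfix.service
systemctl restart systemd-networkd
systemctl status netfix.service
journalctl -u netfix.service -n 20

If the PartOf= and After= settings are right, the status output should show that your unit was (re)started just after the networkd restart, and the journal will have your script's recent output.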

You can declare yourself PartOf a .target unit, and some stock package systemd units do for various services. And a .target unit can be PartOf a .service; on Fedora, 'sshd-keygen.target' is PartOf sshd.service in a surprisingly clever little arrangement to generate only the necessary keys through a templated 'sshd-keygen@.service' unit.

I admit that the whole collection of Wants=, Requires=, Requisite=, BindsTo=, PartOf=, Upholds=, and so on are somewhat confusing to me. In the past, I've used the wrong version and suffered the consequences, and I'm not sure I have them entirely right in this entry.

Note that as far as I know, PartOf= has those Requires= consequences, where if the other unit is stopped, yours will be too. In a simple 'run a script after the other unit starts' situation, stopping your unit does nothing and can be ignored.

(If this seems complicated, well, I think it is, and I think one part of the complication is that we're trying to use systemd as an event-based system when it isn't one.)

Systemd-resolved's new 'DNS Server Delegation' feature (as of systemd 258)

By: cks
9 October 2025 at 03:04

A while ago I wrote an entry about things that resolved wasn't for as of systemd 251. One of those things was arbitrary mappings of (DNS) names to DNS servers, for example if you always wanted *.internal.example.org to query a special DNS server. Systemd-resolved didn't have a direct feature for this, and attempting to attach such name-to-DNS-server mappings to a network interface could go wrong in various ways. Well, time marches on and as of systemd v258 this is no longer the state of affairs.

Systemd v258 introduces systemd.dns-delegate files, which allow you to map DNS names to DNS servers independently from network interfaces. The release notes describe this as:

A new DNS "delegate zone" concept has been introduced, which are additional lookup scopes (on top of the existing per-interface and the one global scope so far supported in resolved), which carry one or more DNS server addresses and a DNS search/routing domain. It allows routing requests to specific domains to specific servers. Delegate zones can be configured via drop-ins below /etc/systemd/dns-delegate.d/*.dns-delegate.

Since systemd v258 is very new I don't have any machines where I can actually try this out, but based on the systemd.dns-delegate documentation, you can use this both for domains that you merely want diverted to some DNS server and also domains that you also want on your search path. Per resolved.conf's Domains= documentation, the latter is 'Domains=example.org' (example.org will be one of the domains that resolved tries to find single-label hostnames in, a search domain), and the former is 'Domains=~example.org' (where we merely send queries for everything under 'example.org' off to whatever DNS= you set, a route-only domain).

(While resolved.conf's Domains= officially promises to check your search domains in the order you listed them, I believe this is strictly for a single 'Domains=' setting for a single interface. If you have multiple 'Domains=' settings, for example in a global resolved.conf, a network interface, and now in a delegation, I think systemd-resolved makes no promises.)

Right now, these DNS server delegations can only be set through static files, not manipulated through resolvectl. I believe fiddling with them through resolvectl is on the roadmap, but for now I guess we get to restart resolved if we need to change things. In fact resolvectl doesn't expose anything to do with them, although I believe read-only information is available via D-Bus and maybe varlink.

Given the timing of systemd v258's release relative to Fedora releases, I probably won't be able to use this feature until Fedora 44 in the spring (Fedora 42 is current and Fedora 43 is imminent, which won't have systemd v258 given that v258 was released only a couple of weeks ago). My current systemd-resolved setup is okay (if it wasn't I'd be doing something else), but I can probably find uses for these delegations to improve it.

Why I have a GPS bike computer

By: cks
8 October 2025 at 03:42

(This is a story about technology. Sort of.)

Many bicyclists with a GPS bike computer probably have it primarily to record their bike rides and then upload them to places like Strava. I'm a bit unusual in that while I do record my rides and make some of them public, and I've come to value this, it's not my primary reason to have a GPS bike computer. Instead, my primary reason is following pre-made routes.

When I started with my recreational bike club, it was well before the era of GPS bike computers. How you followed (or led) our routes back then was through printed cue sheets, which had all of the turns and so on listed in order, often with additional notes. One of the duties of the leader of the ride was printing out a sufficient number of cue sheets in advance and distributing them to interested parties before the start of the ride. If you were seriously into using cue sheets, you'd use a cue sheet holder (nowadays you can only find these as 'map holders', which is basically the same job); otherwise you might clip the cue sheet to a handlebar brake or gear cable or fold it up and stick it in a back jersey pocket.

Printed cue sheets have a number of nice features, such as giving you a lot of information at a glance. One of them is that a well done cue sheet was and is a lot more than just a list of all of the turns and other things worthy of note; it's an organized, well formatted list of these. The cues would be broken up into sensibly chosen sections, with whitespace between them to make it easier to narrow in on the current one, and you'd lay out the page (or pages) so that the cue or section breaks happened at convenient spots to flip the cue sheet around in cue holders or clips. You'd emphasize important turns, cautions, or other things in various ways. And so on. Some cue sheets even had a map of the route printed on the back.

(You needed to periodically flip the cue sheet around and refold it because many routes had too many turns and other cues to fit in a small amount of printed space, especially if you wanted to use a decently large font size for easy readability.)

Starting in the early 2010s, more and more TBN people started using GPS bike computers or smartphones (cf). People began converting our cue sheet routes to computerized GPS routes, with TBN eventually getting official GPS routes. Over time, more and more members got smartphones and GPS units and there was more and more interest in GPS routes and less and less interest in cue sheets. In 2015 I saw the writing on the wall for cue sheets and the club more or less deprecated them, so in August 2016 I gave in and got a GPS unit (which drove me to finally get a smartphone, because my GPS unit assumed you had one). Cue sheet first routes lingered on for some years afterward, but they're all gone by now; everything is GPS route first.

You can still get cue sheets for club routes (the club's GPS routes typically have turn cues and you can export these into something you can print). But what we don't really have any more is the old school kind of well done, organized cue sheets, and it's basically been a decade since ride leaders would turn up with any printed cue sheets at all. These days it's on you to print your own cue sheet if you need it, and also on you to make a good cue sheet from the basic cue sheet (if you care enough to do so). There are some people who still use cue sheets, but they're a decreasing minority and they probably already had the cue sheet holders and so on (which are now increasingly hard to find). A new rider who wanted to use cue sheets would have an uphill struggle and they might never understand why long time members could be so fond of them.

Cue sheets are still a viable option for route following (and they haven't fundamentally changed). They're just not very well supported any more in TBN because they stopped being popular. If you insist on sticking with them, you still can, but it's not going to be a great experience. I didn't move to a GPS unit because I couldn't possibly use cue sheets any more (I still have my cue sheet holder); I moved because I could see the writing on the wall about which one would be the more convenient, more usable option.

Applications to the (computing) technologies of your choice are left as an exercise for the reader.

PS: As a whole I think GPS bike computers are mostly superior to cue sheets for route following, but that's a different discussion (and it depends on what sort of bicycling you're doing). There are points on both sides.

A Firefox issue and perhaps how handling scaling is hard

By: cks
7 October 2025 at 03:09

Over on the Fediverse I shared a fun Firefox issue I've just run into:

Today's fun Firefox bug: if I move my (Nightly) Firefox window left and right across my X display, the text inside the window reflows to change its line wrapping back and forth. I have a HiDPI display with non-integer scaling and some other settings, so I'm assuming that Firefox is now suffering from rounding issues where the exact horizontal pixel position changes its idea of the CSS window width, triggering text reflows as it jumps back and forth by a CSS pixel.

(I've managed to reproduce this in a standard Nightly, although so far only with some of my settings.)

Close inspection says that this isn't quite what's happening, and the underlying problem is happening more often than I thought. What is actually happening is that as I move my Firefox window left and right, a thin vertical black line usually appears and disappears at the right edge of the window (past a scrollbar if there is one). Since I can see it on my HiDPI display, I suspect that this vertical line is at least two screen pixels wide. Under the right circumstances of window width, text size, and specific text content, this vertical black bar takes enough width away from the rest of the window to cause Firefox to re-flow and re-wrap text, creating easily visible changes as the window moves.

A variation of this happens when the vertical black bar isn't drawn but things on the right side of the toolbar and the URL bar area will shift left and right slightly as the window is moved horizontally. If the window is showing a scrollbar, the position of the scroll target in the scrollbar will move left and right, with the right side getting ever so slightly wider or returning to being symmetrical. It's easiest to see this if I move the window sideways slowly, which is of course not something I do often (usually I move windows rapidly).

(This may be related to how X has a notion of sizing windows in non-pixel units if the window asks for it. Firefox in my configuration definitely asks for this; it asserts that it wants to be resized in units of 2 (display) pixels both horizontally and vertically. However, I can look at the state of a Firefox window in X and see that the window size in pixels doesn't change between the black bar appearing and disappearing.)
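
(If you want to check the same things on your own window, this is more or less how I look at that state; both tools ask you to click on the window, and the resize increment line only shows up if the program set one:)

xwininfo                  # reports the window's current width and height in pixels
xprop WM_NORMAL_HINTS     # shows any 'program specified resize increment', eg 2 by 2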

All of this is visible partly because under X and my window manager, windows can redisplay themselves even during an active move operation. If the window contents froze while I dragged windows around, I probably wouldn't have noticed this for some time. Text reflowing as I moved a Firefox window sideways created a quite attention-getting shimmer.

It's probably relevant that I need unusual HiDPI settings and I've also set Firefox's layout.css.devPixelsPerPx to 1.7 in about:config. That was part of why I initially assumed this was a scaling and rounding issue, and why I still suspect that area of Firefox a bit.

(I haven't filed this as a Firefox bug yet, partly because I just narrowed down what was happening in the process of writing this entry.)

What (I think) you need to do basic UDP NAT traversal

By: cks
6 October 2025 at 03:52

Yesterday I wished for a way to do native "blind" WireGuard relaying, without needing to layer something on top of WireGuard. I wished for this both because it's the simplest approach for getting through NATs and the one you need in general under some circumstances. The classic and excellent work on all of the complexities of NAT traversal is Tailscale's How NAT traversal works, which also winds up covering the situation where you absolutely have to have a relay. But, as I understand things, in a fair number of situations you can sort of do without a relay and have direct UDP NAT traversal, although you need to do some extra work to get it and you need additional pieces.

Following RFC 4787, we can divide NAT into two categories, endpoint-independent mapping (EIM) and endpoint-dependent mapping (EDM). In EIM, the public IP and port of your outgoing NAT'd traffic depend only on your internal IP and port, not on the destination (IP or port); in EDM they (also) depend on the destination. NAT'ing firewalls normally NAT based on what could be called "flows". For TCP, flows are a real thing; you can specifically identify a single TCP connection and it's difficult to fake one. For UDP, a firewall generally has no idea of what is a valid flow, and the best it can do is accept traffic that comes from the destination IP and port, which in theory is replies from the other end.

This leads to the NAT traffic traversal trick that we can do for UDP specifically. If we have two machines that want to talk to each other on each other's UDP port 51820, the first thing they need is to learn the public IP and port being used by the other machine. This requires some sort of central coordination server as well as the ability to send traffic to somewhere on UDP port 51820 (or whatever port you care about). In the case of WireGuard, you might as well make this a server on a public IP running WireGuard and have an actual WireGuard connection to it, and the discount 'coordination server' can then be basically the WireGuard peer information from 'wg' (the 'endpoint' is the public IP and port you need).
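
(If your coordination server is itself a WireGuard peer, the discount version really is just reading wg's state on it; 'wg0' here is whatever the interface is actually called:)

# on the public coordination server: each NAT'd peer's public key and its
# current public IP:port, as learned from the peer's most recent traffic
wg show wg0 endpoints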

Once the two machines know each other's public IP and port, they start sending UDP port 51820 (or whatever) packets to each other, to the public IP and port they learned through the coordination server. When each of them sends their first outgoing packet, this creates a 'flow' on their respective NAT firewall which will allow the other machine's traffic in. Depending on timing, the first few packets from the other machine may arrive before your firewall has set up its state to allow them in and will get dropped, so each side needs to keep sending until it works or until it's clear that at least one side has an EDM (or some other complication).

(For WireGuard, you'd need something that sets the peer's endpoint to your now-known host and port value and then tries to send it some traffic to trigger the outgoing packets.)
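
(A sketch of that with plain wg and entirely made-up keys, addresses, and ports; the ping is just a convenient way to generate outgoing tunnel traffic so the NAT flow gets created and WireGuard keeps retrying its handshake:)

# aim our peer entry at the other side's learned public IP and port
wg set wg0 peer 'PEER_PUBLIC_KEY=' endpoint 203.0.113.7:51820
# keep the NAT flow alive once it's established
wg set wg0 peer 'PEER_PUBLIC_KEY=' persistent-keepalive 25
# generate some traffic to the peer's internal WireGuard IP
ping -c 5 10.10.10.2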

As covered in Tailscale's article, it's possible to make direct NAT traversal work in some additional circumstances with increasing degrees of effort. You may be lucky and have a local EDM firewall that can be asked to stop doing EDM for your UDP port (via a number of protocols for this), and otherwise it may be possible to feel your way around one EDM firewall.
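
(One concrete form of 'asking' is requesting an explicit port mapping from a router that speaks UPnP IGD, for example with miniupnpc's command line client; the internal address here is made up:)

# map external UDP port 51820 to this machine's UDP port 51820
upnpc -a 192.168.1.23 51820 51820 UDP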

If you can arrange a natural way to send traffic from your UDP port to your coordination server, the basic NAT setup can be done without needing the deep cooperation of the software using the port; all you need is a way to switch what remote IP and port it uses for a particular peer. Your coordination server may need special software to listen to traffic and decode which peer is which, or you may be able to exploit existing features of your software (for example, by making the coordination server a WireGuard peer). Otherwise, I think you need either some cooperation from the software involved or gory hacks.

Wishing for a way to do 'blind' (untrusted) WireGuard relaying

By: cks
5 October 2025 at 02:32

Over on the Fediverse, I sort of had a question:

I wonder if there's any way in standard WireGuard to have a zero-trust network relay, so that two WG peers that are isolated from each other (eg both behind NAT) can talk directly. The standard pure-WG approach has a public WG endpoint that everyone talks to and which acts as a router for the internal WG IPs of everyone, but this involves decrypting and re-encrypting the WG traffic.

By 'talk directly' I mean that each of the peers has the WireGuard keys of the other and the traffic between the two of them stays encrypted with those keys all the way through its travels. The traditional approach to the problem of two NAT'd machines that want to talk to each other with WireGuard is to have a WireGuard router that both of them talk to over WireGuard, but this means that the router sees the unencrypted traffic between them. This is less than ideal if you don't want to trust your router machine, for example because you want to make it a low-trust virtual machine rented from some cloud provider.

Since we love indirection in computer science, you can in theory solve this with another layer of traffic encapsulation (with a lot of caveats). The idea is that all of the 'public' endpoint IPs of WireGuard peers are actually on a private network, and you route the private network through your public router. Getting the private network packets to and from the router requires another level of encapsulation and unless you get very clever, all your traffic will go through the router even if two WireGuard peers could talk directly. Since WireGuard automatically keeps track of the current public IPs of peers, it would be ideal to do this with WireGuard, but I'm not sure that WG-in-WG can have the routing maintained the way we want.
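
To make the indirection a bit more concrete, here's a rough sketch of what the WG-in-WG version could look like in wg-quick terms on one of the NAT'd machines. All of the keys, addresses, and ports are made up, and as mentioned I'm not sure the routing actually stays maintained the way we'd want:

# /etc/wireguard/wg-outer.conf: the outer tunnel to the untrusted router.
# 10.9.0.0/24 is the private network of 'public' WireGuard endpoint IPs.
[Interface]
PrivateKey = <outer private key>
Address = 10.9.0.2/24

[Peer]
# the untrusted relay, on a real public IP; it routes 10.9.0.0/24 between
# its peers but never sees inside the inner tunnel
PublicKey = <router public key>
Endpoint = 198.51.100.10:51820
AllowedIPs = 10.9.0.0/24
PersistentKeepalive = 25

# /etc/wireguard/wg-inner.conf: the end-to-end encrypted tunnel whose UDP
# packets ride over wg-outer.
[Interface]
PrivateKey = <inner private key>
Address = 192.168.77.2/24
ListenPort = 51821

[Peer]
# the other NAT'd machine, addressed by its outer-network IP
PublicKey = <peer's inner public key>
Endpoint = 10.9.0.3:51821
AllowedIPs = 192.168.77.3/32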

This untrusted relay situation is of course one of the things that 'automatic mesh network on top of WireGuard' systems give you, but it would be nice to be able to do this with native features (and perhaps without an explicit control plane server that machines talk to, although that seems unlikely). As far as I know such systems implement this with their own brand of encapsulation, which I believe requires running their WireGuard stack.

(On Linux you might be able to do something clever with redirecting outgoing WireGuard packets to a 'tun' device connected to a user level program, which then wrapped them up, sent them off, received packets back, and injected the received packets into the system.)

Using systems because you know them already

By: cks
4 October 2025 at 03:35

Every so often on the Fediverse, people ask for advice on a monitoring system to run on their machine (desktop or server), and some of the time Prometheus comes up, and when it does I wind up making awkward noises. On the one hand, we run Prometheus (and Grafana) and are happy with it, and I run separate Prometheus setups on my work and home desktops. On the other hand, I don't feel I can recommend picking Prometheus for a basic single-machine setup, despite running it that way myself.

Why do I run Prometheus on my own machines if I don't recommend that you do so? I run it because I already know Prometheus (and Grafana), and in fact my desktops (re)use much of our production Prometheus setup (but they scrape different things). This is a specific instance (and example) of a general thing in system administration, which is that not infrequently it's simpler for you to use something you already know even if it's not necessarily an exact fit (or even a great fit) for the problem. For example, if you're quite familiar with operating PostgreSQL databases, it might be simpler to use PostgreSQL for a new system where SQLite could do perfectly well and other people would find SQLite much simpler. Especially if you have canned setups, canned automation, and so on all ready to go for PostgreSQL, and not for SQLite.

(Similarly, our generic web server hammer is Apache, even if we're doing things that don't necessarily need Apache and could be done perfectly well or perhaps better with nginx, Caddy, or whatever.)

This has a flipside, where you use a tool because you know it even if there might be a significantly better option, one that would actually be easier overall even accounting for needing to learn the new option and build up the environment around it. What we could call "familiarity-driven design" is a thing, and it can even be a confining thing, one where you shape your problems to conform to the tools you already know.

(And you may not have chosen your tools with deep care and instead drifted into them.)

I don't think there's any magic way to know which side of the line you're on. Perhaps the best we can do is be a little bit skeptical about our reflexive choices, especially if we seem to be sort of forcing them in a situation that feels like it should have a simpler or better option (such as basic monitoring of a single machine).

(In a way it helps that I know so much about Prometheus because it makes me aware of various warts, even if I'm used to them and I've climbed the learning curves.)

Apache .htaccess files are important because they enable delegation

By: cks
3 October 2025 at 03:03

Apache's .htaccess files have a generally bad reputation. For example, lots of people will tell you that they can cause performance problems and you should move everything from .htaccess files into your main Apache configuration, using various pieces of Apache syntax to restrict what configuration directives apply to. The result can even be clearer, since various things can be confusing in .htaccess files (eg rewrites and redirects). Despite all of this, .htaccess files are important and valuable because of one property, which is that they enable delegation of parts of your server configuration to other people.

The Apache .htaccess documentation even spells this out in reverse, in When (not) to use .htaccess files:

In general, you should only use .htaccess files when you don't have access to the main server configuration file. [...]

If you operate the server and would be writing the .htaccess file, you can put the contents of the .htaccess in the main server configuration and make your life easier and Apache faster (and you probably should). But if the web server and its configuration isn't managed as a unitary whole by one group, then .htaccess files allow the people managing the overall Apache configuration to safely delegate things to other people on a per-directory basis, using Unix ownership. This can both enable people to do additional things and reduce the amount of work the central people have to do, letting things scale better.
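
As a concrete sketch (with made-up paths), the delegation looks something like this: the central configuration decides how much power each directory owner gets via AllowOverride, and the owner of the directory writes the .htaccess themselves:

# in the main Apache configuration, maintained by the central people
<Directory "/home/*/public_html">
    # directory owners may control authentication, some per-directory
    # behaviour, indexes, and access limits, but nothing else
    AllowOverride AuthConfig FileInfo Indexes Limit
    Require all granted
</Directory>

# in /home/someuser/public_html/private/.htaccess, maintained by the user
AuthType Basic
AuthName "Private area"
AuthUserFile "/home/someuser/.htpasswd"
Require valid-user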

(The other thing that .htaccess files allow is dynamic updates without having to restart or reload the whole server. In some contexts this can be useful or important, for example if the updates are automatically generated at unpredictable times.)

I don't think it's an accident that .htaccess files emerged in Apache, because one common environment Apache was initially used in was old fashioned multi-user Unix web servers where, for example, every person with a login on the web server might have their own UserDir directory hierarchy. Hence features like suEXEC, so you could let people run CGIs without those CGIs having to run as the web user (a dangerous thing), and also hence the attraction of .htaccess files. If you have a bunch of (graduate) students with their own web areas, you definitely don't want to let all of them edit your departmental web server's overall configuration.

(Apache doesn't solve all your problems here, at least not in a simple configuration; you're still left with the multiuser PHP problem. Our solution to this problem is somewhat brute force.)

These environments are uncommon today but they're not extinct, at least at universities like mine, and .htaccess files (and Apache's general flexibility) remain valuable to us.

Readdir()'s inode numbers versus OverlayFS

By: cks
2 October 2025 at 03:09

Recently I re-read Deep Down the Rabbit Hole: Bash, OverlayFS, and a 30-Year-Old Surprise (via) and this time around, I stumbled over a bit in the writeup that made me raise my eyebrows:

Bash’s fallback getcwd() assumes that the inode [number] from stat() matches one returned by readdir(). OverlayFS breaks that assumption.

I wouldn't call this an 'assumption' so much as 'sane POSIX semantics', although I'm not sure that POSIX absolutely requires this.

As we've seen before, POSIX talks about 'file serial number(s)' instead of inode numbers. The best definition of these is covered in sys/stat.h, where we see that a 'file identity' is uniquely determined by the combination of the inode number and the device ID (st_dev), and POSIX says that 'at any given time in a system, distinct files shall have distinct file identities' while hardlinks have the same identity. The POSIX description of readdir() and dirent.h don't caveat the d_ino file serial numbers from readdir(), so they're implicitly covered by the general rules for file serial numbers.

In theory you can claim that the POSIX guarantees don't apply here since readdir() is only supplying d_ino, the file serial number, not the device ID as well. I maintain that this fails due to a POSIX requirement:

[...] The value of the structure's d_ino member shall be set to the file serial number of the file named by the d_name member. [...]

If readdir() gives one file serial number and a fstatat() of the same name gives another, a plain reading of POSIX is that one of them is lying. Files don't have two file serial numbers, they have one. Readdir() can return duplicate d_ino numbers for files that aren't hardlinks to each other (and I think legitimately may do so in some unusual circumstances), but it can't return something different than what fstatat() does for the same name.

The perverse argument here turns on POSIX's 'at any given time'. You can argue that the readdir() is at one time and the stat() is at another time and the system is allowed to entirely change file serial numbers between the two times. This is certainly not the intent of POSIX's language but I'm not sure there's anything in the standard that rules it out, even though it makes file serial numbers fairly useless since there's no POSIX way to get a bunch of them at 'a given time' so they have to be coherent.

So to summarize, OverlayFS has chosen what are effectively non-POSIX semantics for its readdir() inode numbers (under some circumstances, in the interests of performance) and Bash used readdir()'s d_ino in a traditional Unix way that caused it to notice. Unix filesystems can depart from POSIX semantics if they want, but I'd prefer if they were a bit more shamefaced about it. People (ie, programs) count on those semantics.

(The truly traditional getcwd() way wouldn't have been a problem, because it predates readdir() having d_ino and so doesn't use it (it stat()s everything to get inode numbers). I reflexively follow this pre-d_ino algorithm when I'm talking about doing getcwd() by hand (cf), but these days you want to use the dirent d_ino and if possible d_type, because they're much more efficient than stat()'ing everything.)

How part of my email handling drifted into convoluted complexity

By: cks
1 October 2025 at 01:50

Once upon a time, my email handling was relatively simple. I wasn't on any big mailing lists, so I had almost everything delivered straight to my inbox (both in the traditional /var/mail mbox sense and then through to MH's own inbox folder directory). I did some mail filtering with procmail, but it was all for things that I basically never looked at, so I had procmail write them to mbox files under $HOME/.mail. I moved email from my Unix /var/mail inbox to MH's inbox with MH's inc command (either running it directly or having exmh run it for me). Rarely, I had a mbox file procmail had written that I wanted to read, and at that point I inc'd it either to my MH +inbox or to some other folder.

Later, prompted by wanting to improve my breaks and vacations, I diverted a bunch of mailing lists away from my inbox. Originally I had procmail write these diverted messages to mbox files, then later I'd inc the files to read the messages. Then I found that outside of vacations, I needed to make this email more readily accessible, so I had procmail put them in MH folder directories under Mail/inbox (one of MH's nice features is that your inbox is a regular folder and can have sub-folders, just like everything else). As I noted at the time, procmail only partially emulates MH when doing this, and one of the things it doesn't do is keep track of new, unread ('unseen') messages.

(MH has a general purpose system for keeping track of 'sequences' of messages in a MH folder, so it tracks unread messages based on what is in the special 'unseen' sequence. Inc and other MH commands update this sequence; procmail doesn't.)

Along with this procmail setup I wrote a basic script, called mlists, to report how many messages each of these 'mailing list' inboxes had in them. After a while I started diverting lower priority status emails and so on through this system (and stopped reading the mailing lists); if I got a type of email in any volume that I didn't want to read right away during work, it probably got shunted to these side inboxes. At some point I made mlists optionally run the MH scan command to show me what was in each inbox folder (well, for the inbox folders where this was potentially useful information). The mlists script was still mostly simple and the whole system still made sense, but it was a bit more complex than before, especially when it also got a feature where it auto-reset the current message number in each folder to the first message.

A couple of years ago, I switched the MH frontend I used from exmh to MH-E in GNU Emacs, which changed how I read my email in practice. One of the changes was that I started using the GNU Emacs Speedbar, which always displays a count of messages in MH folders and especially wants to let you know about folders with unread messages. Since I had the hammer of my mlists script handy, I proceeded to mutate it to be what a comment in the script describes as "a discount maintainer of 'unseen'", so that MH-E's speedbar could draw my attention to inbox folders that had new messages.

This is not the right way to do this. The right way to do this is to have procmail deliver messages through MH's rcvstore, which as a MH command can update the 'unseen' sequence properly. But using rcvstore is annoying, partly because you have to use another program to add the locking it needs, so at every point the path of least resistance was to add a bit more hacks to what I already had. I had procmail, and procmail could deliver to MH folder directories, so I used it (and at the time the limitations were something I considered a feature). I had a script to give me basic information, so it could give me more information, and then it could do one useful thing while it was giving me information, and then the one useful thing grew into updating 'unseen'.
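
(For completeness, a sketch of what the rcvstore version of one of these procmail rules would look like, with a made-up mailing list, lockfile, and rcvstore path, since those vary from system to system:)

# file matching messages into an MH folder through rcvstore, which updates
# the folder's 'unseen' sequence properly; the local lockfile supplies the
# extra locking that rcvstore doesn't do on its own
:0 w: rcvstore-somelist.lock
* ^List-Id:.*some-list
| /usr/libexec/nmh/rcvstore +inbox/somelist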

And since I have all of this, it's not even worth the effort of switching to the proper rcvstore approach and throwing a bunch of it away. I'm always going to want the 'tell me stuff' functionality of my mlists script, so part of it has to stay anyway.

Can I see similarities between this and how various of our system tools have evolved, mutated, and become increasingly complex? Of course. I think it's much the same obvious forces involved, because each step seems reasonable in isolation, right up until I've built a discount environment that duplicates much of rcvstore.

Sidebar: an extra bonus bit of complexity

It turns out that part of the time, I want to get some degree of live notification of messages being filed into these inbox folders. I may not look at all or even many of them, but there are some periodic things that I do want to pay attention to. So my discount special hack is basically:

tail -f .mail/procmail-log |
  egrep -B2 --no-group-separator 'Folder: /u/cks/Mail/inbox/'

(This is a script, of course, and I run it in a terminal window.)

This could be improved in various ways but then I'd be sliding down the convoluted complexity slope and I'm not willing to do that. Yet. Give it a few years and I may be back to write an update.

More on the tools I use to read email affecting my email reading

By: cks
30 September 2025 at 03:32

About two years ago I wrote an entry about how my switch from reading email with exmh to reading it in GNU Emacs with MH-E had affected my email reading behavior more than I expected. As time has passed and I've made more extensive customizations to my MH-E environment, this has continued. One of the recent ways I've noticed is that I'm slowly making more and more use of the fact that GNU Emacs is a multi-window editor ('multi-frame' in Emacs terminology) and reading email with MH-E inside it still leaves me with all of the basic Emacs facilities. Specifically, I can create several Emacs windows (frames) and use this to be working in multiple MH folders at the same time.

Back when I used exmh extensively, I mostly had MH pull my email into the default 'inbox' folder, where I dealt with it all at once. Sometimes I'd wind up pulling some new email into a separate folder, but the fact that exmh only really gave me a view of a single folder at a time, combined with a system administrator's need to be regularly responding to email, made that a bit awkward. At first my use of MH-E mostly followed that; I had a single Emacs MH-E window (frame) and within that window I switched between folders. But lately I've been creating more new windows when I want to spend time reading a non-inbox folder, and in turn this has made me much more willing to put new email directly into different (MH) folders rather than funnel it all into my inbox.

(I don't always make a new window to visit another folder, because I don't spend long on many of my non-inbox folders for new email. But for various mailing lists and so on, reading through them may take at least a bit of time so it's more likely I'll decide I want to keep my MH inbox folder still available.)

One thing that makes this work is that MH-E itself has reasonably good support for displaying and working on multiple folders at once. There are probably ways to get MH-E to screw this up and run MH commands with the wrong MH folder as the current folder, so I'm careful that I don't try to have MH-E carry out its pending MH operations in two MH-E folders at the same time. There are areas where MH-E is less than ideal when I'm also using command-line MH tools, because MH-E changes MH's global notion of the current folder any time I have it do things like show a message in some folder. But at least MH-E is fine (in normal circumstances) if I use MH commands to change the current folder; MH-E will just switch it back the next time I have it show another message.

PS: On a purely pragmatic basis, another change in my email handling is that I'm no longer as irritated with HTML emails because GNU Emacs is much better at displaying HTML than exmh was. I've actually left my MH-E setup showing HTML by default, instead of forcing multipart/alternative email to always show the text version (my exmh setup). GNU Emacs and MH-E aren't up to the level of, say, Thunderbird, and sometimes this results in confusing emails, but it's better than it was.

(The situation that seems tricky for MH-E is that people sometimes include inlined images, for example screenshots as part of problem reports, and MH-E doesn't always give any indication that it's even omitting something.)

Recently

4 October 2025 at 00:00

Some meta-commentary on reading: I’ve been trying to read through my enormous queue of articles saved on Instapaper. In general I love reading stuff from the internet but refuse to do it with a computer or a phone: I don’t want all my stuff to glow!

So, this led me to an abortive trial of the Boox Go 7, an eReader that runs a full Android operating system. Rendering Android interfaces with e-ink was pretty bad, and even though it was possible to run the Instapaper Android app, highlighting text didn’t work and the whole fit & finish was off. I love that most e-ink tablets are really purpose-built computers: this didn’t give me that impression.

So, this month I bought a Kobo Libra Color. Kobo and Instapaper recently announced a partnership and first-class support for Instapaper on the devices.

Overall: it’s better than the Boox experience and definitely better than sending articles to one of my Kindles. My notes so far are:

  • I wish it had highlighting. Pretty confident that it’s on the product roadmap, but for now all it can do is sync, archive, and like articles.
  • The Kobo also integrates directly with Overdrive for local libraries! Amazing and unexpected for me, the vast majority of my books are from the Brooklyn Library or the NYPL.
  • The hardware is pretty decent: the color screen is surprisingly useful because a lot of articles have embedded images. The page turn buttons are a little worse than those on my Kindle Oasis because they’re hinged, so they only work well if you press the top of one button and the bottom of the other. I’ve gotten used to it, but wish they worked via a different mechanism.
  • The first run experience was pretty slick.
  • Annoyingly, it goes to fully ‘off’ mode pretty quickly instead of staying in ‘sleep’ mode, and waking it up from being fully off takes about 15 seconds.

Overall: I think my biggest gripe (no highlighting) will get worked out and this’ll be the perfect internet-reading-without-a-computer device.

Anyway, what’s been good this month?

Reading

In Greece, a homeowner traded their property development rights to a builder and kept equity in place by receiving finished apartments in the resulting building, often one for themselves and one or more additional units for rental income or adult children, rather than a one-time cash payout. Applied to the U.S., that becomes: swap a house for an apartment (or three), in the same location, with special exemptions to the tax for the home sale.

From Millenial American Dream. I love a good scheme to fix the housing crisis and densify cities. Definitely seems like a net-positive, but like all other schemes that require big changes to tax codes, would take a miracle to actually implement.

In Zitron’s analysis, it’s always bad. It’s bad when they raise too little money because they’ll run out. It’s bad when they raise too much money because it means they need it.

David Crespo’s critique of Ed Zitron is really strong. Honestly Zitron’s writing, though needed in a certain sense, never hits home for me. In fact a lot of AI critique seems overblown. Maybe I’m warming to it a little.

This Martin Fowler article about how ‘if it hurts, do it more often,’ was good. Summarizes some well-tested wisdom.

I had really mixed feelings about ‘The Autistic Half-Century’ by Byrne Hobart. I probably have some personal stake in this - every test I’ve taken puts me on the spectrum and I have some of the classic features, but like Hobart I’ve never been interested in identifying as autistic. But my discomfort with the subject is all knotted-together and hard to summarize.

One facet I can pick out is my feeling that this era is ending, maybe getting replaced with the ADHD half-century. But the internet I grew up on was pseudonymous, text-oriented, and for me, a calm place. The last ten years have been a slow drift toward real names, and then photograph avatars, and now more and more information being delivered by some person talking into the camera, and that feels really bad, man. Heck, not to drill down on ‘classic traits’, but the number of TikTok videos in which, for some reason, the person doing the talking is also eating at the same time, close to the phone, the mouth-sounds? Like, for a long time it was possible to convey information without thinking about whether you were good-looking or were having a good hair day, and that era is ending because everything is becoming oral culture.

If you believed that Trump winning would mean that everyone who supported him was right to have done so, because they had picked the winner; that the mega-rich AI industry buying its way into all corners of American society would mean that critics of the technology and of using it to displace human labors were not just defeated but meaningfully wrong in their criticisms; that some celebrity getting richer from a crypto rug-pull that ripped off hundreds of thousands of less-rich people would actually vindicate the celebrity’s choice to participate in it, because of how much richer it made them. Imagine holding this as an authentic understanding of how the world works: that the simple binary outcome of a contest had the power to reach back through time and adjust the ethical and moral weight of the contestants’ choices along the way. Maybe, in that case, you would feel differently about what to the rest of us looks like straight-up shit eating.

Defector (here, Albert Burneko) is doing some really good work. See also their article on Hailey Welch and ‘bag culture’.

Altman’s claim of a “Cambrian explosion” rings hollow because any tool built on the perverse incentives of social media is not truly designed with creativity in mind, but addiction. Sora may spark a new wave of digital expression, but it’s just as likely to entrench the same attention economy that has warped our online lives already.

From Parmy Olson writing about OpenAI Sora in Bloomberg. I think this is a really good article: technology is rightfully judged based on what it actually does, not what it could do or is meant to do, and if AI continues to be used for things that make the world worse, it will earn itself a bad reputation.

Watching

Watching Night On Earth was such a joy. It was tender, laugh-out-loud funny, beautiful.

Youth had me crying the whole time, but I’d still recommend it. Got me into Sun Kil Moon from just a few minutes of Mark Kozelek being onscreen as the guitarist at the fancy hotel.

Listening

My old friends at BRNDA have a hit album on their hands. Kind of punk, kind of Sonic Youth, sort of indescribable, lots of good rock flute on this one.

More good ambient-adjacent instrumental rock.

This album feels next door to some Hop Along tracks. It’s a tie between highlighting this track or Holding On which also has a ton of hit quality.


Elsewhere: in September I did a huge bike trip and posted photos of it over in /photos. Might do a full writeup eventually: we rode the Great Allegheny Passage and the C&O canal in four days of pretty hard riding. Everyone survived, we had an amazing time, and I’m extremely glad that we were all able to make the time to do it. It made me feel so zen for about a week after getting back - I have to do that more often!

Porteur bag 2

27 September 2025 at 00:00

Back in May, I wrote about a custom porteur bag that I sewed for use on my bike. That bag served me well on two trips - a solo ride up to Brewster and back, and my semi-yearly ride on the Empire State Trail, from Poughkeepsie to Brooklyn in two days.

But I had a longer ride in the plans for this summer, which I just rode two weeks ago: Pittsburgh to DC, 348 miles in 4 days, with two nights of camping. And at the last minute I decided to make the next version of that bag. Specifically, I wanted to correct three shortcomings of v1:

  • The attachment system was too complex. It had redundant ways to attach to the rack, but the main method was via a shock cord that looped through six grosgrain loops, plus another shock cord that I attached to keep it attached to the back of the rack. Plus hardware that’s used for attachment is better if it’s less flexible. Bungee cords and shock cords are ultimately pretty flawed as ways to attach things on bikes. My frame bag uses non-stretchy paracord, and most bikepacking setups are reliant on voile straps which have a minimal amount of stretch.
  • The bag had no liner, and the material is plasticky and odd-looking. ECOPAK EPLX has a waterproof coating on the outside that makes it look like a plastic bag instead of something from synthetic fabric.
  • The bag had way too many panels: each side was a panel, plus the bottom, plus the flap. These made the shape of the bag messy.

Version 2

Finished bag

It turned out pretty well. Here’s the gist:

Materials

As you can see, there’s only one built-in way to secure this bag to the bike: a webbing strap that attaches to the two buckles and tightens below the rack. This is definitely a superior mechanism to v1: instead of attaching just the front of the bag to the rack and having to deal with the back of the bag separately, this pulls and tensions the whole bag, including its contents, to the rack. It rattles a lot less and is a lot simpler to attach.

On the bike

Construction

This bag is made like a tote bag. The essential ingredient of a tote bag is a piece of fabric cut like this:

Tote bag construction

The lining is simply the same shape again, sewn in the same way and attached to the inside of the bag with its seams facing the opposite direction, so that the seams of the liner and the outer shell of the bag face each other.

Techniques for building tote bags are everywhere on the internet, so it’s a really nice place to start. Plus, the bag body can be made with just one cut of fabric. In this case the bottom of the bag is Cordura and the top is ECOPAK, so I just tweaked the tote bag construction by adding panels of ECOPAK on the left and right of the first step above.

The risky part of this bag was its height and the zipper: ideally it could both zip and the top of the bag could fold over for water resistance. I didn’t accomplish both goals and learned something pretty important: if you’re building a waterproof bag with a zipper, once it’s zipped it’ll be hard to compress because the zipper keeps in air.

But the end of the tour included a surprise thunderstorm in Washington, DC and the zipper kept the wet out, so I count that as a win! The zipper also makes the bag very functional off the bike - using the same strap that attaches it to the bike as a shoulder strap makes it pretty convenient to carry around. This really came in handy when we were moving bikes onto and off of Amtrak trains.

Plans for version three

Mostly kidding - I’m going to actually use this bag for the next few trips instead of tinkering with it. But I do have thoughts, from this experience:

  • I have mixed feelings about using Cordura next time. It’s what everyone does, and it helps with the abrasion that the bag experiences from the rack. But I have a feeling that ECOPAK would hold up for many thousands of miles by itself, and is lighter and more waterproof than cordura. It could be cool to make the whole bag body from the same material.
  • I think I’ll make the next bag taller, and skip the zipper. I have more faith in folding over the material rather than relying on a waterproof zipper.
  • A good system for tightening the straps still eludes me. This setup included sliplock slide adjusters, but it’s still kind of annoying to get the right angles to pull the bag tight.
  • Again, I didn’t add any external pockets. I don’t think they’re absolutely necessary, but as it is, like the previous version, it’s not really possible to access the contents of this bag on the move. Which is fine because I have other bags that are easier to access - on this trip this bag carried my clothes, camp kit, and backup spare tires, so nothing essential in the middle of the day.

Cooking with glasses

21 September 2025 at 00:00

I’ve been thinking about the new Meta Ray-Ban augmented reality glasses. Not because they failed onstage, which they absolutely did. Or because, shortly after, they received rave reviews from Victoria Song at The Verge and MKBHD, two of the most influential tech reviewers. My impression is that the hardware has improved but the software is pretty bad.

Mostly I keep thinking about the cooking demo. Yeah, it bombed. But what if it worked? What if Meta releases the third iteration of this hardware next year and it worked? This post is just questions.

The demos were deeply weird: both Mark Zuckerberg and Jack Mancuso (the celebrity chef) had their AR glasses in a particular demo mode that broadcasted the audio they were hearing and the video they were seeing to the audience and to the live feed of the event.

Of course they need to square the circle of AR glasses being ‘invisible technology’ while still showing people what they do. According to MKBHD’s review, one of the major breakthroughs of the new edition is that you can’t see when other people are looking at the glasses’ built-in display.

I should credit Song for mentioning the privacy issues of the glasses in her review for The Verge. MKBHD briefly talks about his concern that people will use the glasses to be even more distracted during face-to-face interactions. It’s not like everyone’s totally ignoring the implications.

But the implications. Like for example: I don’t know, sometimes I’m cooking for my partner. Do they hear a one-sided conversation between me and the “Live AI” about what to do first, how to combine the ingredients? Without the staged demo’s video-casting, this would have been the demo: a celebrity chef talking to himself like a lunatic, asking how to start.

Or what if we’re cooking together – do we both have the glasses, and maybe the Live AI responds to… both of us? Do we hear each other’s audio, or is it a mystery what the AI is telling the sous-chef? Does it broadcast to a bluetooth speaker maybe?

Maybe the glasses are able to do joint attention and know who else is in your personal space and share with them in a permissioned way? Does this lock everyone into a specific brand of the glasses or is there some open protocol? It’s been a long, long time since I’ve seen a new open protocol.

Marques’s video shows an impressively invisible display to others, but surely there’s some reflection on people’s retinas: could you scan that and read people’s screens off of their eyeballs?


I hate to preempt critiques but the obvious counterpoint is that “the glasses will be good for people with accessibility needs.” Maybe the AI is unnecessary for cooking, and a lot of its applications will be creepy or fascist, but being able to translate or caption text in real-time will be really useful to people with hearing issues.

I think that’s true! In the same way as crypto was useful for human rights and commerce in countries with unreliable exchanges and banks - a point most forcefully spoken by Alex Gladstein. I’m not one to do the ethical arithmetic on the benefits of crypto in those situations versus the roughly $79 billion worth of scams that it has generated or its impact on climate change. My outsider view is that there are more people and more volume of crypto involved in the speculative & scam sector than in the human-rights sector, but I’m willing to be wrong.

I’ve always been intrigued by, but not sold on, Dynamicland and its New York cousin, folk.computer. But every AR demo I see sells me on that paradigm more. For all of the kookiness of the Dynamicland paradigm, it seems designed for a social, playful world. Nothing about the technology guarantees that outcome, but it definitely suggests it. Likewise, the AR glasses might usher in a new age of isolation and surveillance, but that isn’t guaranteed - that just seems like a likely outcome.

I’m searching for a vision of the AR future, just like I want to read about what really happens with AI proliferation, or what I wished crypto advocates would have written. I want people who believe in this tech to write about how it’ll change their lives and the lives of others, and what really can be done about the risks.

What's up with FUTO?

22 October 2025 at 00:00

Some time ago, I noticed some new organization called FUTO popping up here and there. I’m always interested in seeing new organizations that fund open source popping up, and seeing as they claim several notable projects on their roster, I explored their website with interest and gratitude. I was first confused, and then annoyed by what I found. Confused, because their website is littered with bizarre manifestos,1 and ultimately annoyed because they were playing fast and loose with the term “open source”, using it to describe commercial source-available software.

FUTO eventually clarified their stance on “open source”, first through satire and then somewhat more soberly, perpetuating the self-serving myth that “open source” software can privilege one party over anyone else and still be called open source. I mentally categorized them as problematic but hoped that their donations or grants for genuinely open source projects would do more good than the harm done by this nonsense.

By now I’ve learned better. tl;dr: FUTO is not being honest about their “grant program”, they don’t have permission to pass off these logos or project names as endorsements, and they collaborate with and promote mask-off, self-proclaimed fascists.

An early sign that something is off with FUTO is in that “sober” explanation of their “disdain for OSI approved licenses”, where they make a point of criticizing the Open Source Initiative for banning Eric S. Raymond (aka ESR) from their mailing lists, citing right-wing reactionary conspiracy theorist Bryan Lunduke’s blog post on the incident. Raymond is, as you may know, one of the founders of OSI and a bigoted asshole. He was banned from the mailing lists, not because he’s a bigoted asshole, but because he was being a toxic jerk on the mailing list in question. Healthy institutions outgrow their founders. That said, FUTO’s citation and perspective on the ESR incident could be generously explained as a simple mistake, and we should probably match generosity with generosity given their prolific portfolio of open source grants.

I visited FUTO again quite recently as part of my research on Cloudflare’s donations to fascists, and was pleased to discover that this portfolio of grants had grown immensely since my last visit, and included a number of respectable projects that I admire and depend on (and some projects I don’t especially admire, hence arriving there during my research on FOSS projects run by fascists). But something felt fishy about this list – surely I would have heard about it if someone was going around giving big grants to projects like ffmpeg, VLC, musl libc, Tor, Managarm, Blender, NeoVim – these projects have a lot of overlap with my social group and I hadn’t heard a peep about it.

So I asked Rich Felker, the maintainer of musl libc, about the FUTO grant, and he didn’t know anything about it. Rich and I spoke about this for a while and eventually Rich uncovered a transaction in his GitHub sponsors account from FUTO: a one-time donation of $1,000. This payment circumvents musl’s established process for donations from institutional sponsors. The donation page that FUTO used includes this explanation: “This offer is for individuals, and may be available to small organizations on request. Commercial entities wishing to be listed as sponsors should inquire by email.” It’s pretty clear that there are special instructions for institutional donors who wish to receive musl’s endorsement as thanks for their contribution.

The extent of the FUTO “grant program”, at least in the case of musl libc, involved ignoring musl’s established process for institutional sponsors, quietly sending a modest one-time donation to one maintainer, and then plastering the logo of a well-respected open source project on a list of “grant recipients” on their home page. Rich eventually posted on Mastodon to clarify that the use of the musl name and logo here was unauthorized.

I also asked someone I know on the ffmpeg project about the grant that they had received from FUTO and she didn’t know anything about it, either. Here’s what she said:

I’m sure we did not get a grant from them, since we tear each other to pieces over everything, and that would be enough to start a flame war. Unless some dev independently got money from them to do something, but I’m sure that we as a project got nothing. The only grant we’ve received is from the STF last year.

Neovim is another project FUTO lists as a grant recipient, and they also have a separate process for institutional sponsors. I didn’t reach out to anyone to confirm, but FUTO does not appear on the sponsor list so presumably the M.O. is the same. This is also the case for Wireshark, Conduit, and KiCad. GrapheneOS is listed prominently as well, but that doesn’t seem to have worked out very well for them. Presumably ffmpeg received a similar quiet donation from FUTO, rather than something more easily recognizable as a grant.

So, it seems like FUTO is doing some shady stuff and putting a bunch of notable FOSS projects on their home page without good reason to justify their endorsement. Who’s behind all of this?

As far as I can tell, the important figures are Eron Wolf2 and Louis Rossmann.3 Wolf is the founder of FUTO – a bunch of money fell into his lap from founding Yahoo Games before the bottom fell out of Yahoo, and he made some smart investments to grow his wealth, which he presumably used to fund FUTO. Rossmann is a notable figure in the right to repair movement, with a large following on YouTube, who joined FUTO a year later and ultimately moved to Austin to work more closely with them. His established audience and reputation provides a marketable face for FUTO. I had heard of Rossmann prior to learning about FUTO and held him in generally good regard, despite little specific knowledge of his work, simply because we have a common cause in right to repair.

I hadn’t heard of Wolf before looking into FUTO. However, in the course of my research, several people tipped me off to his association with Curtis Yarvin (aka moldbug), and in particular to the use of FUTO’s platform and the credentials of Wolf and Rossman to platform and promote Yarvin. Curtis Yarvin is a full-blown, mask-off, self-proclaimed fascist. A negligible amount of due diligence is required to verify this, but here’s one source from Politico in January 2025:

I’ve interacted with Vance once since the election. I bumped into him at a party. He said, “Yarvin, you reactionary fascist.” I was like, “Thank you, Mr. Vice President, and I’m glad I didn’t stop you from getting elected.”

Ian Ward for Politico

Vice President Vance and numerous other figures in the American right have cited Yarvin as a friend and source of inspiration in shaping policy.4 Among his many political positions, Yarvin has proclaimed that black people are genetically predisposed to a lower IQ than white people, and moreover suggests that black people are inherently suitable for enslavement.5

Yarvin has appeared on FUTO’s social media channels, in particular in an interview published on PeerTube and Odysee, the latter a platform controversial for its role in spreading hate speech and misinformation.6 Yarvin also appeared on stage to “debate” Louis Rossmann in June 2022, in which Yarvin is permitted to speak at length with minimal interruptions or rebuttals to argue for an authoritarian techno-monarchy to replace democracy.

Rossmann caught some flack for this “debate” and gave a milquetoast response in a YouTube comment on this video, explaining that he agreed to this on very short notice as a favor to Eron, who had donated “a million” to Rossmann’s non-profit prior to bringing Rossmann into the fold at FUTO. Rossmann does rebuke Yarvin’s thesis, albeit buried in this YouTube comment rather than when he had the opportunity to do so on-stage during the debate. Don’t argue with fascists, Louis – they aren’t arguing with you, they are pitching their ideas7 to the audience. Smart fascists are experts at misdirection and bad-faith debate tactics and as a consequence Rossmann just becomes a vehicle for fascist propaganda – consult the YouTube comments to see who this video resonates with the most.

In the end, Rossmann seems to regret agreeing to this debate. I don’t think that Eron Wolf regrets it, though – based on his facilitation of this debate and his own interview with Yarvin on the FUTO channel a month later, I can only assume that Wolf considers Yarvin a close associate. No surprise given that Wolf is precisely the kind of insecure silicon valley techbro Yarvin’s rhetoric is designed to appeal to – moderately wealthy but unknown, and according to Yarvin, fit to be a king. Rossmann probably needs to reflect on why he associates with and lends his reputation to an organization that openly and unapologetically platforms its founder’s fascist friends.

In summary, FUTO is not just the product of some eccentric who founded a grant-making institution that funds open source at the cost of making us read his weird manifestos on free markets and oligopoly. It’s a private, for-profit company that associates with and uses their brand to promote fascists. They push an open-washing narrative and they portray themselves as a grant-making institution when, in truth, they’re passing off a handful of small donations as if they were endorsements from dozens of respectable, high-profile open source projects, in an attempt to legitimize themselves, and, indirectly, legitimize people they platform like Curtis Yarvin.

So, if you read this and discover that your project’s name and logo is being proudly displayed on the front page of a fascist-adjacent, washed-up millionaire’s scummy vanity company, and you don’t like that, maybe you should ask them to knock it off? Eron, Louis – you know that a lot of these logos are trademarked, right?


Updated 2025-10-27:

The FUTO website has been updated to clarify the nature of grants versus donations (before, after) and to reduce the appearance of endorsements from the donation recipients – the site is much better after this change.

I spoke to a representative who spoke for the FUTO leadership, and shared positive feedback regarding the changes to the website. I also asked for a follow-up on the matter of platforming fascist activist Curtis Yarvin on their social media channels, and I was provided this official response:

We prefer to spend our time building great software for people to use, funding interesting projects, and making FUTO better every day. We have no interest in engaging in politics as this is a distraction from our work. We’d like to move past this. We don’t have any further comment than that.

I understand the view that distancing oneself from politics can be a productive approach to your work, and the work FUTO does funding great software like Immich is indeed important work that should be conducted relatively free of distractions. However, FUTO is a fundamentally political organization and it does not distance itself from politics. Consider for example the FUTO statement on Open Source, which takes several political positions:

  • Disapproval of the Open Source Initiative and their legitimacy as an authority
  • Disapproval of OSI’s proposed AI standards
  • Disapproval of the “tech oligopoly”
  • Advocacy for an “open source” which is inclusive of restrictions on commercial use
  • Support for Eric S. Raymond’s side in his conflict with OSI
  • Tacit support for Bryan Lunduke

A truer “apolitical” approach would accept the mainstream definition of open source, would not take positions on conflicts with OSI or Eric Raymond, and would be careful not to cite (or platform) controversial figures such as Lunduke.

It is difficult for FUTO’s work to be apolitical at all. Importantly, there are biases in their grants and donations: their selections have a tendency to privilege projects focused on privacy, decentralization, multimedia, communication, and right to repair, all of which suggest the political priorities of FUTO. The choice to fund “source first” software in addition to open source, or not to fund outright closed source software, or not to vet projects based on their community moderation, are also political factors in their process of selecting funding recipients, or at least are seemingly apolitical decisions which ultimately have political consequences.

This brings us to the political nature of the choice to platform Curtis Yarvin. Yarvin is a self-proclaimed fascist who argues openly for fascist politics. Platforming Yarvin on the FUTO channels legitimizes Yarvin’s ideas and his work, and provides curious listeners a funnel that leads to Yarvin’s more radical ideas and into a far-right rabbit-hole. Platforming Yarvin advances the fascist political program.

It should go without saying that it is political to support fascism or fascists. There is an outrageous moral, intellectual, and political contradiction in claiming that it is apolitical to promote a person whose political program is to dismantle democracy and eject people he disagrees with from the political sphere entirely. FUTO should reflect on their values, acknowledge the political nature of their work, and consider the ways in which their work intersects with politics writ large, then make decisions that align their political actions with their political beliefs.


  1. Here’s a quote from another one: “While the vast majority of elite coders have been neutralized by the Oligopoly, the vast majority of those remaining have become preoccupied with cryptocurrency money-making schemes.” ↩︎

  2. Source: What is FUTO? ↩︎

  3. Source: Announcing the Newest Member of FUTO: Louis Rossmann! ↩︎

  4. Source: Politico: The Seven Thinkers and Groups That Have Shaped JD Vance’s Unusual Worldview ↩︎

  5. Source: Inc.: Controversy Rages Over ‘Pro-Slavery’ Tech Speaker Curtis Yarvin ↩︎

  6. Source: The Guardian: Video platform chief says Nazi posts on white superiority do not merit removal ↩︎

  7. You know, authoritarianism, anti-democracy, genocide, etc. ↩︎

Cloudflare bankrolls fascists

24 September 2025 at 00:00

US politics has been pretty fascist lately. The state is filling up concentration camps, engaging in mass state violence against people on the basis of racialized traits, deporting them to random countries without any respect for habeas corpus, exerting state pressure on the free press to censor speech critical of the current administration, and Trump is openly floating the idea of an unconstitutional third term.

Fascism is clearly on the rise, and they’re winning more and more power. None of this is far removed from us in the FOSS community – there are a number of fascists working in FOSS, same as the rest of society. I don’t call them fascists baselessly – someone who speaks out in support of and expresses solidarity with fascists, or who uses fascist dog-whistles or promotes fascist ideology and talking points, or boosts fascist conspiracy theories – well, they’re a fascist.

If one consistently speaks in support of a certain political position and against the opponents of that position then it is correct to identify them with this political position. Facts, as it were, don’t care about feelings, namely the feelings that get hurt when someone is called a fascist. Fascists naturally do not want to be identified as such and will reject the label, but we shouldn’t take their word for it. People should be much more afraid of being called out as fascist than they are afraid of calling someone a fascist. If someone doesn’t want to be called a fascist, they shouldn’t act like one.

It’s in this disturbing political context that I saw an odd post from the Cloudflare blog pop up in my circles this week: Supporting the future of the open web: Cloudflare is sponsoring Ladybird and Omarchy. Based on Ladybird’s sponsorship terms we can assume that these projects received on the order of $100,000 USD from Cloudflare. I find this odd for a few reasons, in particular because one thing that I know these two projects have in common is that they are both run by fascists.

Even at face value this is an unusual pair of projects to fund. I’m all for FOSS projects getting funded, of course, and I won’t complain about a project’s funding on the solitary basis that it’s an odd choice. I will point out that these are odd choices, though, especially Omarchy.

Ladybird makes some sense, given that it’s aligned in principle with Cloudflare’s stated objective to “support the open web”, though I remain bearish on whether new web browser engines are even possible to make.1 Omarchy is a bizarre choice, though – do we really need another pre-customized Arch Linux distribution? And if we do, do we really need a big corporation like Cloudflare to bankroll it? Everyone on /r/unixporn manages to make Arch Linux look pretty for free.

Omarchy is a very weird project to fund, come to think of it. Making an Arch Linux spin technically requires some work, and work is work, I won’t deny it, but most of the work done here is from Arch Linux and Hyprland. Why not fund those, instead? Well, don’t fund Hyprland, since it’s also run by a bunch of fascists, but you get my point.

Anyway, Omarchy and Ladybird are both run by fascists. Omarchy makes this pretty obvious from the outset – on the home page the big YouTube poster image prominently features SuperGrok, which is a pathetically transparent dog-whistle to signal alliance with Elon Musk’s fascist politics. Omarchy is the pet project of David Heinemeier Hansson, aka DHH, who is well known as a rich fascist weirdo.2 One need only consult his blog to browse his weird, racist views on immigration, fat-shaming objections to diverse representation, vaguely anti-feminist/homophobic/rapey rants on consent, and, recently, tone-policing antifascists who celebrate the death of notable fascist Charlie Kirk.

Speaking of tributes to Charlie Kirk, that brings us to Andreas Kling, the project lead for Ladybird, who tweeted this on the occasion of Kirk’s assassination:

RIP Charlie Kirk

I hope many more debate nerds carry on his quest to engage young people with words, not fists.

@awesomekling

Kling has had a few things to say about Kirk on Twitter lately. Here’s another one – give you three guesses as to which “[group]” he objects to punching. You may also recall that Kling achieved some notoriety for his obnoxious response as the maintainer of SerenityOS when someone proposed gender-neutral language for the documentation:

Screenshot of the interaction on GitHub. Kling responds “This project is not an appropriate arena to advertise your personal politics.”

Replacing "he" with "they" in one sentence of the documentation is the kind of "ideologically motivated change" that Serenity's CONTRIBUTING.md apparently aims to prevent, a classic case of the sexist "identities that are not men are inherently political" nonsense. Ladybird has a similar, weirdly defensive policy on "neutrality", and a milquetoast code of conduct based on the Ruby Community Conduct Guideline, which has itself been the subject of many controversies due to its inadequacy leading to real-world incidents of harassment and abuse.

Here’s another one – Kling endorsing white replacement theory in June:

White males are actively discriminated against in tech.

It’s an open secret of Silicon Valley.

One of the last meetings I attended before leaving Apple (in 2017) was management asking us to “keep the corporate diversity targets in mind” when interviewing potential new hires.

The phrasing was careful, but the implication was pretty clear.

I knew in my heart this wasn’t wholesome, but I was too scared to rock the boat at the time.

@awesomekling replying to @danheld3

And in a moment of poetic irony, a few days ago Kling spoke in solidarity with Hansson over his "persecution" for "banal, mainstream positions" on Twitter, in response to Hansson's tweet signal-boosting another notable reactionary tech fascist, Bryan Lunduke.

So, to sum it up, Kling wears his mask a bit better than Hansson, but as far as I’m concerned it seems clear that both projects are run by fascists. If it walks like a fascist and quacks like a fascist… then why is Cloudflare giving them hundreds of thousands of dollars?


  1. I did think it was cute that Kling used a screenshot of Ladybird rendering my blog post back in 2023 to headline one of his articles on Ladybird, though. ↩︎

  2. And a multi-millionaire who can fund his “Arch Linux for Obnoxious Reactionary Assholes” project his own damn self, in my opinion. He also does not want to be called a fascist, in which case he should probably stop being so fucking faschy. ↩︎

  3. Who is ultimately responding to @shaunmmaguire's tweet lying about being told he wouldn't be promoted at Google for being white. Shaun also posts about the threat of "Radical Islamist" "Marxists" to whom Europe "will be lost", falsely accuses the "far left" of committing "almost all mass murders and political violence" in the last few years, and praises Trump for classifying antifascists as terrorists. ↩︎

A better future for JavaScript that won't happen

17 September 2025 at 00:00

In the wake of the largest supply-chain attack in history, the JavaScript community could have a moment of reckoning and decide: never again. As the panic and shame subsides, after compromised developers finish re-provisioning their workstations and rotating their keys, the ecosystem might re-orient itself towards solving the fundamental flaws that allowed this to happen.

After all, people have been sounding the alarm for years that this approach to dependency management is reckless and dangerous and broken by design. Maybe this is the moment when the JavaScript ecosystem begins to understand the importance and urgency of this problem, and begins its course correction. It could leave behind its sprawling dependency trees full of micro-libraries, establish software distribution based on relationships of trust, and incorporate the decades of research and innovation established by more serious dependency management systems.

Perhaps Google and Mozilla, leaders in JavaScript standards and implementations, will start developing a real standard library for JavaScript, one that would make micro-dependencies like left-pad a thing of the past. This could be combined with a consolidation of efforts, merging micro-libraries into larger packages with a more coherent and holistic scope and purpose, which would prune their own dependency trees in turn.

This could be the moment where npm comes to terms with its broken design and, with a well-funded effort (recall that, ultimately, npm is GitHub is Microsoft, market cap $3 trillion USD), develops and rolls out the next generation of package management for JavaScript. It could incorporate the practices developed and proven in Linux distributions, which rarely suffer from these sorts of attacks, by de-coupling development from packaging and distribution, establishing package maintainers who assemble and distribute curated collections of software libraries. By introducing universal signatures for packages of executable code, smaller channels and webs of trust, reproducible builds, and the many other straightforward, obvious techniques used by responsible package managers.

Maybe the other ecosystems that depend on this broken dependency management model – Cargo, PyPI, RubyGems, and many more – are watching this incident and know that the very same crisis looms in their future. Maybe they will change course, too, before the inevitable.

Imagine if other large corporations that depend on and profit from this massive pile of recklessly organized software committed their money and resources to it: by putting their engineers to the task of fixing these problems, by coming together to establish and implement new standards, by directly funding their dependencies and by distributing money through institutions like NLNet, ushering in an era of responsible, sustainable, and secure software development.

This would be a good future, but it’s not the future that lies in wait for us. The future will be more of the same. Expect symbolic gestures – mandatory 2FA will be rolled out in more places, certainly, and the big players will write off meager donations in the name of “OSS security and resilience” in their marketing budgets.

No one will learn their lesson. This has been happening for decades and no one has learned anything from it yet. This is the defining hubris of this generation of software development.

LibreWolf

22 October 2025 at 00:00

I've hit the last straw with Mozilla's antics in Firefox. I've been a Firefox champion for years and years. But every danged upgrade has some new insane junk I don't want or need and they keep popping up these little "tips" to rub my nose in it. I'm sick of it. I'm done. THANKFULLY we have LibreWolf to solve all of that. I'm a heavy browser user migrating from the fox to the wolf and here are my notes so far...

Sometimes you want to skip that CGI Status header

16 September 2025 at 05:06

I'm on a roll with these "check out what I screwed up" posts, so let's add another one to the mix. This one is about the feed reader score service and a particular foible it had ever since it launched until a few minutes ago when I finally fixed it.

The foible is: no matter what you sent to the project's test server, it would return a full copy of the meaningless placeholder test feed with a 200 response code. Even if you nailed the If-Modified-Since and/or If-None-Match headers, you'd get a full response as if you fired off an unconditional request like the worst clownware.

Well, in this case, the clown is me, and here's how it happened.

I've been writing hacky little CGI scripts for testing purposes for a very long time, and they generally look like this:

#!/bin/bash
set -e -u -o pipefail

echo "Content-type: text/plain"
echo ""
echo "Something or other here"

Simple and stupid, right? You don't need a Status header, and Apache will happily send a 200 for you. You can set one if you're deliberately overriding it, naturally. I'd only do that for weird stuff, like forcing a 404, or a 301 redirect, or something like that.

What I didn't appreciate is that you can provide a body in there, and as long as you emit the right headers (Last-Modified and/or ETag), Apache will do the legwork to see if they match whatever the client sent. It'll actually absorb the body and will send out a nice "304 Not Modified" instead.
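
Here's roughly what the conditional-friendly version of that hacky little script looks like. Just a sketch - the Last-Modified value and the ETag are made up - but the shape is the point: send validators, skip the Status.

#!/bin/bash
set -e -u -o pipefail

# No Status header on purpose: Apache supplies the 200 itself, and because
# we emit validators, it compares them against If-Modified-Since /
# If-None-Match and answers with a body-less 304 when they match.
echo "Content-type: text/plain"
echo "Last-Modified: Tue, 16 Sep 2025 05:06:00 GMT"
echo 'ETag: "placeholder-feed-v1"'
echo ""
echo "Something or other here"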

The problem comes from the fact that I wrote a dumb little library to handle CGI stuff a while back, and one of the things it does is to make sure that every single request gets some kind of response. It starts out with status set to 500 (as in "server error"), and it's on you and your handler to set it to something appropriate. This way, nothing goes out the door until you do all of the work required to set everything up properly: the fact it's a 200, an appropriate content-type, access controls on who/what can load it, and all of that other good stuff. Then it always sends a Status header which matches its internal state.

As a result, it was happily blasting out "Status: 200 OK" when it would run within the context of the fake testing feed endpoint which is part of this project. I now know that this will make Apache not do its usual conditional test and instead it will ship the body as-is... with the 200 you told it to use.

Why did it take me a while to figure this out? Well, did you notice that my dumb little shell script never sends a Status header? It doesn't try to cover all of the angles because it's "just a dumb little script". In being implicit about the result code, it ends up enabling a little useful bit of behavior that I wish I had all along.

If you're one of the people participating in the test, and your program just started getting 304s from my end, that's why! You're not hallucinating. You're just seeing the outcome of me finally realizing "why this thing is not like that thing".

As for what the fix was, it's "allow this program to suppress Status generation when it's going down the happy path". It's stupid, but it works.

Side note: this is only talking about the feed score test endpoint. My actual feed (i.e., the thing with this post in it) works completely differently and has been doing 304s properly the whole time.

Solving the NYTimes Pips puzzle with a constraint solver

18 October 2025 at 15:41

The New York Times recently introduced a new daily puzzle called Pips. You place a set of dominoes on a grid, satisfying various conditions. For instance, in the puzzle below, the pips (dots) in the purple squares must sum to 8, there must be fewer than 5 pips in the red square, and the pips in the three green squares must be equal. (It doesn't take much thought to solve this "easy" puzzle, but the "medium" and "hard" puzzles are more challenging.)

The New York Times Pips puzzle from Oct 5, 2025 (easy). Hint: What value must go in the three green squares?

I was wondering about how to solve these puzzles with a computer. Recently, I saw an article on Hacker News—"Many hard LeetCode problems are easy constraint problems"—that described the benefits and flexibility of a system called a constraint solver. A constraint solver takes a set of constraints and finds solutions that satisfy the constraints: exactly what Pips requires.

I figured that solving Pips with a constraint solver would be a good way to learn more about these solvers, but I had several questions. Did constraint solvers require incomprehensible mathematics? How hard was it to express a problem? Would the solver quickly solve the problem, or would it get caught in an exponential search?

It turns out that using a constraint solver was straightforward; it took me under two hours from knowing nothing about constraint solvers to solving the problem. The solver found solutions in milliseconds (for the most part). However, there were a few bumps along the way. In this blog post, I'll discuss my experience with the MiniZinc1 constraint modeling system and show how it can solve Pips.

Approaching the problem

Writing a program for a constraint solver is very different from writing a regular program. Instead of telling the computer how to solve the problem, you tell it what you want: the conditions that must be satisfied. The solver then "magically" finds solutions that satisfy the problem.

To solve the problem, I created an array called pips that holds the number of domino pips at each position in the grid. Then, the three constraints for the above problem can be expressed as follows. You can see how the constraints directly express the conditions in the puzzle.

constraint pips[1,1] + pips[2,1] == 8;
constraint pips[2,3] < 5;
constraint all_equal([pips[3,1], pips[3,2], pips[3,3]]);

Next, I needed to specify where dominoes could be placed for the puzzle. To do this, I defined an array called grid that indicated the allowable positions: 1 indicates a valid position and 0 indicates an invalid position. (If you compare with the puzzle at the top of the article, you can see that the grid below matches its shape.)

grid = [|
1,1,0|
1,1,1|
1,1,1|];

I also defined the set of dominoes for the problem above, specifying the number of spots in each half:

spots = [|5,1| 1,4| 4,2| 1,3|];

So far, the constraints directly match the problem. However, I needed to write some more code to specify how these pieces interact. But before I describe that code, I'll show a solution. I wasn't sure what to expect: would the constraint solver give me a solution or would it spin forever? It turned out to find the unique solution in 109 milliseconds, printing out the solution arrays. The pips array shows the number of pips in each position, while the dominogrid array shows which domino (1 through 4) is in each position.

pips = 
[| 4, 2, 0
 | 4, 5, 3
 | 1, 1, 1
 |];
dominogrid = 
[| 3, 3, 0
 | 2, 1, 4
 | 2, 1, 4
 |];

The text-based solution above is a bit ugly. But it is easy to create graphical output. MiniZinc provides a JavaScript API, so you can easily display solutions on a web page. I wrote a few lines of JavaScript to draw the solution, as shown below. (I just display the numbers since I was too lazy to draw the dots.) Solving this puzzle is not too impressive—it's an "easy" puzzle after all—but I'll show below that the solver can also handle considerably more difficult puzzles.

Graphical display of the solution.

Details of the code

While the above code specifies a particular puzzle, a bit more code is required to define how dominoes and the grid interact. This code may appear strange because it is implemented as constraints, rather than the procedural operations in a normal program.

My main design decision was how to specify the locations of dominoes. I considered assigning a grid position and orientation to each domino, but it seemed inconvenient to deal with multiple orientations. Instead, I decided to position each half of the domino independently, with an x and y coordinate in the grid.2 I added a constraint that the two halves of each domino had to be in neighboring cells, that is, either the X or Y coordinates had to differ by 1.

constraint forall(i in DOMINO) (abs(x[i, 1] - x[i, 2]) + abs(y[i, 1] - y[i, 2]) == 1);

It took a bit of thought to fill in the pips array with the number of spots on each domino. In a normal programming language, one would loop over the dominoes and store the values into pips. However, here it is done with a constraint so the solver makes sure the values are assigned. Specifically, for each half-domino, the pips array entry at the domino's x/y coordinate must equal the corresponding spots on the domino:

constraint forall(i in DOMINO, j in HALF) (pips[y[i,j], x[i, j]] == spots[i, j]);

I decided to add another array to keep track of which domino is in which position. This array is useful to see the domino locations in the output, but it also keeps dominoes from overlapping. I used a constraint to put each domino's number (1, 2, 3, etc.) into the occupied position of dominogrid:

constraint forall(i in DOMINO, j in HALF) (dominogrid[y[i,j], x[i, j]] == i);

Next, how do we make sure that dominoes only go into positions allowed by grid? I used a constraint that a square in dominogrid must be empty or the corresponding grid must allow a domino.3 This uses the "or" condition, which is expressed as \/, an unusual stylistic choice. (Likewise, "and" is expressed as /\. These correspond to the logical symbols ∨ and ∧.)

constraint forall(i in 1..H, j in 1..W) (dominogrid[i, j] == 0 \/ grid[i, j] != 0);

Honestly, I was worried that I had too many arrays and the solver would end up in a rathole ensuring that the arrays were consistent. But I figured I'd try this brute-force approach and see if it worked. It turns out that it worked for the most part, so I didn't need to do anything more clever.

Finally, the program requires a few lines to define some constants and variables. The constants below define the number of dominoes and the size of the grid for a particular problem:

int: NDOMINO = 4; % Number of dominoes in the puzzle
int: W = 3; % Width of the grid in this puzzle
int: H = 3; % Height of the grid in this puzzle

Next, datatypes are defined to specify the allowable values. This is very important for the solver; it is a "finite domain" solver, so limiting the size of the domains reduces the size of the problem. For this problem, the values are integers in a particular range, called a set:

set of int: DOMINO = 1..NDOMINO; % Dominoes are numbered 1 to NDOMINO
set of int: HALF = 1..2; % The domino half is 1 or 2
set of int: xcoord = 1..W; % Coordinate into the grid
set of int: ycoord = 1..H;

At last, I define the sizes and types of the various arrays that I use. One very important syntax is var, which indicates variables that the solver must determine. Note that the first two arrays, grid and spots, do not have var since they are constant, initialized to specify the problem.

array[1..H,1..W] of 0..1: grid; % The grid defining where dominoes can go
array[DOMINO, HALF] of int: spots; % The number of spots on each half of each domino
array[DOMINO, HALF] of var xcoord: x; % X coordinate of each domino half
array[DOMINO, HALF] of var ycoord: y; % Y coordinate of each domino half
array[1..H,1..W] of var 0..6: pips; % The number of pips (0 to 6) at each location.
array[1..H,1..W] of var 0..NDOMINO: dominogrid; % The domino sequence number at each location
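
The only pieces not shown above are the standard MiniZinc boilerplate: an include for the global constraints (which provides all_equal and alldifferent) and a solve item, plus an optional output item to print the result. It looks something like this (a sketch, not the exact code from my file):

include "globals.mzn"; % global constraints such as all_equal and alldifferent

solve satisfy; % no objective; just find an assignment that satisfies every constraint

output ["pips =\n"] ++
       [show(pips[i, j]) ++ (if j == W then "\n" else ", " endif) | i in 1..H, j in 1..W];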

You can find all the code on GitHub. One weird thing is that because the code is not procedural, the lines can be in any order. You can use arrays or constants before you use them. You can even move include statements to the end of the file if you want!

Complications

Overall, the solver was much easier to use than I expected. However, there were a few complications.

By changing a setting, the solver can find multiple solutions instead of stopping after the first. However, when I tried this, the solver generated thousands of meaningless solutions. A closer look showed that the problem was that the solver was putting arbitrary numbers into the "empty" cells, creating valid but pointlessly different solutions. It turns out that I didn't explicitly forbid this, so the sneaky constraint solver went ahead and generated tons of solutions that I didn't want. Adding another constraint fixed the problem. The moral is that even if you think your constraints are clear, solvers are very good at finding unwanted solutions that technically satisfy the constraints.4

A second problem is that if you do something wrong, the solver simply says that the problem is unsatisfiable. Maybe there's a clever way of debugging, but I ended up removing constraints until the problem could be satisfied, and then seeing what I did wrong with that constraint. (For instance, I got the array indices backward at one point, making the problem insoluble.)

The most concerning issue is the unpredictability of the solver: maybe it will take milliseconds or maybe it will take hours. For instance, the Oct 5 hard Pips puzzle (below) caused the solver to take minutes for no apparent reason. However, the MiniZinc IDE supports different solver backends. I switched from the default Gecode solver to Chuffed, and it immediately found numerous solutions, 384 to be precise. (The Pips puzzles sometimes have multiple solutions, which players find controversial.) I suspect that the multiple solutions messed up the Gecode solver somehow, perhaps because it couldn't narrow down a "good" branch in the search tree. For a benchmark of the different solvers, see the footnote.5
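
(For reference, if you run MiniZinc from the command line rather than the IDE, the backend is just a flag. Something like the following, where the model and data file names are placeholders:)

minizinc --solver gecode pips.mzn oct5-hard.dzn
minizinc --solver chuffed --all-solutions pips.mzn oct5-hard.dzn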

Two of the 384 solutions to the NYT Pips puzzle from Oct 5, 2025 (hard difficulty).

How does a constraint solver work?

If you were writing a program to solve Pips from scratch, you'd probably have a loop to try assigning dominoes to positions. The problem is that the search space grows exponentially. If you have 16 dominoes, there are 16 choices for the first domino, 15 choices for the second, and so forth, so about 16! combinations in total, and that's ignoring orientations. You can think of this as a search tree: at the first step, you have 16 branches. For the next step, each branch has 15 sub-branches. Each sub-branch has 14 sub-sub-branches, and so forth.

An easy optimization is to check the constraints after each domino is added. For instance, as soon as the "less than 5" constraint is violated, you can backtrack and skip that entire section of the tree. In this way, only a subset of the tree needs to be searched; the number of branches will be large, but hopefully manageable.

A constraint solver works similarly, but in a more abstract way. The constraint solver assigns values to the variables, backtracking when a conflict is detected. Since the underlying problem is typically NP-complete, the solver uses heuristics to attempt to improve performance. For instance, variables can be assigned in different orders. The solver attempts to generate conflicts as soon as possible so large pieces of the search tree can be pruned sooner rather than later. (In the domino case, this corresponds to placing dominoes in places with the tightest constraints, rather than scattering them around the puzzle in "easy" spots.)

Another technique is constraint propagation. The idea is that you can derive new constraints and catch conflicts earlier. For instance, suppose you have a problem with the constraints "a equals c" and "b equals c". If you assign "a=1" and "b=2", you won't find a conflict until later, when you try to find a value for "c". But with constraint propagation, you can derive a new constraint "a equals b", and the problem will turn up immediately. (Solvers handle more complicated constraint propagation, such as inequalities.) The tradeoff is that generating new constraints takes time and makes the problem larger, so constraint propagation can make the solver slower. Thus, heuristics are used to decide when to apply constraint propagation.

Researchers are actively developing new algorithms, heuristics, and optimizations6 such as backtracking more aggressively (called "backjumping"), keeping track of failing variable assignments (called "nogoods"), and leveraging Boolean SAT (satisfiability) solvers. Solvers compete in annual challenges to test these techniques against each other. The nice thing about a constraint solver is that you don't need to know anything about these techniques; they are applied automatically.

Conclusions

I hope this has convinced you that constraint solvers are interesting, not too scary, and can solve real problems with little effort. Even as a beginner, I was able to get started with MiniZinc quickly. (I read half the tutorial and then jumped into programming.)

One reason to look at constraint solvers is that they are a completely different programming paradigm. Using a constraint solver is like programming on a higher level, not worrying about how the problem gets solved or what algorithm gets used. Moreover, analyzing a problem in terms of constraints is a different way of thinking about algorithms. Some of the time it's frustrating when you can't use familiar constructs such as loops and assignments, but it expands your horizons.

Finally, writing code to solve Pips is more fun than solving the problems by hand, at least in my opinion, so give it a try!

For more, follow me on Bluesky (@righto.com), Mastodon (@kenshirriff@oldbytes.space), RSS, or subscribe here.

Solution to the Pips puzzle, September 21, 2025 (hard). This puzzle has regions that must all be equal (=) and regions that must all be different (≠). Conveniently, MiniZinc has all_equal and alldifferent constraint functions.

Notes and references

  1. I started by downloading the MiniZinc IDE and reading the MiniZinc tutorial. The MiniZinc IDE is straightforward, with an editor window at the top and an output window at the bottom. Clicking the "Run" button causes it to generate a solution.

    Screenshot of the MiniZinc IDE. Click for a larger view.

  2. It might be cleaner to combine the X and Y coordinates into a single Point type, using a MiniZinc record type.

  3. I later decided that it made more sense to enforce that dominogrid is empty if and only if grid is 0 at that point, although it doesn't affect the solution. This constraint uses the "if and only if" operator <->.

    constraint forall(i in 1..H, j in 1..W) (dominogrid[i, j] == 0 <-> grid[i, j] == 0);
    
     
  4. To prevent the solver from putting arbitrary numbers in the unused positions of pips, I added a constraint to force these values to be zero:

    constraint forall(i in 1..H, j in 1..W) (grid[i, j] == 0 -> pips[i, j] == 0);
    

    Generating multiple solutions had a second issue, which I expected: A symmetric domino can be placed in two redundant ways. For instance, a double-six domino can be flipped to produce a solution that is technically different but looks the same. I fixed this by adding constraints for each symmetric domino to allow only one of the two redundant positions. The constraint below forces a preferred orientation for symmetric dominoes.

    constraint forall(i in DOMINO) (spots[i,1] != spots[i,2] \/ x[i,1] > x[i,2] \/ (x[i,1] == x[i,2] /\ y[i,1] > y[i,2]));
    

    To enable multiple solutions in MiniZinc, the setting is under Show Configuration Editor > User Defined Behavior > Satisfaction Problems or the --all flag from the command line. 

  5. MiniZinc has five solvers that can solve this sort of integer problem: Chuffed, OR Tools CP-SAT, Gecode, HiGHS, and Coin-OR BC. I measured the performance of the five solvers against 20 different Pips puzzles. Most of the solvers found solutions in under a second, most of the time, but there is a lot of variation.

    Timings for different solvers on 20 Pip puzzles.

    Overall, Chuffed had the best performance on the puzzles that I tested, taking well under a second. Google's OR-Tools won all the categories in the 2025 MiniZinc challenge, but it was considerably slower than Chuffed for my Pips programs. The default Gecode solver performed very well most of the time, but it did terribly on a few problems, taking over 15 minutes. HiGHS was slower in general, taking a few minutes on the hardest problems, but it didn't fail as badly as Gecode. (Curiously, Gecode and HiGHS sometimes found different problems to be difficult.) Finally, Coin-OR BC was uniformly bad; at best it took a few seconds, but one puzzle took almost two hours and others weren't solved before I gave up after two hours. (I left Coin-OR BC off the graph because it messed up the scale.)

    Don't treat these results too seriously because different solvers are optimized for different purposes. (In particular, Coin-OR BC is designed for linear problems.) But the results demonstrate the unpredictability of solvers: maybe you get a solution in a second and maybe you get a solution in hours. 

  6. If you want to read more about solvers, Constraint Satisfaction Problems is an overview presentation. The Gecode algorithms are described in a nice technical report: Constraint Programming Algorithms used in Gecode. Chuffed is more complicated: "Chuffed is a state of the art lazy clause solver designed from the ground up with lazy clause generation in mind. Lazy clause generation is a hybrid approach to constraint solving that combines features of finite domain propagation and Boolean satisfiability." The Chuffed paper Lazy clause generation reengineered and slides are more of a challenge.  

I love the smell of autopoiesis in the morning

15 October 2025 at 09:52

I belatedly joined the 3D printer club and got myself a brand new Bambu P1S. (Days before the next-gen P2S was released it turns out, but I got the P1S on sale so I don’t feel so bad.)

I have wooden floors.

So I’ve been printing printer foot adaptors to make the optional rubber anti-vibration feet (which I also bought) more effective.

Here’s the 3D model for the adaptors. You open it in the app, select your printer and filament, and it prints directly.

Here are the adaptors printing on my insta.

I am very very into this as a concept: using the printer to print parts of itself. It is a glimpse of the old vision of 3D printers as bootstrapping machines. The old vision and maybe the future too.

(Bootstrapping (Wikipedia) in the self-starting sense, i.e. "analogous to one who would lift himself by his own bootstraps," which is where booting/rebooting/etc for computers comes from.)

For instance RepRap:

a free desktop 3D printer capable of printing plastic objects. Since many parts of RepRap are made from plastic and RepRap prints those parts, RepRap self-replicates by making a kit of itself - a kit that anyone can assemble given time and materials. It also means that - if you’ve got a RepRap - you can print lots of useful stuff, and you can print another RepRap for a friend…

RepRap started in 2005. Remember the heady days of maker culture? People really felt like we were on a path to decentralising and democratising manufacturing; putting the means of production in the hands of everyone.

I believe the RepRap project is now dormant.

But it harks back to an older idea, being John von Neumann’s concept of a universal constructor:

the self-replicating machine consists of three parts: a “description” of (‘blueprint’ or program for) itself, a universal constructor mechanism that can read any description and construct the machine (sans description) encoded in that description, and a universal copy machine that can make copies of any description.

Autopoiesis, right? A system that creates and maintains itself. Life!

In 1980 Robert Freitas took the universal constructor idea for a Nasa study and came up with the self-replicating interstellar probe, still the most rapid architecture for reaching every star system in the Milky Way (due to its exponential growth). On reaching a new star system, REPRO would build mines and factories etc in the local asteroid belt or whatever to make >1 copies of itself, which would then travel onward to multiple other star systems, rinse and repeat.

BTW:

As previously discussed, given the effectiveness of the Von Neumann probe approach, I feel that it’s likely that our own civilisation is merely a stepping stone (as previously discussed, 2022) in someone else’s exploration of the galaxy.

So when I’m printing parts for my new printer using my new printer, I am not just taking a shortcut around ordering accessories off Amazon.

Nor am I just being part of a nascent democratic mode of production.

But rather:

I am participating in the first inch of the first step to develop the tools and technologies that will one day be used to build the very first ancestral self-replicating von Neumann probe and so light the fuse on the human exploration of our home galaxy.

In a microscopic way anyhow. The feeling is there, a faint shimmer, the glint of distant stars in the deep dark. And the printer foot adaptors are handy too. Bright orange!

Cyborgs vs rooms, two visions for the future of computing

13 October 2025 at 21:38

Loosely I can see two visions for the future of how we interact with computers: cyborgs and rooms.

The first is where the industry is going today; I’m more interested in the latter.

Cyborgs

Near-term, cyborgs means wearables.

The original definition of cyborg by Clynes and Kline in 1960 was of a human adapting its body to fit a new environment (as previously discussed).

Apple AirPods are cyborg enhancements: transparency mode helps you hear better.

Meta AI glasses augment you with better memory and the knowledge of the internet – you mutter your questions and the answer is returned in audio, side-loaded into your working memory. Cognitively this feels just like thinking hard to remember something.

I can see a future being built out where I have a smart watch that gives me a sense of direction, a smart ring for biofeedback, smart earphones and glasses for perfect recall and anticipation… Andy Clark’s Natural Born Cyborgs (2003) lays out why this is perfectly impedance-matched to how our brains work already.

Long term? I’ve joked before about a transcranial magnetic stimulation helmet that would walk my legs to work and this is the cyborg direction of travel: nootropics, CRISPR gene therapy, body modification and slicing open your fingertips to insert magnets for an electric field sixth sense.

But you can see the cyborg paradigm in action with hardware startups today trying to make the AI-native form factor of the future: lapel pins, lanyards, rings, Neuralink and other brain-computer interfaces…

When tech companies think about the Third Device - the mythical device that comes after the PC and the smartphone - this is what they reach for: the future of the personal computer is to turn the person into the computer.

Rooms

Contrast augmented users with augmented environments. Notably:

  • Dynamicland (2018) – Bret Victor’s vision of "a computer that is a place," a programmable room
  • Put-that-there (1980) – MIT research into room-scale, multimodal (voice and gesture) conversational computing
  • Project Cybersyn (1971) – Stafford Beer’s room-sized cybernetic brain for the economy of Chile
  • SAGE (as previously discussed) (1958–) – the pinnacle of computing before the PC, group computing out of the Cold War.

And innumerable other HCI projects…

The vision of room-scale computing has always had factions.

Is it ubiquitous computing (ubicomp), in which computing power is embedded in everything around us, culminating in smart dust? Is it ambient computing, which also supposes that computing will be invisible? Or calm computing, which is more of a design stance that computing must mesh appropriately with our cognitive systems instead of chasing attention?

So there’s no good word for this paradigm, which is why I call it simply room-scale, which is the scale that I can act as a user.

I would put smart speakers in the room-scale/augmented environments bucket: Amazon Alexa, Google Home, all the various smart home systems like Matter, and really the whole internet of things movement – ultimately it’s a Star Trek Holodeck/"Computer…" way of seeing the future of computer interaction.

And robotics too. Roomba, humanoid robots that do our washing up, and tabletop paper robots that act as avatars for your mates, all part of this room-scale paradigm.


Rather than “cyborg”, I like sci-fi author Becky Chambers’ concept of somaforming (as previously discussed), the same concept but gentler.

Somaforming vs terraforming, changing ourselves to adapt to a new environment, or changing the environment to adapt to us.


Both cyborgs and rooms are decent North Stars for our collective computing futures, you know?

Both can be done in good ways and ugly ways. Both can make equal use of AI.

Personally I’m more interested in room-scale computing and where that goes. Multi-actor and multi-modal. We live in the real world and together with other people, that’s where computing should be too. Computers you can walk into… and walk away from.

So it’s an interesting question: while everyone else is building glasses, AR, and AI-enabled cyborg prosthetics that hang round your neck, what should we build irl, for the rooms where we live and work? What are the core enabling technologies?

It has been overlooked I think.


More posts tagged: glimpses-of-our-cyborg-future (15).

Parables involving gross bodily functions

10 October 2025 at 09:28

When I was at uni I went with some friends into a cellar with a druid to experience his home-made 6 foot tall light-strobing monolith, what I now know was a type of Dreammachine, which was a pair of stacked out-of-sync strobing lights so bright that it makes the receptors in your eyes produce concentric circles of moving colour even with your eyelids tight shut.

His performance of changing and interconnected strobe frequencies, which he had composed specially for my friend’s birthday, was exciting but also pretty overwhelming and long story short it made me puke.


Another story.

When I was a teenager there was a guy in our circle who - and I don't know how this came up and apologies in advance for the image - when we were discussing what your poo might feel like, because truthfully you don't know right, it's there at the bottom of the water in the loo and you see it and can imagine the feeling of it but you don't know with the actual sensory accuracy of your hands - he said he did know, and when we asked he continued that he had wondered once, so he had reached into the loo and picked one up, it's just washing your hands after that's all, he said.

And we were all vocally disgusted but actually I have always admired him, then and today decades later (I remember exactly where we were standing outside in the garden when he told us) for his clarity of purpose and also with jealousy of his secret knowledge. Anyway he is a successful Hollywood producer now.


Here’s some advice that has never left me since I read it: "When stumped by a life choice, choose “enlargement” over happiness" (as previously discussed).

I feel like sometimes I don't do some enlarging things, big or small, because it never occurs to me because it would involve something vaguely uncomfortable. Not gross necessarily, but uncomfortable like going into a room that I don't feel like I belong in, or doing something publicly I don't know how to do, with automatic unpleasant embarrassment included. An assumed micro-unhappiness.

I’m pretty good at challenging myself when the potential discomfort is visible to me, but the non-obvious ones are insidious.

It’s important to recognise these invisible discomforts because maybe actually they don’t matter, and they’re inhibiting me from doing things.

I’m talking about the opposite of a velleity.

David Chapman’s concept of velleity, as discussed by DRMacIver, which is where I learnt about it:

A “velleity” is a wish so weak that it does not occur to you to act on it. These are desires you instantly dismiss because they do not match your picture of what you think you want.

(But you should identify them and pursue them! That’s the point!)

So these anti-velleities, micro-discomforts, when unrecognised, act as blinkers so I don't even think of the possibilities I am denying myself.

The future Hollywood producer was able to see past that.

Wise beyond his years!

Not all enlargements matter, of course. Gotta kiss a lot of frogs.

And what else is life for, if not puking in a druid’s cellar on a friend’s birthday? Answer me that.

The Campaign for Vertical Television

1 October 2025 at 18:47

TV networks should have an extra channel that shows exclusively portrait mode content.

i.e. BBC Vertical.

Also: Netflix Vertical, National Geographic Vertical, ESPN Vertical, etc.

Why?

  • Because there’s demand
  • To discover how to make good portrait-mode content
  • To promote new consumer electronics
  • To counter slop.

Good for phones and big black bars down the sides on regular TVs, at least until they ship new ones that hang differently.

Strategic play init.


Why don’t networks have vertical channels already?

They should.

It’s how most people consume most video, judging by how I see people watching on the train.

Sometimes I do see people watching in landscape on the train, it’s true. But it’s always something massively popular like live football or Love Island. Anything less, people avoid regular horizontal TV. (I’ll come back to why that is.)

I think there’s an assumption from the commissioners at traditional TV that portrait mode = short form = story mode. So they look down on it and that’s why they don’t produce any long-form vertical content.

That was the mistaken assumption also made by Quibi (Wikipedia) which launched in April 2020 with a ton of traditional content partners and investor backing of $1.75bn. It flamed out by the end of the year.

The premise was portrait mode, mobile-first TV and ALSO in 10 minute segments.

Too much. To me their approach over-determined the format. (Also I feel like it had misaligned incentives: the only way Quibi would have worked long-term is if viewers habitually opened the Quibi app first and what, content partners are going to cannibalise their existing audiences by pointing them at this upstart rival brand?)

The core premise - that portrait mode TV should be a thing - is good.

But forget the “10 minute segments” constraint. Forget story mode.


So why do people avoid watching horizontally?

Well, it’s so distancing to watch footie or regular telly on a horizontal phone, you feel so far away. It’s un-engaging.

But more than that, it’s uncomfortable…

My dumb explanation of portrait mode dominance is that phones are easier to hold that way, and physically uncomfortable to hold in landscape – increasingly so as they get bigger (compensating for that feeling of being far away). It’s just the shape of our hands and our wrists and the way our necks attach to our heads.

And the only screen-filling content available when you have your phone vertical is story mode content.

But that doesn’t intrinsically mean that it’s the best we can do.


It wouldn’t be enough to take regular TV and cut the sides. Vertical formats will be different.

This is the kind of content that you’d want on BBC Vertical:

  • long-form drama
  • long-form video podcasts
  • news
  • live sport

Live sport in particular. The age of blockbuster MCU-style movies is over. Spoilers are on the socials instantly, there are no surprises in cinematic universes anymore; the real time spent is in the fandoms. You can get an AI to churn out wilder special effects than anything Hollywood can do. Or see it on the news. Who cares about a superhero thumping another superhero.

But live sport? Genuine surprise, genuine drama, genuine “appointments to view” and social shared experiences.

But vertical live sport will need to be shot differently. Different zoom, different angles.

Ditto drama. While vertical drama is not shorter necessarily, it is more first person.

e.g. what the kids are watching:

I walked into a room of tweens watching YouTube on the big screen in the dark at a party yesterday and that’s how I learnt about NEN FAM.

The NEN FAM YouTube channel has over 3m subscribers and a ton of vids with over 7m views, which is more than a lot of popular TV shows.

(Sidenote: But no Wikipedia page. Wikipedia is stuck with topics which are notable for millennials and older, right? This is a sign that it has ossified?)

afaict NEN FAM is a bunch of 16 kids in Utah who live in the same house and they get up to hi-jinx like sneaking out at midnight but telling their parents first (YouTube, 1 hour) and hanging out at a playground, all filmed like a super vapid Blair Witch Project. It’s v wholesome.

Join me next week for another instalment of Matt Encounters Modern Culture

Anyway formats will be different is what I’m saying.


So NEN FAM and its like is the future of entertainment, given what kids do today is what mainstream culture is tomorrow.

Honestly I think the only reasons that NEN FAM is not vertical-first are

  • The YouTube algo pushes against that
  • Big screens remain obstinately horizontal.

But how would you do vertical Pride & Prejudice?

omg can you even imagine?

House of Cards with its breaking the fourth wall and mix in characters carrying the phone pov-style. It would be amazing.

All of this needs to be figured out. Producers, directors, writers, actors, camera crew, lighting – this won’t happen automatically. There needs to be distribution (i.e. the new channels I propose) and the pump needs to be primed so everyone involved has room to learn.


None of this will happen without vertical big screens in the front room.

And this is where there’s opportunity.

Samsung, Sony, LG, all the new Chinese consumer electronics manufacturers: every single one of them is looking for the trigger for a refresh super cycle.

So here’s my proposal:

The consumer electronics companies should get together and start an industry consortium, say the Campaign for Vertical Television or something.

C4VTV would include a fund that they all chip in to.

Then any broadcast or streaming TV network can apply for grants to launch and commission content for their new channels.

It’s a long-term play, sure.

But it has a way better chance of shifting consumer behaviour (and the products we buy) than - to take an example - that push for home 3D a decade ago. At least this pump-priming push is in line with a consumer behaviour shift which is already underway.


The imperative here is that vertical video is getting locked in today, and a lot of it is super poisonous.

Story mode is so often engagement farming, attention mining capitalist slop.

And we’ve seen this week two new apps which are AI-video-only TikTok competitors: Sora from OpenAI and Vibes by Meta AI.

Look I love AI slop as previously discussed.

But what is being trained with these apps is not video quality or internal physics model consistency. They’re already good at that. What they’re pointing the AI machine at is learning how to get a stranglehold on your attention.

The antidote? Good old-fashioned content from good old-fashioned program makers.

But: vertical.


Not just television btw.

Samsung – when you establish the Campaign for Vertical Television, or whatever we call it, can you also throw some money at Zoom and Google Meet to get them to go portrait-mode too?

Honestly on those calls we're talking serious business things and building relationships. I want big faces. Nobody needs to devote two thirds of screen real estate to shoulders.


I like AI slop and I cannot lie

26 September 2025 at 16:38

I looked in my home directory on my desktop Mac, which I don't do very often (I run a tidy operation here), and I found a file I didn't recognise called out.html.

Here is out.html.

For the benefit of the tape: it is a half-baked GeoCities-style homepage complete with favourite poems, broken characters, and a "This page is best viewed with Netscape Navigator 4.0 or higher!" message in the footer.

The creation date of the file is March of this year.

I don’t know how it got there.

Maybe my computer is haunted?


I have a vague memory of trying out local large language models for HTML generation, probably using the llm command-line tool.

out.html is pretty clearly made with AI (the HTML comments, if you View Source, are all very LLM-voice).

But it’s… bad. ChatGPT or Claude in 2025 would never make a fake GeoCities page this bad.

So what I suspect has happened is that I downloaded a model to run on my desktop Mac, prompted it to save its output into my home directory (lazily), then because the model was local it was really slow… then got distracted and forgot about it while it whirred away in a window in the background, only finding the output 6 months down the line.


UPDATE. This is exactly what happened! I just realised I can search my command history and here is what I typed:

llm -m gemma3:27b 'Build a single page HTML+CSS+JavaScript UI which looks like an old school GeoCities page with poetry and fave books/celebs, and tons and tons of content. Use HTML+CSS really imaginatively because we do not have images. Respond with only the HTML so it can be run immediately' > out.html

And that will have taken a whole bunch of time so I must have tabbed elsewhere and not even looked at the result.


Because I had forgotten all about it, it was as if I had discovered a file made by someone else. Other footprints on the deserted beach.

I love it.

I try to remain sensitive to New Feelings.

e.g…

The sense of height and scale in VR is a New Feeling: "What do we do now the gamut of interaction can include vertigo and awe? It’s like suddenly being given an extra colour."

And voice: way back I was asked to nominate for Designs of the Year 2016 and one my nominations was Amazon Echo – it was new! Here’s part of my nomination statement:

we’re now moving into a Post PC world: Our photos, social networks, and taxi services live not in physical devices but in the cloud. Computing surrounds us. But how will we interact with it?

So the New Feeling wasn’t voice per se, but that the location of computing/the internet had transitioned from being contained to containing us, and that felt new.

(That year I also nominated Moth Generator and Unmade, both detailed in dezeen.)

I got a New Feeling when I found out.html just now.


Stumbling across littered AI slop, randomly in my workspace!

I love it, I love it.

It’s like having a cat that leaves dead birds in the hall.

Going from living in a house in which nothing changes when nobody is in the house to a house which has a cat and you might walk back into… anything… is going from 0 to 1 with “aliveness.” It’s not much but it’s different.

Suddenly my computer feels more… inhabited??… haunted maybe, but in a good way.


Three references about computers being inhabited:

  1. Every page on my blog has multiplayer cursors and cursor chat because every webpage deserves to be a place (2024) – and once you realise that a webpage can show passers-by then all other webpages feel obstinately lonely.
  2. Little Computer People (1985), the Commodore 64 game that revealed that your computer was really a tiny inhabited house, and I was obsessed at the time. LCP has been excellently written up by Jay Springett (2024).
  3. I wrote about Gordon Brander’s concept for Geists (2022). Geists are/were little bots that meander over your notes directory, "finding connections between notes, remixing notes, issuing oracular provocations and gnomic utterances."

And let’s not forget Steve Jobs’ unrealised vision for Mister Macintosh: "a mysterious little man who lives inside each Macintosh. He pops up every once in a while, when you least expect it, and then winks at you and disappears again."


After encountering out.html I realise I have an Old Feeling that had gone totally unrecognised. The old feeling, which it turns out has always been there, is that being in my personal computer is lonely.

I would love a little geist that runs a local LLM and wanders around my filesystem at night, perpetually out of sight.

I would know its presence only by the slop it left behind, slop as ectoplasm from where the ghost has been,

a collage of smiles cut out of photos from 2013 and dropped in a mysterious jpg,

some doggerel inspired by a note left in a text file in a rarely-visited dusty folder,

if I hit Back one too many times in my web browser it should start hallucinating whole new internets that have never been.
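(A geist like that is only a handful of lines away, by the way. Here’s a minimal sketch in Python, assuming the same llm CLI and local model as above, and a purely hypothetical ~/notes folder for it to haunt – illustrative only, not a real project.)

# Python
# geist.py -- a minimal sketch of a nocturnal filesystem geist.
# Assumes the `llm` CLI from earlier with a local model installed;
# the model name and the haunted folder are placeholders.
import random
import subprocess
from pathlib import Path

HAUNT = Path.home() / "notes"   # hypothetical folder to wander
MODEL = "gemma3:27b"            # the local model from the post

def wander_once() -> None:
    notes = list(HAUNT.rglob("*.txt"))
    if not notes:
        return
    note = random.choice(notes)
    prompt = (
        "Read this note and leave behind a short, slightly uncanny "
        "piece of doggerel inspired by it:\n\n" + note.read_text()[:2000]
    )
    slop = subprocess.run(
        ["llm", "-m", MODEL, prompt],
        capture_output=True, text=True, check=True,
    ).stdout
    # Leave the ectoplasm next to the note that inspired it.
    (note.parent / (note.stem + ".geist.txt")).write_text(slop)

if __name__ == "__main__":
    wander_once()   # run it from cron at 3am for maximum haunting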



Filtered for bird calls and catnip

19 September 2025 at 17:55

1.

With less human noise to compete with, the birds are able to have ‘deeper conversations,’ says biologist (2020):

Researchers studying birdsong in the San Francisco Bay found the sparrows’ mating calls became quieter, more complex, and just generally “sexier” now that they don’t have to compete with the sounds of cars and cellphones, says study co-author Elizabeth Derryberry.

A side-effect of the pandemic:

Without cars, mating calls travel twice the distance, and also more information can be transmitted.

Does this analogy work? It’s the one they give: "As the party winds down and people go home, you get quieter again, right? You don’t keep yelling, and you maybe have your sort of deeper conversations at that point."

I would love to see a follow-up study? For a brief period, long-form discursive song was favoured. So was there a generation of famous sparrow rhetoricians, like the orators of Ancient Greece? Do they look back on the early 2020s as the golden age of sparrow Homer?

PREVIOUSLY:

Just pandemic things (2023).

2.

Parrots taught to video call each other become less lonely, finds research (The Guardian, 2023): "In total the birds made 147 deliberate calls to each other during the study."

Some would sing, some would play around and go upside down, others would want to show another bird their toys.

More at the FT: Scientists pioneer ‘animal internet’ with dog phones and touchscreens for parrots (paywall-busting link).

26 birds involved … would use the system up to three hours a day, with each call lasting up to five minutes.

Why? Because pet parrots:

typically live alone in their owners’ homes though their counterparts in the wild typically socialise within large flocks.

Flocks.

You know, although the 1:1 parrot-phone is interesting, I wonder whether a zoom conference call would be more appropriate? Or, better, an always-on smart speaker that is a window to a virtual forest, collapsing geography.

Another project, mentioned in that same article:

Ilyena Hirskyj-Douglas, who heads the university’s Animal-Computer Interaction Group, started by developing a DogPhone that enables animals to contact their owners when they are left alone.

Ref.

Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment, CHI ‘23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.

3.

Empty cities, virtual forests… this bird sings across time.

Lubman first became intrigued by reports of a curious echo from the Mayan pyramid of Kukulkan at Chichen Itza, in Mexico’s Yucatan region. The odd “chirped” echo resounds from the pyramid’s staircases in response to hand claps of people standing near its base. To hear for himself, Lubman packed up his recording gear and traveled to Chichen Itza last January.

After studying the staircases and analyzing his recordings and sonograms of the echoes, Lubman came back convinced that this was no architectural freak. In his paper, Lubman argued that the design of the staircases was deliberate and that the echo is an ancient recording, coded in stone, of the call of the Maya’s sacred bird, the quetzal.

omg too sublime, I am a sucker for avian archaeoacoustics

PREVIOUSLY:

Microdosing cathedrals and the synthetic acoustic environment of the ancient world (2022).

4.

You will NEVER GUESS what I found out about CATNIP this week.

You know when cats go crazy about catnip and start rubbing their faces all over it?

Yeah well catnip is a mosquito repellent. So it’s evolutionary behaviour to avoid mosquito-borne parasites.

Catnip, Nepeta cataria, contains nepetalactol:

Rubbing behavior transfers nepetalactol onto the faces and heads of respondents where it repels the mosquito, Aedes albopictus. Thus, self-anointing behavior helps to protect cats against mosquito bites.

Not quite the same but:

Tonic water contains quinine which is used to treat malaria and gin & tonic was invented by the East India Company to keep its colonising army safe in India.

Bet the covid vaccine would have been more popular if it also got you drunk. A lesson for next time around.

Ref.

Uenoyama, R., Miyazaki, T., Hurst, J. L., Beynon, R. J., Adachi, M., Murooka, T., Onoda, I., Miyazawa, Y., Katayama, R., Yamashita, T., Kaneko, S., Nishikawa, T., & Miyazaki, M. (2021). The characteristic response of domestic cats to plant iridoids allows them to gain chemical defense against mosquitoes. Science advances, 7(4), eabd9135.



Get your Claude Code muzak here

18 September 2025 at 10:42

I suggested last week that Claude Code needs elevator music.

Like, in the 3 minutes while you’re waiting for your coding agent to write your code, and meanwhile you’re gazing out the window and contemplating the mysteries of life, or your gas bill, maybe some gentle muzak could pass the time?

WELL.

Mike Davidson a.k.a. Mike Industries only went and built it.

Get Claude Muzak here (GitHub):

Elevator music for your Claude Code tasks! Automatically plays background music while Claude is working, because coding is better with a soundtrack.

Easy to install. Includes three pitch-perfect elevator music MP3s to get you started.
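(If you’d rather roll your own, the core trick is just “loop an MP3 while a slow command runs”. A minimal sketch in Python – not Mike’s implementation – assuming macOS’s afplay and an elevator.mp3 next to the script:)

# Python
# muzak.py -- play looping elevator music while a slow command runs.
# Not the Claude Muzak implementation; just the general idea.
# Assumes macOS's `afplay` and an elevator.mp3 sitting next to the script.
import subprocess
import sys
import time

def run_with_muzak(command: list[str], track: str = "elevator.mp3") -> int:
    work = subprocess.Popen(command)              # the slow thing, e.g. a coding agent
    player = subprocess.Popen(["afplay", track])  # start the muzak
    try:
        while work.poll() is None:                # while the work is still running...
            if player.poll() is not None:         # ...restart the track when it ends
                player = subprocess.Popen(["afplay", track])
            time.sleep(1)
        return work.returncode
    finally:
        player.terminate()                        # doors open, music stops

if __name__ == "__main__":
    # e.g. python muzak.py sleep 180
    sys.exit(run_with_muzak(sys.argv[1:]))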

Wild. Wonderful.


Although:

Boris Smus adds a wrinkle: "I do my Claude codes at least two at a time. Is there polyphonic elevator music?"

YES PLEASE.

btw I’ve heard this a few times, Claude Code experts do indeed task up multiple claudes in parallel. Either working on different parts of the same app, or entirely different projects.

I would love to see what a custom UI to manage multiple claudes even looks like. Just, zero downtime, giving go-ahead permissions to one instance, pulling the throttle on another, instructing yet another…

I have a buddy who builds UI for traders at a global bank. The way he describes it, the traders are no longer doing individual stock trades. Instead they have a whole bunch of algorithms written by the quants running high-frequency trading on the market.

And the traders’ job is to sit there taking in the various signals and numbers, I imagine like Ozymandias bathing in the screens, "just me and the world," and instinctively steer the ship by ramping different HFT algorithms up and down.

The UI is to facilitate this fugue state crossed with an airplane pilot console situation.

One day driving our claude swarms will work like this.


So polyphonic Claude Codes?

Imagine that each claude instance plays just one track each and they come together…

I’m imagining Steve Reich’s Music for 18 Claudes.

Or to put it another way, something like video game music, which is layered and looped and adaptive and which I have discussed before (2020). In Red Dead Redemption,

the music reacts in real-time to the player’s actions … a more foreboding sound is required during moments of suspense. All of these loops have to segue into each other as events evolve on screen.

So the experience of playing RDR is that you’re galloping along at sunset in this beautiful American south-west landscape, and you notice the music quietly weave in an ominous new refrain… so you look out for what’s about to happen before it happens.

I want my polyphonic Claude Codes to intensify the minor key rhythms as the code tests fail and fail and fail again, before it is even escalated to me, drumbeats over the horizon, the distant rumble of thunder, approaching…
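(I can’t resist sketching the mixer. A toy version in Python – stems and thresholds entirely made up – that decides which layers to play based on how many claudes are running and how many tests have failed in a row:)

# Python
# A toy "adaptive score" for a swarm of coding agents.
# The stems and thresholds are invented; the point is the RDR-style layering.
def active_stems(num_agents: int, consecutive_failures: int) -> list[str]:
    """More agents -> denser texture; more failures -> more foreboding layers."""
    stems = ["pads"]
    if num_agents >= 2:
        stems.append("bass")
    if num_agents >= 4:
        stems.append("arpeggio")
    if consecutive_failures >= 2:
        stems.append("ominous_strings")   # thunder over the horizon
    if consecutive_failures >= 5:
        stems.append("war_drums")         # about to be escalated to you
    return stems

print(active_stems(num_agents=3, consecutive_failures=0))  # ['pads', 'bass']
print(active_stems(num_agents=3, consecutive_failures=6))  # adds the doom layers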


Also mentioned in that post:

Matthew Irvine Brown’s project Music for Shuffle (2011) which he composed for the wonderful belt-clip-mounted iPod Shuffle:

music specifically for shuffle mode – making use of randomness to make something more than the sum of its parts.

18 tracks sub 10 seconds each that can be played continuously in any order.

Like… Anthropic and OpenAI and Google, they must have arts programs, right? Are they commissioning for projects like this? Because they should.

The Phase Change

16 October 2025 at 14:59

Last week I ran my first 10k.

It wasn't a race or anything. I left that evening planning to run a 5k, and then three miles later thought "what if I kept going?"1

I've been running for just over two years now. My goal was to run a mile, then three, then three at a pace faster than a power-walk. I wish I could say that I then found joy in running, but really I was just mad at myself for being so bad at it. Spite has always been my brightest muse.

Looking back, the thing I find most fascinating is what progress looked like. I couldn't tell you if I was physically progressing steadily, but for sure mental progress moved in discrete jumps. For a long time a 5k was me pushing myself, then suddenly a "phase change" happens and it becomes something I can just do on a run. Sometime in the future the 10k will feel the same way.

I've noticed this in a lot of other places. For every skill I know, my sense of myself follows a phase change. In every programming language I've ever learned, I lurch from "bad" to "okay" to "good". There's no "20% bad / 80% normal" in between. Pedagogical experts say that learning is about steadily building a mental model of the topic. It really feels like knowledge grows continuously, and then it suddenly becomes a model.

Now, for all the time I spend writing about software history and software theory and stuff, my actual job boils down to teaching formal methods. So I now have two questions about phase changes.

The first is "can we make phase changes happen faster?" I don't know if this is even possible! I've found lots of ways to teach concepts faster, cover more ground in less time, so that people know the material more quickly. But it doesn't seem to speed up that very first phase change from "this is foreign" to "this is normal". Maybe we can't really do that until we've spent enough effort on understanding.

So the second may be more productive: "can we motivate people to keep going until the phase change?" This is a lot easier to tackle! For example, removing frustration makes a huge difference. Getting a proper pair of running shoes made running so much less unpleasant, and made me more willing to keep putting in the hours. For teaching tech topics like formal methods, this often takes the form of better tooling and troubleshooting info.

We can also reduce the effort of investing time. This is also why I prefer to pair on writing specifications with clients and not just write specs for them. It's more work for them than fobbing it all off on me, but a whole lot less work than writing the spec by themselves, so they'll put in time and gradually develop skills on their own.

Question two seems much more fruitful than question one but also so much less interesting! Speeding up the phase change feels like the kind of dream that empires are built on. I know I'm going to keep obsessing over it, even if that leads nowhere.


  1. For non-running Americans: 5km is about 3.1 miles, and 10km is 6.2. 

Three ways formally verified code can go wrong in practice

10 October 2025 at 17:06

New Logic for Programmers Release!

v0.12 is now available! This should be the last major content release. The next few months are going to be technical review, copyediting and polishing, with a hopeful 1.0 release in March. Full release notes here.

Cover of the boooooook

Three ways formally verified code can go wrong in practice

I run this small project called Let's Prove Leftpad, where people submit formally verified proofs of the eponymous meme. Recently I read Breaking “provably correct” Leftpad, which argued that most (if not all) of the provably correct leftpads have bugs! The Lean proof, for example, should render leftpad('-', 9, אֳֽ֑) as ---------אֳֽ֑, but actually does ------אֳֽ֑.

You can read the article for a good explanation of why this goes wrong (Unicode). The actual problem is that "correct" can mean two different things, and this leads to confusion about how much formal methods can actually guarantee. So I see this as a great opportunity to talk about the nature of proof, correctness, and how "correct" code can still have bugs.

What we talk about when we talk about correctness

In most of the real world, correct means "no bugs". Except "bugs" isn't a very clear category. A bug is anything that causes someone to say "this isn't working right, there's a bug." Being too slow is a bug, a typo is a bug, etc. "correct" is a little fuzzy.

In formal methods, "correct" has a very specific and precise meaning: the code conforms to a specification (or "spec"). The spec is a higher-level description of the properties the code is supposed to have, usually something we can't just directly implement. Let's look at the most popular kind of proven specification:

-- Haskell
inc :: Int -> Int
inc x = x + 1

The type signature Int -> Int is a specification! It corresponds to the logical statement all x in Int: inc(x) in Int. The Haskell type checker can automatically verify this for us. It cannot, however, verify properties like all x in Int: inc(x) > x. Formal verification is concerned with verifying arbitrary properties beyond what is (easily) automatically verifiable. Most often, this takes the form of proof. A human manually writes a proof that the code conforms to its specification, and the prover checks that the proof is correct.
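To make that distinction concrete outside Haskell, here's a rough Python analogue: the type hint plays the role of the (weak) spec, and a property-based test gestures at the stronger property. A test only samples the property, it doesn't prove it, but it shows that inc(x) > x is an extra claim the type says nothing about. (Sketch assumes the hypothesis library.)

# Python
# The type hint `int -> int` is the weak spec. The property below is the
# stronger claim no type checker verifies for you; hypothesis samples it
# rather than proving it, but the distinction is the same.
from hypothesis import given, strategies as st

def inc(x: int) -> int:
    return x + 1

@given(st.integers())
def test_inc_is_increasing(x: int) -> None:
    assert inc(x) > x   # the property: all x in Int, inc(x) > x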

Even if we have a proof of "correctness", though, there's a few different ways the code can still have bugs.

1. The proof is invalid

For some reason the proof doesn't actually show the code matches the specification. This is pretty common in pencil-and-paper verification, where the proof is checked by someone saying "yep looks good to me". It's much rarer when doing formal verification but it can still happen in a couple of specific cases:

  1. The theorem prover itself has a bug (in the code or introduced in the compiled binary) that makes it accept an incorrect proof. This is something people are really concerned about, but it's so much rarer than every other way verified code goes wrong that it's only included here for completeness.

  2. For convenience, most provers and FM languages have a "just accept this statement is true" feature. This helps you work on the big picture proof and fill in the details later. If you leave in a shortcut, and the compiler is configured to allow code-with-proof-assumptions to compile, then you can compile incorrect code that "passes the proof checker". You really should know better, though.

2. The properties are wrong

"The horrible bug you had wasn't covered in the specification/came from some other module/etc" (Galois)

This code is provably correct:

inc :: Int -> Int
inc x = x-1

The only specification I've given is the type signature Int -> Int. At no point did I put the property inc(x) > x in my specification, so it doesn't matter that it doesn't hold, the code is still "correct".

This is what "went wrong" with the leftpad proofs. They do not prove the property "leftpad(c, n, s) will take up either n spaces on the screen or however many characters s takes up (if more than n)". They prove the weaker property "len(leftpad(c, n, s)) == max(n, len(s)), for however you want to define len(string)". The second is a rough proxy for the first that works in most cases, but if someone really needs the former property they are liable to experience a bug.

Why don't we prove the stronger property? Sometimes it's because the code is meant to be used one way and people want to use it another way. This can lead to accusations that the developer is "misusing the provably correct code", but this should more often be seen as the verification expert failing to educate devs on what was actually "proven".

Sometimes it's because the property is too hard to prove. "Outputs are visually aligned" is a property about Unicode inputs, and the core Unicode specification is 1,243 pages long.

Sometimes it's because the property we want is too hard to express. How do you mathematically represent "people will perceive the output as being visually aligned"? Is it OS and font dependent? These two lines are exactly five characters but not visually aligned:

|||||

MMMMM

Or maybe they are aligned for you! I don't know, lots of people read email in a monospace font. "We can't express the property" comes up a lot when dealing with human/business concepts as opposed to mathematical/computational ones.

Finally, there's just the possibility of a brain fart. All of the proofs in Nearly All Binary Searches and Mergesorts are Broken are like this. They (informally) proved the correctness of binary search with unbounded integers, forgetting that many programming languages use machine integers, where a large enough sum can overflow.
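The classic culprit, sketched with 32-bit arithmetic simulated in Python (whose own integers don't overflow, which is rather the point):

# Python
# The proof assumed unbounded integers; machine integers beg to differ.
# Simulate 32-bit signed addition to see where `mid = (low + high) // 2` breaks.
def add32(a: int, b: int) -> int:
    """Wrap-around 32-bit signed addition, like a Java/C int."""
    return (a + b + 2**31) % 2**32 - 2**31

low, high = 2_000_000_000, 2_100_000_000   # both are legal 32-bit indices
bad_mid = add32(low, high) // 2            # overflow: -97483648
good_mid = low + (high - low) // 2         # the standard fix: 2050000000
print(bad_mid, good_mid)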

3. The assumptions are wrong

This is arguably the most important and most subtle source of bugs. Most properties we prove aren't "X is always true". They are "assuming Y is true, X is also true". Then if Y is not true, the proof no longer guarantees X. A good example of this is binary search, which only correctly finds elements assuming the input list is sorted. If the list is not sorted, it will not work correctly.

Formal verification adds two more wrinkles. One: sometimes we need assumptions to make the property valid, but we can also add them to make the proof easier. So the code can be bug-free even if the assumptions used to verify it no longer hold! Even if a leftpad implements visual alignment for all Unicode glyphs, it will be a lot easier to prove visual alignment for just ASCII strings and padding.

Two: we need to make a lot of environmental assumptions that are outside our control. Does the algorithm return output or use the stack? Need to assume that there's sufficient memory to store stuff. Does it use any variables? Need to assume nothing is concurrently modifying them. Does it use an external service? Need to assume the vendor doesn't change the API or response formats. You need to assume the compiler worked correctly, the hardware isn't faulty, the OS doesn't mess with things, etc. Any of these could change well after the code is proven and deployed, meaning formal verification can't be a one-and-done thing.

You don't actually have to assume most of these, but each assumption you drop makes the proof harder and the properties you can prove more restricted. Remember, the code might still be bug-free even if the environmental assumptions change, so there's a tradeoff in time spent proving vs doing other useful work.

Another common source of "assumptions" is when verified code depends on unverified code. The Rust compiler can prove that safe code doesn't have a memory bug assuming unsafe code does not have one either, but depends on the human to confirm that assumption. Liquid Haskell is verifiable but can also call regular Haskell libraries, which are unverified. We need to assume that code is correct (in the "conforms to spec" sense), and if it's not, our proof can be "correct" and still cause bugs.


These boundaries are fuzzy. I wrote that the "binary search" bug happened because they proved the wrong property, but you can just as well argue that it was a broken assumption (that integers could not overflow). What really matters is having a clear understanding of what "this code is proven correct" actually tells you. Where can you use it safely? When should you worry? How do you communicate all of this to your teammates?

Good lord it's already Friday

New Blog Post: " A Very Early History of Algebraic Data Types"

25 September 2025 at 16:50

Last week I said that this week's newsletter would be a brief history of algebraic data types.

I was wrong.

That history is now a 3500 word blog post.

Patreon blog notes here.


I'm speaking at P99 Conf!

My talk, "Designing Low-Latency Systems with TLA+", is happening 10/23 at 11:30 central time. It's an online conf and the talk's only 16 minutes, so come check it out!

We finally stopped touching ourselves.

By: Boy Boy
11 October 2025 at 13:29


Go to https://piavpn.com/boyboy to get 83% off from our sponsor Private Internet Access with 4 months free!

Go to our Patreon for more spicy content https://www.patreon.com/Boy_Boy

Follow us on twitter: https://twitter.com/BoyBoy_Official

Thanks to Ostonox for the amazing edit: https://twitter.com/ostonox

What has Zeus been up to this past month?

31 August 2025 at 00:00

What has Zeus been up to this past month?

Let's take a look at what happened over the summer months!

It's summer; what else can you do but hide from the heat in a basement and write a pile of code? That is, at any rate, what happened! The highlights are:

Zeus Profile Images

Zeus now has profile pictures!

There is a central site for setting your profile picture: https://zpi.zeus.gent/

Your photo will (slowly but surely) become visible in multiple Zeus applications. At the moment you can already admire your photo in ZAUTH and EVENTS. We also plan to switch Tap over to this, so if you would like to keep using your old Tap photo, now is the moment to transfer it!

Documentation

We are working on Zeus documentation! Because new board members have to take in a lot of new things during their first meeting (and for the older ones who have forgotten something), we are gathering information about how Zeus works in one central place. Information about sysadmin matters and about things in the basement will be collected here too. A WIP version can be found here. PRs, comments and suggestions are always welcome.

And since we can't write docs without writing code, we also quickly wrote our own SSG.

EVENTS

Our own event-management platform has received a whole series of updates. It is meant to help the board plan events more efficiently and make sure announcements go out on time, and it has a number of other delightful features such as automatically generating PowerPoints!

Basement music software

Fun things happened around music too: plenty of information about the currently playing track is now available via our MQTT broker, you can vote on tracks (in the basement) via ZODOM, and you can get information about tracks via GUITAR. We also have an amplifier again! Hooray!

LED strip

The LED strip got some tinkering: there is now a music progress bar, and improvements were made to the control code.

Tap

Tap received a few UX improvements.

Board

On the board side of things, we will soon (9 September) sit down together to plan this semester's events and prepare for the start of the academic year!

We are also in contact with Pelicano, who would like to organise their own urenloop together with us. We are still looking into the details (and whether it is possible at all), but more news on this soon!

We also have our first sponsor of the year! The details are not yet fixed, but we will organise an event together.

We often get asked by a company (or another organisation) whether we will share a job opening with our members. Since we are not fans of just posting that in ~general, we created the ~jobs channel, so that this is opt-in for everyone. Companies that sponsor us can also post a few jobs there (but this will always go through us).

We hope everyone has been having a lovely summer!

Pompeii Changed How I Think About The Roman Empire - Smarter Every Day 310

7 September 2025 at 20:37


You can try AnyDesk for free. It's good. https://anydesk.com/smarter
http://www.patreon.com/smartereveryday
Get Email Updates: https://www.smartereveryday.com/email

A big thank you to our guide, Ciro. We paid full price for the tour, and are grateful he and his father Gaetano agreed to let me film and share a bit of what we learned. (The video was approved by Gaetano before releasing.) There is FAR more to see at Pompeii than I show in this video. We were very happy with our experience and found Ciro to be very professional and patient with our questions. You can book your own personal tour of Pompeii with Ciro or Gaetano at their website. https://www.pompeiitourguide.com/

Click here if you're interested in subscribing: http://bit.ly/Subscribe2SED
⇊ Click below for more links! ⇊
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
GET SMARTER SECTION
https://en.wikipedia.org/wiki/Pompeii

An interesting BBC video of a discovery made at Pompeii.
https://www.youtube.com/watch?v=wPMmQ3kNCgE

~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Smarter Every Day on Patreon
http://www.patreon.com/smartereveryday

Ambiance, audio and musicy things by: Gordon McGladdery
https://www.ashellinthepit.com/
http://ashellinthepit.bandcamp.com/

If you feel like this video was worth your time and added value to your life, please SHARE THE VIDEO!

If you REALLY liked it and feel that this video has added value to your life, please consider supporting financially by becoming a Patron. http://www.patreon.com/smartereveryday

Warm Regards,

Destin

I'm so Sick of this Lazy Excuse ...

12 October 2025 at 11:50


Want to restore the planet’s ecosystems and see your impact in monthly videos? The first 100 people to join Planet Wild with my code BIKES10 will get the first month for free at: https://planetwild.com/r/notjustbikes/join
If you want to get to know them better first, check out their project using chainsaw detectors to protect one of Europe’s last ancient forests: https://planetwild.com/r/notjustbikes/m31

Watch this video ad-free and sponsor-free on Nebula: https://nebula.tv/videos/notjustbikes-the-laziest-excuse-for-bad-cities

Patreon: https://patreon.com/notjustbikes
Mastodon: @notjustbikes@notjustbikes.com
NJB Live (my live-streaming channel): https://youtube.com/@njblive

---
Relevant Videos

The Dumbest Excuse for Bad Cities
https://nebula.tv/videos/notjustbikes-the-dumbest-excuse-for-bad-cities
https://www.youtube.com/watch?v=REni8Oi1QJQ

Why Canadians Can't Bike in the Winter (but Finnish people can)
https://nebula.tv/videos/not-just-bikes-why-canadians-can-t-bike-in-the-winter-but-finnish-people-can
https://www.youtube.com/watch?v=Uhx-26GfCBU

Are Taiwan's Roads Still a "Living Hell"?
https://nebula.tv/videos/notjustbikes-are-taiwans-roads-still-a-living-hell
https://www.youtube.com/watch?v=ZdDYVjDwgwA

They Tore Down a Highway and Made it a River (and traffic got better)
https://nebula.tv/videos/notjustbikes-they-tore-down-a-highway-and-made-it-a-river
https://www.youtube.com/watch?v=wqGxqxePihE

---
References & Further Reading
Fresno: A City Reborn - rare 1968 documentary by Victor Gruen Associates
https://www.youtube.com/watch?v=bdTS_LLJvcw

Fulton Mall
https://www.flickr.com/photos/81918828@N00/albums/72157603574663279/
https://www.pediment.com/blogs/news/the-original-fresno-county-courthouse
https://www.kvpr.org/community/2014-09-02/fulton-mall-at-50-when-things-dont-go-according-to-plan
https://www.kvpr.org/government-politics/2014-02-11/is-it-terrible-or-a-treasure-fresnos-fulton-mall-debate-heats-up
By Bryan Harley - Own work {Bryan Harley,own work}, CC0, https://commons.wikimedia.org/w/index.php?curid=91790930
By Michelle Baxter - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=21664245
By David Prasad - https://www.flickr.com/photos/33671002@N00/6191573398/, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=124016206
By David Prasad - https://www.flickr.com/photos/33671002@N00/6189802045/, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=123750104
https://esotericsurvey.blogspot.com/2024/01/fulton-mall-update-2024.html

Beaufort scale
https://en.wikipedia.org/wiki/Beaufort_scale

NK Tegenwindfietsen
https://nl.wikipedia.org/wiki/NK_Tegenwindfietsen

The Dutch headwind cycling championships are amazing (Tom Scott)
https://www.youtube.com/watch?v=VMinwf-kRlA

Cycling to work in 90 large American cities: new evidence on the role of bike paths and lanes
https://www.saferoutespartnership.org/sites/default/files/pdf/Lib_of_Res/SS_ST_Rutgers_impactbikepaths_bikecommutingbehavior_042012%20-%20Copy.pdf

Weather is not significantly correlated with destination-specific transport-related physical activity among adults: A large-scale temporally matched analysis
https://pmc.ncbi.nlm.nih.gov/articles/PMC5551438/

Weather and daily mobility in international perspective: A crosscomparison of Dutch, Norwegian and Swedish city regions
https://www.diva-portal.org/smash/get/diva2%3A1343725/FULLTEXT01.pdf

Modelling the impact of weather conditions on active transportation travel behaviour
https://www.sciencedirect.com/science/article/abs/pii/S1361920911001143

Promotional Umbrella SENZ wind tunnel test
https://youtu.be/EZyfUq4lt-k

Some clips of winter cycling provided by CycleYYZ
https://www.youtube.com/channel/UCHinhHYjd7k_K7d2lEmC73w

As usual, the YouTube Description is not nearly long enough for all of the references. You can find the full list on Nebula or at this link:
https://notjustbikes.com/references/badweather.txt

The vast majority of the content in this video was filmed on location by Not Just Bikes, with some images licensed from Getty Images and other sources.

No generative AI or AI voices were used in the making of this video.

---
Chapters
0:00 Intro
1:19 Copenhagen (Strøget)
2:41 Fresno (Fulton)
3:53 The weather didn't make the difference
5:09 The wind in the Netherlands
6:08 Sunnyvale, California
7:13 The weather isn't a predictor of cycling
8:12 It sucks to film in bad weather
9:54 Fewer people cycle when it's raining (surprised pikachu)
11:05 You're not made of sugar (so quit your bitching)
12:25 It's not all the same everywhere
13:15 Infrastructure matters even more in bad weather
15:00 It builds character (suck it up)
15:58 Winter cycling
17:51 Ughh ... another requisite rant about ignorant Canadians 😫
19:59 You get used to it
21:41 Good urbanism in hot weather
24:31 The worst way to design for the heat
26:19 Concluding thoughts
27:41 Outro and Planet Wild

This Train Just Keeps Getting Worse 😢 (VIA Rail "The Ocean")

14 September 2025 at 09:53


The Canadian VIA Rail Train Skyline Car Forward View Crossing South Saskatchewan River
Trip Clips (YouTube)
https://www.youtube.com/watch?v=zpJdKfNKDSQ

Rail project to reduce heavy truck traffic
https://www.cbc.ca/news/canada/nova-scotia/halifax-rail-project-to-cut-truck-traffic-now-under-construction-1.6855003

Explosive Demolition To The London Ontario VIA Rail "CN" Train Station
https://archive.org/details/videoplayback-2021-05-07-t-135853.747

As usual, the YouTube Description is not nearly long enough for all of the references. You can find the full list on Nebula or at this link:
https://notjustbikes.com/references/viarailtheocean.txt

The vast majority of the content in this video was filmed on location by Not Just Bikes, with some images licensed from Getty Images and other sources.

No generative AI or AI voices were used in the making of this video.

---
Chapters
0:00 Intro
2:20 The second-best train in Canada
3:16 Forgettable Halifax
4:19 Lunenburg
5:07 Halifax train station
5:45 From six to three days per week
6:50 The CN era
9:22 Boarding "The Ocean"
10:20 The Budd Company
11:09 Our sleeper rooms
12:24 "The Ocean Limited" in 1965
13:32 Departure
15:04 Lunch
16:23 Chronic underfunding
16:55 Renaissance cars
20:14 Dinner
20:32 Other sleeping options
21:31 Seating cars
22:01 Station stops
22:41 The missing Park car
24:30 Sleeping
25:08 Terrible, terrible tracks
28:06 Breakfast
28:42 Arriving in Montréal
29:04 Booking this train
29:56 The future of "The Ocean"
31:15 Our continued travels
31:48 Supporting this channel & Nebula

Practical ChatGPT for Academics: Deep Research, Code & LaTeX — Live Demo

22 October 2025 at 14:00

Description: A 2.5 hour live-demo seminar on practical, high-impact use of ChatGPT in academia: effective prompting and voice tactics, verified Deep Research, programmatic problem-solving (live code — digital filter design), figure creation from text/screenshots, and LaTeX outputs (Beamer slides & posters in Overleaf). Hands-on participation optional.

Detailed description: This seminar provides a comprehensive introduction to using ChatGPT as a versatile academic assistant for the entire research workflow. Participants will see, step by step, how to use ChatGPT to perform structured literature reviews, locate relevant open-source repositories, and organize research findings efficiently. The emphasis is on verifiable, citation-based outputs and critical interaction with the model rather than passive use. Real-world examples will show how Deep Research mode can complement traditional search engines and databases when used responsibly.

A major part of the seminar focuses on practical code generation and execution. Attendees will observe how ChatGPT can design, implement, and run small-scale analytical projects directly inside the chat interface, including numerical simulations and data visualizations. The demonstration will include a digital filter design example, highlighting verification strategies, interpretation of results, and common pitfalls. This section will also touch on multimodal features such as generating figures from text or screenshots, showing how image understanding and textual reasoning combine to support reproducible research workflows.
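(For a sense of what such a filter-design demo might look like, here is a small illustrative sketch, not the seminar's actual code, assuming NumPy, SciPy and Matplotlib:)

# Python
# Illustrative only: design a low-pass Butterworth filter and apply it,
# the kind of small analytical task described above.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 1000.0                              # sampling frequency, Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)  # 5 Hz signal + 120 Hz noise

b, a = signal.butter(4, 30, btype="low", fs=fs)   # 4th-order low-pass at 30 Hz
y = signal.filtfilt(b, a, x)                      # zero-phase filtering

plt.plot(t, x, label="input")
plt.plot(t, y, label="filtered")
plt.legend(); plt.xlabel("time [s]"); plt.show()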

Finally, the seminar will demonstrate how ChatGPT can support academic writing and dissemination. Participants will learn how to automatically produce Beamer presentations and scientific posters directly from LaTeX papers in Overleaf, ensuring consistency between research outputs and presentations. The session concludes with a short bonus demo of text-driven 3D scene generation to illustrate creative applications of language-to-visual synthesis. Throughout the event, best practices for prompting, critical review, and AI literacy will be emphasized to ensure reliable and ethical use of generative models in research and education.

Date and time: 22 October 2025, from 16:00 to 18:30

Language: English

Level: Intermediate

Location: Virtual (Zoom) - link will only be available to registered participants

Target audience: Researchers, graduate and undergraduate students, lecturers, and technical staff in academia who want to integrate ChatGPT effectively into research, coding, writing, and presentation workflows.

Max. number of participants: 300

Prerequisites (optional): 

•    Overleaf account; 
•    ChatGPT (Plus/Team/Pro)

Workflow: Live Zoom session with screen-shared demonstrations. The seminar begins with prompting and voice tactics, followed by Deep Research for literature and repository discovery, live Python code execution (digital filter example), figure generation from text or screenshots, and creation of Beamer presentations and posters in Overleaf. Concludes with a short bonus demo (text-described 3D scene) and Q&A.

Learning outcomes:

 • Effective prompting and voice/dictation for complex academic tasks.
• Verified literature and code-repository search.
• Using ChatGPT’s built-in code execution for analysis and visualization.
• Generating publication-quality figures from text or screenshots.
• Creating Beamer presentations and posters from LaTeX papers in Overleaf.
• Minimizing hallucinations and improving response reliability.

 

Organiser:

 

Lecturers:

Assoc. prof. dr. Janez Perš

Associate professor at the Faculty of Electrical Engineering, University of Ljubljana, and a member of the Laboratory for Machine Intelligence (LMI). His research and teaching span computer vision, machine intelligence, and embedded systems, with a strong focus on applying artificial intelligence to real-world problems such as autonomous vehicles, robotics, and precision agriculture. He has been an early and active adopter of AI-assisted tools in research, teaching, and academic communication, exploring how systems like ChatGPT can augment productivity, creativity, and critical thinking in academic contexts.

janez.pers@fe.uni-lj.si 

 

as. dr. Janez Križaj
Dr. Križaj is a postdoctoral researcher at the Laboratory for Machine Intelligence, Faculty of Electrical Engineering, University of Ljubljana. His research focuses on computer vision, biometrics, and deep learning.
janez.krizaj@fe.uni-lj.si 

Workshop: Fundamentals of Deep Learning

23 October 2025 at 08:00

Short description: This course offers a practical introduction to deep learning, a powerful artificial intelligence technique used in industries such as healthcare, retail and automotive. Students will learn to train deep learning models using tools such as PyTorch, with an emphasis on key concepts such as convolutional neural networks (CNNs), data augmentation and transfer learning. Through hands-on exercises, participants will gain experience building models for image classification, natural language processing and more. By the end of the course you will have the skills to tackle deep learning projects with modern frameworks and approaches.

Detailed description: This course offers a comprehensive introduction to deep learning, a key technology driving progress in industries such as healthcare, retail and automotive. Deep learning uses multi-layer neural networks to solve complex tasks such as image recognition, language translation and speech processing. The aim of the course is to equip students with the fundamental skills needed to train and deploy deep learning models using modern tools such as PyTorch. Through practical applications ranging from object detection to personalised experiences, you will learn how to apply artificial intelligence to real-world problems.

Throughout the course you will explore important deep learning concepts such as convolutional neural networks (CNNs), data augmentation and transfer learning. These techniques are essential for improving model accuracy and efficiency, especially when working with large, complex datasets. The curriculum also covers the use of pretrained models, which allow faster training by leveraging existing knowledge. In addition, you will explore advanced topics such as recurrent neural networks (RNNs) and natural language processing (NLP), which are key for sequential data tasks and text applications.

At the end of the course you will apply your knowledge in a final project, in which you will build an object classification model using computer vision techniques. You will improve the model's performance with transfer learning and data augmentation, and gain valuable experience optimising models with limited data. The course will also guide you through setting up your own AI development environment, preparing you to carry out deep learning projects independently. Whether you are new to artificial intelligence or want to broaden your skills, this course provides a solid foundation for anyone interested in the rapidly evolving field of deep learning.
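As a taste of the techniques listed above, a minimal transfer-learning sketch in PyTorch (illustrative only, not the course material; it assumes torchvision and an image dataset laid out for ImageFolder in ./data):

# Python
# Minimal transfer-learning sketch: take a pretrained CNN, freeze the
# backbone, and retrain only the classifier head, with simple data augmentation.
# Assumes torchvision and an ImageFolder-style dataset in ./data.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),     # data augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("./data", transform=augment)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="DEFAULT") # pretrained backbone
for p in model.parameters():
    p.requires_grad = False                # freeze it
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:              # one epoch is enough for a sketch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()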

At the end of the workshop, participants can obtain an official certificate from the NVIDIA Deep Learning Institute.

Difficulty: Basic

Language: Slovenian

Course format: The workshop takes place remotely, through a browser, on AWS cloud infrastructure.

Recommended prior knowledge: An understanding of basic Python 3 programming concepts such as functions, loops, dictionaries and arrays; familiarity with Pandas data structures; and an understanding of how to compute a regression line.

Target audience: Computer science students, engineers, researchers, developers and anyone who wants to understand how this technology works.

Skills gained in the training:

  • Learn the fundamental techniques and tools required to train a deep learning model
  • Gain experience with common deep learning data types and model architectures
  • Enhance datasets through data augmentation to improve model accuracy
  • Leverage transfer learning between models to achieve efficient results with less data and computation
  • Build the confidence to take on your own project with a modern deep learning framework

Maximum number of participants: 30

Virtual location: MS Teams

Organiser: UM FERI, NVIDIA


Lecturers:

Name: Domen Verber
Description: Domen Verber is an assistant professor at the Faculty of Electrical Engineering and Computer Science, University of Maribor (UM FERI), an NVIDIA Deep Learning Institute ambassador for the University of Maribor, and its specialist for artificial intelligence and HPC. He has been working on HPC and artificial intelligence for more than 25 years.
  domen.verber@um.si, deep.learning@um.si

Name: Jani Dugonik
Description: Jani Dugonik is a researcher at the Faculty of Electrical Engineering and Computer Science, University of Maribor (UM FERI). His research covers natural language processing, evolutionary algorithms and artificial intelligence, as well as cyber and information security.
  jani.dugonik@um.si

 


Seminar: Usage of GIT with Gitlab, Github and Bitbucket

23 October 2025 at 13:00

The seminar will be held virtually. The presentation will last one hour; afterwards the lecturer will be available for questions. Source code management (SCM) with Git provides support for versioning with branching and merging in collaborative work. Git, as a distributed SCM, is usually connected to a central Git server that provides web functionality for source review, pull requests and integration with other services such as continuous integration (CI) and code documentation. In this seminar we will take a look at the Git development process and at popular servers and integrated services (CI, Read the Docs). How to version large files with Git in an HPC environment, along with code and documentation, will be discussed from a practical and data-provenance viewpoint.

 

Date and time: 23. 10. 2025, from 15:00 to 16:30

Language: Depending on the applications received

Organiser:

Lecturer:

Name: Leon Kos
  Assoc. Prof. Dr. Leon Kos teaches at ULFS and is qualified in several HPC-related topics. He is a qualified trainer from the HLRS training programme and was a key developer of the PRACE MOOC Managing Big Data with R and Hadoop. He led the PRACE Summer of HPC training programme in 2014, 2015, 2016, 2017, 2018 and 2019. He is also the Slovenian lead on several national and international projects.
E-mail: leon.kos@fs.uni-lj.si

Workshop: Containers on supercomputers

11 November 2025 at 09:00

Description: Researchers often face large computational challenges, for example in big data analysis, physics simulations, computational chemistry, computational biology, weather forecasting, fluid dynamics simulations and the like. Suitable software is often available for many of these problems, but it needs to be adapted to run on the chosen supercomputer.

In the workshop we will look at several ways of installing software: into the home directory, via environment modules, and via containers. We will get to know the concepts of virtual machines and containers and highlight the differences between the designs of Docker and Apptainer containers. We will learn how to use ready-made containers and, through practical examples, how to build a simple Apptainer container and run it in a supercomputing environment. We will then look at how to add support for GPU accelerators and multi-node processing to a container.

The workshop will be hands-on; the exercises will be carried out on a modern HPC system.

Difficulty: Advanced

Language: Slovenian

Date & time: 11. 11. 2025 from 10:00 to 15:00

Maximum number of participants: 30

Virtual location: Zoom (the link will be available only to registered participants)

Target audience: researchers, engineers, students, and anyone who needs more computing resources for their work

Recommended prior knowledge:

  • completion of the Basics of Supercomputing workshop,
  • an understanding of how a compute cluster is structured,
  • working via an SSH client (command line, file transfer),
  • basic familiarity with the Slurm middleware,
  • basic knowledge of the Linux operating system and the Bash shell,
  • basic knowledge of the Python programming language

 

Skills gained in the training:

  • familiarity with the Slurm middleware
  • an understanding of environment modules and containers
  • using existing Docker and Apptainer containers
  • building your own Apptainer containers to run selected programs on a supercomputing cluster
  • using different compute resources with environment modules and containers (CPU cores, GPU accelerators, nodes)

Organiser:

FRI logo

Lecturers:

Name: Davor Sluga
Description: https://fri.uni-lj.si/sl/o-fakulteti/osebje/davor-sluga
E-mail: davor.sluga@fri.uni-lj.si
Name: Ratko Pilipović
Description: https://www.fri.uni-lj.si/sl/o-fakulteti/osebje/ratko-pilipovic
E-mail: ratko.pilipovic@fri.uni-lj.si

 


Dnevi SLING

19 November 2025 at 08:00

The Slovenian Supercomputing Network Days (Dnevi SLING) are the key event at which we present the latest achievements in supercomputing, the work of the competence centre, and examples of good practice in using HPC infrastructure in research and industry. This year the event will take place from 19 to 21 November as part of Arnes's Mreža znanja conference at the Four Points by Sheraton Ljubljana Mons hotel.

Alongside a rich supercomputing programme, new thematic tracks will be included as well, such as artificial intelligence, the AI Factory (Tovarna UI), the link between supercomputing and open science, and many other current topics.

More information, the programme and registration will shortly be published on the Mreža znanja website.

Workshop: CuPY - calculating on GPUs made easy

26 November 2025 at 12:00

Description: Scientific computing increasingly relies on GPU acceleration to handle large datasets and complex numerical tasks. While traditional CPU-based workflows remain essential, modern research benefits greatly from learning how to harness GPUs in an accessible way through Python. CuPY provides a NumPy-like interface that enables users to offload array computations to the GPU with minimal code changes.

On Day 1, we will cover the motivation for GPU computing, discuss what GPUs are best suited for, and set up a self-contained environment. Participants will learn to use conda/mamba for environment management, install and configure a GPU-ready CuPY setup, and verify its functionality.  On Day 2, we will focus on the CuPY library itself. We will explore its syntax and functionality, emphasizing similarities and differences with NumPy. Through a series of simple examples, and culminating in a more involved case study, participants will gain the skills to confidently integrate GPU acceleration into their Python workflows.
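For a flavour of how small the code change is, a short illustrative sketch along those lines (it assumes a CUDA-capable GPU and an installed cupy package):

# Python
# NumPy on the CPU vs CuPy on the GPU: the array code is essentially identical.
import numpy as np
import cupy as cp

x_cpu = np.random.rand(10_000_000)
x_gpu = cp.asarray(x_cpu)            # copy the array to the GPU

y_cpu = np.sqrt(x_cpu) * 2.0         # CPU computation
y_gpu = cp.sqrt(x_gpu) * 2.0         # same expression, runs on the GPU

result = cp.asnumpy(y_gpu)           # copy the result back to the host
print(np.allclose(result, y_cpu))    # True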

Difficulty: Beginner

Date & Time:

Day 1: 26. 11. 2025  from 13.00 to 17.00

Day 2: 27. 11. 2025 from 13.00 to 17.00

Language: English

Prerequisite knowledge: Basic knowledge of Linux, the Terminal and some Python

Target audience: The workshop is intended for beginners and others interested in using GPUs with Python.

Virtual location: ZOOM (only registered participants will see ZOOM link)

Workflow: The training is live over Zoom, in the afternoon. The workshop will combine lecture and practical parts; your own laptop suffices to gain access to the ARNES GPU cluster.

 

Organizer:

University of Ljubljana

Lecturer: 

Name: Luka Leskovec
Description: Scientist and educationalist involved in theoretical physics and supercomputing
E-mail: luka.leskovec@fmf.uni-lj.si

THEY WERE NOT JUST NUMBERS: A PUBLIC READING OF THE NAMES OF CHILDREN KILLED IN GAZA

11 October 2025 at 15:40

“All children are our children,
said a kindly voice,
all the brats and urchins and kids,
all with fair and black and curly hair.
All ours. Including the different ones,
of different fates and faiths and races,
all white and black, all sick and hungry,
they are just like any one of us.”

(Tone Pavček, “Vsi naši otroci” / “All Our Children”)

As part of the Week of the Child and as a sign of solidarity, readings of the names of children killed in Gaza are taking place today in several Slovenian towns. According to official figures, around 20,000 children are said to have been killed in the past two years of the genocide, and the real number is probably higher. Faced with such a number of victims, it is hard to see that behind every number is a killed child: a brother, a sister, a nephew, a granddaughter. So that this is not forgotten, the names of children who have died over the past two years as a consequence of Israel's bombardment and destruction of Gaza and its inhabitants were read out in various towns across Slovenia. In Ljubljana the names will be read for several hours, which will not come close to covering all of the victims. Reading every name would take a full thirty hours.

Let this be a reminder that the supposed ceasefire recently concluded on Trump's initiative in fact means only that Israel can carry out the genocide a little more slowly. Until this colonial entity is destroyed, Zionist extremists will go on killing children and other innocent victims.


EVERYONE TO THE PROTEST IN VIDEM – LET'S SHOW ISRAEL THE RED CARD!

9 October 2025 at 10:57

On Tuesday, nearby Videm (Udine) hosts a European qualifier for the upcoming football World Cup between Italy and Israel. It is a match that should never have been allowed to happen at all: it is despicable that Israel is still permitted to appear in international competitions while it carries out a genocide.

This clearly exposes the conformism and double standards of international sports organisations. FIFA and UEFA, for example, excluded Russia from all football competitions four days after its initial attack on Ukraine, whereas talks about possibly excluding Israel are only beginning two years after the start of the genocide in Gaza. Recall also both organisations' evasiveness over the deaths of more than 400 Palestinian footballers, among them the local football legend Suleiman Al-Obeid.

This case, too, has shown that state and other institutions cannot be counted on, and that we have to organise ourselves to end the genocide. As we regularly report, Italy is among the brightest examples of such resistance “from below”, and so a large protest is being organised in Videm on Tuesday at 17:30.

Last year, when Italy played Israel in a Nations League match in Videm, around 3,000 people gathered at the protest. This year the organisers expect considerably more; the number of protesters outside the stadium, which holds 6,000 people, will most likely exceed the number of spectators inside it.

The Italian government has therefore approved the presence of agents of the Israeli intelligence service Mossad, who are supposed to protect the Israeli players. The decision has stirred up a great deal of dust in the local community and more widely in Italy, and has further spurred people to put an end to the servile attitude towards Israel.

Let's support them too; Videm is not far! Let's organise for an end to the genocide and show Israel the red card.


COLLECTION OF 40,000 SIGNATURES AGAINST THE PENSION REFORM BEGINS

8 October 2025 at 11:39

As of today, the Workers' Coalition (Delavska koalicija) is beginning to collect 40,000 certified signatures against the recently adopted austerity pension reform. The signatures will make it possible to call a referendum in which the people will decide whether we want this harmful reform or not.

As it is, every pension reform adopted in capitalist Slovenia to date has slowly worsened the position of pensioners. Because of the lengthening of the reference period, the lowering of accrual rates and the growing tying of pensions to inflation rather than to wage growth, almost 100,000 pensioners live below the at-risk-of-poverty threshold. As Anže Dolinar writes for Radio Študent: “[…] last year as many as 23 thousand people over 65 needed material assistance in the form of food. These are people who stood in queues for flour, rice, milk and pasta.”

The current reform will make things even worse for workers, the pensioners of tomorrow!

If you do not want to work deep into old age and receive a pension that barely covers your living costs, submit your signature!

The 40,000 certified signatures will be collected between 8 October and 11 November 2025 across Slovenia.

Ways to submit a certified signature:

Digital submission

Simply submit a certified signature digitally in one minute (SIGEN-CA, SI-PASS, e-Osebna or another method).

In-person submission

Without queueing, at any administrative unit in Slovenia, Thursdays included! Each administrative unit (UE) has a dedicated counter for signing.

Larger administrative units are open during these hours:

MON: 9:00 – 15:00

TUE: 8:00 – 15:00

WED: 8:00 – 18:00

THU: 8:00 – 15:00

FRI: 8:00 – 14:00

On most days you will be able to meet activists, workers and students, members of the Workers' Coalition, outside administrative units across Slovenia. They will help you submit your signature.

Your help is welcome too! For the signature collection we need as many volunteers as possible who are willing to give a few hours of their time to the workers' cause and invite passers-by outside the administrative units to sign. We also invite you to spread word of the campaign among your friends, relatives and colleagues.

The more of us there are, the stronger we will be!


TWO YEARS OF GENOCIDE IN GAZA: WHAT ELSE CAN YOU DO?

7 October 2025 at 14:08

Two years have passed since 7 October 2023, when the Palestinian resistance movement Hamas attacked Israel in response to decades of violent oppression of Palestinians. Among the international political elite, the event triggered a wave of outrage, condemnations of the attack, and solidarity with the state of Israel. Within hours, Israel responded with a brutal assault on Gaza, which soon grew into a genocide of unimaginable proportions that continues to this day. This is not a conflict or a war; it is an unlawful occupation, apartheid, and ethnic cleansing. It seems that by now everyone knows this except the upstanding media, which still write about a conflict, a war, and two equal sides, and see the culprit in the people resisting the imperialist order.

Al Jazeera reports that in two years the Israeli army has killed some 67,000 Palestinians, with thousands still under the rubble. On average, one child dies in Gaza every hour. At least 169,000 Palestinians in Gaza are seriously wounded. 4,000 children in Gaza have lost one or more limbs in the attacks. Israel has damaged 125 hospitals and killed 1,722 health workers. At least 300 media workers have been killed. At least 459 people have died of starvation, 154 of them children. At least one child in four suffers from acute malnutrition. More than 19,000 people have been wounded while out looking for food, and at least 2,600 were killed by the Israeli army while seeking humanitarian aid. Nearly all homes and critical infrastructure in Gaza have been destroyed. Meanwhile, Israel is also abducting, imprisoning, beating, and killing people in the West Bank, while settlers uproot olive trees and demolish houses.

Trump's "peace plan" is currently making the rounds in the media, but it must not mislead us. First, because it is Israel, not Hamas, that has broken every peace agreement so far. Second, because it is just another attempt to deny Palestinians the right to self-determination and to entrench American-Israeli control over Gaza. The plan demands the complete dissolution of Hamas; the liberation organisation has already agreed not to govern Gaza, but dissolving the resistance movement is something else entirely. In reality, the situation is this: the "most democratic countries in the world", the US and Israel, are doing everything they can to control who may govern another country and how, while the oppressed are not supposed to lift a finger. Let us not forget that in one of his bizarre posts Trump wanted to turn Gaza into a resort, and that it is precisely Trump and the US who are enabling the genocide.

Protests, strikes, and international flotillas are therefore of KEY IMPORTANCE. After all, we can take credit at least for the fact that politicians are talking about peace at all, because they are desperate to stop the wave of resistance rising around the world. In short: THEY ARE AFRAID OF US. THEY ARE AFRAID OF THE POWER OF THE PEOPLE. And that is a good sign. We are doing something right.

New ships carrying humanitarian aid and around 300 activists are already sailing towards Gaza. Mass protests continue around the world. The Italian rank-and-file union USB has announced that we can soon expect a day of joint struggle by all the port unions that have signed a joint statement to end the genocide, among them Slovenia's crane operators. In the statement, the unions demand an immediate end to the genocide, the opening of humanitarian corridors, ports closed to weapons for any war, a halt to the EU's rearmament, and the redirection of those enormous funds into basic services for the whole population.

Meanwhile, the genocidal Israeli army is gradually releasing the unlawfully detained activists who tried to deliver humanitarian aid to Gaza. Those freed are mostly activists from Europe and the United States. Many, above all those with darker skin, remain imprisoned.

One of the leaders of the international flotilla, Thiago Avila, wrote a letter on Monday from a prison in the Negev desert: "Many of us have been physically assaulted, they threaten us with dogs, they deprive us of sleep, they abuse us psychologically. I am not reporting this to complain, but so the world realises that if they do this to humanitarian workers, you can imagine what they do to Palestinians." 11,000 Palestinians are held in Israeli prisons, 400 of them children. Such treatment of prisoners is completely routine in the Negev!

Israel has no legal jurisdiction over Palestinians, which means they can be imprisoned without trial. After her release, Greta Thunberg also stressed that we must not forget Gaza.

Palestinians have no choice; every day they must get up and find the hope and energy to fight. So DO NOT GIVE UP. Let the aggression of the Israeli army MOTIVATE you to do EVERYTHING IN YOUR POWER so that, as a society, we end apartheid, colonialism, and genocide once and for all.

Several different actions for Palestine are currently under way in Slovenia. We highlight just a few. If you know of another, add it in a comment below.

  1. WRITE TO POLITICIANS and demand that the government call in the United Nations for Israel's membership to be suspended (link). Write what you feel and what you want. (link)
  2. SIGN THE PETITION FOR SAFE AND LEGAL ROUTES FOR PALESTINIAN REFUGEES. The Slovenian government can put saving lives before bureaucracy and give refugees a safe route to Slovenia, especially if their relatives are already here. (link)
  3. BOYCOTT PRODUCTS that come from Israel. Many multinationals have already suffered financial damage from the systematic boycott. The comments contain some useful links to make the boycott easier. (link)
  4. ORGANISE IN YOUR OWN COMMUNITY. Does the company you work for have any ties to Israeli companies? Join forces with your co-workers and demand an immediate end to cooperation with Israel and with companies that support the genocide.
  5. FOLLOW THE VOYAGE OF THE INTERNATIONAL SHIPS AND DEMAND THAT GOVERNMENTS RELEASE THE HUMANITARIAN WORKERS.
  6. SHARE THIS POST and encourage your friends, relatives, and colleagues to act for a FREE PALESTINE.

The post TWO YEARS OF GENOCIDE IN GAZA: WHAT ELSE CAN YOU DO? first appeared on Rdeča Pesa.

[COMMENTARY] The "European way of life" drains the lives of all those who have no access to it

6 October 2025 at 17:09

Environmental damage is putting the "European way of life" at risk, says a report by the European Environment Agency (a link to it can be found in the comment below this post). But the truth is actually the reverse: it is the so-called "European way of life" that is destroying the environment. The idea that we must protect the environment in order to save the "European way of life" is therefore rather odd and self-contradictory.

The "European way of life" refers to the lifestyles that took off in the decades after the Second World War. Fordism mass-produced consumer goods, while Keynesianism, and later the neoliberal economy of debt and housing bubbles, created an enormous mass of consumers. One purpose of such an economy was also to pacify the discontented poor masses, who might otherwise have backed revolutionary forces. Later, the "European way of life" became established as a norm used to smear alternative movements that sought a way out of the vicious circle of wage labour and consumerism.

Yet the "European way of life" is not as widespread as one might think. Many Europeans, especially in eastern and southern Europe, do not take part in it because they are poor and often live more traditional peasant or village lives. The same goes for low-paid workers, Europeans with a migrant background, and the rural population of western Europe. For these groups, the "European way of life" is above all the life of the big city and of the middle class elsewhere, and perhaps something they aspire to.

Either way, the lives of southern and eastern Europeans, of rural people, workers, and migrants do not fit the "European way of life", and they are often condemned for it. But the problem with this white, urban, middle-class standard is not only the stigma it produces. The fundamental contradiction is that the "European way of life" is largely made possible precisely by the poverty of those labelled "inferior". There is no rich and comfortable life without the inflow of cheap food, cheap labour, and cheap raw materials from peripheral regions and from people on the margins of society. In other words, the "European way of life" drains the lives of all those who have no access to it, both inside Europe and beyond it. On top of that, it keeps the urban middle classes in debt, exhausted by work and by the upkeep of their social status.

If this standard were broken and peripheral regions and people wrested themselves free of exploitation, their lives could offer an example of a slower, more sociable, and differently productive way of living. Europe's urban middle classes already envy and romanticise rural, peripheral lives when they holiday in Greece, Sardinia, or Dalmatia, or the sociability of migrant communities when they are invited to a big wedding.

Yet at the same time they cling firmly to careers, consumerism, and inflated property prices: everything that leaves them overburdened while keeping migrants, workers, and people on the periphery poor and just as drained, but without any social prestige. It is precisely this divided and unequal way of life that is destroying the planet we live on and undermining itself.

The point, then, is not that we must save the "European way of life", but that we must change it. Not only to preserve the planet that makes our lives possible, but to create a good life for all.

You have been reading Rdeča Pesa's translation of a commentary written by @Bue Rübner Hansen, who, among other things, was a guest at last year's International Summer School of Political Ecology in Ljubljana. A link to the recording of his lecture is available below.

Europe’s environment and climate: knowledge for resilience, prosperity and sustainability | Europe’s environment 2025 (EEA)

Environmental damage is putting European way of life at risk, says report | Climate crisis | The Guardian

Bue Rübner Hansen: In/equality beyond fact and norm – towards strategic research

The post [COMMENTARY] The "European way of life" drains the lives of all those who have no access to it first appeared on Rdeča Pesa.

TOGETHER TO 6,000

5 October 2025 at 15:16

We are glad that our content is reaching an ever larger number of people. We will keep working to open up topics that find no space in the mainstream media, and to amplify every struggle of working people for a world organised around social needs rather than the private interests of a handful of the richest. We will continue to follow, promptly and analytically, the efforts for the liberation of Palestine, working people's fight against the pension reform, and the push to improve public transport, and to shed light on the broader trends of the ecological, social, and political crisis of contemporary capitalism.

We cannot do this alone, though. We need your help, dear followers. Because we want to reach a new milestone in the number of followers on Facebook (6,000) as soon as possible, and thereby bring Rdeča Pesa's content to new people, we kindly ask you to lend a hand. Go to our Facebook page, click the three dots on the right, choose "Invite friends", and invite your friends to like Rdeča Pesa. We will also be grateful if you share this post. And we will make sure you do not regret the decision.

The more of us there are, the sooner we will reach the goal.

The post TOGETHER TO 6,000 first appeared on Rdeča Pesa.
