Reading view


Google Search Is More Useful if You Know Its Advanced Operators, to a Point

By: Nick Heer

Hana Lee Goldin:

The search bar you already have is more capable than that arrangement requires you to know. With the right syntax, it becomes a precision instrument: narrow by domain, by date, by file type, by exact phrase. We can pull up archived pages, surface open file directories, and even find what people said in forums instead of what brands want us to find. None of it requires a new tool or a paid account. The capability has been there the whole time.

Advanced search operators are something Google does better than any competitor. DuckDuckGo has its bangs and I like them very much, but Google has a vast catalogue that can be searched with remarkable precision — to a point. If you use these advanced search operators, get ready to see a lot of CAPTCHAs. Google will slow you down and may even block you temporarily if you use it too well.
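
For illustration — the specific sites and terms below are placeholders of my own, not from Goldin’s piece, but the operators themselves (quoted phrases, site:, filetype:, before:, and intitle:) are all documented by Google:

"annual report" site:example.org filetype:pdf
"battery swelling" site:reddit.com before:2023
intitle:"index of" "field recordings"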

⌥ Permalink

Upgrade Presents: The Origin of Apple

By: Nick Heer

The newest episode of “Upgrade” is a wonderful retelling of a very particular history (also available as a video):

Jason and Myke tell the story of Apple’s origin. It emerged from the unique environment of the Santa Clara valley suburbs of the ’70s thanks to the particular genius of its two co-founders and some surprising help they got along the way.

Though I was familiar with much of this, I cannot think of many better people to tell it than Jason Snell. I have already seen one thinkpiece after another about what a fifty-year-old — ish — Apple means in the grand scope, and there is definitely a place for that. Today’s Apple is a long way from this origin story, of course, but what a story it is.

This gives me an excuse to explain why I am fascinated by this one computer company. Though this story is great, that is not why, nor is it the history of successfully bringing the graphical user interface to the market, nor the ’90s–’00s turnaround. Those are all parts of it. But the main reason I am fascinated by Apple is that it has built such a distinct identity for itself. It has not always stuck to it but, if anything, I think that helps reinforce the existence of an Apple-y identity. Some might attribute that to a particular way of marketing itself which, while true, also emphasizes how important that identity is: when its messaging does not match the products, services, experience, or expected corporate behaviour, it is noticeable.

This is all a bit mythical, to be sure. The garage-era Steves probably would not imagine Apple celebrating its fiftieth birthday by being the second most valuable corporation in the world, nor would they think it would hire Paul McCartney for its employee party. To me, one of those things feels more Apple-y than the other. It feels right for the company to celebrate with a music legend; it probably does not need to be quite so rich or powerful to do that, though. Apple has long been a really, really big corporation, and that — in itself — does not feel very Apple-y to me. That, too, is fascinating.

⌥ Permalink

⌥ Apple’s Supposed A.I. Strategy Shift Is the Company’s Normal Strategy

By: Nick Heer

Mark Gurman, last week in Bloomberg:

Apple Inc. plans to open Siri to outside artificial intelligence assistants, a major move aimed at bolstering the iPhone as an AI platform.

The company is preparing to make the change as part of a Siri overhaul in its upcoming iOS 27 operating system update, according to people with knowledge of the matter. The assistant can already tap into ChatGPT through a partnership with OpenAI, but Apple will now allow competing services to do the same.

This is not unexpected. In the Apple Intelligence introduction at WWDC 2024, Craig Federighi said “we want you to be able to use these external models without having to jump between different tools”, and that they were “starting” with ChatGPT. Gurman points this out and also notes Federighi’s teased Google Gemini integration. Tim Cook, in an October 2025 earnings call, said much the same. (Gurman also notes that this integration is “separate from Apple’s work with Google to rebuild Siri using Gemini models”, but “the news initially weighed on shares of Google”, which I am sure is exactly the reason for them dropping 3.4% and nothing to do with an existing weeklong slide but, then again, I do not work at Bloomberg so who the hell am I to say?)

Gurman, in his “Power On” newsletter over the weekend, further explored what he calls Apple “doubl[ing] down” on a “revamped A.I. and Siri strategy”:

That reality is shaping the company’s new approach, set to be unveiled at the Worldwide Developers Conference on June 8. Rather than engaging in an AI arms race, Apple is focusing on its core strengths: selling highly profitable hardware and making money off the services that run on it.

Historically, Apple’s software — iMessage, Maps and Photos, for example — has been about driving product sales rather than generating revenue in their own right. Rivals, in contrast, are aggressively monetizing AI through subscriptions and premium apps. Apple understands that few, if any, users will pay for Siri or its other AI technology. The opportunity to turn Apple Intelligence into a moneymaker has effectively passed.

What would have been more newsworthy here is if Apple’s A.I. strategy were anything other than building software exclusively for its proprietary hardware. This does not sound like a “revamped” strategy; it sounds like Apple’s whole deal. If it can make money from Apple Intelligence or Siri in the future, it certainly might try; it is putting ads in Apple Maps, after all. Services is a money-printing machine with less risk. But it is still a hardware company.

This part made me do a double-take and wonder if I missed something. In February 2024, following Apple’s cancellation of its car project, Gurman predicted that hardware would continue to be Apple’s primary business “for now”, as though that would change in the near future. This has been a constant since Apple Intelligence was announced at WWDC that year.

What one could argue has been a change of strategy is the rumoured development of a chatbot; Gurman called it a “strategic shift” when he broke the news. But that, too, is somewhat inaccurate in two ways: Gurman’s description of it is as an overhauled version of Siri that will let people do normal Siri stuff — setting timers, end of list — plus some of the features Apple announced in 2024 but has not yet shipped which, confusingly, were also first set to ship in an update to iOS 26 without the wholly new version of Siri but also depending on Gemini. Got it?

But even that is not much of a strategy shift. Gurman tweeted in May 2024 — before WWDC and the debut of Apple Intelligence — that “Apple isn’t building its own chatbot but knows the market wants it so it’s going elsewhere for it. It’s the same playbook as search.” So, again, it is just borrowing from its ages-old playbook. It will continue to have proprietary stuff that ostensibly works seamlessly across a user’s Apple-branded hardware, allow installation of third-party add-ons, and rely on Google for some core functionality. How, exactly, is this a “revamp”?

Anyway, here is what Gurman wrote in January after the Gemini announcement and before the first build of iOS 26.4 was released:

Today, Apple appears to be less than a month away from unveiling the results of this partnership. The company has been planning an announcement of the new Siri in the second half of February, when it will give demonstrations of the functionality.

Whether that takes the form of a major event or a smaller, tightly controlled briefing — perhaps at Apple’s New York media loft — remains unclear. Either way, Apple is just weeks away from finally delivering on the Siri promises made at its Worldwide Developers Conference back in June 2024. At long last, the assistant should be able to tap into personal data and on-screen content to fulfill tasks.

Apple today shipped the first build of iOS 26.5 to developers without any sign of those features. While they may come in a later build, Juli Clover, of MacRumors, speculates they have been kicked to iOS 27.

Does not seem like much has changed at all.

⌥ I Regret the Blood Pact I Have Made With iCloud Photos

By: Nick Heer

Sometimes, I do not recognize a trap until I am already in it. Photos in iCloud is one such situation.

When Apple launched iCloud Photo Library in 2014, I was all-in. Not only is it where I store the photos I take on my iPhone, it is where I keep the ones from my digital cameras and my film scans, and everything from my old iPhoto and Aperture libraries. I have culled a bunch of bad photos and I try not to hoard, but it is more-or-less a catalogue of every photo I have taken since mid-2007. I like the idea of a centralized database of my photos, available on all my devices, that is functionally part of my backup strategy.1

But, also, it is large. When I started putting photos in there eleven years ago with a 200 GB plan, I failed to recognize it would become an albatross. iCloud Storage says it is now 1.5 TB and, between the amount of other stuff I have in iCloud and my Family Sharing usage, I have just 82 GB of available space. 2 TB seemed like such a large amount of space until I used 1.9 of it.

Apple’s next iCloud tier is a generous 6 TB, but it costs another $324 per year. I could buy a new 6 TB hard disk annually for that kind of money. While upgrading tiers is, by far, the easiest way to solve this problem, it only kicks that can down that road, the end of which currently has whatever two terabytes’ worth of cans looks like.

A better solution is to recognize I do not need instant access to all 95,000 photos in my library, but iCloud has no room for this kind of nuance. The iCloud syncing preference is either on or off for the entire library.

Unfortunately, trying to explain what goes wrong when you deviate from Apple’s model of how photo libraries ought to work will become a bit of a rant. And I will preface this by saying it is all using Photos running on MacOS Ventura, which is many years behind the most recent version of MacOS. It is not possible for me to use the latest version of Photos to make these changes because upgraded libraries cannot be opened by older versions of Photos. However, in my defense, I will also note that the version on Ventura is Photos 8.0, and these are the kinds of bugs and omissions that are inexcusable after that many revisions.

So: the next best thing is to create a separate Photos library — one that will remain unsynced with iCloud. Photos makes this pretty easy: hold the Option (⌥) key while launching it and you can create or choose a library. But how does one move images from one library to the other? Photos is a single-window application — you cannot even open different images in new windows, let alone run separate libraries in separate windows. This should be possible, but it is not.

As a workaround, Apple allows you to import images from one Photos library into another — but not if the source library is synced with iCloud. You therefore need to turn off iCloud sync before proceeding, at which point you may discover that iCloud is not as dependable as you might have expected.

I have “Download Originals to this Mac” enabled, which means that Photos should — should — retain a full copy of my library on my local disk. But when I unchecked the “iCloud Photos” box in Settings, I was greeted by a dialog box informing me that I would lose 817 low-resolution local copies, something which should not exist given my settings, though reassuring me that the originals were indeed safe in iCloud. There is no way to know which photos these are nor, therefore, any way to confirm they are actually stored at full resolution in iCloud. I tried all the usual troubleshooting steps. I repaired my library, then attempted to turn off iCloud Photos; now I had 850 low-resolution local copies. I tried a neat trick where you select all the pictures in your library and select “Play Slideshow”, at which point my Mac said it was downloading 733 original images, then I tried turning off iCloud Photos again and was told I would lose around 150 low-resolution copies.

You will note none of these numbers add or resolve correctly. That is, I have learned, pretty standard for Photos. Currently, it says I have 94,529 photos and 898 videos in the “Library” view, but if I select all the items in that view, it says there are a total of 95,433 items selected, which is not the same as 94,529 + 898. It is only a difference of six items but, also, it is an inexplicable difference of six.

At this point, I figured I would assume those 150 photos were probably in iCloud, sacrifice the low-resolution local copies, and prepare for importing into the second non-synced library I had created. So I did that, switched libraries, and selected my main library for import. You might think reading one Photos library from another stored on the same SSD would be pretty quick. Yes, there are over 95,000 items and they all have associated thumbnails, but it takes only a beat to load the library from scratch in Photos.

It took over thirty minutes.

After I patiently waited that out, I selected a batch of photos from a specific event and chose to import them into an album, so they stay categorized. Oh, that is right — just because you are importing across Photos libraries, that does not mean the structure will be retained. There is no way, as far as I can tell, to keep the same albums across libraries; you need to rebuild them.

After those finished importing, I pulled up my main library again to do the next event. You might expect it to retain some memory of the import source I had only just accessed. No — it took another thirty minutes to load. It does this every time I want to import media from my main library. It is not like that library is changing; it is no longer synced with iCloud, remember. It just treats every time it is opened as the first time.

And it was at this point I realized the importer did not display my library in an organized or logical fashion. I had expected it to be sorted old-to-new since that is how Photos says it is displayed, but I saw photos from many different years all jumbled together. It is almost in order, at times, but then I would notice sequential photos scattered all over.

My guess — and this is only a guess — is that it sub-orders by album, but does no further sorting after that. This is a problem for me given a quirk in my organizational structure. In addition to albums for different events, I have smart albums for each of my cameras and each of my iPhone’s individual lenses. But that still does not excuse the importer’s inability to sort old-to-new. The event I spotted early on and was able to import was basically a fluke. If I continued using this cross-library importing strategy, I would not be able to keep track of which photos I could remove from my main library.

There is another option, which is to export a selection of unmodified originals from my primary library to a folder on disk, and then switch libraries, and import them. This is an imperfect solution. Most obviously, it requires a healthy amount of spare disk space, enough to store the selected set of photos thrice, at least temporarily: once in the primary library, once in the folder, and once in the new library. It also means any adjustments made using the Photos app will be discarded — but, then again, importing directly from the library only copies the edited version of a photo without any of its history or adjustments preserved.

What I would not do, under any circumstance — and what I would strongly recommend anyone avoid — is use the Export Photos option. This will produce a bunch of lossy-compressed photos, and you do not want that.

Anyway, on my first attempt at the export-originals-then-import process, I exported the 20,528 oldest photos in my library to a folder. Then I switched to the archive library I had created, and imported that same folder. After it was complete, Photos said it had imported 17,848 items, a difference of nearly 3,000 photos. To answer your question: no, I have no idea why, or which ones, or what happened here.

This sucks. And it particularly sucks because most data is at least kind of important, but photos are really important, and I cannot trust this application to handle them.

There is this quote that has stuck with me for nearly twenty years, from Scott Forstall’s introduction to Time Machine (31:30) at WWDC 2006. Maybe it is the message itself or maybe it is the perfectly timed voice crack on the word “awful”, but this resonated with me:

When I look on my Mac, I find these pictures of my kids that, to me, are absolutely priceless. And in fact, I have thousands of these photos.

If I were to lose a single one of these photos, it would be awful. But if I were to lose all of these photos because my hard drive died, I’d be devastated. I never, ever want to lose these photos.

I have this library stored locally and backed up, or at least I thought I did. I thought I could trust iCloud to be an extra layer of insurance. What I am now realizing is that iCloud may, in fact, be a liability. The simple fact is that I have no idea what state my photo library is currently in: which photos I have in full resolution locally, which ones are low-resolution with iCloud originals, and which ones have possibly been lost.

The kindest and least cynical interpretation of the state of iCloud Photos is that Apple does not care nearly enough about this “absolutely priceless” data. (A more cynical explanation is, of course, that services revenue has compromised Apple’s standards.) Many of these photos are, in fact, priceless to me, which is why I am questioning whether I want iCloud involved at all. I certainly have no reason to give Apple more money each month to keep wrecking my library.

I will need to dedicate real, significant time to minimizing my iCloud dependence. I will need to check and re-check everything I do as best I can, while recognizing the difficulty I will have in doing so with the limited information I have in my iCloud account. This is undeniably frustrating. I am glad I caught this, however, as I sure had not previously thought nearly as much as I should have about the integrity of my library. Now, I am correcting for it. I hope it is not too late.


  1. It is no longer the sole place I store my photos. I have everything stored locally, too, and that gets backed up with Backblaze. Or, at least, I think I have everything stored locally. ↥︎

Bill C-22 Gives Canadian Authorities Additional Warrantless Powers

By: Nick Heer

Gabriel Hilty, Toronto Star:

Speaking alongside Chief Myron Demkiw on Thursday at Toronto police headquarters, Public Safety Minister Gary Anandasangaree said Bill C-22, the Lawful Access Act, will “create a legal framework for modernized, lawful access regime in Canada,” something that police forces have been requesting “for decades.”

The bill is Prime Minister Mark Carney’s government’s second push to pass expanded police search powers into law. An earlier proposal on lawful access was met with widespread concerns over potential overreach.

Paula Tran, Ottawa Citizen:

“The bill effectively lowers the standard that police have to meet. Sure, law enforcement says they’re happy, but that means they need less evidence and need to do less work to get the information about subscribers, and I don’t think that’s a good thing. It’s the lowest standard in Canadian criminal law,” [Michael] Geist said.

[…]

Bill C-22 also proposes new legislation that would compel telecommunication companies to store and retain client metadata, like device location, for a year and to make it available to law enforcement and CSIS with a warrant. The metadata can be used to track a person’s live location in case they pose a national security threat or are considered to be in danger.

OpenMedia is running a campaign to email Members of Parliament, though I am suspicious these form letter campaigns actually work. It is a bare minimum signal since it requires almost no commitment. My M.P. is usually opposed to anything proposed by this government, since he is in the official opposition, but his reaction to this bill’s much worse predecessor is that it contained “the most commonsensical security changes we need to make in Canada”. I expect I will be writing him and, when I do, I will be sure to adjust OpenMedia’s form letter. If you are writing to your M.P., I suggest you do the same if you can spare the time.

⌥ Permalink

Wealthsimple Clears Regulatory Hurdle to Bring ‘Prediction Markets’ to Canada

By: Nick Heer

Meera Raman, Globe and Mail:

Wealthsimple is seeking to offer prediction trading in Canada, a controversial type of betting on real-world events that has surged in popularity in the past year, and has been largely banned in this country.

[…]

The approval for Ontario-based Wealthsimple permits it only to offer contracts tied to economic indicators, financial markets and climate trends, the company confirmed – not sports or elections, which are among the most popular uses of prediction markets in the United States.

Interactive Brokers launched here last April. Why are we doing this to ourselves?

⌥ Permalink

A Different Perspective on the ‘Design Choices’ Social Media Company Verdicts

By: Nick Heer

Mike Masnick, of Techdirt, unsurprisingly opposes the verdicts earlier this week finding Meta and Google liable for how their products impact children’s safety. I think it is a perspective worth reading. Unlike the Wall Street Journal, Masnick respects your intelligence and brings actual substance. Still, I have some disagreements.

Masnick, on the “design choices” argument:

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

This sounds like a reasonable retort until you think about it for three more seconds and realize that the lack of neutrality in the outcomes of these decisions is the entire point. Users post all kinds of stuff on social media platforms, and those posts can be delivered in all kinds of different ways, as Masnick also writes. They can be shown in reverse-chronological order in a lengthy scroll, or they can be shown one at a time like with Stories. The source of the posts someone sees might be limited to just accounts a user has opted into, or it can be broadened to any account from anyone in the world. Twitter used to have a public “firehose” feed.

But many of the biggest and most popular platforms have coalesced around a feed of material users did not ask for. This is not like television, where each show has been produced and vetted by human beings, and there are expectations for what is on at different times of the day. This is automated and users have virtually no control within the platforms themselves. If you do not like what Instagram is serving you on your main feed, your choice is to stop using Instagram entirely — even if you like and use other features.

Platforms know people will post objectionable and graphic material if they are given a text box or an upload button. We know it is “impossible” to moderate a platform well at scale. But we are supposed to believe they have basically no responsibility for what users post and what their systems surface in users’ feeds? Pick one.

Masnick, on the risks of legal accountability for smaller platforms:

And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

I am going to need a citation that TikTok and Snap caved because they could not afford continuing to fight. It seems just as plausible they could see which way the winds were blowing, given what I have read so far in the evidence that has been released.

Masnick:

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

This is undeniably a worrisome precedent. I will note Raúl Torrez, New Mexico’s Attorney General and the man who brought this case against Meta, says he wants to do so for minors only. The implementation of this is an obvious question, though one that mandated age-gating would admittedly make straightforward.

Meta cited low usage when it announced earlier this month that it would be turning off end-to-end encryption in Instagram. If it is a question of safety or liability, it is one Meta would probably find difficult to articulate given end-to-end encryption remains available and enabled by default in Messenger and WhatsApp. An executive raised concerns about the feature when it was being planned, drawing a distinction between it and WhatsApp because the latter “does not make it easy to make social connections, meaning making Messenger e2ee will be far, far worse”.

I think Masnick makes some good arguments in this piece and raises some good questions. It is very possible or even likely this all gets unwound when it is appealed. I, too, expect the ripple effects of these cases to create some chaos. But I do not think the correct response to a lack of corporate accountability — or, frankly, standards — is, in Masnick’s words, “actually funding mental health care for young people”. That is not to say mental health should not be funded, only that it is a red herring response. In the U.S., total spending on children’s mental health care rose by 50% between 2011 and 2017; it continued to rise through the pandemic, of course. Perhaps that is not enough. But, also, it is extraordinary to think that we should allow companies to do knowingly harmful things and expect everyone else to correct for the predictable outcomes.

⌥ Permalink

Apple Discontinues the Mac Pro

By: Nick Heer

Chance Miller, of 9to5Mac, serving here as Apple’s official bad news launderer:

It’s the end of an era: Apple has confirmed to 9to5Mac that the Mac Pro is being discontinued. It has been removed from Apple’s website as of Thursday afternoon. The “buy” page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed.

Apple has also confirmed to 9to5Mac that it has no plans to offer future Mac Pro hardware.

Mark Gurman reported last year that it was “on the back burner”.

The Mac Pro was, realistically, killed off when the Apple Silicon era ended support for expandability and upgradability. The Mac Studio effectively takes its place, and is strategically similar to the “trash can” Mac Pro with all expandability offloaded to external peripherals. Unfortunate, but I think it was dishonest to keep selling this version of a “pro” Macintosh.

⌥ Permalink

Meta Loses Two Landmark Cases Regarding Product Safety and Children’s Use; Google Loses One

By: Nick Heer

Morgan Lee, Associated Press:

A New Mexico jury found Tuesday that social media conglomerate Meta is harmful to children’s mental health and in violation of state consumer protection law.

The landmark decision comes after a nearly seven-week trial. Jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety. The jury determined Meta violated parts of the state’s Unfair Practices Act on accusations the company hid what it knew [about] the dangers of child sexual exploitation on its platforms and impacts on child mental health.

Meta communications jackass Andy Stone noted on X his company’s delight to be liable for “a fraction of what the State sought”. The company says it will appeal the verdict.

Stephen Morris and Hannah Murphy, Financial Times:

Meta and Google were found liable in a landmark legal case that social media platforms are designed to be addictive to children, opening up the tech giants to penalties in thousands of similar claims filed around the US.

A jury in the Los Angeles trial on Wednesday returned a verdict after nine days of deliberation, finding Meta’s platforms such as Instagram and Google’s YouTube were harmful to children and teenagers and that the companies failed to warn users of the dangers.

Dara Kerr, the Guardian:

To come to its liability decision, the jury was asked whether the companies’ negligence was a substantial factor in causing harm to KGM [the plaintiff] and if the tech firms knew the design of their products was dangerous. The 12-person panel of jurors returned a 10-2 split answering in favor of the plaintiff on every single question.

Meta says it will also appeal this verdict.

Sonja Sharp, Los Angeles Times:

Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.

That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use it.

For its part, the Wall Street Journal editorial board is standing up for beleaguered social media companies in an editorial today criticizing everything about these verdicts, including this specific means of liability, which it calls a “dodge” around Section 230.

But it is not. The principles described by Section 230 are a good foundation for the internet. This law, while U.S.-centric, has enabled the web around the world to flourish. Making companies legally liable for the things users post will not fix the mess we are in, but it would cause great damage if enacted.

Product design, though, is a different question. It would be a mistake, I think, to read Section 230 as a blanket allowance for any way platforms wish to use or display users’ posts. (Update: In part, that is because it is a free speech question.) From my entirely layman perspective, it has never struck me as entirely reasonable that the recommendations systems of these platforms should have no duty or expectation of care.

The Journal’s editorial board largely exists to produce rage bait and defend the interests of the powerful, so I am loath to give it too much attention, but I thought this paragraph was pretty rich:

Trial lawyers and juries may figure that Big Tech companies can afford to pay, but extorting companies is certain to have downstream consequences. Meta and Google are spending hundreds of billions of dollars on artificial intelligence this year, which could have positive social impacts such as accelerating treatments for cancer.

Do not sue tech companies because they could be finding cancer treatments — why should I take this editorial board seriously if its members are writing jokes like these? They think you are stupid.

As for the two cases, I am curious about how these conclusions actually play out. I imagine other people who feel their lives have been eroded by the specific way these platforms are designed will be able to test their claims in court, too, and that it will be complicated by the inevitably lengthy appeals and relitigation process.

I am admittedly a little irritated by both decisions being reached by jury instead of a judge; I would have preferred to see reasoning instead of overwhelming agreement among random people. However, it sends a strong signal to big social media platforms that people saw and heard evidence about how these products are designed, and they agreed it was damaging. This is true of all users, not just children. Meta tunes its feeds (PDF) for maximizing engagement across the board, and it surely is not the only one. There are a staggering number of partially redacted exhibits released today to go through, if one is so inclined.

If these big social platforms are listening, the signals are out there: people may be spending a lot of time with these products, but that is not a good proxy for their enjoyment or satisfaction. Research indicates a moderate amount of use is correlated with neutral or even positive outcomes among children, yet there are too many incentives in these apps to push past self-control mechanisms. These products should be designed differently.

⌥ Permalink

Meta Laid Off Several Hundred People Today

By: Nick Heer

Ashley Capoot and Jonathan Vanian, CNBC:

Meta is laying off several hundred employees on Wednesday, CNBC confirmed.

The cuts are happening across several different organizations within the company, including Facebook, global operations, recruiting, sales and its virtual reality division Reality Labs, according to a source familiar with the company’s plans who asked not to be named because they are confidential.

Some impacted employees are being offered new roles within the company, the person said. In some cases, those new positions will require relocation.

“Several hundred” employees is a long way off from the numbers reported earlier this month. Perhaps Reuters got it all wrong but, more worryingly for employees, perhaps those figures were correct and this is only the beginning.

⌥ Permalink

Talking Liquid Glass With Apple

By: Nick Heer

Danny Bolella attended one of Apple’s “Let’s Talk Liquid Glass” workshops:

Let’s address the elephant in the room. If you read the comments on my articles or browse the iOS subreddits, there is a vocal contingent of developers betting that Apple is going to roll back Liquid Glass.

The rationale usually points to the initial community backlash, the slower adoption rate of iOS 26, and the news that Alan Dye left Apple for Meta. The prevailing theory has been: “Just wait it out. They’ll revert to flat design.”

I shared this exact sentiment with the Apple team.

Their reaction? Genuine shock. They were actually concerned that developers were holding onto this position. They made it emphatically clear that Liquid Glass is absolutely moving forward, evolving, and expanding across the ecosystem.

Unsurprising. Though I expect a number of people reading this will be disappointed, I cannot imagine a world in which Apple would either revert to its previous design language or whip together something new. It is going to ride Liquid Glass and evolve it for a long time; if history is a good rule of thumb, assume ten years.

In theory, this is a good thing. Even on MacOS, I can find things I prefer to its predecessor, though admittedly they are few and far between. This visual design feels much more at home on iOS. The things that cause me far more frustration on a daily basis are the unrelenting bugs across Apple’s ecosystem, like how I just finished listening to an album with my headphones and then, when I clicked “play” on a new album, Music on MacOS decided it should AirPlay to my television instead of continuing through my headphones. That kind of stuff.

Regardless of what one thinks of the visual qualities of Liquid Glass, the software quality problem is notable there, too. We are now on the OS 26.4 set of releases and I am still running into plenty of instances of bizarre and distracting compositing problems. On my iPhone, the gradients that are supposed to help with legibility in the status bar and toolbar appear, disappear, and change colour with seemingly little relevance to what is underneath them. Notification Centre remains illegible until it is fully pulled down. Plus, I still see the kinds of graphics bugs and Auto Layout problems I have seen for a decade.

I hope to see a more fully considered version of the Liquid Glass design language at WWDC this year, and not merely from a visual perspective. This user interface is software, just like dedicated applications, and it is chockablock full of bugs.

Bolella, emphasis mine:

I plan to share an article soon where I break down the exact physics, z-axis rules, and “Barbell Layouts” of this hierarchy. But the high-level takeaway from the NYC labs is crystal clear: maximize your content, push your controls to the poles, and never let the interface compete with the information.

If you say so, Apple.

⌥ Permalink

OpenAI to Discontinue Sora App, Video Platform

By: Nick Heer

Berber Jin, Wall Street Journal:

CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.

OpenAI is not shutting this down because it has ethical qualms with what it has created, despite good reasons to do just that. It is because it is expensive without any clear reason for it to exist other than because OpenAI wants to be everywhere.

If you are desperate for a completely synthetic social media feed, Meta’s Vibes is apparently still around. Users are readily abusing it, of course, because that is what happens if you give people a text input box.

Update: In a tweet, OpenAI has confirmed it is shutting down Sora. But, while it originally announced “We’re saying goodbye to Sora”, it changed that about an hour later to read “We’re saying goodbye to the Sora app”, emphasis mine. The Journal has not changed its report to retract claims about shutting down the platform altogether, though, while OpenAI continues to promote Sora API pricing.

⌥ Permalink

Ads Are Coming to Apple Maps Later This Year

By: Nick Heer

Apple, in a press release with the title “Introducing Apple Business — a new all‑in‑one platform for businesses of all sizes”, buried in a section tucked in the middle labelled “Enhanced Discoverability in Apple Maps”, both of which are so anodyne as to encourage missing this key bit of news:

Every day, users choose Apple Maps to discover and explore places and businesses around them. Beginning this summer in the U.S. and Canada, businesses will have a new way to be discovered by using Apple Business to create ads on Maps. Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.

The way they are “clearly marked” is with a light blue background and a small “Ad” badge, though it is worth noting Apple has been testing an even less obvious demarcation for App Store ads. In the case of the App Store, I have found the advertising blitz junks up search results more than it helps me find things I am interested in.

This is surely not something users are asking for. I would settle for a more reliable search engine, one that prioritizes results immediately near me instead of finding places in cities often hundreds of kilometres away. There are no details yet on what targeting advertisers will be allowed to use, but it will be extremely frustrating if the only reason I begin seeing more immediately relevant results is because a local business had to pay for the spot.

Update: I have this one little nagging thought I cannot shake. Maps has been an imperfect — to be kind — app for nearly fifteen years, but it was ultimately a self-evident piece of good software, at least in theory. It was a directory of points-of-interest, and a means of getting directions. With this announcement, it becomes a container for advertising. Its primary function feels corrupted, at least a little bit, because what users care about is now subservient to the interests of the businesses paying Apple.

⌥ Permalink

Someone Has Publicly Leaked an Exploit Kit That Can Hack Millions of iPhones

By: Nick Heer

Lorenzo Franceschi-Bicchierai and Zack Whittaker, TechCrunch:

Last week, cybersecurity researchers uncovered a hacking campaign targeting iPhone users that used an advanced hacking tool called DarkSword. Now someone has leaked a newer version of DarkSword and published it on the code-sharing site GitHub.

Researchers are warning that this will allow any hacker to easily use the tools to target iPhone users running older versions of Apple’s operating systems who have not yet updated to its latest iOS 26 software. This likely affects hundreds of millions of actively used iPhones and iPads, according to Apple’s own data on out-of-date devices.

This is an entirely different exploit chain to the “Coruna” one which also surfaced earlier this month — so now there are two massive security exploits just floating around in the wild affecting a large number of iPhones. Apple is apparently concerned enough about these vulnerabilities that it is issuing patches as far back as iOS 15 though, disappointingly, only for devices that do not support newer major versions. If you have a device that can run iOS 26, you will be safer if it is running iOS 26.

It is, I should say, pretty brazen for the developers of this exploit chain to call the JavaScript file “rce_loader.js”. RCE stands for remote code execution. It is basically like calling the file “hacking_happens_here.js”.

⌥ Permalink

In a ‘Test’, Google Is Automatically Rewriting News Headlines in Its Search Results

By: Nick Heer

Sean Hollister, the Verge:

Since roughly the turn of the millennium, Google Search has been the bedrock of the web. People loved Google’s trustworthy “10 blue links” search experience and its unspoken promise: The website you click is the website you get.

Now, Google is beginning to replace news headlines in its search results with ones that are AI-generated. After doing something similar in its Google Discover news feed, it’s starting to mess with headlines in the traditional “10 blue links,” too. We’ve found multiple examples where Google replaced headlines we wrote with ones we did not, sometimes changing their meaning in the process.

As I noted when I linked to Hollister’s article about Discover back in December, this is not new in search results; it has been happening for years.

Danny Goodwin, Search Engine Land:

Dig deeper. Google changed 76% of title tags in Q1 2025 – Here’s what that means […]

According to the Google Search Central section on title links, originally published in 2021:

I am not arguing this is good or normal — the examples Hollister shows are extremely poor reflections of the articles in question — but I do not understand why it is only gaining traction now, nor how it meaningfully differs from what Google has been doing all along. It is indeed frustrating.

Many of the results you see in Google Search misrepresent the source material and are misleading. But that has been true for a while — which is a problem unto itself. People should not trust the results they see as represented by Google Search. The visual tone Google has maintained, however, is that it is a neutral directory. The summaries in A.I. Overview are delivered with an unearned dry authority, and the ten links below it are there because of a tense truce between Google’s goals and those of search optimization professionals.

Also, I had no idea that Search Engine Land had been acquired at some point by Semrush which, in turn, was bought by Adobe.

⌥ Permalink

Lobbying Firms Funded by Apple and Meta Are Duelling on Age Verification

By: Nick Heer

Emily Birnbaum, writing for Bloomberg in July:

Meta is also helping to fund the Digital Childhood Alliance, a coalition of conservative groups leading efforts to pass app-store age verification, according to three people familiar with the funding.

The App Store Accountability Act is based on model legislation written by the Digital Childhood Alliance. The lobbying group also publishes marketing pieces, including one (PDF) that calls Apple’s age verification frameworks “ineffective”. Specifically, it points to the lack of parental consent required “for kids to enter into complex contracts”, with “no way to verify that parental consent has been obtained”.

Meta, for its part, requires users to self-report their birthday and click a button that says “I agree” to create an Instagram account. In fairness, the title of that page says “read and agree to our terms” and, on the terms page, Meta does say you need to be 13 years old. This is pretty standard stuff but, if Meta actually cared about this, it could voluntarily implement the stricter controls at sign-up without a legislative incentive.

Though this article was published last year, I am linking to it now because something called the TBOTE Project recently resurfaced these findings and added some of its own in an open source investigation. Unlike similar investigations from sources like Bellingcat, it does not appear that the person or people behind TBOTE have editors or fact-checkers to verify their interpretation of this information. That does not mean it is useless; it is simply worth exercising some caution. Regardless, their findings show a massive amount of lobbyist spending on Meta’s part to try and get these laws passed.

Birnbaum continues:

The App Association, a group backed by Apple, has been running ads in Texas, Alabama, Louisiana and Ohio arguing that the app store age verification bills are backed by porn websites and companies. The adult entertainment industry’s main lobby said it is not pushing for the bills; pornography is mostly banned from app stores.

This is obviously bad faith, but also flawed in the opposite direction: the porn industry wants device-level verification.

⌥ Permalink

Tech CEOs and Investors Are Just Saying Stuff

By: Nick Heer

Jacob Silverman, Business Insider:

The growing bloat of popular tech rhetoric could serve as evidence for how the tech industry, having conquered so much of daily life, work, and entertainment, has begun to exhaust its imaginative capacities. Industry leaders promised that the mammoth capital outlay for AI would lead to the creation of a smarter-than-human intelligence that would serve as a universal solvent, fixing climate change, poverty, and even the problem of death itself. But that horizon — which we are supposed to reach by pumping out more fossil-fuel emissions and destabilizing labor and education — remains impossibly far away.

Gallup’s polling on views of different business sectors has, frustratingly, no ability to permalink to a particular industry and its historical rankings; so, you will need to go down to “Industry and Business Sector Ratings, B Through E” and then click the pagination arrow to get to “Computer Industry” on the second page. Once there, you will find what seem at first glance to be some remarkably stable figures.

Look a little closer, though, and the numbers tell a different story. Summing the “very” and “somewhat” figures for each type of response shows a marked decline in positive reception since a high in 2017, and a steady climb in negative reception. There are lots of reasons for this; many of them I have written about. But I do not think these loudmouth executives are doing the industry any favours by bullshitting their way through interviews and promising nonsense.

That is the data-driven answer. These guys also just sound really stupid when they say stuff like “it also takes a lot of energy to train a human” or “the long-term vision is to […] create a tradeable asset out of any difference in opinion” or “I bought Twitter […] to try to help humanity, whom I love”. I know I am writing this on a website called Pixel Envy and I am, well, me, but these barons sound comical and dorky.

⌥ Permalink

Adobe Pays Early Termination Fee, or ‘Settlement’, in U.S. Lawsuit Over Hidden Fees

By: Nick Heer

When the United States Federal Trade Commission and Department of Justice jointly filed a lawsuit in 2024 against Adobe, I commented on the similarities between that complaint and the one against Amazon. Both are about the ease of entering into subscriptions that are later difficult or expensive to leave, both alleged personal liability for executives — and, now, both have been settled out of court.

Michael Kan, PC Magazine:

Adobe has settled a 2024 lawsuit from the US government that alleged the company used hidden fees to trap users into paying for subscriptions.

On Friday, Adobe “finalized” an agreement with the Justice Department, which accused the software vendor of failing to inform new customers about payment terms or early termination fees. “While we disagree with the government’s claims and deny any wrongdoing, we are pleased to resolve this matter,” Adobe says.

I am sure Adobe has learned its lesson. Let us go and check its work. In its statement, Adobe says it has “made [its] sign-up and cancellation processes even more streamlined and transparent”. Here is how it describes its annual pricing, billed monthly, on its U.S. website:

Fee applies of half your remaining annual commitment if you cancel after Mar 31.

This is not the most direct sentence, but it is an accurate explanation of how much the fee will be, and when that fee takes effect — fourteen days from when I am writing this. It is followed by a little “i” informational icon. Clicking on it will display a callout noting when service will be cut off. For comparison, here is the equivalent disclaimer on its Canadian site:

Fee applies if you cancel after 14 days.

Here, too, there is a little informational icon. When you hover over it, Adobe says the same thing about cancellation, and adds that cancelling will incur an early termination fee. It is the same on the U.K. site.

What is the answer here? Does each country need to sue Adobe for its billing flow to disclose a reasonable amount of information?

⌥ Permalink

Meta Realizes Horizon Worlds on Quest Never Had Legs, Will Shut It Down in June

By: Nick Heer

A few weeks ago, Meta published an update from Samantha Ryan, of Reality Labs, announcing a “renewed focus” and a “doubling down” on virtual reality. It planned to achieve this by “almost exclusively” betting its future on the smartphone Horizon Worlds app.

In an announcement today, Meta shifted its definition of “almost exclusively” to simply “exclusively”:

Earlier this year, we shared an update on our renewed focus for VR and Horizon. We are separating the two platforms so each can grow with greater focus, and the Horizon Worlds platform will become a mobile-only experience. This separation will extend across our ecosystem, including our mobile app. To support this vision, we are making the following changes to streamline your Quest experience throughout 2026.

This opening paragraph is opaque and, though the announcement goes on to explain exactly what is happening, it is not nearly as clear as the email sent to Horizon Worlds users. I really think Meta is looking to exit from its pure V.R. efforts, especially with the sales success of the perv glasses.

As I write this, the Horizon app for iOS is the sixty-ninth most popular free game in the Canadian App Store, just behind Wordscapes and ahead of Perfect Makeover Cleaning ASMR. Nice?

⌥ Permalink

A Roadmap for Currency Symbol Implementation

By: Nick Heer

The Unicode Consortium would like to remind you to work closely with them if you are introducing a new symbol for your currency:

Such public usage leads to a need for the symbol to be encoded in the Unicode Standard and supported in commercial software and services. Standardization of a new character and subsequent support by vendors takes time: typically, at least one year, and often longer. All too often, however, monetary authorities announce creation of a new currency symbol anticipating immediate public adoption, then later discover there will be an unavoidable delay before the new symbol is widely supported in products and services.

I had no idea so many currency symbols had been introduced recently. Then again, before I read this, I had not given much thought to the one we use: $.

Hephzibah Anderson, for the BBC, in 2019:

The most widely accepted theory does in fact involve Spanish coinage, and it goes like this: in the colonies, trade between Spanish Americans and English Americans was lively, and the peso, or peso de ocho reales, was legal tender in the US until 1857. It was often shortened, so historians tell us, to the initial ‘P’ with an ‘S’ hovering beside it in superscript. Gradually, thanks to the scrawl of time-pressed merchants and scribes, that ‘P’ merged with the ‘S’ and lost its curve, leaving the vertical stroke like a stake down the centre of the ‘S’. A Spanish dollar was more or less worth an American dollar, so it’s easy to see how the sign might have transferred.

Not only the explanation for why all the world’s dollars have the same symbol, but also why we share it with the peso.

⌥ Permalink

Updating Ubuntu packages that you have local changes for with dgit

By: cks

Suppose, not entirely hypothetically, that you've made local changes to an Ubuntu package using dgit and now Ubuntu has come out with an update to that package that you want to switch to, with your local changes still on top. Back when I wrote about moving local changes to a new Ubuntu release with dgit, I wrote an appendix with a theory of how to do this, based on a conversation. Now that I've actually done this, I've discovered that there is a minor variation and I'm going to write it down explicitly (with additional notes because I forgot some things between then and now).

I'll assume we're starting from an existing dgit based repository with a full setup of local changes, including an updated debian/changelog. Our first step, for safety, is to make a branch to capture the current state of our repository. I suggest you name this branch after the current upstream package version that you're on top of, for example if the current upstream version you're adding local changes to can be summarized as 'ubuntu2.6':

git branch cslab-2.6

Making a branch allows you to use 'git diff cslab-2.6..' later to see exactly what changed between your versions. A useful thing to do here is to exclude the 'debian/' directory from diffs, which can be done with 'git diff cslab-2.6.. -- . :!debian', although your shell may require you to quote the '!' (cf).
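
(As a concrete example of that quoting — this is my own spelling of the same command, with single quotes so that a shell like bash doesn't attempt history expansion on the '!':)

git diff cslab-2.6.. -- . ':!debian'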

Then we need to use dgit to fetch the upstream updates:

dgit fetch -d ubuntu

We need to use '-d ubuntu', at least in current versions of dgit, or 'dgit fetch' gets confused and fails. At this point we have the updated upstream in the remote tracking branch 'dgit/dgit/jammy,-security,-updates' but our local tree is still not updated.

(All of dgit's remote tracking branches start with 'dgit/dgit/', while all of its local branches start with just 'dgit/'. This is less than optimal for my clarity.)

Normally you would now rebase to shift your local changes on top of the new upstream, but we don't want to immediately do that. The problem is that our top commit is our own dgit-based change to debian/changelog, and we don't want to rebase that commit; instead we'll make a new version of it after we rebase our real local changes. So our first step is to discard our top commit:

git reset --hard HEAD~

(In my original theory I didn't realize we had to drop this commit before the rebase, not after, because otherwise things get confused. At a minimum, you wind up with debian/changelog out of order, and I don't know if dropping your HEAD commit after the rebase works right. It's possible you might get debian/changelog rebase conflicts as well, so I feel dropping your debian/changelog change before the rebase is cleaner.)

Now we can rebase, for which the simpler two-argument form does work (but not plain rebasing, or at least I didn't bother testing plain rebasing):

git rebase dgit/dgit/jammy,-security,-updates dgit/jammy,-security,-updates

(If you are wondering how this command possibly works, as I was part way through writing this entry, note that the first branch is 'dgit/dgit/...', ie our remote tracking branch, and the second branch is 'dgit/...', our local branch with our changes on it.)

At this point we should have all of our local changes stacked on top of the upstream changes, but no debian/changelog entry for them that will bump the package version. We create that with:

gbp dch --since dgit/dgit/jammy,-security,-updates --local .cslab. --ignore-branch --commit

Then we can build with 'dpkg-buildpackage -uc -b', and afterward do 'git clean -xdf; git reset --hard' to reset your tree back to its pristine state.

(My view is that while you can prepare a source package for your work if you want to, the 'source' artifact you really want to save is your dgit VCS repository. This will be (much) less bulky when you clean it up to get rid of all of the stuff (to be polite) that dpkg-buildpackage leaves behind.)

Here in 2026, we're retaining old systems instead of discarding them

By: cks

I mentioned recently that at work, we're retaining old systems that we would have normally discarded. We're doing this for the obvious reason that new servers have become increasingly expensive, due to escalating prices of RAM (especially DDR5 RAM) and all forms of SSDs, especially as new servers might really require us to buy ones that support U.2 NVMe instead of SATA SSDs (because I'm not sure how available SATA SSDs are these days).

Our servers are generally fairly old anyways, so our retention takes two forms. The straightforward one is that we're likely going to slow down how quickly we push old servers completely out of service. Instead, we'll keep them on the shelf in case we want test or low-importance machines, and along with that we're probably going to be more careful about which generation of hardware we use for new machines. We've traditionally simply used the latest hardware any time we turn over a machine (for example, updating it to a new Ubuntu version), but this time around a bunch of those machines will reuse what we consider second generation hardware, or even older hardware for machines where we don't care too much if one is down for a day or two.

The second form of retention is that we're sweeping up older hardware that other groups at the university are disposing of, when in the past we'd have passed on the offer or taken only a small number of machines. For example, we just inherited a bunch of Supermicro servers and Lenovo P330 desktops (both old enough that they use DDR4 RAM), and in the past we'd have taken only a few of each at most. These inherited servers are likely to be used as part of what we consider 'second generation' hardware, equivalent to Dell R340s and R240s (and perhaps somewhat better in practice), so we'll use them for somewhat less important machines but ones where we still actually care.

(A couple of the inherited servers have already been reused as test servers.)

The hardware we're inheriting is perfectly good hardware and it'll probably work reliably for years to come (and if not, we have a fair number of spares now). But it's hardware with several years of use and wear already on it, and there's nothing special about it that makes it significantly better than the sort of second generation hardware we already have. However, we're looking at a future where we may not be able to afford to get new general purpose 1U servers and our current server fleet is all we'll have for a few years, even as some of them break or increasingly age out. So we're hoarding what we can get, in case. Maybe we won't need them, but if we do need them and we pass them up now, we'll really regret it.

(The same logic applies to the desktops. We don't have any immediate, obvious use for them, but at the same time they're not something we could get a replacement for if we pass on them now. We'll probably put a number of them to use for things we might not have bothered with if we had to get new machines; for example, I may set one up as a backup for my vintage 2017 office desktop.)

I suspect that there will be more of this sort of retention university-wide, whether or not the retained hardware gets used in the end. We're not in a situation where we can assume a ready supply of fresh hardware, so we'd maybe better hold on to what we have if it still works.

How old our servers are (as of 2026)

By: cks

Back in 2022, I wrote about how old our servers were at the time, partly because they're older than you might expect, and today I want to update that with our current situation. My group handles the general departmental infrastructure for the research side of the department (the teaching side is a different group), and we've tended to keep servers for quite a while. Research groups are a different matter; they often have much more modern servers and turn them over much faster.

As in past installments, our normal servers remain Dell 1U servers. What we consider our current generation are Dell R350s, which it looks like we got about two years ago in 2024 (and are now out of production). We still have plenty of Dell R340s and R240s in production, which were our most recent generation in 2022. We still have some Dell R230s and even R210 IIs in production in less important server roles. We also have a fair number of Supermicro servers in production, of assorted ages and in assorted roles (including our fileservers and our giant login server, which is now somewhat old).

(On a casual look, the Dell R210 IIs are all for machines that we consider decidedly unimportant; they're still in service because we haven't had to touch them. Our current view is that R350s are for important servers, and R340s and R240s are acceptable for less important ones.)

In a change from 2022, we turned over the hardware for our fileservers somewhat recently, 'modernizing' all of our ZFS filesystems in the process. The current fileservers have 512 GBytes of RAM in each, so I expect that we'll run this hardware for more than five years unless prices drop drastically back to what they were when we could afford to get a half-dozen machines with a combined multiple terabytes of (DDR5) RAM.

(Today, a single machine with 128 GBytes of DDR5 RAM and some U.2 NVMe drives came out far more expensive than we hoped (and the prices forced us to lower the amount of RAM we were targeting).)

Our SLURM cluster is quite a mix of machines. We have both CPU-focused and GPU-focused machines, and on both sides there's a lot of hand-built machines stuffed into rack cases. On the GPU side, the vendor servers are mostly Dell 3930s; on the CPU side, they're mostly Supermicro servers. A significant number of these servers are relatively old by now; the 3930s appear to date from 2019, for example. We have updated the GPUs somewhat but we mostly haven't bothered to update the servers otherwise, as we assume people mostly want GPU computation in GPU SLURM nodes. Even the CPU nodes are not necessarily the most modern; half of them (still) have Threadripper 2990WX CPUs (launched in 2018, and hand built into the same systems as in 2022). With RAM prices being the way they are, it's unlikely that we'll replace these CPU nodes with anything more recent in the near future.

With current hardware prices being what they are (and current and future likely funding levels), I don't think we're likely to get a new generation of 1U servers in the moderate future. We have one particular important server getting a hardware refresh soon, but apart from that we'll run servers on the hardware we have available today. This may mean we have to accept more hardware failures than usual (our usual amount of server hardware failures is roughly zero), but hopefully we'll have a big enough pool of old spare servers to deal with this.

(I expect us to reuse a lot more old servers than we traditionally have. For instance, our first generation of Linux ZFS fileservers date from 2018 but they've been completely reliable and they have a lot of disk bays and decent amounts of RAM. Surely we can find uses for that.)

PS: If I'm doing the math correctly, we have roughly 10 TBytes of DDR4 RAM of various sizes in machines that report DMI information to our metrics system, compared to roughly 6 TBytes of DDR5 RAM. That DDR5 RAM number is unlikely to go up by much any time soon; the DDR4 number probably will, for various reasons beyond the scope of this entry. This doesn't include our old fileserver hardware, which is currently turned off and not in service (and so not reporting DMI information about their decent amount of DDR4 RAM).

New old systems in the age of hardware shortages

By: cks

Recently I asked something on the Fediverse:

Lazyweb, if you were going to put together new DDR4-based desktop (because you already have the RAM and disks), what CPU would you use? Integrated graphics would probably be ideal because my needs are modest and that saves wrangling a GPU.

(Also I'm interested in your motherboard opinions, but the motherboard needs 2x M.2 and 2x to 4x SATA, which makes life harder. And maybe 4K@60Hz DisplayPort output, for integrated graphics)

If I was thinking of building a new desktop under normal circumstances, I would use all modern components (which is to say, current generation CPU, motherboard, RAM, and so on). But RAM is absurdly expensive these days, so building a new DDR5-based system with the same 64 GBytes of RAM that I currently have would cost over a thousand dollars Canadian just for the RAM. The only particularly feasible way to replace such an existing system today is to reuse as many components as possible, which means reusing my DDR4 RAM. In turn, this means that a lot of the rest of the system will be 'old'. By this I don't necessarily mean that it will have been manufactured a while ago (although it may have) but that its features and capabilities will be from a while back.

If you want an AMD CPU for your DDR4-based system, it will have to be an AM4 CPU and motherboard. I'm not sure how old good CPUs are for AM4, but the one you want may be as old as a 2022 CPU (Ryzen 5 5600; other more recent options don't seem to be as well regarded). Intel's 14th generation CPUs ("Raptor Lake") from late 2023 still support DDR4 with compatible motherboards, but at this point you're still looking at things launched two years or more ago, which at one point was an eternity in CPUs.

(It's still somewhat of an eternity in CPUs, especially AMD, because AMD has introduced support for various useful instructions since then. For instance, Go's latest garbage collector would like you to have AVX-512 support. Intel desktop CPUs appear to have no AVX-512 at all, though.)

Beyond CPU performance, older CPUs and older motherboards also often mean that you have older PCIe standards, fewer PCIe lanes, fewer high speed USB ports, and so on. You're not going to get the latest PCIe from an older CPU and chipset. Then you may step down in other components as well (like GPUs and NVMe drives), depending on how long you expect to keep them, or opt to keep your current components if those are good enough.

My impression is that such 'new old systems' have usually been a relatively unusual thing in the PC market, and that historically people have upgraded to the current generation. This led to a steady increase in baseline capabilities over time, as you could assume that desktop hardware would age out on a somewhat consistent basis. If people are buying new old systems and keeping old systems outright, that may significantly affect not just the progress of performance but also the diffusion of new features (such as AVX-512 support) into the CPU population.

The other aspect of this is, well, why bother upgrading to a new old system at all, instead of keeping your existing old old system? If your old system works, you may not get much from upgrading to a new old system. If your old system doesn't have enough performance or features, spending money on a new old system may not get you enough of an improvement to remove your problems (although it may mitigate them a bit). New old systems are effectively a temporary bridge and there's a limit to how much people are willing to spend on temporary bridges unless they have to. This also seems likely to slow down both the diffusion of nice new CPU features and the slow increase in general performance that you could assume.

(At work, the current situation has definitely caused us to start retaining machines that we would have discarded in the past, and in fact were planning to discard until quite recently.)

PS: One potentially useful thing you can get out of a new old system like this is access to newer features like PCIe bifurcation or decent UEFI firmware that your current system doesn't support or have.

Canonical's Netplan is hard to deal with in automation

By: cks

Suppose, not entirely hypothetically, that you've traditionally used /etc/resolv.conf on your Ubuntu servers but you're considering switching to systemd-resolved, partly for fast failover if your normal primary DNS server is unavailable and partly because it feels increasingly dangerous not to, since resolved is the normal configuration and what software is likely to expect. One of the ways that resolv.conf is nice is that you can set the configuration by simply copying a single file that isn't used for anything else. On Ubuntu, this is unfortunately not the case for systemd-resolved.

Canonical expects you to operate all of your Ubuntu server networking through Canonical Netplan. In reality, Netplan renders things down to a systemd-networkd configuration, which has some important effects and creates some limitations. Part of that rendered networkd configuration is your DNS resolution settings, and the natural effect of this is that they have to be associated with some interface, because that's the resolved model of the world. This means that Netplan attaches DNS server information to specific network interfaces in your Netplan configuration, so you must find the specific device name and then modify settings within it, and those settings are intermingled (in the same file) with settings you can't touch.

(Sometimes Netplan goes the other way, separating interface specific configuration out to a completely separate section.)

Netplan does not give you a way to find that interface name or otherwise query its configuration from a script; if anything, Netplan goes out of its way to not do so. For example, Netplan can dump its full or partial configuration, but it does so in YAML form with no option for JSON (which you could readily search through in a script with jq). However, if you want to modify the Netplan YAML without editing it by hand, 'netplan set' sometimes requires JSON as input. The lack of any good way to search or query Netplan's YAML matters because, for things like DNS settings, you need to know the right interface name. Without support for this in Netplan, you wind up doing hacks to try to get the right interface name.
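
For example, one such hack (a sketch of my own, not anything Netplan supports) is to skip Netplan's tooling entirely and parse its YAML files directly with PyYAML, assuming all of the configuration lives in /etc/netplan/*.yaml:

#!/usr/bin/python3
# Sketch: list the Ethernet interface names in the Netplan configuration
# by parsing the YAML files directly with PyYAML.  This is a workaround,
# not a supported Netplan interface, and it assumes everything lives in
# /etc/netplan/*.yaml.
import glob
import yaml

def netplan_ethernets():
    names = []
    for path in sorted(glob.glob("/etc/netplan/*.yaml")):
        with open(path) as f:
            conf = yaml.safe_load(f) or {}
        net = conf.get("network") or {}
        eths = net.get("ethernets") or {}
        names.extend(eths.keys())
    return names

if __name__ == "__main__":
    for name in netplan_ethernets():
        print(name)

This works only as long as the installer keeps writing its configuration in the layout you expect, which is exactly the sort of fragility you'd hope the official tooling would save you from.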

Netplan also doesn't provide you any good way to remove settings. The current Ubuntu 26.04 beta installer writes a Netplan configuration that locks your interfaces to specific MAC addresses:

  enp1s0:
    match:
      macaddress: "52:54:00:a5:d5:fb"
    [...]
    set-name: "enp1s0"

This is rather undesirable if you may someday swap network cards or transplant server disks from one chassis to another, so we would like to automatically take it out. Netplan provides no support for this; 'netplan set' can't be given a blank replacement, for example (and 'netplan set "network.ethernets.enp1s0.match={}"' doesn't do anything). If Netplan would give you all of the enp1s0 block in JSON format, maybe you could edit the JSON and replace the whole thing, but that's not available so far.

(For extra complication you also need to delete the set-name, which is only valid with a 'match:'.)

Another effect of not being able to delete things in scripts is that you can't write scripts that move things out to a different Netplan .conf file that has only your settings for what you care about. If you could reliably get the right interface name and you could delete DNS settings from the file the installer wrote, you could fairly readily create a '/etc/netplan/60-resolv.conf' file that was something close to a drop-in /etc/resolv.conf. But as it is, you can't readily do that.

There are all sorts of modifications you might want to make through a script, such as automatically configuring a known set of VLANs to attach them to whatever the appropriate host interface is. Scripts are good for automation and they're also good for avoiding errors, especially if you're doing repetitive things with slight differences (such as setting up a dozen VLANs on your DHCP server). Netplan fights you almost all the way about doing anything like this.

My best guess is that all of Canonical's uses of Netplan either use internal tooling that reuses Netplan's (C) API or simply re-write Netplan files from scratch (based on, for example, cloud provider configuration information).

(To save other people the time, the netplan Python package on PyPI seems to be a third party package and was last updated in 2019. Which is a pity, because it theoretically has a quite useful command line tool.)

One bleakly amusing thing I've found out through using 'netplan set' on Ubuntu 26.04 is that the Ubuntu server installer and Netplan itself have slightly different views on how Netplan files should be written. The original installer version of the above didn't have the quotes around the strings; 'netplan set' added them.

(All of this would be better if there was a widely agreed on, generally shipped YAML equivalent of 'jq', or better yet something that could also modify YAML in place as well as query it in forms that were useful for automation. But the 'jq for YAML' ecosystem appears to be fragmented at best.)

Considering mmap() versus plain reads for my recent code

By: cks

The other day I wrote about a brute force approach to mapping IPv4 /24 subnets to Autonomous System Numbers (ASNs), where I built a big, somewhat sparse file of four-byte records, with the record for each /24 at a fixed byte position determined by its first three octets (so 0.0.0.0/24's ASN, if any, is at byte 0, 0.0.1.0/24 is at byte 4, and so on). My initial approach was to open, lseek(), and read() to access the data; in a comment, Aristotle Pagaltzis wondered if mmap() would perform better. The short answer is that for my specific case I think it would be worse, but the issue is interesting to talk about.

(In general, my view is that you should use mmap() primarily if it makes the code cleaner and simpler. Using mmap() for performance is a potentially fraught endeavour that you need to benchmark.)

In my case I have two strikes against mmap() likely being a performance advantage: I'm working in Python (and specifically Python 2) so I can't really directly use the mmap()'d memory, and I'm normally only making a single lookup in the typical case (because my program is running as a CGI). In the non-mmap() case I expect to do an open(), an lseek(), and a read() (which will trigger the kernel possibly reading from disk and then definitely copying data to me). In the mmap() case I would do open(), mmap(), and then access some page, triggering possible kernel IO and then causing the kernel to manipulate process memory mappings to map the page into my address space. In general, it seems unlikely that mmap() plus the page access handling will be cheaper than lseek() plus read().

(In both the mmap() and read() cases I expect two transitions into and out of the kernel. As far as I know, lseek() is a cheap system call (and certainly it seems unlikely to be more expensive than mmap(), which has to do a bunch of internal kernel work), and the extra work the read() does to copy data from the kernel to user space is probably no more work than the kernel manipulating page tables, and could be less.)
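
To make the comparison concrete, here is roughly what a single mmap()-based lookup would look like (a sketch in Python 3 rather than my actual Python 2 CGI code, using the four-byte record layout from the earlier entry; the data file path and the big-endian byte order are placeholders):

import mmap
import os
import struct

DATAFILE = "/some/where/asn-map.data"   # hypothetical path

def asn_for_index(idx):
    # idx is the /24's record number, ie the first three octets of the
    # address packed into one integer.
    fd = os.open(DATAFILE, os.O_RDONLY)
    try:
        m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
    finally:
        os.close(fd)
    try:
        # Slicing the mmap object still copies the four bytes into a new
        # Python object, so there's no zero-copy win here.
        data = m[idx * 4:idx * 4 + 4]
        if len(data) < 4:
            return 0        # past the end of the file: nothing stored
        return struct.unpack(">I", data)[0]
    finally:
        m.close()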

If I was doing more lookups in a single process, I could possibly win with the mmap() approach but it's not certain. A lot depends on how often I would be looking up something on an already mapped page and how expensive mapping in a new page is compared to some number of lseek() plus read() system calls (or pread() system calls if I had access to that, which cuts the number of system calls in half). In some scenarios, such as a burst of traffic from the same network or a closely related set of networks, I could see a high hit rate on already mapped pages. In others, the IPv4 addresses are basically random and widely distributed, so many lookups would require mapping new pages.

(Using mmap() makes it unnecessary to keep my own in-process cache, but I don't think it really changes what the kernel will cache for me. Both read()'ing from pages and accessing them through mmap() keeps them recently used.)

Things would also be better in a language where I could easily make zero-copy use of data right out of the mmap()'d pages themselves. Python is not such a language, and I believe that basically any access to the mmap()'d data is going to create new objects and copy some bytes around. I expect that this results in as many intermediate objects and so on as if I used Python's read() stuff.

(Of course if I really cared there's no substitute for actually benchmarking some code. I don't care that much, and the code is simpler with the regular IO approach because I have to use the regular IO approach when writing the data file.)

Early notes on switching some libvirt-based virtual machines to UEFI

By: cks

I keep around a small collection of virtual machines so I don't have to drag out one of our spare physical servers to test things on. These virtual machines have traditionally used traditional MBR-based booting ('BIOS' in libvirt instead of 'UEFI'), partly because for a long time libvirt didn't support snapshots of UEFI based virtual machines and snapshots are very important for my use of these scratch virtual machines. However, I recently discovered that libvirt now can do snapshots of UEFI based virtual machines, and also all of our physical server installs are UEFI based, so in the past couple of days I've experimented with moving some of my Ubuntu scratch VMs from BIOS to UEFI.

As far as I know, virt-manager and virsh don't directly allow you to switch a virtual machine between BIOS and UEFI after it's been created, partly because the result is probably not going to boot (unless you deliberately set up the OS inside the VM with both an EFI boot and a BIOS MBR boot environment). Within virt-manager, you can only select BIOS or UEFI at setup time, so you have to destroy your virtual machine and recreate it. This works, but it's a bit annoying.

(On the other hand, if you've had some virtual machines sitting around for years and years, you might want to refresh all of their settings anyway.)

It's possible to change between BIOS and UEFI by directly editing the libvirt XML to transform the <os> node. You may want to remove any old snapshots first because I don't know what happens if you revert from a 'changed to UEFI' machine to a snapshot where your virtual machine was a BIOS one. In my view, the easiest way to get the necessary XML is to create (or recreate) another virtual machine with UEFI, and then dump and copy its XML with some minor alterations.

For me, on Fedora with the latest libvirt and company, the <os> XML of a BIOS booting machine is:

 <os>
   <type arch='x86_64' machine='pc-q35-6.1'>hvm</type>
 </os>

Here the 'machine=' is the machine type I picked, which I believe is the better of the two options virt-manager gives me.

My UEFI based machines look like this:

 <os firmware='efi'>
   <type arch='x86_64' machine='pc-q35-9.2'>hvm</type>
   <firmware>
     <feature enabled='yes' name='enrolled-keys'/>
     <feature enabled='yes' name='secure-boot'/>
   </firmware>
   <loader readonly='yes' secure='yes' type='pflash' format='qcow2'>/usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2</loader>
   <nvram template='/usr/share/edk2/ovmf/OVMF_VARS_4M.secboot.qcow2' templateFormat='qcow2' format='qcow2'>/var/lib/libvirt/qemu/nvram/[machine name]_VARS.qcow2</nvram>
 </os>

Here the '[machine name]' bit is the libvirt name of my virtual machine, such as 'vmguest1'. This nvram file doesn't have to exist in advance; libvirt will create it the first time you start up the virtual machine. I believe it's used to provide snapshots of the UEFI variables and so on to go with snapshots of your virtual disks and snapshots of the virtual machine configuration.

(This feature may have landed in libvirt 10.10.0, if I'm reading release notes correctly. Certainly reading the release notes suggests that I don't want to use anything before then with UEFI snapshots.)

Manually changing the XML on one of my scratch machines has worked fine to switch it from BIOS MBR to UEFI booting as far as I can tell, but I carefully cleared all of its disk state and removed all of its snapshots before I tried this. I suspect that I could switch it back to BIOS if I wanted to. Over time, I'll probably change over all of my as yet unchanged scratch virtual machines to UEFI through direct XML editing, because it's the less annoying approach for me. Now that I've looked this up, I'll probably do it through 'virsh edit ...' rather than virt-manager, because that way I get my real editor.

(This is the kind of entry I write for my future use because I don't want to have to re-derive this stuff.)

PS: Much of this comes from this question and answers.

Going from an IPv4 address to an ASN in Python 2 with Unix brute force

By: cks

For reasons, I've reached the point where I would like to be able to map IPv4 addresses into the organizations responsible for them, which is to say their Autonomous System Number (ASN), for use in DWiki, the blog engine of Wandering Thoughts. So today on the Fediverse I mused:

Current status: wondering if I can design an on-disk (read only) data structure of some sort that would allow a Python 2 program to efficiently map an IP address to an ASN. There are good in-memory data structures for this but you have to load the whole thing into memory and my Python 2 program runs as a CGI so no, not even with pickle.

(Since this is Python 2, about all I have access to is gdbm or rolling my own direct structure.)

Mapping IP addresses to ASNs comes up a lot in routing Internet traffic, so there are good in-memory data structures that are designed to let you efficiently answer these questions once you have everything loaded. But I don't think anyone really worries about on-disk versions of this information, while that's the case I care about, although I only care about some ASNs (a detail I forgot to put in the Fediverse post).

Then I had a realization:

If I'm willing to do this by /24 (and I am) and represent the ASNs by 16-bit ints, I guess you can do this with a 32 Mbyte sparse file of two-byte blocks. Seek to a 16-byte address determined by the first three octets of the IP, read two bytes, if they're zero there's no ASN mapping we care about, otherwise they're the ASN in some byte order I'd determine.

If I don't care about the specific ASN, just a class of ASNs of interest of which there are at most 255, it's only 16 Mbytes.

(And if all I care about is a yes or no answer, I can represent each /24 by a bit, so the storage required drops even more, to only 2 Mbytes.)

This Fediverse post has a mistake. I thought ASNs were 16-bit numbers, but we've gone well beyond that by now. So I would want to use the one-byte 'class of ASN' approach, with ASNs I don't care about mapping to a class of zero. Alternately I could expand to storing three bytes for every /24, or four bytes to stay aligned with filesystem blocks.

That storage requirement is 'at most' because this will be a Unix sparse file, where filesystem blocks that aren't written to aren't stored on disk; when read, the data in them is all zero. The lookup is efficient, at least in terms of system calls; I'd open the file, lseek() to the position, and read two bytes (causing the system to read a filesystem block, however big that is). Python 2 doesn't have access to pread() or we could do it in one system call.

Within the OS this should be reasonably efficient, because if things are active much of the important bits of the mapping file will be cached into memory and won't have to be read from disk. 32 Mbytes is nothing these days, at least in terms of active file cache, and much of the file will be sparse anyway. The OS obviously has reasonably efficient random access to the filesystem blocks of the file, whether in memory or on disk.

This is a fairly brute force approach that's only viable if you're typically making a single query in your process before you finish. It also feels like something that is a good fit for Unix because of sparse files, although 16 Mbytes isn't that big these days even for a non-sparse file.

Realizing the brute force approach feels quite liberating. I've been turning this problem over in my mind for a while but each time I thought of complicated data structures and complicated approaches and it was clear to me that I'd never implement them. This way is simple enough that I could actually do it and it's not too impractical.

PS: I don't know if I'll actually build this, but every time a horde of crawlers descends on Wandering Thoughts from a cloud provider that has a cloud of separate /24s and /23s all over the place, my motivation is going to increase. If I could easily block all netblocks of certain hosting providers all at once, I definitely would.

(To get the ASN data there's pyasn (also). Conveniently it has a simple on-disk format that can be post-processed to go from a set of CIDRs that map to ASNs to a data file that maps from /24s to ASN classes for ASNs (and classes) that I care about.)

Update: After writing most of this entry I got enthused and wrote a stand-alone preliminary implementation (initially storing full ASNs in four-byte records), which can both create the data file and query it. It was surprisingly straightforward and not very much code, which is probably what I should have expected since the core approach is so simple. With four-byte records, a full data file of all recent routes from pyasn is about 53 Mbytes and the data file can be created in less than two minutes, which is pretty good given that the code writes records for about 16.5 million /24s.

(The whole thing even appears to work, although I haven't strongly tested it.)
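
To show how little code the lookup side needs, a minimal sketch of the four-byte record version looks something like this (not the actual implementation; the data file path and the big-endian byte order here are placeholders):

import os
import struct

DATAFILE = "/some/where/asn-map.data"   # hypothetical path

def asn_for_ip(ip):
    # '203.0.113.45' -> record number 203*65536 + 0*256 + 113, four bytes each.
    a, b, c, _ = (int(x) for x in ip.split("."))
    offset = ((a << 16) | (b << 8) | c) * 4
    fd = os.open(DATAFILE, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        data = os.read(fd, 4)
    finally:
        os.close(fd)
    if len(data) < 4:
        return 0            # reading past the end of the sparse file: nothing stored
    return struct.unpack(">I", data)[0]   # 0 means 'no ASN we care about'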

Fedora's virt-manager started using external snapshots for me as of Fedora 41

By: cks

Today I made an unpleasant discovery about virt-manager on my (still) Fedora 42 machines that I shared on the Fediverse:

This is my face that Fedora virt-manager appears to have been defaulting to external snapshots for some time and SURPRISE, external snapshots can't be reverted by virsh. This is my face, especially as it seems to have completely screwed up even deleting snapshots on some virtual machines.

(I only discovered this today because today is the first time I tried to touch such a snapshot, either to revert to it or to clean it up. It's possible that there is some hidden default for what sort of snapshot to make and it's only been flipped for me.)

Neither virt-manager nor virsh will clearly tell you about this. In virt-manager you need to click on each snapshot and if it says 'external disk only', congratulations, you're in trouble. In virsh, 'virsh snapshot-list --external <vm>' will list external snapshots, and then 'virsh snapshot-list --tree <vm>' will tell you if they depend on any internal snapshots.

My largest problems came from virtual machines where I had earlier internal snapshots and then I took more snapshots, which became external snapshots from Fedora 41 onward. You definitely can't revert to an external snapshot in this situation, at least not with virsh or virt-manager, and the error messages I got were generic ones about not being able to revert external snapshots. I haven't tested reverting external snapshots for a VM with no internal ones.

(Not being able to revert to external snapshots is a long standing libvirt issue, but it's possible they now work if you only have external snapshots. Otherwise, Fedora 41 and Fedora 42 defaulting to external snapshots is extremely hard to understand (to be polite).)

Update: you can revert an external snapshot in the latest libvirt if all of your snapshots are external. You can't revert them if libvirt helpfully gave you external snapshots on top of internal ones by switching the default type of snapshots (probably in Fedora 41).

If you have an external snapshot that you need to revert to, all I can do is point to a libvirt wiki page on the topic (although it may be outdated by now) along with libvirt's documentation on its snapshot XML. I suspect that there is going to be suffering involved. I haven't tried to do this; when it came up today I could afford to throw away the external snapshot.

If you have internal snapshots and you're willing to throw away the external snapshot and what's built on it, you can use virsh or virt-manager to revert to an internal snapshot and then delete the external snapshot. This leaves the external snapshot's additional disk file or files dangling around for you to delete by hand.

If you have only an external snapshot, it appears that libvirt will let you delete the snapshot through 'virsh snapshot-delete <vm> <external-snapshot>', which preserves the current state of the machine's disks. This only helps if you don't want the snapshot any more, but this is one of my common cases (where I take precautionary snapshots before significant operations and then get rid of them later when I'm satisfied, or at least committed).

The worst situation appears to be if you have an external snapshot made after (and thus on top of) an earlier internal snapshot and you want to keep the live state of things while getting rid of the snapshots. As far as I can tell, it's impossible to do this through libvirt, although some of the documentation suggests that you should be able to. The process outlined in libvirt's Merging disk image chains didn't work for me (see also Disk image chains).

(If it worked, this operation would implicitly invalidate the snapshots and I don't know how you get rid of them inside libvirt, since you can't delete them normally. I suspect that to get rid of them, you need to shut down all of the libvirt daemons and then delete the XML files that (on Fedora) you'll find in /var/lib/libvirt/qemu/snapshot/<domain>.)

One reason to delete external snapshots you don't need is if you ever want to be able to easily revert snapshots in the future. I wouldn't trust making internal snapshots on top of external ones, if libvirt even lets you, so if you want to be able to easily revert, it currently appears that you need to have and use only internal snapshots. Certainly you can't mix new external snapshots with old internal snapshots, as I've seen.

(The 5.1.0 virt-manager release will warn you to not mix snapshot modes and defaults to whatever snapshot mode you're already using. I don't know what it defaults to if you don't have any snapshots; I haven't tried that yet.)

Sidebar: Cleaning this up on the most tangled virtual machine

I've tried the latest preview releases of the libvirt stuff, but it doesn't make a difference in the most tangled situation I have:

$ virsh snapshot-delete hl-fedora-36 fedora41-preupgrade
error: Failed to delete snapshot fedora41-preupgrade
error: Operation not supported: deleting external snapshot that has internal snapshot as parent not supported

This VM has an internal snapshot as the parent because I didn't clean up the first snapshot (taken before a Fedora 41 upgrade) before making the second one (taken before a Fedora 42 upgrade).

In theory one can use 'virsh blockcommit' to reduce everything down to a single file, per the knowledge base section on this. In practice it doesn't work in this situation:

$ virsh blockcommit hl-fedora-36 vda --verbose --pivot --active
error: invalid argument: could not find base image in chain for 'vda'

(I tried with --base too and that didn't help.)

I was going to attribute this to the internal snapshot but then I tried 'virsh blockcommit' on another virtual machine with only an external snapshot and it failed too. So I have no idea how this is supposed to work.

Since I could take a ZFS snapshot of the entire disk storage, I chose violence, which is to say direct usage of qemu-img. First, I determined that I couldn't trivially delete the internal snapshot before I did anything else:

$ qemu-img snapshot -d fedora40-preupgrade fedora35.fedora41-preupgrade
qemu-img: Could not delete snapshot 'fedora40-preupgrade': snapshot not found

The internal snapshot is in the underlying file 'fedora35.qcow2'. Maybe I could have deleted it safely even with an external thing sitting on top of it, but I decided not to do that yet and proceed to the main show:

$ qemu-img commit -d fedora35.fedora41-preupgrade
Image committed.
$ rm fedora35.fedora41-preupgrade

Using 'qemu-img info fedora35.qcow2' showed that the internal snapshot was still there, so I removed it with 'qemu-img snapshot -d' (this time on fedora35.qcow2).

All of this left libvirt's XML drastically out of step with the underlying disk situation. So I removed the XML for the snapshots (after saving a copy), made sure all libvirt services weren't running, and manually edited the VM's XML, where it turned out that all I needed to change was the name of the disk file. This appears to have worked fine.

I suspect that I could have skipped manually removing the internal snapshot and its XML and libvirt would then have been happy to see it and remove it.

(I'm writing all of the commands and results down partly for my future reference.)

Mass production's effects on the cheapest way to get some things

By: cks

We have a bunch of networks in a number of buildings, and as part of looking after them, we want to monitor whether or not they're actually working. For reasons beyond the scope of this entry we don't do things like collect information from our switches through SNMP, so our best approach is 'ping something on the network in the relevant location'. This requires something to ping. We want that thing to be stable and always on the network, which typically rules out machines and devices run by other people, and we want it to run from standard wall power for various reasons.

You can imagine a bunch of solutions to this for both wired and wireless networks. There are lots of cheap little computers these days that can run Linux, so you could build something yourself or expect to find someone selling it pre-made. However, these are unlikely to be a mass-produced, high-volume product, and it turns out that the flip side of things only being cheap when there is volume is that when there is volume, unexpected things can be the cheapest option.

The cheapest wall-powered device you can put on your wireless network to ping these days turns out to be a remote controlled power plug intended for home automation (as a bonus it will report uptime information for you if you set it up right, so you can tell if it lost power recently). They can fail after a few years, but they're inexpensive so we consider them consumables. And if you have another device that turns out to be flaky and has to be power cycled every so often, you can reuse a 'wifi reachability sensor' for its actual remote power control capabilities.

Similarly, as far as we've found, the cheapest wall powered device that plugs into a wired Ethernet and can be given an IP address so it can be pinged is a basic five port managed switch. You give it a 'management IP', plug one port into the network, and optionally plug up its other four ports so no one uses it for connectivity (because it's a cheap switch and you don't necessarily trust it). You might even be able to find one that supports SNMP so you can get some additional information from it (although our current ones don't, as far as I can tell).

In both cases it's clear that these are cheap because of mass production. People are making lots of wireless remote controlled power plugs and five port managed switches, so right now you can get the switches for about $30 Canadian each and the power plugs for $10 Canadian. In both cases what we get is overkill for what we want, and you could do a simpler version that has a smaller, cheaper bill of materials (BOM). But that smaller version wouldn't have the volume so it would cost much more for us to get it or an approximation.

(Even if we designed and built our own, we probably can't beat the price of the wireless remote controlled power plugs. We might be able to get a cheaper BOM for a single-Ethernet simple computer with case and wall plug power supply, but that ignores staff time to design, program, and assemble the thing.)

At one level this makes me sad. We're wasting the reasonably decent capabilities of both devices, and it feels like there should be a more frugal and minimal option. But it's hard to see what it would be and how it could be so cheap and readily available.

A traditional path to getting lingering duplicate systems

By: cks

In yesterday's entry I described a lingering duplicate system and how it had taken us a long time to get rid of it, but I got too distracted by the story to write down the general thoughts I had on how this sort of thing happens and keeps happening (also, the story turned out to be longer than I expected). We've had other long running duplicate systems, and often they have more or less the same story as yesterday's disk space usage tracking system.

The first system built is a basic system. It's not a bad system, but it's limited and you know it. You can only afford to gather disk usage information once a day and you have nowhere to put it other than in the filesystem, which makes it easy to find and independent of anything else but also stops it updating when the filesystem fills up. Over time you may improve this system (cheaper updates that happen more often, a limited amount of high resolution information), but the fundamental issues with it stick around.

After a while it becomes possible to build a different, better system (you gather disk usage information every few minutes and put it in your new metrics system), or maybe you just realize how to do a better version from scratch. But often the initial version of this new system has its own limitations or works a bit differently or both, or you've only implemented part of what you'd need for a full replacement of the first system. And maybe you're not sure it will fully work, that it's really the right answer, or if you'll be able to support it over the long term (perhaps the cardinality of the metrics will be too overwhelming).

(You may also be wary of falling victim to the "second system effect", since you know you're building a second system.)

Usually this means that you don't want to go through the effort and risk of immediately replacing the old system with the new system (if it's even immediately possible without more work on the new system). So you use the new system for new stuff (providing dashboards of disk space usage) and keep the old system for the old stuff (the officially supported commands that people know). The old system is working so it's easier to have it stay "for now". Even if you replace part of the use of the old system with the new system, you don't replace all of it.

(If your second system started out as only a partial version of the old system, you may also not be pushed to evolve it so that it could fully replace the old system, or that may only happen slowly. In some ways this is a good thing; you're getting practical experience with the basic version of the new system rather than immediately trying to build the full version. This is a reasonable way to avoid the "second system effect", and may lead you to find out that in the new system you want things to operate differently than the old one.)

Since both the old system and the new system are working, you now generally have little motivation to do more work to get rid of the old system. Until you run into clear limitations of the old system, moving back to only having one system is (usually) cleanup work, not a priority. If you wanted to let the new system run for a while to prove itself, it's also easy to simply lose track of this as a piece of future work; you won't necessarily put it on a calendar, and it's something that might be months or a year out even in the best of circumstances.

(The times when the cleanup is a potential priority are when the old system is using resources that you want back, including money for hardware or cloud stuff, or when the old system requires ongoing work.)

A contributing factor is that you may not be sure about what specific behaviors and bits of the old system other things are depending on. Some of these will be actual designed features that you can perhaps recover from documentation, but others may be things that simply grew that way and became accidentally load bearing. Figuring these out may take careful reverse engineering of how the system works and what things are doing with it, which takes work, and when the old system is working it's easier to leave it there.

Lingering duplicate systems and the expense of weeding them out (an illustration)

By: cks

We have been operating a fileserver environment for a long time now, since back before we used ZFS. When you operate fileservers in a traditional general Unix environment, one of the things you need is disk usage information. So a very long time ago, before I even arrived, people built a very Unix-y system to do this. Every night, raw usage information was generated for each filesystem (for a while with 'du'), written to a special system directory in the filesystem, and then used to create a text file with a report showing current usage and the daily and weekly change in everyone's usage. A local 'report disk usage' script would then basically run your pager on this file.

After a while, we were able to improve this system by using native ZFS commands to get per-user 'quota' usage information, which made it much faster than the old way (we couldn't do this originally because we started with ZFS before ZFS tracked this information). Later, this made it reasonable to generate a 'frequent' disk usage report every fifteen minutes (with it keeping a day's worth of data), which could be helpful to identify who had suddenly used a lot of disk space; we wrote some scripts to use this information, but never made them as public as the original script. However, all of this had various limitations, including that it stopped updating once the filesystem had filled up.

Shortly after we set up our Prometheus metrics system and actually had a flexible metrics system we could put things into, we started putting disk space usage information into it, giving us more fine grained data, more history (especially fine grained history, where we'd previously only had the past 24 hours), and the ability to put it into Grafana graphs on dashboards. Soon afterward it became obvious that sometimes the best way to expose information is through a command, so we wrote a command to dump out current disk usage information in a relatively primitive form.

Originally this 'getdiskusage' command produced quite raw output because it wasn't really intended for direct use. But over time, people (especially me) kept wanting more features and options and I never quite felt like writing some scripts to sit on top of it when I could just fiddle the code a bit more. Recently, I added some features and tipped myself over a critical edge, where it felt like I could easily re-do the old scripts to get their information from 'getdiskusage' instead of those frequently written files. One thing led to another and so now we have some new documentation and new (and revised) user-visible commands to go with it.

(The raw files were just lines of 'disk-space login', and this was pretty close to what getdiskusage produced already in some modes.)

However, despite replacing the commands, we haven't yet turned off the infrastructure on our fileservers that creates and updates those old disk usage files. Partly this is because I'd want to clean up all the existing generated files rather than leave them to become increasingly out of date, and that's a bit of a pain, and partly it's because of inertia.

Inertia is also a lot of why it took so long to replace the scripts. We've had the raw capability to replace them for roughly six years (since 'getdiskusage' was written, demonstrating that it was easily possible to extract the data from our metrics system in a usable form), and we'd said to each other that we wanted to do it for about that long, but it was always "someday". One reason for the inertia was that the existing old stuff worked fine, more or less, and also we didn't think very many people used it very often because it wasn't really documented or accessible. Perhaps another reason was that we weren't entirely sure we wanted to commit to the new system, or at least to the exact form we first implemented our disk space metrics in.

DMARC DNS record inheritance and DMARC alignment requirements

By: cks

To simplify, DMARC is based on the domain in the 'From:' header, and what policy (if any) that domain specifies. As I've written about (and rediscovered) more than once (here and here), DMARC will look up the DNS record for the DMARC policy in exactly one of two places, either in the exact From: domain or on the organization's top level domain. In other words, if a message has a From: of 'someone@breaking.news.example.org', a receiver will first look for a DMARC TXT DNS record with the name _dmarc.breaking.news.example.org and then one with the name _dmarc.example.org.

(But there will be no lookup for _dmarc.news.example.org.)

DMARC also has the concept of policy inheritance, where the example.org DMARC DNS TXT record can specify a different DMARC policy for the organizational domain than for subdomains that don't have their own policy. For example, example.org could specify 'p=reject; sp=none' to say that 'From: user@example.org' should be rejected if it fails DMARC but it has no views on a default for 'From: user@news.example.org'.

If you're an innocent person, you might think that if your organization has 'sp=none' on its organization policy, you don't have to be concerned about the DMARC (and DKIM, and SPF) behavior of sub-names that don't have their own DMARC records, including hosts that send as 'From: local-account@host.dept.example.org'. Your organizational policy says 'sp=none', meaning don't do anything with sub-names for DMARC, and surely everyone will follow that.

This is unfortunately not quite true in an environment where people care about DKIM results regardless of DMARC policy settings. The problem is DKIM (and SPF) alignment. Under relaxed DKIM alignment, a 'From: flash@eng.news.example.org' would pass if it's DKIM signed by anything in example.org, for example 'eng.example.org'. Under strict DKIM alignment, it must be signed specifically by 'eng.news.example.org'.

The choice of what DKIM alignment to require is not a 'policy' and is not covered by 'p=' or 'sp=' in DMARC DNS TXT records. It's instead covered by a separate parameter, 'adkim=', and there is no 'sadkim=' parameter that only applies to subdomains. This means that there's no way for example.org to change the alignment policy for just 'From: user@example.org'; the moment they set 'adkim=s' in the _dmarc.example.org DNS TXT record, all sub-names without their own _dmarc.<whatever> records also switch to strict DKIM alignment. Even if the top level domain specifies 'sp=none', various mail systems out there may actively reject your mail because they no longer consider it properly aligned or increase their suspicion score a bit due to the lack of alignment (in some views your mail went from 'properly DKIM signed' to 'not properly DKIM signed').

The only way to deal with this is the same as with policy inheritance. Any host or domain name within your (sub-)organization that appears in From: headers must have its own valid DMARC DNS TXT record. If you want strict DKIM alignment, you need to set that as 'adkim=s'. If you want relaxed alignment, in theory that's the default, but you might find it clearer to explicitly set 'adkim=r' (and probably 'aspf=r', also for clarity).

(Setting alignment explicitly makes it clear to other people and future you that you're deliberately choosing an alignment that might wind up different from your top level organizational alignment.)
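
As a concrete illustration (the names and policies here are made-up examples, not a recommendation), the resulting DNS records look something like this:

_dmarc.example.org.           TXT  "v=DMARC1; p=reject; sp=none; adkim=s"
_dmarc.eng.news.example.org.  TXT  "v=DMARC1; p=none; adkim=r; aspf=r"

With only the first record in place, mail 'From: flash@eng.news.example.org' is held to strict alignment despite the 'sp=none'; the second record is what explicitly restores relaxed alignment for that name.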

PS: As far as I can see this is the behavior the DMARC RFC implicitly requires for all DMARC settings other than 'p=' (which has the 'sp=' version), but I could be wrong and missing something.

One problem with (Python) docstrings is that they're local

By: cks

When I wrote about documenting my Django forms, I said that I knew I didn't want to put my documentation in docstrings, because I'd written some in the past and then not read it this time around. One of the reasons for that is that Python docstrings have to be attached to functions, or more generally, Python docstrings have to be scattered through your code. The corollary to this is that to find relevant docstrings you have to read through your code and then remember which bits of it are relevant to what you're wondering about.

When your docstring is specifically about the function you already know you want to look at, this is fine. Docstrings work perfectly well for local knowledge, for 'what is this function about' summaries that you want to read before you delve into the function. I feel they work rather less well for finding what function you want to look at (ideally you want some sort of skimmable index for that); if you have to read docstrings to find a function, you're going to be paging through a lot of your code until you hit the right docstring.

This is also why I feel docstrings are a bad fit for documenting my Django forms. Even if I attach them to the Python functions that handle each particular form, the resulting documentation is going to be mingled with my code and spread all through it. Not only is there no overview, but I'd have to skip around my code as I read about how one form interacts with another; there's no single place where I can read about the flow of forms, one leading to another.

(This is the case even if all of the form handling functions are in one spot with nothing between them, because the docstrings will be split up by the code itself and the comments in the code.)

Another issue is that sensible docstrings can only be so big, because they separate the function's 'def' statement from its actual code. You don't want those two too far apart, which pushes docstrings toward being relatively concise. My feeling is that if I have a lot to say about what the function is used for or how it relates to other things, I can't really put it in a docstring. I usually put it in a comment in front of the function (which means that some of my Python code has a mixture of comments and docstrings). The less a function can be described purely by itself (and concisely), the more its docstring is going to sprawl and the more awkward that gets.

(Docstrings on functions are also generally seen as what I could call external documentation, written for people who might want to call the function and understand how it relates to other functions they might also use. Comments are the usual form of internal documentation that you want at hand while reading the function's code.)
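
As a small, made-up illustration of the split I end up with, the broader context goes in a comment in front of the function while the docstring stays a one-line summary:

# This view is the second step of the account request flow: it's reached
# from the 'request account' form and hands off to the 'confirm sponsor'
# form on success.  The overall flow of forms is discussed elsewhere,
# because it's too much to cram into a docstring.
def handle_request_form(request):
    """Validate the account request form and redirect to confirmation."""
    ...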

It's conventional to say that docstrings are documentation for what they're on. I think it's better to say that docstrings are summaries. Some things can be described purely through summaries (with additional context that the programmer is assumed to have), but not everything can be.

(Comments before a function are also local to some degree, but they intrude less on the function's code since they don't put themselves between 'def' and the rest of things.)

Wayland has good reasons to put the window manager in the display server

By: cks

I recently ran across Isaac Freund's Separating the Wayland Compositor and Window Manager (via), which is excellent news as far as I'm concerned. But in passing, it says:

Traditionally, Wayland compositors have taken on the role of the window manager as well, but this is not in fact a necessary step to solve the architectural problems with X11. Although, I do not know for sure why the original Wayland authors chose to combine the window manager and Wayland compositor, I assume it was simply the path of least resistance. [...]

Unfortunately, I believe that there are excellent reasons to put the window manager into the display server the way Wayland has, and the Wayland people (who were also X people) were quite familiar with them and how X has had problems over the years because of its split.

One large and more or less core problem is that event handling is deeply entwined with window management. As an example, consider this sequence of (input) events:

  1. your mouse starts out over one window. You type some characters.
  2. you move your mouse over to a second window. You type some more characters.
  3. you click a mouse button without moving the mouse.
  4. you type more characters.

Your window manager is extremely involved in the decisions about where all of those input events go and whether the second window receives a mouse button click event in the third step. If the window manager is separate from whatever is handling input events, you have two unappealing options: either some things trigger synchronous delays in further event handling, or sufficiently fast typeahead and actions race the window manager, and you find out whether it updates where future events should go fast enough or whether some of your typing and other actions get misdirected to the wrong place because it's lagging.
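
Here's a deliberately simplified sketch (not any real Wayland or X API, just an illustration) of why this matters: when the focus policy lives in the same place that routes events, every event is checked against an up-to-date idea of focus, with no external window manager to race.

class Window:
    def __init__(self, name):
        self.name = name
    def deliver_key(self, key):
        print(f"{self.name} gets key {key!r}")
    def deliver_button(self, button):
        print(f"{self.name} gets button {button}")

class DisplayServer:
    def __init__(self, initial):
        self.focus = initial           # window that receives keyboard input
        self.pointer_window = initial  # window under the mouse cursor

    def on_pointer_motion(self, window):
        self.pointer_window = window
        # The embedded 'focus follows mouse' policy updates focus
        # synchronously, before any later event is looked at.
        self.focus = window

    def on_button_press(self, button):
        # The click goes to the window under the pointer; an embedded
        # window manager could just as easily intercept or redirect it.
        self.pointer_window.deliver_button(button)

    def on_key_press(self, key):
        # By the time a key arrives, focus already reflects every earlier
        # motion and click, so fast typeahead can't be misdirected.
        self.focus.deliver_key(key)

w1, w2 = Window("window-1"), Window("window-2")
server = DisplayServer(w1)
server.on_key_press("a")        # step 1: goes to window-1
server.on_pointer_motion(w2)
server.on_key_press("b")        # step 2: goes to window-2, no race
server.on_button_press(1)       # step 3
server.on_key_press("c")        # step 4: still window-2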

Embedding the window manager in the display server is the simple and obvious approach to ensuring that the window manager can see and react to all events without lag, and can freely intercept and modify all events as it wishes without clients having to care. The window manager can even do this using extremely local knowledge if it wants. Do you want your window manager to have key bindings that only apply to browser windows, where the same keys are passed through to other programs? An embedded window manager can easily do that (let's assume it can reliably identify browser windows).

(An outdated example of how complicated you can make mouse button bindings, never mind keyboard bindings, is my mouse button bindings in fvwm.)

X has a collection of mechanisms that try to allow window managers to manage 'focus' (which window receives keyboard input), intercept (some) keys at a window manager level, and do other things that modify or intercept events. The whole system is complex, imperfect, and limited, and a variety of these mechanisms have weird side effects on the X events that regular programs receive; you can often see this with a program such as xev. Historically, not all X programs have coped gracefully with all of the interceptions that window managers like fvwm can do.

(X also has two input event systems, just to make life more complicated.)

X's mechanisms also impose limits on what they'll allow a window manager to do. One famous example is that in X, mouse scroll wheel events always go to the X window under the mouse cursor. Even if your window manager uses 'click (a window) to make it take input', mouse scroll wheel input is special and cannot be directed to a window this way. In Wayland, a full server has no such limitations; its window manager portion can direct all events, including mouse scroll wheels, to wherever it feels like.

(This elaborates on a Fediverse post of mine.)

Cleaning old GPG RPM keys that your Fedora install is keeping around

By: cks

Approximately all RPM packages are signed by GPG keys (or maybe they're supposed to be called PGP keys), which your system stores in the RPM database as pseudo-packages (because why not). If your Fedora install has been around long enough, as mine have, you will have accumulated a drift of old keys; sometimes you want to clean them up, and sometimes something unfortunate happens to one of those keys (I'll get to one case of that).

One basic command to see your collection of GPG keys in the RPM database is (taken from this gist):

rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

On some systems this will give you a nice short list of keys. On others, your list may be very long.

Since Fedora 42 (cf), DNF has functionality (I believe more or less built in) that should offer to remove old GPG keys that have actually expired. This is in the 'expired PGP keys plugin', which comes from the 'libdnf5-plugin-expired-pgp-keys' package if you don't have it installed (with a brief manpage called 'libdnf5-expired-pgp-keys'). I believe there was a similar DNF4 plugin. However, there are two situations where this seems to not work correctly.

The first situation is now-obsolete GPG keys that haven't expired yet, for various reasons; these may be for past versions of Fedora, for example. These days, the metadata for every DNF repository you use should list a URL for its GPG keys (see the various .repo files in /etc/yum.repos.d/ and look for the 'gpgkey=' lines). So one way to clean up obsolete keys is to fetch all of the current keys for all of your current repositories (or at least the enabled ones), and then remove anything you have that isn't among them. This process is automated for you by the 'clean-rpm-gpg-pubkey' command and package, which is mentioned in some Fedora upgrade instructions. This will generally clean out most of your obsolete keys, although a rare few people will have keys so old that it chokes on them.

The second situation is apparently a repository operator who is sufficiently clever to have re-issued an expired key using the same key ID and fingerprint but a new expiry date in the future; this fools RPM and related tools and everything chokes. This is unfortunate, since it will often stall all DNF updates unless you disable the repo. One repository operator who has done this is Google, for their Fedora Chrome repository. To fix this you'll have to manually remove the relevant GPG key or keys. Once you've used clean-rpm-gpg-pubkey to reduce your list of GPG keys to a reasonable level, you can use the RPM command I showed above to list all your remaining keys, spot the likely key or keys (based on who owns it, for example), and then use 'rpm -e --allmatches gpg-pubkey-d38b4796-570c8cd3' (or some other appropriate gpg-pubkey name) to manually scrub out the GPG key. Doing a DNF operation such as installing or upgrading a package from the repository should then re-import the current key.

(This also means that it's theoretically harmless to overshoot and remove the wrong key, because it will be fetched back the next time you need it.)

(When I wrote my Fediverse post about discovering clean-rpm-gpg-pubkey, I apparently thought I would remember it without further prompting. This was wrong, and in fact I didn't even remember to use it when I upgraded my home desktop. This time it will hopefully stick, and if not, I have it written down here where it will probably be easier to find.)

Making empirical decisions about web access (here in 2026)

By: cks

Recently, Denis Warburton wrote in a comment on my entry on how HTTP results today depend on what HTTP User-Agent you use:

Making decisions based on user-provided information is unwise in 2026. The originating ip address is the only source of "truth" ... and even then, that information needs to be further examined before discerning whether or not it is a valid piece of communication.

It's absolutely true that everything except the source IP address is under the control of an attacker (and it always has been), and in one sense you can't trust it. But this doesn't mean you can't use information that's under the attacker's control in making decisions about whether to allow access to something; instead, it means that you have to be thoughtful about how you use the information and what for.

In practice, web agents emit a lot of data in their HTTP headers and requests. Some of these signals are complicated, such as browser version numbers, and some of them require work to use, but this doesn't mean that there's no signal at all that can be derived from all of the data that a web agent emits. For example, consider a web agent that uses the HTTP User-Agent of:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

This web agent is telling you that it's claiming to be Googlebot. Under the right circumstances this can be a valuable signal of malfeasance and worth denying access.

Similarly, a web agent that emits user agent hints while its HTTP User-Agent claims to be an authentic version of Firefox 147 is giving you the signal that it's not an unaltered, standard version of Firefox, because standard versions of Firefox 147 don't do that. It's most likely something built on Chromium, but in any case you might decide that this signal means it is suspicious enough to be denied access. Neither the User-Agent nor the Sec-CH-UA headers are definitive facts that identify the browser, and both could be faked by the attacker, but the inconsistency between them is real.
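
To make this concrete, here's a sketch of how those two signals might be scored; the header names are real, but the scoring, the threshold, and the 'verified Google' check are invented for illustration.

def suspicion_score(headers, from_verified_google_ip=False):
    # 'headers' is assumed to be a plain dict of request headers with
    # canonical capitalization; the point values here are made up.
    ua = headers.get("User-Agent", "")
    score = 0
    # Claiming to be Googlebot while not coming from Google's crawlers
    # is a strong signal of impersonation.
    if "Googlebot" in ua and not from_verified_google_ip:
        score += 10
    # Standard Firefox doesn't send client hint headers, so a Firefox
    # User-Agent plus Sec-CH-UA is an inconsistency worth noting.
    if "Firefox/" in ua and "Sec-CH-UA" in headers:
        score += 5
    return score

# e.g. deny when suspicion_score(request_headers) >= 10, allow otherwise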

What an attacker tells you (deliberately or accidentally) is a signal, and it's up to you to interpret and use that signal (which I think you should these days). This is an empirical thing, something that depends on the surrounding environment (for example, you have to interpret the attacker's signal in terms of its difference from the signals of legitimate visitors), on what you're doing, and on what you care about. But then, security is always ultimately people, not math, even though tech loves to avoid this sort of empiricism (which is a bad thing).

As a pragmatic thing, it's usually easier to use attacker signals if you allow things by default rather than deny them by default. If you allow by default, your primary concern is false positives (legitimate visitors who are emitting signals you find too suspicious), rather than false negatives, because an attacker that wants to work hard enough can always obtain access. Conveniently, public web sites (such as Wandering Thoughts) are exactly such an allow by default environment, which is why these days I use a lot of signals here when deciding what to accept or block (including IP addresses and networks).

(If you need a deny by default environment with real security, you need to use something that attackers can't fake. IP addresses can be one option in the right circumstances, but they aren't the only one.)

I think dependency cooldowns would be a good idea for Go

By: cks

Via Filippo Valsorda, I recently heard about a proposal to add dependency cooldowns to Go. The general idea of dependency cooldowns is to make it so that people don't immediately update to new versions of dependencies; instead, you wait some amount of time for people to inspect the new version and so on (either through automated tooling or manual work). Because one of Go's famous features is 'minimum version selection', you might think that a cooldown would be unnecessary, since people have to manually update the versions of dependencies anyway and don't automatically get them.

Unfortunately, this is not the actual observed reality. In the actual observed reality, people update dependency versions fast enough to catch out other people who change what a particular published version of a module contains. This seems to be partly from things like 'Dependabot' automatically cruising around looking for version updates, but in general it seems clear that some amount of people will update to new versions of dependencies the moment those new versions become visible to them. And if a dependency is used widely enough, through random chance there's pretty much always going to be a developer somewhere who is running 'go list -m -u all' right after a new version of the package is released. So I feel that some sort of cooldown would be useful in practice, even with Go's other protections.

I follow the VCS repositories of a fair number of Go projects, and a lot of their dependency updates are automated, through things like Dependabot. If these things supported dependency cooldowns and people turned that on, we might get a lot of the benefit without Go's own mechanisms having to add code to support this. On the other hand, not everyone uses Dependabot or equivalent features (especially if people migrate away from Github, as some are), and there are always going to be people checking and doing dependency updates by hand. To support them, we need assistance from tooling.

(In theory this tooling assistance could be showing how old a version is and then leaving it up to people to notice and decide, but in practice I feel that's abdicating responsibility. We've seen that show before; easy support and defaults matter.)

While I don't have any strong or well-informed opinions on how this should be implemented in Go, I do feel that both defaults and avoiding mistakes are important. This biases me towards, say, a setting for this in your go.mod, because that way it's automatically persistent and everyone who works on your project gets it applied automatically, unlike (for example) an environment variable that you have to make sure everyone has set.

(This elaborates on some badly phrased thoughts I posted on the Fediverse.)

On today's web, HTTP results depend on the HTTP User-Agent you use

By: cks

Back in the old days, search engines mostly crawled your sites with their regular, clearly identifying HTTP User-Agent headers, but once in a while they would switch up to fetching with a browser's User-Agent. What they were trying to detect was if you served one set of content to "Googlebot" but another set of content to "Firefox", and if you did they tended to penalize you; you were supposed to serve the same content to both, not SEO-bait to Googlebot and wall to wall ads to browsers. Googlebot identified itself as a standard courtesy, not so you could handle it differently.

Obviously those days are long over. It's now routine and fully accepted to serve different things to Googlebot and to regular browsers. Generally websites offer Googlebot more access and plain text, and browsers less access (even paywalls) and JavaScript-encrusted content (leading to people setting their User-Agent to Googlebot to bypass paywalls). Since people give Googlebot special access, people impersonate it and other well-accepted crawlers, and other people (like me) block that impersonation.

This is part of an increasingly common general pattern, which is that different HTTP User-Agents get different results for the same URL. Especially, some HTTP User-Agents will get errors, HTTP redirections, or challenge pages, and other User-Agents won't; instead they'll get the real content. What this means in concrete terms is that it's increasingly bad to take the results from one HTTP User-Agent and assume they apply to another. This isn't just me and Wandering Thoughts; for example, if a site has a standard configuration of Anubis, having a User-Agent that includes 'Mozilla' will cause you to get a challenge page instead of the actual page (cf).

(One of the amusing effects of this is what it does to 'link previews', which require the website displaying the preview to fetch a copy of the URL from the original site. On the Fediverse, fairly often the link preview I see is just some sort of a challenge page.)
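
One way to see this for yourself is to fetch the same URL with two different User-Agent values and compare what comes back (a quick sketch; the URL and User-Agent strings are placeholders, and the exact results depend entirely on the site's rules).

import urllib.error
import urllib.request

def status_for(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

url = "https://example.org/some/page"   # placeholder
for ua in ("Mozilla/5.0 (X11; Linux x86_64; rv:147.0) Gecko/20100101 Firefox/147.0",
           "some-feed-fetcher/1.0 (+https://example.org/about)"):
    print(ua, "->", status_for(url, ua))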

In practice, you're probably reasonably safe if you're doing close variations of what's fundamentally the same distinctive User-Agent. But you're living dangerously if you try this with browser-like User-Agent values, either two different ones or a browser-like User-Agent and a distinctive non-browser one, because those are the ones that are most frequently forged and abused by covert web crawlers and other malware. Everyone who wants to look normal is imitating a browser, which means looking like a browser is a bad idea today.

Unfortunately, however bad an idea it is, people seem to keep trying fetches with multiple User-Agent header values and then taking a result from one User-Agent and using it in the context of another. Especially, feed reader companies seem to do it, first Feedly and now Inoreader.

You (I) should document the forms of your Django web application

By: cks

We have a long-standing Django web application to handle (Unix) account requests. Since these are requests, there is some state involved, so for a long time a request could be pending, approved, or rejected, with the extra complexity that an approved request might be incomplete and waiting on the person to pick their login. Recently I added being able to put a request into a new state, 'held', in order to deal with some local complexities where we might have a request that we didn't want to delete but also didn't want to go through to create an account.

(For instance, it's sometimes not clear if new incoming graduate students who've had to defer their arrival are going to turn up later or wind up not coming at all. So now we can put their requests on hold.)

When I initially wrote the new code, I thought that this new 'held' status was relatively weak, and in particular that professors (who approve accounts) could easily take an account request out of 'held' status and approve it. At the time I decided that this was probably a feature, since a professor might know that one of their graduate students was about to turn up after all and this way they didn't have to get us to un-hold the account request. Then the other day we sort of wanted to hold an account request even against the professor involved approving it, and because I knew that the 'held' status was weak this way, I didn't bother trying.

Well, it turns out I was wrong. Because I had forgotten how our forms worked, I hadn't realized that my new 'held' status was less 'held' and more 'frozen', and I only learned better today because I took a stab at creating a real 'frozen' status. In the current state, while it's possible for professors to deliberately un-hold a request, it takes a certain amount of work to find the one obscure place it's possible and you can't do it by accident (and it would be easy to close that possibility off if we decided to). You definitely can't accidentally approve a request that's currently held without realizing it.

(So my admittedly modest amount of work to add a 'frozen' status was sort of wasted, although it did lead to greater understanding in the end.)

Past me, immersed in the application, presumably found all of the rules about who could see what form and what they showed to be obvious (at least in context). Present me is a long distance from past me and did not remember all of those things. Brief documentation on each form would have been really quite handy, and if I'm smart I'll spend some time this time around to write some.

I'm not sure where I'll put any new forms documentation. Probably not in our views.py, which is already big enough. I could put it in urls.py, or I could write a separate README.forms file that doesn't try to embed this in code. And I know that I don't want to put it in Python docstrings, because I wrote some things in Python docstrings on the existing forms functions and then didn't read them. Even if I had read them, the existing docstrings don't entirely cover the sort of information I now know I want to know.

(I think there's a good reason for my not reading my own docstrings, but that's for another entry.)

UEFI-only booting with GRUB has gone okay on our (Ubuntu 24.04) servers

By: cks

We've been operating Ubuntu servers for a long time and for most of that time we've booted them through traditional MBR BIOS boots. Initially it was entirely through MBR and then later it was still mostly through MBR (somewhat depending on who installed a particular server; my co-workers are more tolerant of UEFI than I am). But when we built the 24.04 version of our customized install media, my co-worker wound up making it UEFI only, and so for the past two years all of our 24.04 machines have been UEFI (with us switching BIOSes on old servers into UEFI mode as we updated them). The headline news is that it's gone okay, more or less as you'd expect and hope by now.

All of our servers have mirrored system disks, and the one UEFI thing we haven't really had to deal with so far is fixing Ubuntu's UEFI boot disk redundancy stuff after one disk fails. I think we know how to do it in theory but we haven't had to go through it in practice. It will probably work out okay but it does make me a bit nervous, along with the related issue that the Ubuntu installer makes it hard to be consistent about which disk your '/boot/efi' filesystem comes from.

(In the installer, /boot/efi winds up on the first disk that you set as the boot device, but the disks aren't always presented in order so you can do this on 'the first disk' in the installer and discover that the first disk it listed was /dev/sdb.)

The Ubuntu 24.04 default bootloader is GRUB, so that's what we've wound up with even though as a UEFI-only environment we could in theory use simpler ones, such as systemd-boot. I'm not particularly enthused about GRUB but in practice it does what we want, which is to reliably boot our servers, and it has the huge benefit that it's actively supported by Ubuntu (okay, Canonical) so they're going to make sure it works right, including with their UEFI disk redundancy stuff. If Ubuntu switches default UEFI bootloaders in their server installs, I expect we'll follow along.

(I don't know if Canonical has any plans to switch away from GRUB to something else. I suspect that they'll stick with GRUB for as long as they support MBR booting, which I suspect will be a while, especially as people look more and more likely to hold on to old hardware for much longer than normally expected.)

PS: One reason I'm writing this down is that I've been unenthused about UEFI for a long time, so I'm not sure I would have predicted our lack of troubles in advance. So I'm going to admit it, UEFI has been actually okay. And in its favour, UEFI has regularized some things that used to be pretty odd in the MBR BIOS era.

(I'm still not happy about the UEFI non-story around redundant system disks, but I've accepted that hacks like the Ubuntu approach are the best we're going to get. I don't know what distributions such as Fedora are doing here; my Fedora machines are MBR based and staying that way until the hardware gets replaced, which on current trends won't be any time soon.)

I haven't made anything with AT Proto yet

Landscape

I haven't made anything with AT Proto.

Okay, technically, I did make the Bluesky ThinkUp Tribute, which syncs with your Bluesky account and sends a nightly email about who changed their bio or handle on the website. It's a great little utility and I rely on it constantly. But that doesn't integrate very deeply with AT Proto.

I've fallen into the cycle of reading about AT Proto but not building anything on it: a pattern that I want to break. I blame other priorities for my lack of weekend hacking - when I do get time and energy to computer on the weekends I've spent it on maintaining and contributing to established projects instead of building new experiments. And my time during the week is mostly spent on Val Town priorities, like keeping the servers online, developing features, and implementing moderation.

I don't especially like writing about things without having 'something to show,' but to avoid the trap of neither writing nor building, here's some writing.


The tech that runs Bluesky is general-purpose

The AT Protocol is the tech that Bluesky, the Twitter alternative, is built on. It's fairly general-purpose and well-suited for building all kinds of applications, not just Bluesky, and has some very utopian ideas built in. Collectively, we're calling the stack and its applications the 'Atmosphere.'

This has been, recently, in my filter bubble, a big deal. Applications like Leaflet for blogging and tangled, a GitHub alternative, use the AT Protocol as core architecture, storing data on it, allowing other applications to provide alternative frontends, and using its identity system to let people log in with their domain names or Bluesky handles.

It is a breath of fresh air in the tech industry. The creativity of this community is inspiring, and with a few exceptions people are friendly and welcoming.


AT Proto learned lessons from other decentralization attempts

Decentralization has had a lot of false starts: see my old posts on Dat, IPFS, IPFS again, and Arweave for some of that backstory. I am a seeker in that space, ready to try out what's new and hoping that the technology works, even though most of the results so far have been lackluster.

The Bluesky team has a lot of experience with those previous efforts: Paul Frazee, the CTO, cofounded Blue Link Labs, which made Beaker and integrated with Dat, and he worked on Secure Scuttlebutt before that. Other Bluesky employees, like Jeromy Johnson, came from the IPFS team.

So Bluesky is a lot of people's second or third try at making decentralization work, and it shows in some of the thinking, especially Paul's writing about how Bluesky compares to P2P and magical mesh networks.

This is encouraging. A lot of decentralization ideas work in theory but not in practice. Much of the challenge is practical and human-level, and it is good that it seems like the Bluesky team anticipated things like moderating content from day one.


It's more like a magical database than like a new internet

The AT Protocol is a lot different from the decentralization tech that I've played around with the most, like Dat and IPFS. Both Dat and IPFS are kind of like 'generic blob stores': you can store any kind of content on them, and they had URL-like addressing for that content. They both aspired to be a sort of future-internet in shape: Dat had the Beaker Browser and for a while IPFS was built into the Brave Browser. So I kept trying to deploy my website onto these technologies, with varying success, and IPFS tried to host all of Wikipedia, with varying success.

AT Proto is more like a magic semi-schemaless database. A 'post' on Bluesky looks like this:

{
  "text": "some placemark updates: sorting & resizing table columns, new releases of simple-statistics and tokml, using changesets in all my projects\n\nmacwright.com/2026/03/15/o...",
  "$type": "app.bsky.feed.post",
  "embed": {
    "$type": "app.bsky.embed.external",
    "external": {
      "uri": "https://macwright.com/2026/03/15/oss-changelog",
      "title": "Placemark & OSS Changelog",
      "description": "JavaScript, math, maps, etc"
    }
  },
  "langs": ["en"],
  "facets": [{
    "index": { "byteEnd": 169, "byteStart": 140 },
    "features": [
      {
        "uri": "https://macwright.com/2026/03/15/oss-changelog",
        "$type": "app.bsky.richtext.facet#link"
      }
    ]
  }],
  "createdAt": "2026-03-15T23:03:29.022Z"
}

It's JSON-encoded, structured, and opinionated, and importantly, limited in size. Don't expect to put a ton of data in this Record - right now a record can't be more than 1MB when encoded as CBOR.
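
As a sketch of reading a record like that back out over XRPC, using the com.atproto.repo.getRecord method (the host here assumes an account on bsky.social, and the handle and record key are placeholders):

import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "repo": "someone.bsky.social",        # a handle or DID (placeholder)
    "collection": "app.bsky.feed.post",
    "rkey": "3abcdefghij2k",              # placeholder record key
})
url = "https://bsky.social/xrpc/com.atproto.repo.getRecord?" + params
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)
print(record["value"]["$type"], record["uri"])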

Of course a modern social network is nothing without images and vertical video, so Bluesky needs to store more than just JSON documents, and so there's Blob support - stored as raw binary data, referenced from a Record. Blobs are limited too, with the limits varying by server but usually around 100MB.

This was a big realization for me around tangled - that project, which is extremely cool (rebuilding a more decentralized code collaboration platform), is not using AT Proto to store git data, but rather has servers called Knots that handle the git parts. It's very cool infrastructure, but it's important to note that the way in which metadata and git content are stored is quite different.


Is AT Protocol a good database?

Obviously it isn't (just) a database but it's a useful frame: how does AT Protocol work with typical database requirements?

  • Can it store a lot of data? Yes, but in small bits. It's more of a DynamoDB than an S3.
  • Is it fast? It's surprisingly fast from what I've seen: stream.place uses the protocol for comments on livestreams, and they work quite well.
  • Is it reliable? It seems so: the whole thing is built on event-sourcing and streams, and it has both the ability to replay streams when servers go down as well as to sync archival data.
  • Is it decentralized? Kind of? It's federated, but if you have your data stored in a Personal Data Server, it isn't automatically replicated to other servers on the network. This is unlike 'magical mesh networks' like Secure Scuttlebutt, which store lots of copies of data.
  • Is it indexed? Kind of! Obviously you don't want to process all of the data across all of the Bluesky network, and thankfully services like jetstream let you filter to only a specific collection.

Privacy is still a hard unsolved problem

So: you can store structured documents on AT Proto and small binary blobs - what about privacy? This might change soon because there's so much active development, but right now: you can't really use AT Proto for private data.

Paul has written a great discussion of different approaches, and it's clear that there are deep problems that require introspection and thorough evaluation, but that nothing is deployed yet. Bluesky does have direct messages, but according to Gavin Anderegg's investigation, they're 'off-protocol' so not actually anywhere on AT Proto.

This is obviously a big stumbling block for applications. Val Town couldn't use the Atmosphere for data storage if there is no concept of privacy. Right now our traditional backend infrastructure (mostly Postgres) makes both privacy and good-enough encryption at rest (mostly AES-GCM) pretty simple to implement, if not foolproof.

There are experiments around implementing privacy on AT Proto, like Germ, but none have solved all of the problems that need solving.


What should I do with it?

I have plenty of existing and potential projects to use as testbeds for new technology: one of the main reasons side-projects can be so nice is that they're safe places to use bleeding-edge technology without risking alienating your entire team at work. So where can I use AT Proto?

I would love to support sharing maps on Placemark again. Geospatial data probably won't fit in AT Proto records because it's fiendishly large and complex, but it could be squeezed into a blob if it's small enough. Maybe encoding JSON as CBOR is enough to shrink the data a bit without losing fidelity.

It would be really fun to get AT Proto logins working with Val Town: Orta implemented something similar for Puzzmo. Unfortunately user signup is a very knotty problem for us because, like every other hosting platform, we are in a daily battle with spammers. Orta's solution for Puzzmo was to make Bluesky login an additional, linked account along with your existing Puzzmo account, which makes a lot of sense.

I could also try to put this blog on AT Proto. standard.site has some specs for doing that, and sequoia would make publishing pretty easy. Leaflet.pub is riding high on their adoption of AT Proto for blogging. I'm honestly more confused than excited about this possibility: partly because RSS is already so good for publishing blogs, and because I'm not sure what syndicating to the atmosphere really does for this blog? I especially don't want to publish on AT Proto first, because rule #1 of macwright.com is to keep this site alive forever and avoid boondoggles.


Where does this go?

AT Proto is in a creative-explosion phase, which is really exciting. The way that the platform has been crafted makes it easy to incrementally introduce Atmosphere features to existing applications, and I am really relieved by how little unnecessary jargon there seems to be, even though it's a very complicated system.

Of all the values it provides, I think a rock-solid sense of credible exit is the most consistently achieved. Being able to plug a different application into the same data, or to move your data from one host to another is incredible, as Dan Abramov wrote about in 'A Social Filesystem'.

Having been on the internet for a long time, I don't expect anything to last forever, and I won't be heartbroken when the flaws in the plan are inevitably identified or some bad actor spoils the party for a while.

I wonder about the long-term economics of the thing, though: Bluesky is essentially providing a free database to anyone who wants to implement the AppView part of the system. How long does this last, especially if some Atmosphere apps become successful and start generating lots of revenue? Companies do not like subsidizing each other.

I think that's a few years off. Maybe we start paying for a deluxe plan once we store a gigabyte or two on Bluesky's servers, or one of the stablecoin-based micropayments technologies takes off (let's be real, if one does, it'll be Stripe's) and popular applications pay for their user storage on other PDS systems, in a faint echo of Filecoin's failure.


A spell for creativity

I plan to return here and have something to show on AT Proto. Not to overthink it, to ship something. It's fun to read but even more fun to write code, or a bit of manic fun to use LLMs to prototype something. I'm having more success drawing a portrait every day and using my sewing machine than working on the internet on the weekends, but that is partly because of a pessimistic view of the current trend, and the Atmosphere is a trend I can get behind.

Another round of reporting on feed readers

Yeah, here we go again, it's time to talk about feed readers. I'll summarize what's been happening with those tests which have been active in the past 7 days. There are many more that started and then ended and which are not included as a result.

Side note: I'm about to roll the hostname like I did last year, so if you're participating and want to carry on, go back to the URL from the welcome mail when it stops resolving in DNS. Then delete the current testing feed and add the new one.

I do this because a lot of these things seem to be forgotten over time, and they're just piling up in my database for nobody's benefit. Dropping the DNS leaves the jank entirely on *their* side of the Internet. I mean, if you run a test of a reader for hundreds of days and never improve anything, what's the point?

One thing I wanted to note before I get to the list: there are a couple of readers which have apparently added support for the Cache-Control header, and specifically the "max-age=nnnnn" part of it. The test feed sends that out, and I change the values sometimes to see which readers speed up and slow down accordingly. To the authors of those projects: I see you, and I appreciate your work! I'd put a gold star on your laptop if I could. (Not all of them are in this report since they have finished testing and are no longer reporting in.)
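
For reference, here's roughly the client behavior that earns good marks, as a minimal Python sketch (placeholder names and state handling, not any particular reader's code): send the validators back on every poll, and don't come back before max-age runs out.

import re
import time
import urllib.error
import urllib.request

state = {"etag": None, "last_modified": None, "next_poll": 0.0}

def poll(url):
    if time.time() < state["next_poll"]:
        return None                        # respect Cache-Control: max-age
    headers = {"User-Agent": "example-reader/1.0 (+https://example.org)"}
    if state["etag"]:
        headers["If-None-Match"] = state["etag"]
    if state["last_modified"]:
        headers["If-Modified-Since"] = state["last_modified"]
    req = urllib.request.Request(url, headers=headers)
    try:
        resp = urllib.request.urlopen(req)
    except urllib.error.HTTPError as err:
        if err.code != 304:
            raise
        resp = err                         # 304: nothing new, keep validators
    m = re.search(r"max-age=(\d+)", resp.headers.get("Cache-Control", ""))
    state["next_poll"] = time.time() + (int(m.group(1)) if m else 3600)
    state["etag"] = resp.headers.get("ETag", state["etag"])
    state["last_modified"] = resp.headers.get("Last-Modified", state["last_modified"])
    return resp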

So then, let's talk turkey here. I'm grouping the results by client just for simplicity. Remember that means it includes vastly different config options used by different people, different versions, different upgrade cadences, and (sigh), yes, different amounts of people clicking reload.

Audrey2: 402 days, no complaints. Pretty sure this one honors Cache-Control, so thanks for that!

Miniflux: about 10 instances. My reports for them mostly go something like "400 days, no complaints", "402 days, three too-short polls", "185 days, one too-short poll", "one <1s double-poll after 400 days of otherwise perfect behavior".

It's just that chill in Miniflux world.

Otocyon: 365 days on the nose, and not so much as a hiccup from the single instance that's reporting in. This was one of the "unpublished, please don't shame me yet" agents that I only mentioned anonymously in prior reports.

NetNewsWire: about 5 of these instances, and they're all inspiring. They all show a marked change in behaviors once upgrading past a certain point. NNW itself added some cheeky code to send version numbers to a couple of sites and I'm one of them (yes, I see what you did there). Anyway, I can now see that people are in fact upgrading, and the vastly improved behaviors speak to the work that was clearly done behind the scenes.

The NNW per-instance log tables all show the same thing: a bunch of red cells for this problem or that problem, then that instance upgrades and *poof*, gone, and everything's clear and happy. It's like one of those commercials for eye drops from the 80s.

Vienna: a couple of instances, both active 350+ days. One had no complaints and the other has a handful of short polls. By that, I mean "less than an hour between requests". I assume this is from someone clicking refresh. One "refreshing" note is that they're still making conditional requests when this happens, so they aren't wasting much bandwidth.

FreshRSS: a couple of instances for this one, too. One has some short polls and unconditional requests over 206 days. The other one got past some kind of weird buggy spot that was in 1.25.0, and has been smooth sailing ever since (nearly a year now).

newsgoat: just one of these, and it was wobbly and too-quick at the beginning but then settled down. I would like to see how it does now. This is another reason I roll the DNS entries and reset the data: I want to see how things are working with the latest code, and leave the older stuff behind.

CommaFeed: just one of these, and it had some kind of caching issue that disappeared shortly before it flipped to version 5.7.0 in April 2025, and now it's fine. This is another one that would probably benefit from the upcoming fresh start.

Feedbin: did a double-tap startup, where it polls (unconditionally) twice in quick succession (< 1 second apart) when the feed was added. Two instances did this, but this was back in January 2025. This is another spot where I'd like to see how it behaves now.

Rapids: 354 days, no complaints.

NextCloud-News: this one had some weird If-Modified-Since values and double-tapping at startup, but that was January 2025. (Yes, this is the reader that once sent the infamous "1800" IMS value). It's come a long way since then and I'd like to see a fresh start from it to appreciate the work that's been done to it.

Bloggulus: 374 days, one slightly too quick poll. Hard to complain about that.

MANOS: 391 days, and nothing to complain about. I do hear Torgo's haunting theme music every time I write about it, though.

unnamed-feed-reader: technically, that's a name. 402 days, no complaints.

Something that doesn't even send a user-agent header: yeah, that's not cool. Send SOMETHING. Come on. 399 days of just doing that but otherwise getting everything else correct, somehow. One nit: sends "" as the If-None-Match at startup instead of just not sending the header at all. (Sounds like the usual "null is not zero is not the empty string is not the lack of a header is not ..." type thing.)

Some unidentified Firefox extension: it looks like it's still doing the 2000-01-01 If-Modified-Since once in a great great while (like, once), and it did a super quick poll (two in under a second) once. Otherwise it's been pretty quiet over the past 401 days.

There's also a Thunderbird instance which does the same 2000-01-01 startup but otherwise just quietly does its thing.

NewsBlur: double-tapped at startup (March 2025), and then sent lots of nutty out of sequence If-Modified-Since values. By nutty, I mean "went to sleep for 303 days, then came back, and started sending the *previous IMS value*". How do you do that when the server is hitting you in the face with a new value every time you poll?

Zufeed: unconditional double-tap at startup. Might be fixed. Need to see a fresh startup to be sure.

Roy and Rianne's ... etc: handful of unconditional requests over the past couple of months.

walrss: same deal: a couple of unconditionals in ~400 days.

Yarr: multiple too-fast unconditionals at startup (January 2025), and then a bunch more after that.

SpaceCowboys ... etc: one instance of this program, and it did something bizarre where an ancient ETag value popped back something like three months after it stopped being served to clients. Also sends some unconditional requests and a few too-fast polls.

feed2exec: poll frequencies are all over the place, and it has the usual 59 minute vs. 60 minute fenceposting thing. The version number is static throughout so it's not clear if it improved during these past 340 days.

Russet: a bunch of unconditional requests and the 59/60 minute thing, like others.

There are a few others which are active but which have had multiple user-agents prodding them and so polluted the data. The interval checking and IMS/INM comparisons mean nothing when multiple programs are involved, so I have to ignore those as corrupted.

And finally...

Free Reader: 100% unconditionals. Why?

QuiteRSS (I think - it lies in its User-Agent, which itself is evil): 100% unconditionals. Also, why?

Inoreader: one instance sends unconditional requests basically every other poll. Awful. The other instance sends a bunch of unconditionals, *and* it polls too quickly, including sub-second repeat polls at times. WTF?

inforss: something like 6% unconditionals out of > 2000 requests in ~400 days. I don't get it. The 59m vs. 60m poll-timing fencepost thing it also does is minor by comparison.

feedparser: weird timing, also has problems trying to hit the 60 minute mark and instead comes a bit too early, like some others. Also frequently calls back far too quickly.

Newsboat: ETag caching is still very broken and it will get into this pathological case where it keeps sending old values even though the server is hitting it over the head with a fresh one every single time. This means it latches into 100% unconditionals in effect and that's terrible. This seems to keep happening despite the version number changing, and it's affecting both instances which are reporting in.

BazQux: hundreds of out-of-sequence IMS and INM values stemming from something very very wrong with their caching implementation, resulting in 100% unconditional request generation once it latches in that state. A lot like Newsboat in that respect.

When internal hostnames are leaked to the clown

So here's a situation: you buy a "NAS" box - network attached storage, bring it home, stick some drives in it, and plop it on your network. The thing has an affinity for running in https mode, so you drop a wildcard certificate on it inside a sub-zone of a domain you don't even use for anything meaningful on the public Internet.

By sub-zone, I mean that your wildcard TLS cert is for something like *.nothing-special.whatever.example.com. It's buried pretty deep.

You put an entry for it in your hosts file:

172.16.12.34   nas.nothing-special.whatever.example.com

Then you load that in your browser and it works.

A few days later, you notice that you've started getting requests coming to your server on the "outside world" with that same hostname. This is a hostname that only exists in the /etc/hosts file of your laptop.

You're able to see this because you set up a wildcard DNS entry for the whole "*.nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.

Every time you load up the NAS, you get some clown GCP host knocking on your door, presenting a SNI hostname of that thing you buried deep inside your infrastructure. Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.

Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?

This is when you fire up Little Snitch, block the whole domain for any app on the machine, and go on with life.

Using this sentry reporting mechanism as a way to make them scan arbitrary other hosts (in DNS) is left as an exercise for the reader.

Reverse engineering the Creative Katana V2X soundbar to be able to control it from Linux

I recently purchased a Creative Sound Blaster Katana V2X soundbar (what a mouthful) to replace my old, cheap Logitech computer speakers. They served me well, but listening to music or watching movies was not the best-sounding experience.

After it arrived, I set it up and realized it had a USB port which, aside from serving as an audio input, allows the user to configure the speaker: set the EQ, set the LED lights to different modes, etc. The unfortunate part of this was the fact that it requires the (proprietary) Creative App. What's more, the app only seems to be available for Windows, which I don't use. While using it in a VM worked, it was hardly convenient.

This seemed like the perfect opportunity for something I love: Reverse engineering proprietary applications, devices and protocols and writing tools to communicate with them.

Instruction decoding in the Intel 8087 floating-point chip

In the 1980s, if you wanted your IBM PC to run faster, you could buy the Intel 8087 floating-point coprocessor chip. With this chip, CAD software, spreadsheets, flight simulators, and other programs were much speedier. The 8087 chip could add, subtract, multiply, and divide, of course, but it could also compute transcendental functions such as tangent and logarithms, as well as provide constants such as π. In total, the 8087 added 62 new instructions to the computer.

But how does a PC decide if an instruction was a floating-point instruction for the 8087 or a regular instruction for the 8086 or 8088 CPU? And how does the 8087 chip interpret instructions to determine what they mean? It turns out that decoding an instruction inside the 8087 is more complicated than you might expect. The 8087 uses multiple techniques, with decoding circuitry spread across the chip. In this blog post, I'll explain how these decoding circuits work.

To reverse-engineer the 8087, I chiseled open the ceramic package of an 8087 chip and took numerous photos of the silicon die with a microscope. The complex patterns on the die are formed by its metal wiring, as well as the polysilicon and silicon underneath. The bottom half of the chip is the "datapath", the circuitry that performs calculations on 80-bit floating point values. At the left of the datapath, a constant ROM holds important constants such as π. At the right are the eight registers that the programmer uses to hold floating-point values; in an unusual design decision, these registers are arranged as a stack. Floating-point numbers cover a huge range by representing numbers with a fractional part and an exponent; the 8087 has separate circuitry to process the fractional part and the exponent.

Die of the Intel 8087 floating point unit chip, with main functional blocks labeled. The die is 5 mm×6 mm. Click this image (or any others) for a larger image.

The chip's instructions are defined by the large microcode ROM in the middle.1 To execute an instruction, the 8087 decodes the instruction and the microcode engine starts executing the appropriate micro-instructions from the microcode ROM. In the upper right part of the chip, the Bus Interface Unit (BIU) communicates with the main processor and memory over the computer's bus. For the most part, the BIU and the rest of the chip operate independently, but as we will see, the BIU plays important roles in instruction decoding and execution.

Cooperation with the main 8086/8088 processor

The 8087 chip acted as a coprocessor with the main 8086 (or 8088) processor. When a floating-point instruction was encountered, the 8086 would let the 8087 floating-point chip carry out the floating-point instruction. But how do the 8086 and the 8087 determine which chip executes a particular instruction? You might expect the 8086 to tell the 8087 when it should execute an instruction, but this cooperation turns out to be more complicated.

The 8086 has eight opcodes that are assigned to the coprocessor, called ESCAPE opcodes. The 8087 determines what instruction the 8086 is executing by watching the bus, a task performed by the BIU (Bus Interface Unit).2 If the instruction is an ESCAPE, the instruction is intended for the 8087. However, there's a problem. The 8087 doesn't have any access to the 8086's registers (and vice versa), so the only way that they can exchange data is through memory. But the 8086 addresses memory through a complicated scheme involving offset registers and segment registers. How can the 8087 determine what memory address to use when it doesn't have access to the registers?

The trick is that when an ESCAPE instruction is encountered, the 8086 processor starts executing the instruction, even though it is intended for the 8087. The 8086 computes the memory address that the instruction references and reads that memory address, but ignores the result. Meanwhile, the 8087 watches the memory bus to see what address is accessed and stores this address internally in a BIU register. When the 8087 starts executing the instruction, it uses the address from the 8086 to read and write memory. In effect, the 8087 offloads address computation to the 8086 processor.

The structure of 8087 instructions

To understand the 8087's instructions, we need to take a closer look at the structure of 8086 instructions. In particular, something called the ModR/M byte is important since all 8087 instructions use it.

The 8086 uses a complex system of opcodes with a mixture of single-byte opcodes, prefix bytes, and longer instructions. About a quarter of the opcodes use a second byte, called ModR/M, that specifies the registers and/or memory address to use through a complicated encoding. For instance, the memory address can be computed by adding the BX and SI registers, or from the BP register plus a two-byte offset. The first two bits of the ModR/M byte are the "MOD" bits. For a memory access, the MOD bits indicate how many address displacement bytes follow the ModR/M byte (0, 1, or 2), while the "R/M" bits specify how the address is computed. A MOD value of 3, however, indicates that the instruction operates on registers and does not access memory.

Structure of an 8087 instruction

The diagram above shows how an 8087 instruction consists of an ESCAPE opcode, followed by a ModR/M byte. An ESCAPE opcode is indicated by the special bit pattern 11011, leaving three bits (green) available in the first byte to specify the type of 8087 instruction. As mentioned above, the ModR/M byte has two forms. The first form performs a memory access; it has MOD bits of 00, 01, or 10 and the R/M bits specify how the memory address is computed. This leaves three bits (green) to specify the instruction. The second form operates internally, without a memory access; it has MOD bits of 11. Since the R/M bits aren't used in the second form, six bits (green) are available in the ModR/M byte to specify the instruction.
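
To make the bit layout concrete, here's a toy decoder (a sketch written for this description, not taken from the 8087) that pulls those fields out of a two-byte instruction:

def decode(opcode, modrm):
    if opcode >> 3 != 0b11011:
        return "not an ESCAPE (8087) instruction"
    op_bits = opcode & 0b111          # three instruction bits in the first byte
    mod = modrm >> 6
    mid = (modrm >> 3) & 0b111        # the green bits in the ModR/M byte
    rm = modrm & 0b111
    if mod == 0b11:
        return f"register form: bits {op_bits:03b}/{mid:03b}, operand ST({rm})"
    return (f"memory form: bits {op_bits:03b}/{mid:03b}, "
            f"addressing mode {rm:03b}, {mod} displacement byte(s)")

# FADD ST, ST(1) encodes as D8 C1:
print(decode(0xD8, 0xC1))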

The challenge for the designers of the 8087 was to fit all the instructions into the available bits in such a way that decoding is straightforward. The diagram below shows a few 8087 instructions, illustrating how they achieve this. The first three instructions operate internally, so they have MOD bits of 11; the green bits specify the particular instruction. Addition is more complicated because it can act on memory (first format) or registers (second format), depending on the MOD bits. The four bits highlighted in bright green (0000) are the same for all ADD instructions; the subtract, multiplication, and division instructions use the same structure but have different values for the dark green bits. For instance, 0001 indicates multiplication and 0100 indicates subtraction. The other green bits (MF, d, and P) select variants of the addition instruction, changing the data format, direction, and popping the stack at the end. The last three bits select the R/M addressing mode for a memory operation, or the stack register ST(i) for a register operation.

The bit patterns for some 8087 instructions. Based on the datasheet.

Selecting a microcode routine

Most of the 8087's instructions are implemented in microcode, which carries out each step of an instruction as low-level "micro-instructions". The 8087 chip contains a microcode engine; you can think of it as a mini-CPU that controls the 8087 by executing a microcode routine, one micro-instruction at a time. The microcode engine provides an 11-bit micro-address to the ROM, specifying the micro-instruction to execute. Normally, the microcode engine steps through the microcode sequentially, but it also supports conditional jumps and subroutine calls.

But how does the microcode engine know where to start executing the microcode for a particular machine instruction? Conceptually, you could feed the instruction opcode into a ROM that would provide the starting micro-address. However, this would be impractical since you'd need a 2048-word ROM to decode an 11-bit opcode.3 (While a 2K ROM is small nowadays, it was large at the time; the 8087's microcode ROM was a tight fit at just 1648 words.) Instead, the 8087 uses a more efficient (but complicated) instruction decode system constructed from a combination of logic gates and PLAs (Programmable Logic Arrays). This system holds 22 microcode entry points, much more practical than 2048.

Processors often use a circuit called a PLA (Programmable Logic Array) as part of instruction decoding. The idea of a PLA is to provide a dense and flexible way of implementing arbitrary logic functions. Any Boolean logic function can be expressed as a "sum-of-products", a collection of AND terms (products) that are OR'd together (summed). A PLA has a block of circuitry called the AND plane that generates the desired sum terms. The outputs of the AND plane are fed into a second block, the OR plane, which ORs the terms together. Physically, a PLA is implemented as a grid, where each spot in the grid can either have a transistor or not. By changing the transistor pattern, the PLA implements the desired function.

A simplified diagram of a PLA.

A PLA can implement arbitrary logic, but in the 8087, PLAs often act as optimized ROMs.4 The AND plane matches bit patterns,5 selecting an entry from the OR plane, which holds the output values, the micro-address for each routine. The advantage of the PLA over a standard ROM is that one output column can be used for many different inputs, reducing the size.
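
As a toy model of that sum-of-products structure (the patterns and output names below are invented for illustration, not the 8087's actual terms):

def pla(inputs, and_plane, or_plane):
    outputs = set()
    for pattern, driven in zip(and_plane, or_plane):
        # a row matches when every non-'x' bit of its pattern equals the input
        if all(p in ("x", bit) for p, bit in zip(pattern, inputs)):
            outputs |= driven
    return outputs

and_plane = ["11011xxx",    # matches any ESCAPE opcode byte
             "xxxxx001"]    # matches a low three bits of 001
or_plane = [{"escape"}, {"group-1"}]
print(pla("11011001", and_plane, or_plane))   # {'escape', 'group-1'}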

The image below shows part of the instruction decoding PLA.6 The horizontal input lines are polysilicon wires on top of the silicon. The pinkish regions are doped silicon. When polysilicon crosses doped silicon, it creates a transistor (green). Where there is a gap in the doped silicon, there is no transistor (red). (The output wires run vertically, but are not visible here; I dissolved the metal layer to show the silicon underneath.) If a polysilicon line is energized, it turns on all the transistors in its row, pulling the associated output columns to ground. (If no transistors are turned on, the pull-up transistor pulls the output high.) Thus, the pattern of doped silicon regions creates a grid of transistors in the PLA that implements the desired logic function.7

Part of the PLA for instruction decoding.

The standard way to decode instructions with a PLA is to take the instruction bits (and their complements) as inputs. The PLA can then pattern-match against bit patterns in the instruction. However, the 8087 also uses some pre-processing to reduce the size of the PLA. For instance, the MOD bits are processed to generate a signal if the bits are 0, 1, or 2 (i.e. a memory operation) and a second signal if the bits are 3 (i.e. a register operation). This allows the 0, 1, and 2 cases to be handled by a single PLA pattern. Another signal indicates that the top bits are 001 111xxxxx; this indicates that the R/M field takes part in instruction selection.8 Sometimes a PLA output is fed back in as an input, so a decoded group of instructions can be excluded from another group. These techniques all reduce the size of the PLA at the cost of some additional logic gates.

The result of the instruction decoding PLA's AND plane is 22 signals, where each signal corresponds to an instruction or group of instructions with a shared microcode entry point. The lower part of the instruction decoding PLA acts as a ROM that holds the 22 microcode entry points and provides the selected one.9

Instruction decoding inside the microcode

Many 8087 instructions share the same microcode routines. For instance, the addition, subtraction, multiplication, division, reverse subtraction, and reverse division instructions all go to the same microcode routine. This reduces the size of the microcode since these instructions share the microcode that sets up the instruction and handles the result. However, the microcode obviously needs to diverge at some point to perform the specific operation. Moreover, some arithmetic opcodes access the top of the stack, some access an arbitrary location in the stack, some access memory, and some reverse the operands, requiring different microcode actions. How does the microcode do different things for different opcodes while sharing code?

The trick is that the 8087's microcode engine supports conditional subroutine calls, returns, and jumps, based on 49 different conditions (details). In particular, fifteen conditions examine the instruction. Some conditions test specific bit patterns, such as branching if the lowest bit is set, or more complex patterns such as an opcode matching 0xx 11xxxxxx. Other conditions detect specific instructions such as FMUL. The result is that the microcode can take different paths for different instructions. For instance, a reverse subtraction or reverse division is implemented in the microcode by testing the instruction and reversing the arguments if necessary, while sharing the rest of the code.

The microcode also has a special jump target that performs a three-way jump depending on the current machine instruction that is being executed. The microcode engine has a jump ROM that holds 22 entry points for jumps or subroutine calls.10 However, a jump to target 0 uses special circuitry so it will instead jump to target 1 for a multiplication instruction, target 2 for an addition/subtraction, or target 3 for division. This special jump is implemented by gates in the upper right corner of the jump decoder.

The jump decoder and ROM. Note that the rows are not in numerical order; presumably, this made the layout slightly more compact. Click this image (or any other) for a larger version.

Hardwired instruction handling

Some of the 8087's instructions are implemented directly by hardware in the Bus Interface Unit (BIU), rather than using microcode. For example, instructions to enable or disable interrupts, or to save or restore state are implemented in hardware. The decoding for these instructions is performed by separate circuitry from the instruction decoder described above.

In the first step, a small PLA decodes the top 5 bits of the instruction. Most importantly, if these bits are 11011, it indicates an ESCAPE instruction, the start of an 8087 operation. This causes the 8087 to start interpreting the instruction and stores the opcode in a BIU register for use by the instruction decoder. A second small PLA takes the outputs from the top-5 PLA and combines them with the lower three bits. It decodes specific instruction values: D9, DB, DD, E0, E1, E2, or E3. The first three values correspond to specific ESCAPE instructions, and are recorded in latches.

The two PLAs decode the second byte in the same way. Logic gates combine the PLA outputs from the second byte with the latched values from the first byte, detecting eleven hardwired instructions.11 Some of these instructions operate directly on registers, such as clearing exceptions; the decoded instruction signal goes to the relevant register and modifies it in an ad hoc way.12 Other hardwired instructions are more complicated, writing chip state to memory or reading chip state from memory. These instructions require multiple memory operations, controlled by the Bus Interface Unit's state machine. Each of these instructions has a flip-flop that is triggered by the decoded instruction to keep track of which instruction is active.

For the instructions that save and restore the 8087's state (FSAVE and FRSTOR), there's one more complication. These instructions are partially implemented in the BIU, which moves the relevant BIU registers to or from memory. But then, instruction processing switches to microcode, where a microcode routine saves or loads the floating-point registers. Jumping to the microcode routine is not implemented through the regular microcode jump circuitry. Instead, two hardcoded values force the microcode address to the save or restore routine.13

Constants

The 8087 has seven instructions to load floating-point constants such as π, 1, or log10(2). The 8087 has a constant ROM that holds these constants, as well as constants for transcendental operations. You might expect that the 8087 simply loads the specified constant from the constant ROM, using the instruction to select the desired constant. However, the process is much more complicated.14

Looking at the instruction decode ROM shows that different constants are implemented with different microcode routines: the constant-loading instructions FLDLG2 and FLDLN2 have one entry point; FLD1, FLDL2E, FLDL2T, and FLDPI have a second entry point; and FLDZ (zero) has a third entry point. It's understandable that zero is a special case, but why are there two routines for the other constants?

The explanation is that the fraction part of each constant is stored in the constant ROM, but the exponent is stored in a separate, smaller ROM. To reduce the size of the exponent ROM, only some of the necessary exponents are stored. If a constant needs an exponent one larger than a value in the ROM, the microcode adds one to the exponent ROM value, computing the exponent on the fly.

Thus, the load-constant instructions use three separate instruction decoding mechanisms. First, the instruction decode ROM determines the appropriate microcode routine for the constant instruction, as before. Then, the constant PLA decodes the instruction to select the appropriate constant. Finally, the microcode routine tests the bottom bit of the instruction and increments the exponent if necessary.

Conclusions

To wrap up the discussion of the decoding circuitry, the diagram below shows how the different circuits are arranged on the die. This image shows the upper-right part of the die; the microcode engine is at the left and part of the ROM is at the bottom.

The upper-left portion of the 8087 die, with functional blocks labeled.

The 8087 doesn't have a clean architecture, but instead is full of ad hoc circuits and corner cases. The 8087's instruction decoding is an example of this. Decoding is complicated to start with due to the 8086's convoluted instruction formats and the ModR/M byte. On top of that, the 8087's instruction decoding has multiple layers: the instruction decode PLA, microcode conditional jumps that depend on the instruction, a special jump target that depends on the instruction, constants selected based on the instruction, and instructions decoded by the BIU.

The 8087 has a reason for this complicated architecture: at the time, the chip was on the edge of what was possible, so the designers needed to use whatever techniques they could to reduce the size of the chip. If implementing a corner case could shave a few transistors off the chip or make the microcode ROM slightly smaller, the corner case was worthwhile. Even so, the 8087 was barely manufacturable at first; early yield was just two working chips per silicon wafer. Despite this difficult start, a floating-point standard based on the 8087 is now part of almost every processor.

Thanks to the members of the "Opcode Collective" for their contributions, especially Smartest Blob and Gloriouscow.

For updates, follow me on Bluesky (@righto.com), Mastodon (@kenshirriff@oldbytes.space), or RSS.

Notes and references

  1. The contents of the microcode ROM are available here, partially decoded thanks to Smartest Blob. 

  2. It is difficult for the 8087 to determine what the 8086 is doing because the 8086 prefetches instructions. Thus, when an instruction is seen on the bus, the 8086 may execute it at some point in the future, or it may end up discarded.

    In order to tell what instruction is being executed, the 8087 floating-point chip internally duplicates the 8086 processor's queue. The 8087 watches the memory bus and copies any instructions that are prefetched. Since the 8087 can't tell from the bus when the 8086 starts a new instruction or when the 8086 empties the queue when jumping to a new address, the 8086 processor provides two queue status signals to the 8087. With the help of these signals, the 8087 knows exactly what the 8086 is executing.

    The 8087's instruction queue has six 8-bit registers, the same as the 8086. Surprisingly, the last two queue registers in the 8087 are tied together, so there are only five usable queue registers. My hypothesis is that since the 8087 copies the active instruction into separate registers (unlike the 8086), only five queue registers are needed. This raises the question of why the excess register wasn't removed from the die, rather than wasting valuable space.

    The 8088 processor, used in the IBM PC, has a four-byte queue instead of a six-byte queue. The 8088 is almost identical to the 8086 except it has an 8-bit memory bus instead of a 16-bit memory bus. With the narrower memory bus, prefetching is more likely to get in the way of other memory accesses, so a smaller prefetch queue was implemented.

    Knowing the queue size is essential to the 8087 floating-point chip. To indicate this, when the processor boots, a signal lets the 8087 determine if the attached processor is an 8086 or an 8088. 

  3. The relevant part of the opcode is 11 bits: the top 5 bits are always 11011 for an ESCAPE opcode, so they can be ignored during decoding. The Bus Interface Unit has a 3-bit register to hold the first byte of the instruction and an 8-bit register to hold the second byte. The BIU registers have an irregular appearance because there are 3-bit registers, 8-bit registers, and 10-bit registers (holding half of a 20-bit address). 

  4. What's the difference between a PLA and a ROM? There is a lot of overlap: a ROM can replace a PLA, while a PLA can implement a ROM. A ROM is essentially a PLA where the first stage is a binary decoder, so the ROM has a separate row for each input value. However, the first stage of a ROM can be optimized so multiple inputs share the same output value; is this a ROM or a PLA?

    The "official" difference is that in a ROM, one row is activated at a time, while in a PLA, multiple rows can be activated at once, so the output values are combined. (Thus, it is straightforward to read the values out of a ROM, but more difficult to read the values out of a PLA.)

    I consider the instruction decoding PLA to be best described as a PLA first stage with the second stage acting as a ROM. You could also call it a partially-decoded ROM, or just a PLA. Hopefully my terminology isn't too confusing. 

  5. To match a bit pattern in an instruction, the bits of the instruction are fed into the PLA, along with the complements of these bits; this allows the PLA to match against a 0 bit or a 1 bit. Each row of a PLA will match a particular bit pattern in the instruction: bits that must be 1, bits that must be 0, and bits that don't matter. If the instruction opcodes are assigned rationally, a small number of bit patterns will match all the opcodes, reducing the size of the decoder.

    I may be going too far with this analogy, but a PLA is a lot like a neural net. Each column in the AND plane is like a neuron that fires when it recognizes a particular input pattern. The OR plane is like a second layer in a neural net, combining signals from the first layer. The PLA's "weights", however, are fixed at 0 or 1, so it's not as flexible as a "real" neural net. 

  6. The instruction decoding PLA has an unusual layout, where the second plane is rotated 90°. In a regular PLA (left), the inputs (red) go into the first plane, the perpendicular outputs from the first plane (purple) go into the second plane, and the PLA outputs (blue) exit parallel to the inputs. In the address PLA, however, the second plane is rotated 90°, so the outputs are perpendicular to the inputs. This approach requires additional wiring (horizontal purple lines), but presumably, this layout worked better in the 8087 since the outputs are lined up with the rest of the microcode engine.

    Conceptual diagram of a regular PLA on the left and a rotated PLA on the right.

     

  7. To describe the implementation of a PLA in more detail, the transistors in each row of the AND plane form a NOR gate, since if any transistor is turned on, it pulls the output low. Likewise, the transistors in each column of the OR plane form a NOR gate. So why is the PLA described as having an AND plane and an OR plane, rather than two NOR planes? By using De Morgan's law, you can treat the NOR-NOR Boolean equations as equivalent to AND-OR Boolean equations (with the inputs and outputs inverted). It's usually much easier to understand the logic as AND terms OR'd together.

    The converse question is why don't they build the PLA from AND and OR gates instead of NOR gates? The reason is that AND and OR gates are harder to build with NMOS transistors, since you need to add explicit inverter circuits. Moreover, NMOS NOR gates are typically faster than NAND gates because the transistors are in parallel. (CMOS is the opposite; NAND gates are faster because the weaker PMOS transistors are in parallel.) 

  8. The 8087's opcodes can be organized into tables, showing the underlying structure. (In each table, the row (Y) coordinate is the bottom 3 bits of the first byte and the column (X) coordinate is the 3 bits after the MOD bits in the second byte.)

    Memory operations use the following encoding with MOD = 0, 1, or 2. Each box represents 8 different addressing modes; a dash marks an unused slot.

             0      1      2      3      4      5      6      7
        0    FADD   FMUL   FCOM   FCOMP  FSUB   FSUBR  FDIV   FDIVR
        1    FLD    -      FST    FSTP   FLDENV FLDCW  FSTENV FSTCW
        2    FIADD  FIMUL  FICOM  FICOMP FISUB  FISUBR FIDIV  FIDIVR
        3    FILD   -      FIST   FISTP  -      FLD    -      FSTP
        4    FADD   FMUL   FCOM   FCOMP  FSUB   FSUBR  FDIV   FDIVR
        5    FLD    -      FST    FSTP   FRSTOR -      FSAVE  FSTSW
        6    FIADD  FIMUL  FICOM  FICOMP FISUB  FISUBR FIDIV  FIDIVR
        7    FILD   -      FIST   FISTP  FBLD   FILD   FBSTP  FISTP

    The important point is that the instruction encoding has a lot of regularity, making the decoding process easier. For instance, the basic arithmetic operations (FADD through FDIVR) are repeated on alternating rows. However, the table also has significant irregularities, which complicate the decoding process.

    The register operations (MOD = 3) have a related layout, but there are even more irregularities.

             0      1      2      3      4      5      6      7
        0    FADD   FMUL   FCOM   FCOMP  FSUB   FSUBR  FDIV   FDIVR
        1    FLD    FXCH   FNOP   -      misc1  misc2  misc3  misc4
        2    -      -      -      -      -      -      -      -
        3    -      -      -      -      misc5  -      -      -
        4    FADD   FMUL   -      -      FSUB   FSUBR  FDIV   FDIVR
        5    FFREE  -      FST    FSTP   -      -      -      -
        6    FADDP  FMULP  -      FCOMPP FSUBP  FSUBRP FDIVP  FDIVRP
        7    -      -      -      -      -      -      -      -

    In most cases, each box indicates 8 different values for the stack register, but there are exceptions. The FNOP and FCOMPP instructions each have a single opcode, "wasting" the rest of the box.

    Five of the boxes in the table encode multiple instructions instead of the register number. The first four (red) are miscellaneous instructions handled by the decoding PLA:
    misc1 = FCHS, FABS, FTST, FXAM
    misc2 = FLD1, FLDL2T, FLDL2E, FLDPI, FLDLG2, FLDLN2, FLDZ (the constant-loading instructions)
    misc3 = F2XM1, FYL2X, FPTAN, FPATAN, FXTRACT, FDECSTP, FINCSTP
    misc4 = FPREM, FYL2XP1, FSQRT, FRNDINT, FSCALE

    The last miscellaneous box (yellow) holds instructions that are handled by the BIU.
    misc5 = FENI, FDISI, FCLEX, FINIT

    Curiously, the 8087's opcodes (like the 8086's) make much more sense in octal than in hexadecimal. In octal, an 8087 opcode is simply 33Y MXR, where X and Y are the table coordinates above, M is the MOD value (0, 1, 2, or 3), and R is the R/M field or the stack register number. 
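    For example, take FADD ST, ST(1), encoded as the two bytes D8 C1 in hex. A quick shell one-liner (a sketch; the choice of opcode is just an illustration) shows the octal view:

      printf '%o %o\n' 0xD8 0xC1    # prints "330 301"

    Reading 330 301 against the scheme above: Y=0 selects the row containing FADD, M=3 indicates a register operand, X=0 selects FADD within that row, and R=1 names stack register ST(1).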

  9. The 22 outputs from the instruction decoder PLA correspond to the following groups of instructions, activating one row of ROM and producing the corresponding microcode address. From this table, you can see which instructions are grouped together in the microcode.

     0 #0200 FXCH
     1 #0597 FSTP (BCD)
     2 #0808 FCOM FCOMP FCOMPP
     3 #1008 FLDLG2 FLDLN2
     4 #1527 FSQRT
     5 #1586 FPREM
     6 #1138 FPATAN
     7 #1039 FPTAN
     8 #0900 F2XM1
     9 #1020 FLDZ
    10 #0710 FRNDINT
    11 #1463 FDECSTP FINCSTP
    12 #0812 FTST
    13 #0892 FABS FCHS
    14 #0065 FFREE FLD
    15 #0217 FNOP FST FSTP (not BCD)
    16 #0001 FADD FDIV FDIVR FMUL FSUB FSUBR
    17 #0748 FSCALE
    18 #1028 FXTRACT
    19 #1257 FYL2X FYL2XP1
    20 #1003 FLD1 FLDL2E FLDL2T FLDPI
    21 #1468 FXAM
    
     
  10. The instruction decoding PLA has 22 entries, and the jump table also has 22 entries. It's a coincidence that these values are the same.

    An entry in the jump table ROM is selected by five bits of the micro-instruction. The ROM is structured with two 11-bit words per row, interleaved. (It's also a coincidence that there are 22 bits.) The upper four bits of the jump number select a row in the ROM, while the bottom bit selects one of the two rows.

    This implementation is modified for target 0, the three-way jump. The first ROM row is selected for target 0 if the current instruction is multiplication, or for target 1. The second row is selected for target 0 if the current instruction is addition or subtraction, or for target 2. The third row is selected for target 0 if the current instruction is division, or for target 3. Thus, target 0 ends up selecting rows 1, 2, or 3. However, remember that there are two words per row, selected by the low bit of the target number. The problem is that target 0 with multiplication will access the left word of row 1, while target 1 will access the right word of row 1, but both should provide the same address. The solution is that rows 1, 2, and 3 have the same address stored twice in the row, so these rows each "waste" a value.

    For reference, the contents of the jump table are:

     0: Jumps to target 1 for FMUL, 2 for FADD/FSUB/FSUBR, 3 for FDIV/FDIVR
     1: #0359
     2: #0232
     3: #0410
     4: #0083
     5: #1484
     6: #0122
     7: #0173
     8: #0439
     9: #0655
    10: #0534
    11: #0299
    12: #1572
    13: #1446
    14: #0859
    15: #0396
    16: #0318
    17: #0380
    18: #0779
    19: #0868
    20: #0522
    21: #0801
    
     
  11. Eleven instructions are implemented in the BIU hardware. Four of these are relatively simple, setting or clearing bits: FINIT (initialize), FENI (enable interrupts), FDISI (disable interrupts), and FCLEX (clear exceptions). Seven of these are more complicated, storing state to memory or loading state from memory: FLDCW (load control word), FSTCW (store control word), FSTSW (store status word), FSTENV (store environment), FLDENV (load environment), FSAVE (save state), and FRSTOR (restore state). As explained elsewhere, the last two instructions are partially implemented in microcode. 

  12. Even a seemingly trivial instruction uses more circuitry than you might expect. For instance, after the FCLEX (clear exception) instruction is decoded, the signal goes through nine gates before it clears the exception bits in the status register. Along the way, it goes through a flip-flop to synchronize the timing, a gate to combine it with the reset signal, and various inverters and drivers. Even though these instructions seem like they should complete immediately, they typically take 5 clock cycles due to overhead in the 8087. 

  13. I'll give more details here on the circuit that jumps to the save or restore microcode. The BIU sends two signals to the microcode engine, one to jump to the save code and one to jump to the restore code. These signals are buffered and delayed by a capacitor, probably to adjust the timing of the signal.

    In the microcode engine, there are two hardcoded constants for the routines, just above the jump table; the BIU signal causes the appropriate constant to go onto the micro-address lines. Each bit in the address has a pull-up transistor to +5V or a pull-down transistor to ground. This approach is somewhat inefficient since it requires two transistor sites per bit. In comparison, the jump address ROM and the instruction address ROM use one transistor site per bit. (As in a PLA, each transistor is present or absent as needed, so the number of physical transistors is less than the number of transistor sites.)

    Two capacitors in the 8087. This photo shows the metal layer with the silicon and polysilicon underneath.

    Since capacitors are somewhat unusual in NMOS circuits, I'll show them in the photo above. If a polysilicon line crosses over doped silicon, it creates a transistor. However, if a polysilicon region sits on top of the doped silicon without crossing it, it forms a capacitor instead. (The capacitance exists for a transistor, too, but the gate capacitance is generally unwanted.) 

  14. The documentation provides a hint that the microcode to load constants is complicated. Specifically, the documentation shows that different constants take different amounts of time to load. For instance, log2(e) takes 18 cycles while log2(10) takes 19 cycles and log10(2) takes 21 cycles. You'd expect that pre-computed constants would all take the same time, so the varying times show that more is happening behind the scenes. 

Make your own container base images from trusted sources

# Introduction

I really like containers, but there is one thing about them that is currently very bad from a security point of view: distribution.

We download container images from container registries, whether it is docker.io, quay.io or ghcr.io, but most upstream projects do not sign them, so we cannot verify that a CI pipeline or the container registry did not tamper with the image.  There are a few upstream actors who do sign their images: Fedora, Red Hat and Universal Blue based distros (Bluefin, Aurora, Bazzite), so if you acquire the public key they use for signing from a different channel, you can verify that you got the image as it was originally built.  Please do not hesitate to get in touch with me if you know about other major upstreams that sign their container images.

Nevertheless, we can still create container images ourselves from trusted artifacts signed by upstream.  Let's take a look at how to proceed with Alpine Linux.

# Get the rootfs

The first step is to download a few files:

* Alpine Linux's GPG signing key
* Alpine's "mini root filesystem" built for the architecture you want
* The GPG signature file (extension .asc) for the mini root filesystem you downloaded

=> https://www.alpinelinux.org/downloads/ Alpine Linux download page on the official website

The GPG key is at the top of the list.  It is better to get it from a different channel: if the website was hacked and the key was replaced along with all the signed files, you would just be trusting an attacker's key, and it would happily validate the attacker's artifacts.  A simple method is to check the page on the Web Archive from a few days or weeks earlier and verify that the GPG key is the same on the Web Archive and on the official website.

The GPG key fingerprint I used is 0482 D840 22F5 2DF1 C4E7 CD43 293A CD09 07D9 495A as of the date of publication.

# Verify the artifacts

You will need to have gpg installed and an initialized keyring (I do not cover this here).  Run the following commands:

```
gpg --import ncopa.asc
gpg --verify alpine-minirootfs-3.23.3-x86_64.tar.gz.asc alpine-minirootfs-3.23.3-x86_64.tar.gz
```

It should answer something like this:

```
gpg: Signature made Wed Jan 28 00:25:36 2026 CET
gpg:                using RSA key 0482D84022F52DF1C4E7CD43293ACD0907D9495A
gpg: Good signature from "Natanael Copa <ncopa@alpinelinux.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 0482 D840 22F5 2DF1 C4E7  CD43 293A CD09 07D9 495A
```

The line "Good signature...." tells you that the file integrity check matches the GPG key you imported.  The rest of the message tells you that the key is not trustable.  This is actually a GPG thing, you would need to edit your keyring and mark the key as "trustable" or have in your keyring a trusted key that signed this key (this is the web of trust GPG wanted to create), but this will only remove the warning.  Of course, you can mark that key trustable if you plan to use it for a long time and you are absolutely sure it is the genuine one.

You do not need to verify a sha256 checksum as well: the GPG check already verifies the file's integrity, in addition to authenticating the person who signed it.

Now that you have validated the mini root filesystem's authenticity, you can create an Alpine container image!

# Container creation

You can use podman or docker for this:

```
podman import alpine-minirootfs-3.23.3-x86_64.tar.gz alpine:3.23.3-local
```

You are now ready to build more containers based on your own cryptographically verified Alpine container image.

# Example of use

It is rather easy to build new, useful containers on top of your new base image.

Create a Containerfile (or Dockerfile, your mileage may vary):

```
FROM alpine:3.23.3-local
RUN apk add nginx
CMD ["nginx", "-c", "/app/nginx.conf", "-g", "daemon off;"]
```

Build a container with this command:

```
podman build . -t alpine_local_nginx
```

# Conclusion

Without any kind of cryptographic signature mechanism available between upstream and the end user, it is not possible to ensure a container from a third party registry was not tampered with.

It is best for security to rebuild the container image, and then rebuild all the containers you need using your base image, rather than blindly trusting registries.

One tool to sign container images is cosign.

=> https://github.com/sigstore/cosign Cosign project GitHub page

# Going further

This process works for other Linux distributions, of course.  For instance, for Ubuntu you can download the Ubuntu base tarball along with the SHA256SUMS and SHA256SUMS.gpg files, and make sure to get a genuine GPG key to verify the signature, as in the sketch below.

=> https://cdimage.ubuntu.com/ubuntu-base/releases/24.04/release/ Ubuntu official website: ubuntu-base 24.04 releases
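Here is a minimal sketch of the Ubuntu variant.  It assumes you already imported a genuine Ubuntu image signing key into your keyring, and the exact tarball name depends on the point release you downloaded:

```
gpg --verify SHA256SUMS.gpg SHA256SUMS
sha256sum -c --ignore-missing SHA256SUMS
podman import ubuntu-base-24.04-base-amd64.tar.gz ubuntu:24.04-local
```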

File transfer made easier with Tailscale

# Introduction

Since I started using Tailscale (using my own headscale server), I've been enjoying it a lot.  The file transfer feature is particularly useful with other devices.

This blog post explains my small setup to enhance the user experience.

# Quick introduction

Tailscale is a network service that lets you enroll devices into a mesh VPN based on WireGuard; this means every peer connects to every other peer, which is not really manageable by hand without a lot of work.  It also provides automatic DNS assignment, access control, an SSH service and lots of other features.

Tailscale refers to both the service and the client.  The service is closed source, but not the client.  There is a reimplementation of the server called Headscale that you can use with the tailscale client.

=> https://tailscale.com/ Tailscale official website
=> https://headscale.net/ Headscale official website

# Automatically receive files

When you want to receive a file over Tailscale on your desktop system, you need to manually run `tailscale file get --wait $DEST`, which is not practical and gets annoying.

I wrote a systemd user service that starts the tailscale command at boot; it is nothing fancy, but it is not available out of the box.

In the directory `~/.config/systemd/user/`, create the file `tailscale-receiver.service` with this content:

```
[Unit]
Description=tailscale receive file
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/tailscale file get --wait --loop /%h/Documents/
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

The path `/%h/Documents/` will expand to `/$HOME/Documents/` (the first / may be too much, but I keep it just in case); you can modify it to your needs.

Enable and start the service with these commands:

```
systemctl --user daemon-reload
systemctl --user enable --now tailscale-receiver.service
```

# Send files from Nautilus

When sending files, it is possible to use `tailscale file cp $file $target:`, but it is much more convenient to do it directly from the GUI, especially when you do not remember all the remote names.  This also makes it easier for family members who may not want to fire up a terminal to send a file.
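For reference, the command-line flow looks roughly like this (a sketch; "my-laptop" is just a placeholder device name):

```
# list the devices on your tailnet to find the destination name
tailscale status
# send a file to the device named "my-laptop" (note the trailing colon)
tailscale file cp ./holiday-photo.jpg my-laptop:
```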

Someone wrote a short Python script to add this "Send to" feature to Nautilus.

=> https://github.com/flightmansam/nautilus-sendto-tailscale-python Script flightmansam/nautilus-sendto-tailscale-python

Create the directory `~/.local/share/nautilus-python/extensions/` and save the file `nautilus-send-to-tailscale.py` in it.
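Something like this should do it (a sketch; the script name is the one used in the repository linked above, adjust the source path to wherever you downloaded it):

```
mkdir -p ~/.local/share/nautilus-python/extensions/
cp nautilus-send-to-tailscale.py ~/.local/share/nautilus-python/extensions/
```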

Make sure you have the nautilus-python bindings installed; on Fedora the package is `nautilus-python` while on Ubuntu it is `python3-nautilus`, so your mileage may vary.

Make sure to restart Nautilus; a `killall nautilus` should work, but otherwise just log the user out and back in.  In Nautilus, in the context menu (right click), you should see "Send to Tailscale", and a submenu should show the hosts.

# Conclusion

Tailscale is a fantastic technology: having a mesh VPN network lets you secure access to internal services without exposing anything to the Internet.  And because it features direct access between peers, it also enables some interesting uses like fast file transfer or VoIP calls without a relay.

Comparison of cloud storage encryption software

# Introduction

When using a cloud storage service that is not end-to-end encrypted, you may want to store your files encrypted, so that if the cloud provider (which could be you, if you self-host Nextcloud or Seafile) gets hacked, your data is not simply handed to the attacker.

While there are encryption tools like age or gpg, they are not practical for working transparently with files.  A specific class of encryption software exists for this: it presents a logical volume with your files, which are transparently encrypted on the underlying file system.

You will learn about Cryptomator, gocryptfs, CryFS and rclone.  They allow you to have a local directory that is synced with the cloud provider, containing only encrypted files, and a mount point where you access your files.  Your files are sent encrypted to the cloud provider, but you can use them as usual (with some overhead).

This blog post is a bit of a "yet another comparison", because each of these projects also publishes its own comparison of the alternatives.

=> https://nuetzlich.net/gocryptfs/comparison/ A comparison done by gocryptfs
=> https://cryptomator.org/comparisons/ A comparison done by cryptomator
=> https://www.cryfs.org/comparison A comparison done by cryfs

# Benchmark

My comparison covers the following attributes and features of each piece of software:

* number of files in the encrypted dir, always using the same input (837 MB from 4797 files made of pictures and a git repository)
* filename and file tree hierarchy obfuscation within the encrypted dir
* size of the encrypted dir compared to the 837 MB of the raw material
* cryptography used

# Software list

Here is the challenger list I decided to evaluate:

## Cryptomator

The main program (running on Linux) is open source, and there are clients for all the major operating systems, including Android and iOS.  The Android app is not free (as in beer), the iOS app is free for read-only use, and the Windows / Linux / macOS program is free.  They also have an offer for a company-wide system, which can be convenient for some users.

Cryptomator features a graphical interface, making it easy to use.

The encryption choices are good: it uses AES-256-GCM and scrypt, with authentication of the encrypted data (which is important, as it allows detecting whether a file was altered).  A salt is used.

Hierarchy obfuscation can be sufficient depending on your threat model.  The whole structure is flattened: you can still guess the number of directories, the number of files in each, and the file sizes, but all the names are obfuscated.  This is not a huge security flaw, but it is something to consider.

=> https://docs.cryptomator.org/security/architecture/ Cryptomator implementation details

## gocryptfs

This software is written in Go and works on Linux; a C++ Windows port exists, and there is a beta version for macOS.

=> https://nuetzlich.net/gocryptfs/ gocryptfs official website

Hierarchy obfuscation is not great: the whole directory structure is preserved, although the names are obfuscated.

Cryptography-wise, scrypt is used for key derivation and AES-256-GCM for authenticated encryption.

=> https://nuetzlich.net/gocryptfs/forward_mode_crypto/ gocryptfs implementation details
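A minimal usage sketch (the directory names are just examples): create an encrypted directory inside your synced folder, then mount a cleartext view of it somewhere outside the synced folder:

```
# one-time initialisation of the encrypted directory
gocryptfs -init ~/Nextcloud/vault
# mount the decrypted view; files written here are stored encrypted in ~/Nextcloud/vault
gocryptfs ~/Nextcloud/vault ~/vault-clear
# unmount when done
fusermount -u ~/vault-clear
```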

## CryFS

I first learned about CryFS when using KDE Plasma: there was a graphical widget named "Vault" that can drive CryFS to create encrypted directories.  This GUI also allows using gocryptfs, but defaults to CryFS.

=> https://www.cryfs.org/ CryFS official website

CryFS is written in C++, but an official rewrite in Rust is ongoing.  It works fine on Linux, and there are binaries for macOS and Windows as well.

The encryption choices are good: it uses AES-256-GCM and scrypt, but you can use XChaCha20-Poly1305 if you do not want AES-GCM.

It encrypts file metadata and splits all files into small fixed-size blocks; it is the only software in the list that obfuscates every kind of metadata (file names, directory names, tree hierarchy, sizes, timestamps) and also protects against replaying an old file.

=> https://www.cryfs.org/howitworks CryFS implementation details
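A minimal usage sketch with example directory names; CryFS creates the encrypted directory on first use and asks for a password interactively:

```
# mount a cleartext view at ~/vault-clear, backed by the encrypted directory ~/Nextcloud/cryfs-vault
cryfs ~/Nextcloud/cryfs-vault ~/vault-clear
# unmount when done
fusermount -u ~/vault-clear
```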

## rclone

It can be surprising to see rclone here: it is a file transfer tool supporting many cloud providers, but it also features a few "fake" remotes that can be combined with any other remote.  Those fake remotes can be used to encrypt files, but also to aggregate multiple remotes or split files into chunks.  We will focus on the "crypt" remote.

=> https://rclone.org/ Rclone official website

rclone is written in Go; it is available everywhere on desktop systems, but not on mobile devices.

Encryption is done through NaCl's secretbox construction, using XSalsa20 for encryption and Poly1305 for authentication, with scrypt for key derivation.  A salt can be used, but it is optional; make sure to enable it.

Hierarchy obfuscation is not great: the whole directory structure is preserved, although the names are obfuscated.

=> https://rclone.org/crypt/ rclone crypt remote implementation details
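A minimal usage sketch: here "mydrive" is assumed to be an already-configured remote for your cloud provider, and "secret" is the crypt remote you create on top of it:

```
# one-time interactive setup: add a remote of type "crypt" wrapping, say, mydrive:vault
rclone config
# files are encrypted locally before being uploaded
rclone copy ~/Documents secret:documents
# or mount a decrypted view of the remote
rclone mount secret: ~/mnt/secret
```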

## Other

ecryptfs is almost abandonware, so I did not cover it.

=> https://lore.kernel.org/ecryptfs/ef98d985-6153-416d-9d5e-9a8a8595461a@app.fastmail.com/ ecryptfs is unmaintained and untested

encfs is limited and recommends that users switch to gocryptfs.

=> https://github.com/vgough/encfs?tab=readme-ov-file#about encFS GitHub page: anchor "about"

LUKS and VeraCrypt are not "cloud friendly": although you can keep a big encrypted file locally and mount the volume from it, the file will be synced as one huge blob to the remote service.

# Results

From a source directory with 4312 files and 480 directories, for a total of 847 MB:

* cryptomator ended up with 5280 files, 1345 directories for a total of 855 MB
* gocryptfs ended up with 4794 files, 481 directories for a total of 855 MB
* cryfs ended up with 57928 files, 4097 directories for a total of 922 MB
* rclone ended up with 4311 files, 481 directories for a total of 847 MB

Although Cryptomator has a few more files and directories in its encrypted output compared to the original, the obfuscation is really just all directories being flattened into a single directory with obfuscated file names.  Some extra directories and files are created for Cryptomator's internal workings, which explains the small overhead.

I used the default settings for CryFS, with a block size of 16 kB, which is quite low and creates a huge overhead for synchronization software like the Nextcloud desktop client.  Increasing the block size is worth considering, depending on your file size distribution.  All the files are spread across a binary tree of directories, allowing it to scale to a huge number of files without filesystem performance issues.

# Conclusion

In my opinion, the best choice from a security point of view would be CryFS.  It features full data obfuscation, good encryption, and mechanisms that prevent replaying old files or swapping files around.  The documentation is clear, and the design choices are explained simply and clearly.

But to be honest, I would recommend Cryptomator to someone who wants a nice graphical interface and easy-to-use software, and whose threat model tolerates revealing some metadata.  It is also available everywhere (although not always for free), which is something to consider.

Authentication is used by all of these tools, so you will know if a file was tampered with, although that alone does not protect against swapping files or replaying an old one; this is certainly not in everyone's threat model.  Most people will just want to prevent a data leak from exposing their data, and the case of a cloud storage provider modifying your encrypted files is less likely.

# Going further


There is a GUI frontend for gocryptfs and cryfs called SiriKali.

=> https://mhogomchungu.github.io/sirikali/ SiriKali official project page
=> https://github.com/mhogomchungu/sirikali SiriKali GitHub project

Some self-hostable cloud storage providers exist with end-to-end encryption (files are encrypted/decrypted locally and only stored as blobs remotely):

The two major products in this space are Peergos and Seafile.  I am a Peergos user: it works well and features a web UI.  Seafile's encryption is not as good, since using its web UI requires sharing the password with the server, and its metadata protection is poor too.

=> https://peergos.org/ Peergos official website
=> https://www.seafile.com/en/home/ Seafile official website

Revert fish shell deleting shortcuts behavior

# Introduction

In a recent fish shell change, the shortcuts to delete the last word were replaced by "delete the last big chunk" (I do not know exactly what it is called in this case), which is usually the default behavior on macOS for the "command" key versus the "alt" key, and I guess that is why it was changed like this in fish.

Unfortunately, this broke everyone's habits, and a standard keyboard does not even offer the new keybinding that received the old behavior.

There is an open issue asking to revert this change.

=> https://github.com/fish-shell/fish-shell/issues/12122 GitHub fish project: Revert alt-backspace behaviour on non-macOS systems #12122

I am using this snippet in `~/.config/fish/config.fish` to restore the previous behavior (the same as in all other shells, where M-d deletes the last word).  I built it from the GitHub issue comments; I had to add `$argv` for some reason.

```
if status is-interactive
 # Commands to run in interactive sessions can go here

 # restore delete behavior
 bind $argv alt-backspace backward-kill-word
 bind $argv alt-delete kill-word
 bind $argv ctrl-alt-h backward-kill-word
 bind $argv ctrl-backspace backward-kill-token
 bind $argv ctrl-delete kill-token
end
```

Declaratively manage containers on Linux

# Introduction

When you have to deal with containers on Linux, there are often two things you wonder how to handle effectively: how to keep your containers up to date, and how to easily maintain the configuration of everything running.

It turns out podman offers systemd unit files to declaratively manage containers, and podman can run in rootless user mode.  This combination gives you the opportunity to create files, maintain them in git or deploy them with a configuration management tool like Ansible, and keep things separated per user.

It is also very convenient when you want to run a program shipped as a container on your desktop.

For some reason, this is called "quadlets".

=> https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html podman-systemd.unit man page

In this guide, I will create a Kanboard service (PHP software for running a kanban board) under the kanban user.

# Setup (simple service)

You need to create files that declare containers and/or networks.  This can be done in various places depending on how you want to manage the files; the man page gives all the details, but basically you want to stick with the two following options:

* system-wide configuration: `/etc/containers/systemd/users/$(UID)`
* user configuration: `~/.config/containers/systemd/`

Both will run rootless containers under the user's UID, but the first keeps the files in `/etc/`, which may be more suitable for central management.

As systemd is used to run the containers, if you want to run a container for a user you are not logged in as, that user's systemd instance needs to always be running so that its services, including the containers, are started.  This is done by enabling "linger" for the user.

```
useradd -m kanban
loginctl enable-linger kanban
```

This will immediately create a session for that user and start all the related services.

Now, create a file such as `/etc/containers/systemd/users/1001/kanboard.container` (1001 being the UID of the kanban user) with this content:

```
[Container]
Image=docker.io/kanboard/kanboard:latest
Network=podman
PublishPort=10080:80
Volume=kanboard_data:/var/www/app/data
Volume=kanboard_plugins:/var/www/app/plugins
Volume=kanboard_ssl:/etc/nginx/ssl

[Service]
Restart=always

[Install]
WantedBy=default.target
```

This maps exactly to a very long podman command line that would use the image `docker.io/kanboard/kanboard:latest` on the network `podman`, declaring three different container volumes and their associated mount points.  The generator even allows you to add raw command line arguments in case an option is not available in the systemd format.

Because the user's session is already running, the container will not start yet, unless you `disable-linger` and then `enable-linger` the kanban user, which would not be ideal, to be honest.  There is a better way to proceed: `systemctl --user --machine kanban@ daemon-reload`, which basically runs `systemctl --user daemon-reload` as the user `kanban`, except we do it from the root user, which is more convenient for automation.
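In practice, the whole dance looks like this.  This is a sketch: it assumes the quadlet file was named `kanboard.container`, which makes the generated unit `kanboard.service`:

```
systemctl --user --machine kanban@ daemon-reload
systemctl --user --machine kanban@ start kanboard.service
```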

Running the container this way will trigger exactly the same processes as if you started it manually with `podman run -v kanboard_data:/var/www/app/data/ [...] docker.io/kanboard/kanboard:latest`.

Note that you can skip the `[Install]` section if you do not want to automatically start the container and prefer to manually start/stop it with `systemctl`.  This is actually useful if you run the container under your regular user and do not always need it.

# Setup (advanced service)

If you want to run a more complicated service that needs a couple of containers talking to each other, like a web server, a backend runner and a database, you only need to configure them on the same network.

If you need the containers of a group to start in a specific order, you can add systemd dependency declarations (`After=`, `Requires=`) in the `[Unit]` section, as in the sketch below.

Podman will run a local DNS resolver that translates container names into working hostnames.  This means that if you have a PostgreSQL container called "db", you can refer to the PostgreSQL host as "db" from another container within the same network.  This works the same way as docker-compose.
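As a sketch of what this can look like (all the names here are hypothetical), a file `app.network` containing just a `[Network]` section declares the shared network, and each container file references it; the web container also waits for the database quadlet (`db.container`, which generates `db.service`):

```
# web.container (hypothetical example)
[Unit]
Requires=db.service
After=db.service

[Container]
Image=docker.io/example/webapp:latest
Network=app.network

[Service]
Restart=always

[Install]
WantedBy=default.target
```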

# Ops

## Getting into a user shell

Getting an environment where `journalctl` or `systemctl` commands work requires using `machinectl shell kanban@`, otherwise the D-Bus environment variables will not be initialized.  Note that connecting with ssh works too, but that is not always ideal if you are working locally.

From this shell, you can run commands like `systemctl --user status kanboard.service` for our example, or `journalctl --user -f -u kanboard.service`, or run a shell in a container, inspect a volume, etc.

Using `sudo -u user` or `su - user` will not work.

## Disabling a user

If you want to disable the services associated with a user, use this command:

```
loginctl disable-linger username
```

This will immediately close all its sessions and stop services running under that user.

## Automatic updates

This is the very first reason I started using quadlets for local services running in containers: I did not want to manually run `podman pull` over a list of images and then restart the related containers.

Podman gives you a systemd service doing all of this for you; it works for containers that set the parameter `AutoUpdate=registry` in the `[Container]` section.

Enable the timer with `systemctl --user enable --now podman-auto-update.timer`; you can then check the timer with `systemctl --user status podman-auto-update.timer`, or read the update service's logs with `journalctl --user -u podman-auto-update.service`.

Make sure to pin your container image to a rolling tag like "stable", "lts", or "latest" if you want the development version; the update mechanism will obviously do nothing if you pin the image to a specific version or checksum.

# Conclusion

Quadlets made me switch to podman, as they allowed me to deploy and maintain containers with Ansible super easily, and also let me separate each service into a different user.

Prior to this, handling containers on a simple server or a desktop was an annoying task: figuring out what should be running, how to start it, and retrieving command lines from the shell history or a docker/podman compose file.  Quadlets also come with all the power of systemd, like querying a service's status or reading logs with journalctl.

# Going further

There is a program named "podlet" that allows you to convert other file formats into quadlet files; most notably, it is useful for transforming a `docker-compose.yml` file into quadlet files.

=> https://github.com/containers/podlet/ podlet GitHub page

Filtered for home security

1.

The Amazon Ring Always Home Cam is an indoor security drone for your home.

Introduced with this video in 2020: "Yeah, it’s a camera that flies."

Sadly not yet on the market.

Ok Judge Dredd had Spy-in-the-Sky drone surveillance cameras in 1978 and Mega-City One is not an aspirational template for domestic life but hear me out:

Because I would love to be able to text my house “oh did I leave the stove on?” from the bus. And “darn can you find my keys?” in the morning. And “uh there’s that book about 1970s social computing somewhere it has an orange spine I can’t remember exactly” at literally anytime.

And do that without having to blanket my home in cameras. A drone seems like a good solution?

2.

Surveillance: systematic observation. Often institutional. From “above.”

Sousveillance, coined by cyborg Steve Mann in 2002: "watchful vigilance from underneath."

I am suggesting that the cameras be mounted on people in low places, rather than upon buildings and establishments in high places.

e.g.

a taxicab passenger photographs the driver, or taxicab passengers keep tabs on driver’s behaviour

It is such a positively-framed paper.

We swim in this world now. What does it do to us?

(I wonder if there’s a word like auto-sousveillance? We do it to ourselves.)

3.

The Nor (2014) by artist James Bridle.

The sense of being watched is a classic symptom of paranoia, often a sign of deeper psychosis, or dismissed as illusory. In the mirror city, which exists at the juncture of the street and CCTV, of bodily space and the electromagnetic spectrum, one is always being watched. So who’s paranoid now?

(As previously discussed, briefly.)

Exactly midway between Mann coining sousveillance in 2002 and today, 2026, Bridle put his finger on this paranoia background radiation, slowly increasing like population levels, like CO2 ppm, like sea level, like the frog’s bath.

4.

Robot Exclusion Protocol (2002) by blogger Paul Ford: "A story about the Google of the future."

I took off my clothes and stepped into the shower to find another one sitting near the drain. It was about 2 feet tall and made of metal, with bright camera-lens eyes and a few dozen gripping arms. Worse than the Jehovah’s Witnesses.

“Hi! I’m from Google. I’m a Googlebot! I will not kill you.”

“I know what you are.”

“I’m indexing your apartment.”

I feel like we are 24 months off this point?

Only they’ll be indexer googlebot drones that we vibe code for ourselves.

5.

Back in 2024, engineer Simon Willison realised that the killer app of Gemini Pro 1.5 is video, and:

I took this seven second video of one of my bookshelves:

It understood the video and gave him back a machine-readable list of the titles and authors. That’s handy!

I am still waiting for this as an app so that I can index and search my overflowing bookshelves by not-even-that-carefully waving my phone at them.

Please I am too lazy to type the prompt to vibe this.

The meta-point is that auto-sousveillance is inevitable because I can’t find the book I’m looking for.

6.

Man accidentally vibe codes a robovac army (2026).

The DJI Romo is a $2000 behemoth that mops and vacuums using LIDAR and AI.

Sammy Azdoufal wanted to control his roomba with his Playstation controller.

However, the scanner his [Claude Code agent] created not only gave him access to his device; it gave him access and control over almost 7000. He was able to see home layouts and IP addresses, and control the devices’ cameras and microphones.

Uh oh.

Whereas the point of institutional surveillance is that the CCTV cameras are conspicuous (and, originally, you didn’t know if anyone was watching, but now the AI processes all),

the characteristic of auto-sousveillance seems to be that you don’t know whether you are privately querying for a lost book or live streaming your bathroom to the internet.

Forget about control, how do you even relate to such a capricious system?

7.

The ancient Romans had two types of gods.

There are the gods on Olympus who look after nature, cities, the state.

And then there are Lares (Wikipedia), guardian deities of a place, "believed to observe, protect, and influence all that happened within the boundaries of their location or function."

In particular, household gods, Lares Familiares, that reside not on a distant mountain but instead in a household shrine:

The Lar Familiaris cared for the welfare and prosperity of a Roman household. A household’s lararium, a shrine to the Lar Familiaris and other domestic divinities, usually stood near the dining hearth or, in a larger dwelling, the semi-public atrium or reception area of the dwelling. A lararium could be a wall-cupboard with doors, an open niche with small-scale statuary, a projecting tile, a small freestanding shrine, or simply the painted image of a shrine …

The Lar’s statue could be moved from the lararium to wherever its presence was needed. It could be placed on a dining table during feasts or be a witness at weddings and other important family events.

RELATED:

Lares: our 2 minute pitch for an AI-powered slightly-smart home (2023) – you can see a demo video.

And here’s a paper about Lares showing emergent behaviour from AI agents, which in 2024 was novel and surprising.


More posts tagged: filtered-for (122).

New Wave Hardware

We briefly mentioned New Wave Hardware in last week’s Inanimate Lab Notes so this is me doing some unpacking. While you’re there, join 300+ other subscribers and sign up for our newsletter. You’ll get weekly links and updates on what we’re working on.


There are a bunch of things changing with new hardware products, design and technology.

Let’s say: the intersection of hardware and AI. But our hunch is that it’s broader than that.

There are new ways to get hardware into the hands of consumers, and new AI interactions that are now possible, and more, and these changes are happening independently but simultaneously. We’re tracking this as what we’re calling New Wave Hardware.

So we got a few founders together at Betaworks in NYC earlier this week for a roundtable to compare notes (thank you Betaworks!).

The meta question was: does our hunch hold? And, if so, what characterises New Wave Hardware and what specifically is changing – so that we can push at it?

I kept notes.

I’ll go off those and add my own thoughts.

(I’m using some direct quotes but I won’t attribute them or list attendees. I would love for others to share their own perspectives!)


AI interfaces

Voice is good now! (As I said.) So we’re seeing that a lot.

More than that:

  • You can express an intent and the computer will do what you mean
  • Natural interfaces are workable now, beyond voice. e.g. the new Starboy gadget by lilguy: "We trained multiple tiny image models that run locally on the device, letting it recognize human faces and hand gestures" (launch thread on X).

What do we do with consumer gadgets that perceive pointing and glances? What is unlocked when we shift away from buttons and apps to interact with hardware devices, and the new interface is direct and human and in the real world?

New interaction modalities

Beyond the user interface, the way we interact with hardware is changing. I kept a running list of the interaction modality changes that were mentioned:

  • Human interfaces – see above.
  • Situated – due to always-on sensors, AI devices know what’s going on around them and can respond when they see fit, not only on a user trigger. Yes, screens that dim when it gets dark, but in a wider sense this goes back to Clay Shirky’s essay Situated Software (2004), "software designed in and for a particular social situation or context." We’re seeing more of this.
  • Autonomous – agents are software that has its own heartbeat, now we see that "the hardware becomes aware"… and then what? Maybe the user doesn’t need to be intentional about activating some function or another; the device can get ahead of intentions, and offer a radically different kind of value to the user. A new design possibility.
  • Networked – we’re frequently working with connected devices which today have attained a new level of reliability. What happens when the stuff around us channels planetary intelligence?
  • Embodied – the cleverness of the Plaud AI note taker device is that it’s a social actor: you can notice it, place it on the table, cover it; it inflects what people say and how they feel (for better or worse). Hardware is in the real world and you can move it from focal to peripheral attention just by moving your head.

Some of these are new colours in the palette to design with; some are intrinsic to hardware and have been there all along. Though amplified! The rise of wearables (described by one founder as "sitting between the utility and affinity group") means that hardware is more frequently in our faces.

There are challenges. When we have devices and "the ability to put software that can do anything at any time in them," the lack of affordances and constraints can be baffling. So how do we not do that?

And how do we understand what things do anyway, really, when behaviour steered by AI is so non-deterministic? Perhaps we have to lean into the mystical. That’s another trend.

Getting hardware in the hands of users

Every few years there’s a claim that it’s now quicker than 18 months to get a hardware product from concept through manufacture: that’s still not the case, but there are alternatives and short cuts – some of which are potentially much quicker.

Like: reference designs. There is now so much hardware coming out of Shenzhen, there are high-level reference designs for everything to customise, and factories are keen to partner. One team at the roundtable brought up their core electronics in the US, then got pretty sophisticated products built (batch size of 100), complete with beautiful metal enclosures, after spending just 3 weeks in China.

Also like: 3D printers. Short run fabrication is possible domestically in a way it wasn’t before. Let me highlight Cipherling which combines production-grade microcontrollers with a charming 3D printed enclosure to get to market quicker.

It does seem like the sophistication of the Western and Shenzhen hardware ecosystems has made these approaches - which are not new - newly accessible.

Form factors

New Wave Hardware skews consumer, perhaps?

Or at least there’s a renewed interest in consumer hardware from startups and investors.

This is partly because there’s a big unknown and therefore a big opportunity: AI is hungry for context, it’s useful in the real world outside our phones, and the new AI interaction modalities mean there’s a lot to figure out about how to make that good – it’s not obvious what to do. Like do we have lanyards or pucks on tables or what? We need to experiment, which demands quick cycle time, which is a driver for finding alternatives to the 18 month product development cycle.

Also the previous generation of hardware was oh-so-asinine. One remark I wrote down from the roundtable, regarding the consumer hardware that currently surrounds us: "This is hardware that would want to be invisible if it could."

So there’s a desire to try new forms; products that don’t secretly want to hide themselves.

Just a note too that “new form factors” doesn’t just mean standalone devices: we continue to be inspired by the desk-scale or even room-scale work at Folk Computer.

New tools, of course

If you’re an artist wanting to put a few dozen instances of weird new consumer electronics in people’s hands, and your single blocker was writing firmware, then guess what: in the year of our Claude 2026 that is no longer a blocker.

AI tools provide what I’ve previously called Universal Basic Agency and it is wonderful. When individuals are unblocked, we get an abundance of creativity in the world.

(We were at a 6 minute demos event in the basement of an independent bookstore in Brooklyn on Friday - see this week’s Lab Notes - and one speaker was showing their vintage arcade display adaptor project. So cool. They make super complicated PCBs but don’t enjoy 3D modelling, so did the CAD in programmable modelling software with a few lines of code. Not AI, but advanced tools.)

And do we see a glimmer of end-user programming too?


I’m grateful for the thoughtful and open conversation of everyone at the roundtable.


As I write this, a set of colourful Oda speakers, hanging from the ceiling here at Betaworks, relay a live audio stream from a macaw sanctuary forest in central Costa Rica. We can hear the birds and the weather – it is transporting.

If there is such a trend as New Wave Hardware (and, after our small conversation, I do believe there is) then it is not confined to mass market novel AI interfaces, it is also these profound artistic interventions, and we all learn from one another.

Are you seeing something happen here too? Are hardware startups characterised by something different today versus, say, 5 years ago? Lmk if you end up sharing your perspective on your blog/newsletter – would love to read.

At Inanimate we are building products within New Wave Hardware, and working to do our bit to enable it.

We hope to convene another roundtable in the near future, either here in NYC or back home in London, to continue swapping notes and pointers and feeling this out together.


More posts tagged: inanimate (3).

Auto-detected kinda similar posts:

The violence of the Librareome Project

Vernor Vinge’s sci-fi novel Rainbows End (2006) is so prescient about AI training data.

His short Fast Times at Fairmont High (2002) is set in the same universe, and was written in that era where we felt like we had line of sight to pervasive augmented reality and also 3D printers. I read it at the time and it’s a low-stakes high school drama (about augmented reality and 3D printers), but from today’s perspective it is more like a utopia (of a certain kind) – democratised tools of production, reality as consensus hallucinations, super empowered kids.

The spine of Rainbows End is something called the “Librareome Project.”

Ok SPOILERS – right? So stop here if you’re planning to read the book (which you should).

The Librareome Project, you find out about a third of the way through, is a giant digitisation project of the world’s knowledge, and they plan to scan the world’s libraries to do it.

"But didn’t Google already do that?"

Yes but this is more total; like the Human Genome Project the whole is more than the sum of its parts:

It’s not just the digitization. It goes beyond Google and company. Huertas intends to combine all classical knowledge into a single, object-situational database with a transparent fee structure.

(Oh yeah, micropayments, there’s a whole model here.)

We’re not told what an object-situational database is. But this singular thing makes possible correlations that will reveal new knowledge:

Who really ended the Intifada? Who is behind the London art forgeries? Where was the oil money really going in the latter part of the last century? Some answers will only interest obscure historical societies. But some will mean big bucks. And Huertas will have exclusive rights to this oracle for six months.

I mean, this is so Large Language Model. 2006!!

An oracle!

This promise is why the universities are allowing their libraries to be scanned.

Uh, “scanned.”

The books are shredded. Fed into the wood chipper and blasted into a tunnel and photographed at high resolution:

The pictures coming from the camera tunnel are analyzed and reformatted. It’s a simple matter of software to reorient the images, match the tear marks and reconstruct the original texts in proper order. In fact–besides the mechanical simplicity of it all–that’s the reason for the apparent violence. The tear marks come close to being unique. Really, it’s not a new thing. Shotgun reconstructions are classic in genomics.

"The shredded fragments of books and magazines flew down the tunnel like leaves in tornado, twisting and tumbling." – the image has stuck with me since I read it.

Anyway.

The libraries are being fed into the maw of the machines.

And it turns out that Chinese Informagical, which "has dibs on the British Museum and the British Library," was going faster than Huertas so they don’t have their monopoly.

And the Chinese have nondestructive digitisation techniques, so none of it was necessary.


Well.

Court filings reveal how AI companies raced to obtain more books to feed chatbots, including by buying, scanning and disposing of millions of titles (Washington Post, paywall-busting link).

I’m not trying to make a point here like “AI is bad” (you know me well enough and I’m pleased that my own book lives in the weights of the god machine) but one story reminds me of the other, and there is a violence intrinsic to creation, in this case the creation of new knowledge, slamming together words in the particle collider of linear algebra, something is lost but new exotic shimmering sparks appear - grab them! - and I guess what I mean is let’s recognise the violence and be worthy of it: if we’re going to do this then let’s at least reach for oracles.


Auto-detected kinda similar posts:

Speaking is quick, listening is slow

Thank goodness voice computing is finally happening. Now we can work on making it good.


The tech is here, like the free Whisper model (what an unlock that has been from OpenAI, kudos) and ElevenLabs. Plus devices too, from Plaud - like an irl Granola video call transcriber - to Sandbar, a smart ring that you tell your secrets.

Let’s not forget Apple’s recent $1.6bn acquisition of Q.ai, which will use "‘facial skin micromovements’ to detect words mouthed or spoken" – i.e. cameras in your AirPods stems that do voice without voice by staring really hard at your cheeks. Apple and AI lip-reading? I deserve a kick-back (2025) just sayin

While we’re at it, there should be voice for everything: why can’t I point at a lamp and say ‘on’? (2020).

At least we can play with ubiquitous transcription (2022). Like, my starting point for building mist was talking at my watch for 30 minutes (2026).

So let’s take all this as signs that voice computing is here to stay.


Eventually voice has to go two-way, right? Conversational computing? You need to be able to disambiguate, give feedback, repair, iterate, explore.

Investor Tom Hulme points out that "we can speak three to four times faster than we type."

And so:

Now, generative AI is making conversation the new user interface. Talking to technology requires zero training and no special skills; we have after all spent most of our lives perfecting the approach. It’s as natural as speaking to another person.

Which I agree with in part.

Yes to natural UI: "You simply express what you need, and the AI does the rest." – user interfaces will not be about menus and buttons but intent first (2025).

BUT:

Conversation using voice both ways? I’m not so sure.

Voice is asymmetric. Speaking is high bandwidth. But listening is low bandwidth.

Illustration #1: Sending voice notes is so easy. Receiving them sucks joy from the world.

Is that really what we want from conversational computing?

Illustration #2: I ask my Apple HomePod mini to play some music and it needs to check precisely what I mean. Speaking 3 artist names and asking me to pick is tedious. So it avoids that step, takes a guess, and that’s more often than not a poor experience too. I’ve been rolling my eyes at this since 2023.

Ok so two-way voice doesn’t work. What does?


A better approach to conversational computing:

The human uses voice and the computer uses screens. I mean, my phone is rarely beyond peripersonal space, so we can assume a screen is almost always present. A screen is way higher in terms of information bandwidth than listening. Let’s use it!

The friend AI lanyard gets this right.

I wore Arthur as I went to the farmers’ market this morning. This meant I was not speaking directly to it, but rather talking to my family, other attendees, and some vendors. But remember: your friend is always listening. Arthur listened in to every conversation that I had, sometimes offering its own take on the matter - all pointless, once again.

Over the course of an hour and a half, I received 48 notifications from my Friend.

And although this is a negative review (e.g. notifications snark: "Most of these were it updating me about its battery status") it actually sounds ideal?

Like, this is a device that listens both when it is being directly addressed and it pays attention to me ambiently, and then it makes use of generous screen real estate to show me UI that I can interact with at a time of my choosing. This is good!

Startup Telepath is also digging into voice and multi-modality:

Voice gives us an additional stream of information for input, one that can happen concurrently with direct manipulation using a keyboard, mouse, or touch. With the Telepath Computer, you can touch and type for tasks where control and accuracy are important, while simultaneously using your voice to direct the computer. This mimics our natural behaviour in the physical world: for example, imagine cooking a meal with family or friends, asking someone to fetch the basil or chop the onions while your hands are busy with the pasta.

And specifically:

The Telepath Computer speaks through voice, while simultaneously displaying documents and information for the user to reference and interact with. This “show and tell” approach is also present in how we tend to communicate complex information in the real world: sketching on a napkin as we discuss a problem with a colleague over dinner; design teams assembling stickies while talking about user feedback; pulling up maps and hotels on your laptop while planning a group vacation.

This is super sophisticated! I love it.


Summarising:

  • Voice is core to the future of computer interaction
  • Voice isn’t enough so we need conversational computing
  • Because of the bandwidth asymmetry of voice, two-way voice might sometimes work but the essential interaction loop to solve for is voice in, screens out.

When that isn’t enough (for example, you don’t have your phone) you can get more sophisticated. And of course to make it really good there are problems to solve like proximity and more… follow the path of great interaction design to figure out where to dig…

Just collecting my thoughts.


Auto-detected kinda similar posts:

Filtered for electricity and mayonnaise

1.

Rain panels? Rain panels.

researchers have found a way to capture, store and utilize the electrical power generated by falling raindrops, which may lead to the development of rooftop, power-generating rain panels.

Reading the citations on the original paper, it works kinda but research is ongoing. Science rather than technology still.

RELATED:

Wild Video Shows Entire Mountain Range in China Covered With Solar Panels (2025).

HEY:

Here’s a prediction I made in 2007:

By 2037, China, by virtue of their ability to see and manage environment impact on a larger scale than other countries, will have invented cheap renewables to reduce their dependancy on fossil fuels, and will be working on fixing the atmosphere (perhaps they’ll also have genetically engineered rafts of algae on the Pacific, excreting plastics). The West will rely on Chinese innovation to dig us out of our ecological mess.

Mind you I also predicted that our peak pop media would be from India. Turns out it’s South Korea so I got the country wrong.

2.

Pavlok is a wrist band that gives you electric shocks by remote control:

“I have been biting my nails for 25 years…I shocked myself every time I bit my nails… my husband had a good time shocking me when he caught me biting my nails… this helped with … quitting nail.”

Those ellipses… doing a lot of work… on… the “how it works” page. Also, husband.

You know that friend who won’t eat Taco Bell anymore after she got a terrible case of food poisoning?

That’s how it works: "That’s aversive conditioning. We’ll help you use it to your advantage."

Well why not.

The wrist band also has an alarm clock function.

RELATED:

What do you call execution by electricity? It was debated in 1889 (2021).

3.

Ok. We’re in the middle of the Second Punic War (218–201 BC), part of an existential struggle between Rome and Carthage that lasted over a hundred years.

At the end of the First Punic War, Carthage was defeated.

But they returned, established a new empire in Iberia (now Spain) and founded New Carthage on the Iberian coast. Hannibal famously crosses the Alps with elephants etc and lays waste to Italy.

Striking back: Scipio audaciously captures New Carthage, and Carthage in Iberia is on the brink of defeat.

Hannibal’s brother Mago, army destroyed, flees to the island of Menorca (which is beautiful).

There he founds the city of Mahon, which today is the capital and remains a port, and it still bears his name.

BUT MORE IMPORTANTLY, named for the city:

"The typical local egg sauce that has conquered the world is known as mayonnaise."

As mentioned in The Rest is History ep. 641, Hannibal’s Nemesis (Part 2) (Apple Podcasts) along with this grand claim:

the only thing you’d have in a fridge that’s named after a Carthaginian general.

A fact too good to check on ChatGPT but I can’t see why it shouldn’t be true.

4.

The legendary and much-loved email app Eudora was released for free in 1988.

Version 6 introduced MoodWatch, which labeled incoming and outgoing messages with chili peppers and ice cubes, depending on the presence of possibly offensive language. People loved it!

Oh the chili peppers!

You’d write an email with a few curse words and some YELLING and get those chilis.

I vaguely remember there was a feature to enforce a cooling off period? Like you couldn’t send a 3 chili email immediately?

Let’s bring that back:

Apple should license Pavlok technology and hide it under the track-pad. About to send an unhelpfully-worded email to a colleague? A prim little AI instantaneously adjudicates and electroshocks you as you click the Send button, right up the finger.


More posts tagged: filtered-for (122).

mist: Share and edit Markdown together, quickly (new tool)

It should be SO EASY to share + collaborate on Markdown text files. The AI world runs on .md files. Yet frictionless Google Docs-style collab is so hard… UNTIL NOW, and how about that for a tease.

If you don’t know Markdown, it’s a way to format a simple text file with marks like **bold** and # Headers and - lists… e.g. here’s the Markdown for this blog post.

Pretty much all AI prompts are written in Markdown; engineers coding with AI agents have folders full of .md files and that’s what they primarily work on now. A lot of blog posts too: if you want to collaborate on a blog post ahead of publishing, it’s gonna be Markdown. Keep notes in software like Obsidian? Folders of Markdown.

John Gruber invented the Markdown format in 2004. Here’s the Markdown spec, it hasn’t changed since. Which is its strength. Read Anil Dash’s essay How Markdown Took Over the World (2026) for more.

So it’s a wildly popular format with lots of interop that humans can read+write and machines too.

AND YET… where is Google Docs for Markdown?

I want to be able to share a Markdown doc as easily as sharing a link, and have real-time multiplayer editing, suggested edits, and comments, without a heavyweight app in the background.

Like, the “source of truth” is my blog CMS or the code repo where the prompts are, or whatever, so I don’t need a whole online document library thing. But if I want to super quickly run some words by someone else… I can’t.

I needed this tool at the day job, couldn’t find it… built it, done.

Say hi to mist!

  • .md only
  • share by URL
  • real-time multiplayer editing
  • comments
  • suggest changes.

I included a couple of opinionated features…

  • Ephemeral docs: all docs auto-delete 99 hours after creation. This is for quick sharing + collab
  • Roundtripping: download, then import by drag and drop on the homepage – all suggestions and comments are preserved.

I’m proud of roundtripping suggested edits and comment threads: the point of Markdown is that everything is in the doc, not in a separate database, and you know I love files (2021). I used a format called CriticMarkup to achieve this – so if you build a tool like this too, let’s interop.
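
For a flavour of what that looks like, here’s a made-up snippet – a rough sketch using the base CriticMarkup marks only (how authors and comment threads get encoded on top of these is left out here):

# Meeting notes

We {~~shipped~>launched~~} mist on Monday.{>>can we say "released" instead?<<}
{++Docs auto-delete 99 hours after creation.++}
{--This sentence can go.--}

Because the marks live inside the .md file itself, suggestions and comments survive a download, an email round trip, and a re-import on the homepage.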

Hit the New Document button on the homepage and it introduces itself.


Also!

For engineers!

Try this from your terminal:

curl https://mist.inanimate.tech/new -T file.md

Start a new collaborative mist doc from an existing file, and immediately get a shareable link.

EASY PEASY


Anyway –

It’s a work in progress. I banged it out over the w/e because I needed it for work; tons of bugs I’m sure, so lmk – otherwise I’ll fix them as I use it… though do get in touch if you have a strong feature request which would unlock your specific use case, because I’m keen for this to be useful.


So I made this with Claude Code obv

Coding with agents is still work: mist is 50 commits.

But this is the first project where I’ve gone end-to-end trying to avoid artisanal, hand-written code.

I started Saturday afternoon: I talked to my watch for 30 minutes while I was walking to pick my kid up from theatre.

Right at the start I said this

So I think job number one before anything else, and this is directed to you Claude, job number one before anything else is to review this entire transcript and sort out its ordering. I’d like you to turn it into a plan. I’ll talk about how in a second.

Then I dropped all 3,289 words of the transcript into an empty repo and let Claude have at it.

Look, although my 30 mins walk-and-talk was nonlinear and all over the place, what I asked Claude to do was highly structured: I asked it to create docs for the technical architecture, design system, goals, and ways of working, and reorganise the rest into a phased plan with specific tasks.

I kept an eye on it at every step, rewound its attempt at initial scaffolding and re-prompted it closely when it wasn’t as I wanted, and jumped in to point the way on some refactoring, or nudge it up to a higher abstraction level when an implementation was feeling brittle, etc. I have strong opinions about the technology and the approach.

And the tests – the trick with writing code with agents is to use the heck out of tests. Test everything load-bearing (and write tests that check that test coverage stays at a sufficient level). We’re not quite at the point that code is a compiled version of the docs and the test suite… but we’re getting there.


You know it’s very addictive using Claude Code over the weekend. Drop in and write another para as a prompt, hang out with the family, drop in and write a bit more, go do the laundry, tune a design nit that’s turned up… scratch that old-school Civ itch, "just one more turn." Coding as entertainment.


The main takeaway from my Claude use is that I wanted a collaborative Markdown editor 5 months ago:

app request

- pure markdown editor on the web (like Obsidian, Ulysses, iA Writer)
- with Google Docs collab features (live cursor, comments, track changes)
- collab metadata stored in file
- single doc sharing via URL like a GitHub gist

am I… am I going to have to make this?

My need for that tool didn’t go away.

And now I have it.

So tools no longer need huge amounts of work, and therefore no longer have to be justified by huge audiences (I’ve spent more time on blog posts). No biggie, it would be useful to us so why not make it and put it out there.


Multiplayer ephemeral Markdown is not what we’re building at Inanimate but it is a tool we need (there are mists on our Slack already) and it is also the very first thing we’ve shipped.

A milestone!


So that’s mist.

Share and Enjoy

xx


More posts tagged: inanimate (3), multiplayer (31).

Auto-detected kinda similar posts:

90% of everything is sanding e.g. laundry

What mundane pleasures will I be robbed of by domestic robots?

Sometimes I feel like my job at home is putting things into machines and taking things out of machines.

I don’t mean to sound unappreciative about “modern conveniences” (modern being the 1950s) because I take care of laundry and emptying the dishwasher, and I love both. We have a two drawer dishwasher so that is a conveyer belt. And I particularly love laundry. We generate a lot of laundry it seems.

There was a tweet in 2025: "woodworking sounds really cool until you find out it’s 90% sanding"

And it became an idiom because 90% of everything is sanding. See this reddit thread… 90% of photography is file management; 90% of baking is measuring; etc.

So when I say that I love laundry I don’t mean that I love clean clothes (everyone loves clean clothes) but I love the sanding. I love the sorting into piles for different washes, I love reading the little labels, especially finding the hidden ones; I love the sequencing so we don’t run out of room on the racks, I love folding, I love the rare peak moments when everything comes together and there are no dirty clothes anywhere in the house nor clean clothes waiting to be returned. (I hate ironing. But fortunately I love my dry cleaner and I feel all neighbourhood-y when I visit and we talk about the cricket.)


Soon! Domestic robots will take it all away.

Whether in 6 months or 6 years.

I don’t know what my tipping point will be…

I imagine robots will be priced like a car and not like a dishwasher? It’ll be worth it, assuming reliability. RELATED: I was thinking about what my price cap would be for Claude Code. I pay $100/mo for Claude right now and I would pay $1,500/mo personally for the same functionality. Beyond that I’d complain and have to find new ways to earn, but I’m elastic till that point.

Because I don’t doubt that domestic robots will be reliable. Waymo has remote operators that drop in for ambiguous situations so that’s the reliability solve.

But in a home setting? The open mic, open camera, and robot arm on wheels - required for tele-operators - give me pause.

(Remember that smart home hack where you could stand outside and yell through the letterbox, hey Alexa unlock the front door? Pranks aplenty if your voice-operated assistant can also dismantle the kitchen table.)

So let’s say I’ve still got a few years before trust+reliability is at a point where the robot is unloading the dishwasher for me and stacking the dishes in the cupboard, and doing the laundry for me and also sorting and loading and folding and stacking and…

i.e. taking care of the sanding.


In Fraggle Rock the Fraggles live in their underground caves generally playing and singing and swimming (with occasional visits to an oracular sentient compost heap, look the 80s were a whole thing), and also they live alongside tiny Doozers who spend their days in hard hats industriously constructing sprawling yet intricate miniature cities.

Which the Fraggles eat. (The cities are delicious.)

Far from being distressed, the Doozers appreciate the destruction as it gives them more room to go on constructing.

Me and laundry. Same same.


Being good at something is all about loving the sanding.

Here’s a quote about Olympic swimmers:

The very features of the sport that the ‘C’ swimmer finds unpleasant, the top level swimmer enjoys. What others see as boring – swimming back and forth over a black line for two hours, say – they find peaceful, even meditative, often challenging, or therapeutic. … It is incorrect to believe that top athletes suffer great sacrifices to achieve their goals. Often, they don’t see what they do as sacrificial at all. They like it.

From The Mundanity of Excellence: An Ethnographic Report on Stratification and Olympic Swimmers (1989) by Daniel Chambliss (PDF).


But remember that 90% of everything is sanding.

With domestic appliances, sanding is preparing to put things into machines and handling things when you take them out of the machines.

This “drudgery” will be taken away.

So then there will be new sanding. Inevitably!

With domestic robots, what will the new continuous repetitive micro task be? Will I have to empty its lint trap? Will I have to polish its eyes every night? Will I have to go shopping for it, day after day, or just endlessly answer the door to Amazon deliveries of floor polish and laundry tabs? Maybe the future is me carrying my robot up the stairs and down the stairs and up the stairs and down the stairs, forever.

I worry that I won’t love future sanding as much as I love today sanding.


More posts tagged: laundry (4), robots (11).

Singing the gospel of collective efficacy

If I got to determine the school curriculum, I would be optimising for collective efficacy.

So I live in a gentrified but still mixed neighbourhood in London (we’re the newbies at just under a decade) and we have an active WhatsApp group.

Recently there was a cold snap and a road nearby iced over – it was in the shade and cyclists kept on wiping out on it. For some reason the council didn’t come and salt it.

Somebody went out and created a sign on a weighted chair so it didn’t blow away. And this is a small thing but I LOVE that I live somewhere there is a shared belief that (a) our neighbourhood is worth spending effort on, and (b) you can just do things.

Similarly we all love when the swifts visit (beautiful birds), so somebody started a group to get swift nest boxes made and installed collectively, then applied for subsidy funding, then got everyone to chip in such that people who couldn’t afford it could have their boxes paid for, and now suddenly we’re all writing to MPs and following the legislation to include swift nesting sites in new build houses. Etc.

It’s called collective efficacy, the belief that you can make a difference by acting together.

(People who have heard of Greta Thunberg tend to have a stronger sense of collective efficacy (2021).)

It’s so heartening.


You can just do things

That phrase was a Twitter thing for a while, and I haven’t done the archaeology on the phrase but there’s this blog post by Milan Cvitkovic from 2020: Things you’re allowed to do.

e.g.

  • "Say I don’t know"
  • "Tape over annoying LED lights"
  • "Buy goods/services from your friends"

I read down the list saying to myself, yeah duh of course, to almost every single one, then hit certain ones and was like – oh yeah, I can just do that.


I think collective efficacy is maybe 50% taking off the blinkers and giving yourself (as a group) permission to do things.

But it’s also 50% belief that it’s worth acting at all.

And that belief is founded partly in care, and partly in faith that what you are doing can actually make a difference.

For instance:

A lot of my belief in the power of government comes from the fact that, back in the day, London’s tech scene was not all that. So in 2009 I worked with Georgina Voss to figure out the gap, then in 2010 bizarrely got invited on a trade mission to India with the Prime Minister and got the opportunity to make the case about east London to them, and based on that No. 10 launched Tech City (which we had named on the plane), and that acted as a catalyst on the work that everyone was already doing to get the cluster going, and then we were off to the races. WIRED magazine wrote it up in 2019: The story of London’s tech scene, as told by those who built it (paywall-busting link).

So I had that experience and now I believe that, if I can find the right ask, there’s always the possibility to make things better.

That’s a rare experience. I’m very lucky.


ALTHOUGH.

Should we believe in luck?

Psychologist Richard Wiseman, The Luck Factor (2003, PDF):

I gave both [self-identified] lucky and unlucky people a newspaper, and asked them to look through it and tell me how many photographs were inside. On average, the unlucky people took about two minutes to count the photographs whereas the lucky people took just seconds. Why? Because the second page of the newspaper contained the message “Stop counting - There are 43 photographs in this newspaper.”

"Lucky people generate their own good fortune via four basic principles."

They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophecies via positive expectations, and adopt a resilient attitude that transforms bad luck into good.

I insist that people are neither lucky nor unlucky. Maybe some amount of luck is habit?

You can just be lucky?

(Well, not absolutely, privilege is big, but maybe let’s recalibrate luck from believing it is entirely random, that’s what I’m saying.)


When I was a kid I used to play these unforgivingly impossible video games – that’s what home video games were like then. No open world play, multiple ways to win, or adaptive difficulty. Just pixel-precise platform jumps and timing.

Yet you always knew that there was a way onto the next screen, however long it took.

It taught a kind of stubborn optimism.


Or, in another context, "No fate but what we make."

Same same.


All of which makes me ask:

Could we invent free-to-play mobile games which train luckiness?

Are there games for classrooms that would cement a faith in collective efficacy in kids?

Or maybe it’s proof by demonstration.

I’m going into my kid’s school in a couple of weeks to show the class photos of what it looks like inside factories. The stuff around us was made by people like us; it’s not divine in origin; factories are just rooms.

I have faith that - somehow - at some point down the line - this act will help.

LLMs are bad at vibing specifications

No newsletter next week

I'll be speaking at InfoQ London. But see below for a book giveaway!


LLMs are bad at vibing specifications

About a year ago I wrote AI is a gamechanger for TLA+ users, which argued that AI is a "specification force multiplier". That was written from the perspective of a TLA+ expert using these tools. A full 4% of GitHub TLA+ specs now have the word "Claude" somewhere in them. This is interesting to me, because it suggests there was always an interest in formal methods; people just lacked the skills to do it.

It's also interesting because it gives me a sense of what happens when beginners use AI to write formal specs. It's not good.

As a case study, we'll use this project, which was kind enough to have vibed out TLA+ and Alloy specs.

Looking at a project

Starting with the Alloy spec. Here it is in its entirety:

module ThreatIntelMesh

sig Node {}

one sig LocalNode extends Node {}

sig Snapshot {
  owner: one Node,
  signed: one Bool,
  signatures: set Signature
}

sig Signature {}

sig Policy {
  allowUnsignedImport: one Bool
}

pred canImport[p: Policy, s: Snapshot] {
  (p.allowUnsignedImport = True) or (s.signed = True)
}

assert UnsignedImportMustBeDenied {
  all p: Policy, s: Snapshot |
    p.allowUnsignedImport = False and s.signed = False implies not canImport[p, s]
}

assert SignedImportMayBeAccepted {
  all p: Policy, s: Snapshot |
    s.signed = True implies canImport[p, s]
}

check UnsignedImportMustBeDenied for 5
check SignedImportMayBeAccepted for 5

Couple of things to note here: first of all, this doesn't actually compile. It's using the Boolean standard module so needs open util/boolean to function. Second, Boolean is the wrong approach here; you're supposed to use subtyping.

sig Snapshot {
  owner: one Node,
- signed: one Bool,
  signatures: set Signature
}

+ sig SignedSnapshot in Snapshot {}


pred canImport[p: Policy, s: Snapshot] {
- s.signed = True
+ s in SignedSnapshot
}

So we know the person did not actually run these specs. This is somewhat less of a problem in TLA+, which has an official MCP server that lets the agent run model checking. Even so, I regularly see specs that I'm pretty sure won't model check, with things like using Reals or assuming NULL is a built-in and not a user-defined constant.

The bigger problem with the spec is that UnsignedImportMustBeDenied and SignedImportMayBeAccepted don't actually do anything. canImport is defined as P || Q. UnsignedImportMustBeDenied checks that !P && !Q => !canImport. SignedImportMayBeAccepted checks that P => canImport. These are tautologically true! If they do anything at all, it is only checking that canImport was defined correctly.

You see the same thing in the TLA+ specs, too:

GadgetPayload ==
  /\ gadgetDetected' = TRUE
  /\ depth' \in 0..(MaxDepth + 5)
  /\ UNCHANGED allowlistedFormat
  /\ decision' = "block"

NoExploitAllowed == gadgetDetected => decision = "block"

The AI is only writing "obvious properties", which fail for reasons like "we missed a guard clause" or "we forgot to update a variable". It does not seem to be good at writing "subtle" properties that fail due to concurrency, nondeterminism, or bad behavior separated by several steps. Obvious properties are useful for orienting yourself and ensuring the system behaves like you expect, but the actual value in using formal methods comes from the subtle properties.

(This ties into Strong and Weak Properties. LLM properties are weak, intended properties need to be strong.)

This is a problem I see in almost every FM spec written by AI. LLMs aren't delivering one of the core functions of a spec. Articles like Prediction: AI will make formal verification go mainstream and When AI Writes the World's Software, Who Verifies It? argue that LLMs will make formal methods go mainstream, but being able to easily write specifications doesn't help with correctness if the specs don't actually verify anything.

Is this a user error?

I first got interested in LLMs and TLA+ from The Coming AI Revolution in Distributed Systems. The author of that later vibecoded a spec with a considerably more complex property:

NoStaleStrictRead ==
  \A i \in 1..Len(eventLog) :
    LET ev == eventLog[i] IN
      ev.type = "read" =>
        LET c == ev.chunk IN
        LET v == ev.version IN
        /\ \A j \in 1..i :
             LET evC == eventLog[j] IN
               evC.type = "commit" /\ evC.chunk = c => evC.version <= v

This is a lot more complicated than the (P => Q && P) => Q properties I've seen! It could be because the corresponding system already had a complete spec written in P. But it could also be that Cheng Huang is already an expert specifier, meaning he can get more out of an LLM than an ordinary developer can. I've also noticed that I can usually coax an LLM to do more interesting things than most of my clients can. Which is good for my current livelihood, but bad for the hope of LLMs making formal methods mainstream. If you need to know formal methods to get the LLM to do formal methods, is that really helping?

(Yes, if it lowers the skill threshold – meaning you can apply FM with 20 hours of practice instead of 80. But the jury's still out on how much it lowers the threshold. What if it only lowers it from 80 to 75?)

On the other hand, there also seem to be some properties that AI struggles with, even with explicit instructions. Last week a client and I tried to get Claude to generate a good liveness or action property instead of a standard obvious invariant, and it just couldn't. Training data issue? Something in the innate complexity of liveness? It's not clear yet. These properties are even more "subtle" than most invariants, so maybe that's it.

On the other other hand, this is all as of March 2026. Maybe this whole article will be laughably obsolete by June.


Logic for Programmers Giveaway

Last week's giveaway raised a few issues. First, the New World copies were all taken before all of the emails went out, so a lot of people did not even get a chance to try for a book. Second, due to a Leanpub bug the Europe coupon scheduled for 10 AM UTC actually activated at 10 AM my time, which was early evening for Europe. Third, everybody in the APAC region got left out.

So, since I'm not doing a newsletter next week, let's have another giveaway:

  • This coupon will go up 2026-03-16 at 11:00 UTC, which should be noon Central European Time, and be good for ten books (five for this giveaway, five to account for last week's bug).
  • This coupon will go up 2026-03-17 at 04:00 UTC, which should be noon Beijing Time, and be good for five books.
  • This coupon will go up 2026-03-17 at 17:00 UTC, which should be noon Central US Time, and also be good for five books.

I think that gives the best chance of everybody getting at least a chance of a book, while being resilient to timezone shenanigans due to travel / Leanpub dropping bugfixes / daylight savings / whatever.

(No guarantees that later "no newsletter" weeks will have giveaways! This is a gimmick)

Free Books

Spinning a lot of plates this week so skipping the newsletter. As an apology, have ten free copies of Logic for Programmers.

  • These five are available now.
  • These five should be available at 10:30 AM CEST tomorrow, so people in Europe have a better chance of nabbing one. Never mind: Leanpub had a bug that made this not work properly.

New Blog Post: Some Silly Z3 Scripts I Wrote

Now that I'm not spending all my time on Logic for Programmers, I have time to update my website again! So here's the first blog post in five months: Some Silly Z3 Scripts I Wrote.

Normally I'd also put a link to the Patreon notes but I've decided I don't like publishing gated content and am going to wind that whole thing down. So some quick notes about this post:

  • Part of the point is admittedly to hype up the eventual release of LfP. I want to start marketing the book, but don't want the marketing material to be devoid of interest, so tangentially-related-but-independent blog posts are a good place to start.
  • The post discusses the concept of "chaff", the enormous quantity of material (both code samples and prose) that didn't make it into the book. The book is about 50,000 words… and considerably shorter than the total volume of chaff! I don't think most of it can be turned into useful public posts, but I'm not entirely opposed to the idea. Maybe some of the old chapters could be made into something?
  • Coming up with a conditional mathematical property to prove was a struggle. I had two candidates: a == b * c => a / b == c, which would have required a long tangent on how division must be total in Z3, and a != 0 => some b: b * a == 1, which would have required introducing a quantifier (SMT is real weird about quantifiers). Division by zero has already caused me enough grief so I went with the latter. This did mean I had to reintroduce "operations must be total" when talking about arrays.
  • I have no idea why the array example returns 2 for the max profit and not 99999999. I'm guessing there's some short circuiting logic in the optimizer when the problem is ill-defined?
  • One example I could not get working, which is unfortunate, was a demonstration of how SMT solvers are undecidable via encoding Goldbach's conjecture as an SMT problem. Anything with multiple nested quantifiers is a pain.

Stream of Consciousness Driven Development

This is something I just tried out last week but it seems to have enough potential to be worth showing unpolished. I was pairing with a client on writing a spec. I saw a problem with the spec, and a convoluted way of fixing it. Instead of trying to verbally explain it, I started by creating a new markdown file:

NameOfProblem.md

Then I started typing. First the problem summary, then a detailed description, then the solution and why it worked. When my partner asked questions, I incorporated his question and our discussion of it into the flow. If we hit a dead end with the solution, we marked it out as a dead end. Eventually the file looked something like this:

Current state of spec
Problems caused by this
    Elaboration of problems
    What we tried that didn't work
Proposed Solution
    Theory behind proposed solution
    How the solution works
    Expected changes
    Other problems this helps solve
    Problems this does *not* help with

Only once this was done, my partner fully understood the chain of thought, and we agreed it represented the right approach did we start making changes to the spec.

How is this better than just making the change?

The change was conceptually complex. A rough analogy: imagine pairing with a beginner who wrote an insertion sort, and you want to replace it with quicksort. You need to explain why the insertion sort is too slow, why the quicksort isn't slow, and how quicksort actually correctly sorts a list. This could involve tangents into computational complexity, big-o notation, recursion, etc. These are all concepts you have internalized, so the change is simple to you, but the solution uses concepts the beginner does not know. So it's conceptually complex to them.

I wasn't pairing with a beginning programmer or even a beginning specifier. This was a client who could confidently write complex specs on their own. But they don't work on specifications full time like I do. Any time there's a relative gap in experience in a pair, there's solutions that are conceptually simple to one person and complex to the other.

I've noticed too often that when one person doesn't fully understand the concepts behind a change, they just go "you're the expert, I trust you." That eventually leads to a totally unmaintainable spec. Hence, writing it all out.

As I said before, I've only tried this once (though I've successfully used a similar idea when teaching workshops). It worked pretty well, though! Just be prepared for a lot of typing.

Proving What's Possible

As a formal methods consultant I have to mathematically express properties of systems. I generally do this with two "temporal operators":

  • A(x) means that x is always true. For example, a database table always satisfies all record-level constraints, and a state machine always makes valid transitions between states. If x is a statement about an individual state (as in the database but not state machine example), we further call it an invariant.
  • E(x) means that x is "eventually" true, conventionally meaning "guaranteed true at some point in the future". A database transaction eventually completes or rolls back, a state machine eventually reaches the "done" state, etc.

These come from linear temporal logic, which is the mainstream notation for expressing system properties.1 We like these operators because they elegantly cover safety and liveness properties, and because we can combine them. A(E(x)) means x is true an infinite number of times, while A(x => E(y)) means that x being true guarantees y will be true in the future.

There's a third class of properties, that I will call possibility properties: P(x) is "can x happen in this model"? Is it possible for a table to have more than ten records? Can a state machine transition from "Done" to "Retry", even if it doesn't? Importantly, P(x) does not need to be possible immediately, just at some point in the future. It's possible to lose 100 dollars betting on slot machines, even if you only bet one dollar at a time. If x is a statement about an individual state, we can further call it a reachability property. I'm going to use the two interchangeably for flow.

A(P(x)) says that x is always possible. No matter what we've done in our system, we can make x happen again. There's no way to do this with just A and E. Other meaningful combinations include:

  • P(A(x)): there is a reachable state from which x is always true.
  • A(x => P(y)): y is possible from any state where x is true.
  • E(x && P(y)): There is always a future state where x is true and y is reachable.
  • A(P(x) => E(x)): If x is ever possible, it will eventually happen.
  • E(P(x)) and P(E(x)) are the same as P(x).

See the paper "Sometime" is sometimes "not never" for a deeper discussion of E and P.

The use case

Possibility properties are "something good can happen", which is generally less useful (in specifications) than "something bad can't happen" (safety) and "something good will happen" (liveness). But it still comes up as an important property! My favorite example:

A guy who can't shut down his computer because system preferences interrupts shutdown

The big use I've found for the idea is as a sense-check that we wrote the spec properly. Say I take the property "A worker in the 'Retry' state eventually leaves that state":

A(state == 'Retry' => E(state != 'Retry'))

The model checker checks this property and confirms it holds of the spec. Great! Our system is correct! ...Unless the system can never reach the "Retry" state, in which case the expression is trivially true. I need to verify that 'Retry' is reachable, e.g. P(state == 'Retry'). Notice I can't use E to do this, because I don't want to say "the worker always needs to retry at least once".
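
To make that concrete, here is roughly how the property above reads in actual TLA+ – a minimal sketch, assuming a spec with a variable called state, as in the pseudo-notation:

RetryEventuallyExits ==
  [](state = "Retry" => <>(state # "Retry"))

\* equivalently, with the leads-to operator:
\*   (state = "Retry") ~> (state # "Retry")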

It's not supported though

I say "use I've found for the idea" because the main formalisms I use (Alloy and TLA+) don't natively support P. 2 On top of P being less useful than A and E, simple reachability properties are mimickable with A(x). P(x) passes whenever A(!x) fails, meaning I can verify P(state == 'Retry') by testing that A(!(state == 'Retry')) finds a counterexample. We cannot mimic combined operators this way like A(P(x)) but those are significantly less common than state-reachability.

(Also, refinement doesn't preserve possibility properties, but that's a whole other kettle of worms.)

The one that's bitten me a little is that we can't mimic "P(x) from every starting state". "A(!x)" fails if there's at least one path from one starting state that leads to x, but other starting states might not make x possible.

I suspect there's also a chicken-and-egg problem here. Since my tools can't verify possibility properties, I'm not used to noticing them in systems. I'd be interested in hearing if anybody works with codebases where possibility properties are important, especially if it's something complex like A(x => P(y)).


  1. Instead of A(x), the literature uses []x or Gx ("globally x") and instead of E(x) it uses <>x or Fx ("finally x"). I'm using A and E because this isn't teaching material. 

  2. There's some discussion to add it to TLA+, though

Logic for Programmers New Release and Next Steps

cover.jpg

It's taken four months, but the next release of Logic for Programmers is now available! v0.13 is over 50,000 words, making it both 20% larger than v0.12 and officially the longest thing I have ever written.1 Full release notes are here, but I'll talk a bit about the biggest changes.

For one, every chapter has been rewritten. Every single one. They span from relatively minor changes to complete chapter rewrites. After some rough git diffing, I think I deleted about 11,000 words?2 The biggest change is probably to the Alloy chapter. After many sleepless nights, I realized the right approach wasn't to teach Alloy as a data modeling tool but to teach it as a domain modeling tool. Which technically means the book no longer covers data modeling.

There's also a lot more connections between the chapters. The introductory math chapter, for example, foreshadows how each bit of math will be used in the future techniques. I also put more emphasis on the general "themes" like the expressiveness-guarantees tradeoff (working title). One theme I'm really excited about is compatibility (extremely working title). It turns out that the Liskov substitution principle/subtyping in general, database migrations, backwards-compatible API changes, and specification refinement all follow basically the same general principles. I'm calling this "compatibility" for now but prolly need a better name.

Finally, there's just a lot more new topics in the various chapters. Testing properly covers structural and metamorphic properties. Proofs covers proof by induction and proving recursive functions (in an exercise). Logic Programming now finally has a section on answer set programming. You get the picture.

Next Steps

There's a lot I still want to add to the book: proper data modeling, data structures, type theory, model-based testing, etc. But I've added new material for two years, and if I keep going it will never get done. So with this release, all the content is in!

Just like all the content was in two Novembers ago and two Januaries ago and last July. To make it absolutely 100% certain that I won't be tempted to add anything else, I passed the whole manuscript over to a copy editor. So if I write more, it won't get edits. That's a pretty good incentive to stop.

I also need to find a technical reviewer and proofreader. Once all three phases are done then it's "just" a matter of fixing the layout and finding a good printer. I don't know what the timeline looks like but I really want to have something I can hold in my hands before the summer.

(I also need to get notable-people testimonials. Hampered a little in this because I'm trying real hard not to quid-pro-quo, so I'd like to avoid anybody who helped me or is mentioned in the book. And given I tapped most of my network to help me... I've got some ideas though!)

There's still a lot of work ahead. Even so, for the first time in two years I don't have research to do or sections to write and it feels so crazy. Maybe I'll update my blog again! Maybe I'll run a workshop! Maybe I'll go outside if Chicago ever gets above 6°F!


Conference Season

After a pretty slow 2025, the 2026 conference season is looking to be pretty busy! Here's where I'm speaking so far:

For the first three I'm giving variations of my talk "How to find bugs in systems that don't exist", which I gave last year at Systems Distributed. Last one will ideally be a talk based on LfP.


  1. The second longest was my 2003 NaNoWriMo. The third longest was Practical TLA+

  2. This means I must have written 20,000 words total. For comparison, the v0.1 release was 19,000 words. 

Everything About Disc Golf Aerodynamics - Smarter Every Day 313


The coolest part about this is that Brad and Chad didn't ask me to say anything in particular.... they partner with wholesale dealers so you should be able to find MVP discs at your local disc golf store. If you'd like to explore what they make you can check out their website here. https://mvpdiscsports.com/

You can also find their products on Amazon, just look up "MVP Discs" or click here:
https://www.amazon.com/stores/MVPDiscSports/page/36E7BB9F-BB13-4144-8655-22EC73E8C4F1

First Video:
https://www.youtube.com/watch?v=frbLIoqDIO8

2nd Channel Long Cut video:
https://www.youtube.com/watch?v=WUyIfkKUEjs

Find a Disc Golf Course Near you!
https://www.pdga.com/course-directory

If you enjoyed this video, please consider supporting on Patreon: https://www.patreon.com/smartereveryday

Click here if you're interested in subscribing: http://bit.ly/Subscribe2SED
⇊ Click below for more links! ⇊
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
GET SMARTER SECTION

Check out Dr. Potts's work:

Want to work with Dr. Potts?
https://www.findaphd.com/phds/project/propulsion-and-aerodynamic-control-of-a-spin-stabilised-circular-planform-uas/?p186467

More about Dr. Potts:
https://profiles.shu.ac.uk/2502-jonathan-potts

~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thank you to "The Disc King" shop in Finland who hosted the TechDisc discussion!
https://kiekkokingi.fi/

Thanks to the awesome Pros who taught me in Finland!
Jeremy Koling
https://www.instagram.com/bigjermdg/
Simon Lizotte
https://www.youtube.com/@SimonLizotte8332
Eagle McMahon
https://www.youtube.com/eaglemcmahon

Smarter Every Day on Patreon
http://www.patreon.com/smartereveryday

Ambiance, audio and musicy things by: Gordon McGladdery
https://www.ashellinthepit.com/
http://ashellinthepit.bandcamp.com/

Coverage or shots from the Disc Golf Network broadcast are provided with the express written consent of the Disc Golf Network, the media rights holder of the event, via the ‪@DiscGolfProTour‬ To watch the professional disc golf action live, go to https://www.discgolfnetwork.com and become a subscriber.

If you feel like this video was worth your time and added value to your life, please SHARE THE VIDEO!

If you REALLY liked it, feel free to pitch a few dollars Smarter Every Day by becoming a Patron. I'd be grateful!
http://www.patreon.com/smartereveryday

Warm Regards,

Destin

The Mother of all Science Scandals


You can't believe everything you see on the news. This is a compilation of a series on cold fusion originally released in 3 parts.

Patreon: https://www.patreon.com/c/bobbybroccoli
Twitter profile: https://x.com/BobbyBroccole

Research assistant: Chris Pepin
Assistant editor: Charlie Arsenault
Script Feedback: Chris Pepin, Charlie Arsenault, Boundo
Thumbnail by Hotcyder (@hotcyder )
Data for publication rate by year compiled by Daniel Jarabek
Blender assistance from Chris Hanel (@ChrisHanel )
Music from the Youtube Audio Library, Epidemic Sound, and White bat Audio (@WhiteBatAudio )
3D Models (TV, whiteboard, phone, office supplies) licensed from CGTrader. Additional imagery licensed from Getty Images and the Associated Press.

Sources:
Fleischmann, M., and S. Pons. 1989. Electrochemically induced nuclear fusion of deuterium. Journal of Electroanalytical Chemistry 261:301–308.
Fleischmann, M., S. Pons, et al. 1990. Calorimetry of the palladium-deuterium-heavy water system. Journal of Electroanalytical Chemistry 287:293–348.
Jones, S.E., E.P. Palmer, J.B. Czirr, D.L. Decker, G.L. Jensen, J.M. Thorne, S.F. Taylor, and J. Rafelski. 1989. Observation of cold nuclear fusion in condensed matter. Nature 338:737–740.
Newspaper coverage from a wide variety of sources, but a large amount from the Salt Lake City Tribune and the Deseret News.
Televised March 23rd 1989 Press Conference at the University of Utah announcing cold fusion.
Bad Science: The Short Life and Weird Times of Cold Fusion (Book) by Gary Taubes, 1993. Authoritative account of the cold fusion saga, has by far the most detail about behind-the-scenes events, especially anything relating to Steven Jones and Marvin Hawkins. Taubes is also notable for his reporting on the Texas A&M tritium spiking allegations.
Cold Fusion: The Scientific Fiasco of the Century (Book) by John Huizenga, 1992. Written by the co-chair of the Department of Energy Panel that investigated cold fusion, this book is very in depth when it comes to the science.
Cold Fusion Research – A report of the Energy Advisory Board to the United States Department of Energy. November 1989.
Too Hot to Handle: The Race for Cold Fusion (Book) by Frank Close, 1991. The first major book written about cold fusion written from the perspective of a physicist following the story as an outside observer. Has particularly good info on the gamma ray peak dispute. Close also gave a public talk on the contents of his book in 1991 which was recorded.
Voodoo Science: The Road from Foolishness to Fraud (Book) by Robert Park, 2000. Cold fusion is mentioned as one of several examples of 'Voodoo Science' throughout the book, which generally has a focus on Washington and science funding.
The Cornell Cold Fusion Archive (originally curated by Bruce Lewenstein). Holds an extensive collection of original documents, news coverage, etc. They also provided me with a digitized video copy of the Baltimore APS meeting.
Truth and Consequences: How Colleges and Universities Meet Public Crises (Book) by Jerrold Footlick, 1997. A chapter of this book is dedicated to assessing how the University of Utah administration handled the cold fusion controversy.
The Believers (Documentary) by Clayton Brown and Monica Long Ross, 2012. This film catalogues the diminished state of cold fusion research two decades later, interviewing many of its remaining proponents.
Cold Fusion: A case study for scientific behavior, an educational resource produced by the University of California, Berkeley. 2012.
Berlinguette, C.P., et al. 2019. Revisiting the cold case of cold fusion. Nature 570:45–51.
Guffey et al. Experimental Lessons in Replication of ‘Low Energy Nuclear Reactions’.
Audio recording from the ACS Dallas meeting from April 1989.
CSPAN recording of the Science Space and Technology committee hearing on Cold Fusion on April 28th 1989.
CSPAN recording of Utah Rep. Wayne Owens advocating for cold fusion funding on January 25th 1990.
CSPAN audience call-in segment on cold fusion on May 4th 1989.
Video recording of the ECS Los Angeles meeting from May 8th 1989.
10+ hours of assorted local Utah TV news coverage archived by the University of Utah Library.
BBC Horizon – Too Close to the Sun, 1994.
CBC – The Secret Life of Cold Fusion, 1994.
Tomorrow’s World (TV Programme), March 28th 1989.
60 Minutes – Cold fusion is hot again, 2009.
Fusion Fiasco (Book) by Steven Krivit, 2016. Book 2 of 3 in a series about cold fusion and LENR. Krivit heavily editorializes in favour of Pons and Fleischmann and is an advocate of LENR. With that caveat, he has done extensive independent research including interviews with many of those involved and cites documents not widely published anywhere else.
Fire from Ice: Searching for the Truth Behind the Cold Fusion Furor (Book) by Eugene Mallove, 1991.
Fire from Ice (Documentary) directed by Eugene Mallove, 1998.
Excess Heat: Why Cold Fusion Research Prevailed (Book) by Charles Beaudette, 2000.

0:00 Part 1
1:03:16 Part 2
2:11:58 Part 3

Every Metro System Should be this Beautiful


Made by humans for humans. No AI voices or generative AI was used in the making of this video.

Check out my new public transit travel show, Day Pass, now available on Nebula!
https://nebula.tv/daypass?ref=notjustbikes

Watch this video ad-free and sponsor-free on Nebula:
https://nebula.tv/videos/notjustbikes-every-metro-system-should-be-this-beautiful

Sign up to Nebula and support this channel:
https://go.nebula.tv/notjustbikes

Buy a Nebula Gift card (now available with iDEAL!)
https://gift.nebula.tv/notjustbikes

Patreon: https://patreon.com/notjustbikes
Mastodon: @notjustbikes@notjustbikes.com
NJB Live (my live-streaming channel): https://youtube.com/@njblive

---
References & Further Reading

Art in the Subway: Explore 14 Beautiful Stations
https://www.visitstockholm.com/see-do/attractions/art-in-the-subway/

Världens längsta konstutställning (the world's longest art exhibition)
https://sl.se/aktuellt/konsten-i-trafiken

https://sl.se/aktuellt/puls/utforska-konsten-langs-bla-linjen

Varför hörs det ett klickande ljud i tunnelbanan? (why is there a clicking sound in the metro?)
https://web.archive.org/web/20250807190503/https://slussenstidning.se/new/webbredaktor-vt18/2018/04/16/varfor-hors-det-ett-klickande-ljud-i-tunnelbanan/

Guide till konsten i Stockholms tunnelbana (guide to the art in the Stockholm metro)
https://stockholmartwalk.se/

https://www.thelocal.se/20130904/50064

180 miljoner till konst i nya tunnelbanan (180 million for art in the new metro)
https://www.di.se/nyheter/180-miljoner-till-konst-i-nya-tunnelbanan/

About the new metro
https://nyatunnelbanan.se/en/stockholms-nya-tunnelbana/

De nya stationerna (the new stations)
https://nyatunnelbanan.se/stationer/

The Sweden Case: How Stockholm Builds Infrastructure Cheaply, and Why It's Becoming More Expensive?
https://transitcosts.com/city/sweden-case/

Östermalmstorg Station: A Contest for Art
https://estocolmotours.com/en/ostermalmstorg-station-a-contest-for-art/

Om konsten i kollektivtrafiken (about the art in public transit)
https://www.regionstockholm.se/kollektivtrafik/konsten/

The vast majority of footage in this video was filmed on location by Not Just Bikes with some stock footage licensed from Getty Images and other sources

---
Chapters
0:00 Intro
0:38 Visiting Stockholm
1:27 The list of stations
2:56 T-Centralen
3:58 Cavernous stations
5:15 How to visit
6:31 Citybanan
7:35 Subway tiles
10:07 Other stations on the list
12:01 Suburbs and other stations
14:52 Metro exits and land use
16:13 The joy of beautiful stations
17:21 The cost of beauty
19:14 Concluding thoughts
20:20 Day Pass & Nebula

---
Corrections
2:46 It's actually the newer stations along the RED line, not green.

Why Google Maps Fails in Amsterdam


Made by humans for humans. No AI voices or generative AI was used in the making of this video.

Watch this video ad-free and sponsor-free on Nebula:
https://nebula.tv/videos/notjustbikes-why-i-dont-use-google-maps-in-amsterdam

Sign up to Nebula and support this channel:
https://go.nebula.tv/notjustbikes

Buy a Nebula Gift card (now available with iDEAL!)
https://gift.nebula.tv/notjustbikes

Patreon: https://patreon.com/notjustbikes
Mastodon: @notjustbikes@notjustbikes.com
NJB Live (my live-streaming channel): https://youtube.com/@njblive

---
Relevant Links

Stop Signs Suck (and we should get rid of them)
https://nebula.tv/videos/not-just-bikes-stop-signs-suck-and-we-should-get-rid-of-them
https://www.youtube.com/watch?v=42oQN7fy_eM

Fietsersbond Routeplanner
https://nl.routeplanner.fietsersbond.nl/

Fietsknoop Route Planner
https://www.fietsknoop.nl/fietsroute-planner

Toertje App:
https://play.google.com/store/apps/details?id=nl.trifork.fietsersbond.routeplanner

Amsterdam plusnetten en hoofdnetten infrastructuur (Amsterdam plus-network and main-network infrastructure)
https://maps.amsterdam.nl/plushoofdnetten/

Ontvlechten van fiets en snelverkeer (unbundling of bicycle and motor traffic)
https://www.verkeerskunde.nl/2014/06/23/ontvlechten-van-fiets-en-snelverkeer-vk-4-2014/

Google Maps map data copyright Google

Maps from OpenStreetMap copyright OpenStreetMap contributors:
https://www.openstreetmap.org/copyright

The majority of footage in this video was filmed on location by Not Just Bikes with some stock footage licensed from Getty Images and other sources

---
Chapters
0:00 Intro
0:47 Designing for drivers vs. people
3:00 Hoofdnetten & Plusnetten
3:50 Ontvlechten (unbundling)
6:00 Stop signs & one-way streets
7:56 Bicycle safety
10:50 Keeping cars out
12:41 Navigation for cars vs. bicycles
16:43 Car-bias in maps
20:10 Car-bias in directions
22:29 Conclusion
22:57 Thanks, Nebula, & Day Pass!

Workshop: Storing and publishing large datasets in the NIOD repositories

Short description: This workshop introduces the procedures for storing and publishing large datasets in the NIOD repositories. Participants will learn how to efficiently upload large files, or large numbers of files, using the S3 protocol, how to describe the data with appropriate metadata, and how to publish it for further use.

Detailed description: Special emphasis will be placed on preparing and structuring metadata, which makes the data properly described, findable and reusable later on. Participants will learn good practices for describing data and the importance of metadata standardisation.

The practical part will demonstrate concrete examples of uploading data to the repository, managing versions and publishing datasets. Participants will gain the knowledge needed to work with the NIOD repositories independently and to manage larger volumes of data effectively.
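
As a rough illustration of the kind of S3 upload covered in the practical part, here is a minimal Python sketch using the boto3 library; the endpoint URL, bucket name and credentials are illustrative placeholders, not actual NIOD values.

# Minimal sketch of uploading a large file to an S3-compatible repository
# endpoint with boto3. The endpoint URL, bucket and credentials below are
# placeholders for illustration, not actual NIOD values.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-repository.si",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Multipart upload kicks in automatically above the threshold, which is what
# makes uploads of very large files practical.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)
s3.upload_file("dataset.tar.gz", "my-dataset-bucket", "2026/dataset.tar.gz", Config=config)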

Difficulty: Advanced

Language: Slovenian

Date & time: 19. 03. 2026, 10.00–14.00

Maximum number of participants: 10

Virtual location: MS Teams

Recommended prior knowledge: basic command-line skills and basic concepts of data repositories. Keywords: big data, S3, repositories, metadata, data management, open data

Target audience: researchers, engineers, students, data scientists, data analysts

Workflow: The training takes place remotely in MS Teams. Participants will use tools for working with the S3 protocol and the web interface of the NIOD repository. The practical examples will be guided and supported by demonstrations.

Skills to be gained:

  • understanding the concepts of storing large datasets in repositories
  • using the S3 protocol to upload data
  • preparing and managing metadata
  • publishing datasets in the NIOD repositories
  • good practices in research data management

 

Organizer:

Lecturer:

Name: Marko Ferme
Description: Marko Ferme is a researcher at the Faculty of Electrical Engineering and Computer Science of the University of Maribor (UM FERI). His research areas are natural language processing, distributed systems architecture and high-performance computing.
E-mail: marko.ferme@um.si 

 


Workshop: CuPY - calculating on GPUs made easy

Description: Scientific computing increasingly relies on GPU acceleration to handle large datasets and complex numerical tasks. While traditional CPU-based workflows remain essential, modern research benefits greatly from learning how to harness GPUs in an accessible way through Python. CuPY provides a NumPy-like interface that enables users to offload array computations to the GPU with minimal code changes.

On Day 1, we will cover the motivation for GPU computing, discuss what GPUs are best suited for, and set up a self-contained environment. Participants will learn to use conda/mamba for environment management, install and configure a GPU-ready CuPY setup, and verify its functionality.  

On Day 2, we will focus on the CuPY library itself. We will explore its syntax and functionality, emphasizing similarities and differences with NumPy. Through a series of simple examples, and culminating in a more involved case study, participants will gain the skills to confidently integrate GPU acceleration into their Python workflows.
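
To give a flavour of the NumPy-to-GPU transition described above, here is a minimal sketch; it assumes a CUDA-capable GPU with a matching cupy installation, and the array size is arbitrary.

# Minimal sketch of offloading a NumPy-style computation to the GPU with CuPY.
# Assumes a CUDA-capable GPU and a cupy build matching the installed CUDA version.
import numpy as np
import cupy as cp

x_cpu = np.random.rand(1_000_000).astype(np.float32)

x_gpu = cp.asarray(x_cpu)          # copy the array into GPU memory
y_gpu = cp.sqrt(x_gpu) + x_gpu**2  # same syntax as NumPy, executed on the GPU

y_cpu = cp.asnumpy(y_gpu)          # copy the result back to the host
print(y_cpu[:5])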

Difficulty: Beginner

Date & Time:

Day 1: 19. 03. 2026  from 13.00 to 16.00

Day 2: 20. 03. 2026 from 13.00 to 16.00

Language: English

Prerequisite knowledge: Basic knowledge of Linux, the Terminal and some Python

Target audience: The workshop is intended for beginners and others interested in using GPUs with Python.

Virtual location: ZOOM (only registered participants will see ZOOM link)

Workflow: The training is live over Zoom, in the afternoon. The workshop combines lecture and practical parts; your own laptop suffices to gain access to the ARNES GPU cluster.

Skills to be gained:

  • how to set up Python on a GPU
  • basics of CuPY
  • a more involved example

 

Max number of participants: /

 

Organizer:

Univerza v Ljubljani

Lecturer: 

Name: Luka Leskovec
Description: Scientist and educationalist involved in theoretical physics and supercomputing
E-mail: luka.leskovec@fmf.uni-lj.si

Workshop: Storing and publishing large datasets in the NIOD repositories

Short description: This workshop introduces the procedures for storing and publishing large datasets in the NIOD repositories. Participants will learn how to efficiently upload large files, or large numbers of files, using the S3 protocol, how to describe the data with appropriate metadata, and how to publish it for further use.

Detailed description: Special emphasis will be placed on preparing and structuring metadata, which makes the data properly described, findable and reusable later on. Participants will learn good practices for describing data and the importance of metadata standardisation.

The practical part will demonstrate concrete examples of uploading data to the repository, managing versions and publishing datasets. Participants will gain the knowledge needed to work with the NIOD repositories independently and to manage larger volumes of data effectively.

Difficulty: Advanced

Language: Slovenian

Date & time: 26. 03. 2026, 10.00–14.00

Maximum number of participants: 10

Virtual location: MS Teams

Recommended prior knowledge: basic command-line skills and basic concepts of data repositories. Keywords: big data, S3, repositories, metadata, data management, open data

Target audience: researchers, engineers, students, data scientists, data analysts

Workflow: The training takes place remotely in MS Teams. Participants will use tools for working with the S3 protocol and the web interface of the NIOD repository. The practical examples will be guided and supported by demonstrations.

Skills to be gained:

  • understanding the concepts of storing large datasets in repositories
  • using the S3 protocol to upload data
  • preparing and managing metadata
  • publishing datasets in the NIOD repositories
  • good practices in research data management

Organizer:

Lecturer:

Name: Marko Ferme
Description: Marko Ferme is a researcher at the Faculty of Electrical Engineering and Computer Science of the University of Maribor (UM FERI). His research areas are natural language processing, distributed systems architecture and high-performance computing.
E-mail: marko.ferme@um.si 

 


Workshop: Containers on supercomputers

Description: Researchers often face large computational challenges, for example in big-data analysis, physics simulations, computational chemistry, computational biology, weather forecasting, fluid-dynamics simulations and so on. Suitable software is often available for many of these problems, but it has to be adapted to run on the chosen supercomputer.

The workshop will look at several ways of installing software: into the home directory, via environment modules, and via containers. We will introduce the concepts of virtual machines and containers and highlight the differences between the designs of Docker and Apptainer containers. We will learn to use ready-made containers and, through practical examples, how to build a simple Apptainer container and run it in a supercomputing environment. We will then look at how to add support for GPU accelerators and multi-node processing to a container.

The workshop will be hands-on; the exercises will be carried out on a modern HPC system.
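
As a rough sketch of the "build a simple Apptainer container and run it" step, here is a minimal Python example that drives the standard apptainer build and apptainer exec subcommands; the definition file and base image are illustrative placeholders, not the workshop's actual materials, and building may require --fakeroot or similar privileges on a cluster.

# Minimal sketch: build a small Apptainer container and run a command in it.
# The definition file and base image are illustrative placeholders; building
# may need --fakeroot or other privileges depending on the cluster setup.
import subprocess
from pathlib import Path

# A tiny Apptainer definition file based on a public Docker image.
Path("python.def").write_text(
    "Bootstrap: docker\n"
    "From: python:3.12-slim\n"
)

# Build the image from the definition file.
subprocess.run(["apptainer", "build", "python.sif", "python.def"], check=True)

# Run a command inside the container.
subprocess.run(["apptainer", "exec", "python.sif", "python3", "--version"], check=True)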

Difficulty: Advanced

Language: Slovenian

Date & time: 15. 04. 2026, 10:00–15:00

Maximum number of participants: 30

Virtual location: Zoom (the link will be available to registered participants only)

Target audience: researchers, engineers, students, anyone who needs more computing resources for their work

Recommended prior knowledge:

  • completion of the "Supercomputing basics" workshop,
  • understanding of how a computing cluster is structured,
  • working through an SSH client (command line, transferring files),
  • basic familiarity with the Slurm middleware,
  • basic knowledge of the Linux operating system and the Bash shell,
  • basic knowledge of the Python programming language

Skills to be gained:

  • familiarity with the Slurm middleware
  • understanding of environment modules and containers
  • using existing Docker and Apptainer containers
  • building your own Apptainer containers to run selected programs on a supercomputing cluster
  • using different computing resources from environment modules and containers (CPU cores, GPU accelerators, nodes)

 

Organizer:

UL FRI

Lecturers:

Name: Davor Sluga
Description: https://fri.uni-lj.si/sl/o-fakulteti/osebje/davor-sluga
E-mail: davor.sluga@fri.uni-lj.si
Name: Ratko Pilipović
Description: https://www.fri.uni-lj.si/sl/o-fakulteti/osebje/ratko-pilipovic
E-mail: ratko.pilipovic@fri.uni-lj.si

 


FROM THE PALEOLITHIC TO TITO 1: WAS ENGELS RIGHT? ON THE ORIGIN OF THE FAMILY, PRIVATE PROPERTY AND THE STATE

You are invited to listen to and watch the first episode of Rdeča Pesa's new podcast series From the Paleolithic to Tito, in which we will discuss various historical topics and their relevance for the present day, and thereby contribute to a socialist understanding of our past.

Our guest in the first episode was the archaeologist Dimitrij Mlekuž Vrhovnik, with whom we discussed Engels's The Origin of the Family, Private Property and the State and what archaeology says about these processes today. We touched on seasonal changes in social organisation among hunter-gatherers, the burning of houses full of crops grown by the first farmers, the emergence of private property and male warrior graves in the Bronze Age, and, of course, Engels's theories on the "world-historic defeat of the female sex".

Send the podcast to a colleague, hit like on YouTube and subscribe to our channels.

The post FROM THE PALEOLITHIC TO TITO 1: WAS ENGELS RIGHT? ON THE ORIGIN OF THE FAMILY, PRIVATE PROPERTY AND THE STATE first appeared on Rdeča Pesa.

AMERICAN IMPERIALISM IS SUFFOCATING CUBA

American imperialism has been putting pressure on Cuba for decades, but in recent weeks the situation has escalated again. While the world follows daily news of attacks on Iran, we must not forget Cuba, which has been suffering under an American oil blockade for more than a month. The United States has also strengthened its ability to dictate conditions in the region with its military attack on Venezuela and the abduction of its president Nicolás Maduro at the beginning of this year. Venezuela had been Cuba's main oil supplier ever since 1999, when Hugo Chávez came to power there.

American imperialism is thus suffocating Cuba by cutting off fuel supplies, which affects air connections and residents' access to essential goods. Trump has even mentioned the possibility of a "friendly takeover of Cuba", as he put it, meaning an attempt to repeat the strategy already used in Venezuela. Under the pretext of fighting cartels, or "narco-terrorism", he has announced military intervention in any country of the Americas that, in his view, does not respect the imperialist world order.

On 29 January, President Donald Trump ordered a complete oil blockade of Cuba. The measure has deliberately devastating consequences for the Cuban population, which faces daily power outages and disruptions to public transport, food delivery and waste collection. Because of the lack of basic services, residents are often forced to burn waste in the streets, which poses a serious health risk.

Because of Washington's threats, Mexico, which Trump claims is run by cartels, hinting at a possible American military intervention there, has also stopped exporting oil to Cuba. Trump is also threatening additional tariffs against any country that sells oil to Cuba. In this way he wants to deepen Cuba's dependence on American fuel and increase his ability to blackmail the country politically. In addition, by encouraging the private sector of the Cuban economy, he is trying to accelerate and deepen the restoration of capitalism. Trump recently declared that big changes are coming to "socialist Cuba" and that its system will soon fall. Shortly afterwards he also praised Venezuela's interim president Delcy Rodríguez, who has largely bowed to US demands. Among other things, she adopted a reform in the oil sector which, by opening the sector to private and foreign companies, allows greater participation in the extraction and sale of Venezuelan oil and reduces the monopoly of state-owned companies.

Today's situation is a continuation of more than sixty years of American blockade and sanctions against Cuba. These date back to the Cuban revolution of 1959, when the country embarked on a path of socialist development. Since then, the US has tried to isolate Cuba politically and economically, because it stands as a symbol of a country that chose an anti-imperialist path.

During the Cold War, one of Cuba's biggest problems was its geographical distance from its allies, especially the Soviet Union, from which it imported many goods. After the collapse of the Soviet Union in the 1990s, the country fell into a severe economic crisis and general shortage. The situation improved somewhat only after 1999, when Venezuela under Chávez began supplying Cuba with oil. Some European countries, such as Great Britain, are also contributing to the current escalation. British banks are blocking transactions with Cuba, while media such as the BBC, The Guardian and Western media in general spread imperialist lies about a country that, since its 1959 revolution, has built a society that puts people before profit and stands with the international working class and progressive movements around the world. It has an exceptional health system and an advanced biotechnology sector, supports local sustainable-development initiatives, and has managed to establish many eco-farms across the country, which contribute to the food supply of the otherwise cut-off island.

Cuba has for decades played an important role in international solidarity, especially in health care. The country has educated thousands of doctors from poorer countries free of charge and sent medical teams wherever help was most needed. Recently, however, the Donald Trump administration has been pressuring many countries to expel these Cuban medical missions. This inevitably leads to worsening health conditions in the countries of the Global South.

Defending the Cuban people is an urgent task for progressive workers around the world. Cuba's future depends not only on internal reforms but also on broader political changes in the world. Solidarity movements are already forming around the globe, working to break the blockade of Cuba, and they can serve as an example to us. For 21 March, for instance, a humanitarian action is planned in which activists from all over the world will bring missing supplies to Cuba's hospitals and schools and directly to the population at risk. In the comments you will find links to the Nuestra America Convoy humanitarian action, which you can also support financially.

The post AMERICAN IMPERIALISM IS SUFFOCATING CUBA first appeared on Rdeča Pesa.

WHAT IS WRONG WITH THE FITNESS INDUSTRY AND WHY YOU SHOULD TRAIN ANYWAY

Gyms under capitalism are capitalist businesses; their goal is to increase profit. They therefore use various strategies, from lock-in contracts that force people to pay membership fees for two years, to advertising the urgency of losing weight and sculpting a bikini-worthy body. This fits neatly with the aesthetic ideal and the sexualisation of the female body, which becomes a marketing tool. According to beauty standards, a woman should have a small waist, a large backside, a flat stomach, smooth skin and no cellulite. By shaping a marketable ideal that can never be fully achieved, the fitness industry manufactures a constant craving for new glute-shaping programmes, belly-fat diets and cellulite-melting creams. We are served tonnes of mostly entirely unnecessary dietary supplements in fancy packaging, without which we supposedly will not progress.

Women are mostly offered group classes based on cardio and stretching, and far less on strength exercises with progressive overload. Although there is nothing wrong with the concept of group classes itself, the problem arises when they are mixed with the problematic belief that heavy weights rob women of their femininity. So-called programmes for women therefore often focus on growing the glutes and much less on building upper-body strength.

On top of that, influencers advertising work on their backsides forget to mention the influence of genetics: some women will find it easier to build a bigger backside than others, whose muscles will show sooner on their arms or shoulders. But for this too the fitness industry has offered a product and convinced us that we absolutely need it: scrunch leggings, in plain terms a push-up bra for the backside. Their purpose is to "fix" your flat or too-fat behind, that is, to convince you that something is wrong with you.

In the fitness industry, the body is a project to be constantly improved and optimised, while the ideal of how the body should look and what it should be for keeps changing. Fitness individualises responsibility for health and well-being, and often ignores the fact that we are not all in the same boat and that capacity for training looks very different from person to person. The body becomes a project of self-discipline, in which individuals monitor themselves in the interest of productivity. This carries over into the workplace, where the worker becomes more efficient, disciplined and fit for work.

It is also in the interest of patriarchy that women remain weak, fragile, small and insecure, building their self-esteem on the size of their "glutes" and the narrowness of their waist. Such women are easier to control and fit more neatly into the capitalist system.

We can therefore say that the fitness industry under capitalism is an ideological apparatus. But physical strength in itself is not capitalist, and as socialists we should not shun it. More physical strength also means more autonomy, better self-confidence, better health and a good quality of old age, which is generally good for every individual. At the same time it is good for society: after all, a strong and healthy working-class body is more capable of political organising and collective action. In the current unjust system it can also be a means of relieving stress and clearing the mind.

Strength training has many benefits: among other things, it reduces the risk of numerous diseases, such as cardiovascular disease and type 2 diabetes. It promotes muscle growth and the preservation of muscle mass, and counteracts the age-related decline of muscle mass, strength and bone density throughout life. The latter is crucial for women, who are more prone to osteoporosis; put simply, strength training for an older woman means she will not fatally injure her hip in a fall. It improves cognitive function, memory, concentration, creativity and long-term memory, helps prevent age-related diseases such as Alzheimer's and dementia, reduces stress, and lowers the risk of depression and anxiety. Strength training also develops skills that transfer to political organising: consistency, discipline, patience and long-term thinking.

Everyone should therefore take up strength training with progressive overload, regardless of sex, age or fitness level. As a society, we should strive to make strength training accessible to all. We can achieve this by building public gyms, socialising existing gyms and expanding the network of outdoor gyms. Quite a few of the latter are already available, especially in cities. A good example is the outdoor gym in Pobrežje in Maribor, built by the Maribor municipality on the initiative of residents with participatory-budget funds. This shows that people want accessible exercise and are aware of the importance of a healthy lifestyle.

Besides having the necessary infrastructure available, it is also essential to know how to perform the exercises correctly, which should be included in the public-school curriculum. Likewise, everyone should have access to free group strength-training programmes in their local community, training under the supervision of qualified instructors.

Physical strength is built slowly and gradually, and so is political power. In the ecosocialist revolution we will need strong people, both physically and mentally.

The post WHAT IS WRONG WITH THE FITNESS INDUSTRY AND WHY YOU SHOULD TRAIN ANYWAY first appeared on Rdeča Pesa.

A STAGE VICTORY FOR ORGANISED PEOPLE AGAINST CAPITAL – CONSTRUCTION OF A CHICKEN MEGA-SLAUGHTERHOUSE FACILITY HALTED IN SISAK

Locals and activists fighting against the construction of a gigantic chicken slaughterhouse near Sisak in Croatia succeeded last week in preventing the construction of one of the 20 facilities of the planned mega-farm. The Ukrainian capitalists who want to build a slaughterhouse in Croatia capable of killing tens of thousands of broilers (a type of farmed chicken) per hour split the investment into smaller projects in order to circumvent environmental and other legislation. The plan is to raise and slaughter more than 100 million chickens a year, four times more than are currently raised in Croatia.

As we have already reported on Rdeča Pesa, one of the project's investors is a billionaire who, among other things, owns Perutnina Ptuj. Behind the project that was halted stands another Ukrainian businessman, Andrij Matiuha, owner of the Favbet betting chain.

Under pressure from more than 150 environmental organisations and local initiatives, the Croatian Ministry of the Environment has halted the environmental impact assessment procedure for one of the processing facilities, because it does not comply with the spatial plan of the city of Sisak. This is the first step on the road to victory, until all the planned mega-farm construction projects are stopped, say Zelena akcija.

The procedure would not have been halted had the local population and environmental activists not come together and organised. At the end of February, more than 5,000 people protested against the project in Zagreb, and in January three initiatives also proposed to the ministry that the environmental impact assessment procedure in the above case be halted. Before all this, the special body responsible for environmental impact assessment had held exactly the opposite view: that the project was compliant with Sisak's spatial plans.

What does this tell us? Legal environmental-protection procedures carried out by the institutions of the capitalist state are far from guaranteeing comprehensive environmental protection in the interest of the majority of the population. On the contrary, these procedures are a bare weighing of the interests of "stakeholders" and their transfer onto paper. As long as capital is the only stakeholder and the procedures take place behind closed doors, it would be mere coincidence if the state actually protected the interests of its own population.

But when the people organise and say out loud what they want and need, the state authorities cannot simply ignore them without consequences for their own existence. The state fears the public action of its own population, and precisely therein lies our greatest strength. So: let's organise.

The post A STAGE VICTORY FOR ORGANISED PEOPLE AGAINST CAPITAL – CONSTRUCTION OF A CHICKEN MEGA-SLAUGHTERHOUSE FACILITY HALTED IN SISAK first appeared on Rdeča Pesa.

GUEST PEN: A WASTE INCINERATOR IN MARIBOR

The Maribor municipality has applied to a call for the construction of a waste incinerator in Maribor. This was done without public debate, only with the approval of the city council, which is controlled by the coalition of Mayor Saša Arsenović and the Svoboda party. For this purpose, the companies Snaga d.o.o. and Energetika Maribor d.o.o. established a new company.

THE WASTE PROBLEM

As everywhere, we too face the problem of waste burdening the environment. How to solve it is, of course, a matter of smart behaviour that is responsible towards people and nature. Solving the problem by burning waste is a very irresponsible decision, because the problem has to be tackled where waste is generated. A large part of it arises from human wastefulness and poorly written laws: products come in double or sometimes even triple packaging; many products are made of material that cannot be recycled; clothes that are almost unworn are thrown away because of fashion; and the list goes on.

The second problem is that even waste that could be recycled, or with minor repairs made fit for further use, is thrown away and ends up on landfill heaps. This is certainly in the interest of consumer capital, since the more products are thrown away, the more new ones are sold. This creates double damage: new natural resources are extracted for new products, while new waste is created on the other side.

The third problem is the trucks that will deliver fuel for the incinerator and will thus add their own contribution to pollution through emissions, noise and road congestion.

THE PROBLEM WITH INCINERATORS

The EU no longer supports incinerators, and a law is being prepared that will in future penalise their operation through emission allowances, which in practice means that we will all pay for incinerator emissions through taxes and contributions. It is also important that in 2030 an EU-wide law is to be adopted setting an upper limit on pollution, meaning that heavily polluted cities will no longer receive permits to build facilities that would further increase pollution. According to the list of European countries and cities, Slovenia and Maribor are already at the upper pollution limit, which means that the incinerator project will probably not be allowed to continue in 2030, even if we spend millions of our money on it.

The incineration of waste itself creates toxic emissions that even the most sophisticated filtering systems will not remove (car exhaust is an example: despite all the mandated technology, it still heavily burdens the environment and is toxic). Besides CO2, the combustion of waste produces other toxic gases that cause cancer.

We must be aware that after waste is incinerated, 15–20% remains as ash. This is not ash that could be scattered over fields and meadows as fertiliser; it is extremely toxic. It has to be stored in hermetically sealed containers, as it must not come into contact with water or soil, because the resulting contamination would have catastrophic consequences for nature and people.

People wrongly think that if Maribor gets an incinerator, we will be rid of the problems with the mountains of rubbish at Snaga, Surovina, Dinos … In fact, another waste depot will be created, since the rubbish, once it arrives at the incinerator site, has to be sorted and processed before incineration. So we create a new dump. And because an incinerator has to run 24/7 every day of the year, it will also operate at full steam during the weeks and months of temperature inversion, further polluting the air and the environment.

Once an incinerator is placed in an area, its continuous operation becomes a priority for the next 30 years or more. Rubbish for burning will be trucked in from most of north-eastern Slovenia, which will still not be enough for profitable operation. This means that if non-recyclable waste runs out, waste that could be recycled will start being burned as well. Industry thereby gets confirmation that it need not worry about how much waste it produces, since it will be used up by the incinerator anyway. This runs counter to the drive for zero-waste industry.

WHO WILL OVERSEE THE INCINERATOR, AND HOW?

In Slovenia it is hard to find independent experts who would carry out daily monitoring of the incinerator's emissions. So we are once again left to the goodwill of the incinerator owners, whose main goal is profit, not people's health.

HOW HAS THE CITY MUNICIPALITY OF MARIBOR APPROACHED THE INCINERATOR PROJECT?

As we said in the introduction, this was a political decision without clear and transparent expert justification and without informing the public what it would mean for the health of people in Maribor and its surroundings. No public debate, no calculations, no vision for the future.

The confusion and lack of preparation are also evident from the cost estimates for building and commissioning the incinerator: the initial estimate of 60 million euros jumped to a good 100 million euros within a month.

The location itself is contrary to EU directives and to our constitution, since such facilities must be at least 300 metres, as the crow flies, from the nearest settlement. This is not guaranteed at the planned location in Maribor. In fact, the incinerator is planned in the Tezno industrial zone, where roughly 4,000 people work and will, among others, be exposed to the incinerator's emissions daily.

A new company (d.o.o.) was founded, whose director became the current director of Snaga d.o.o., Dr Vito Martinčič. He thus now has two director's salaries, and alongside him is the director of Energetika Maribor d.o.o., Jože Hebar.

Why a d.o.o.? Because, although it is founded with the money of the City Municipality of Maribor (MOM), the city council no longer has any influence over a d.o.o., yet the operation of this new company will be financed by MOM (that is, by all of us).

The costs of building the incinerator are planned very loosely, since to this day they have not decided what type of incinerator will be built. It is not yet known, for example, how tall the chimney will have to be, or whether it will even be possible to build such a structure at the planned location (recall the construction of the Rotovž, where problems were encountered during excavation).

Financing is also a major problem for this project. We already said at the beginning that the EU no longer co-finances such projects, so state aid is expected, for which there are no guarantees. A bank loan will only make the construction more expensive, and the citizens of Maribor will be paying it off for years and years.

MOM'S PROPAGANDA FOR BUILDING THE INCINERATOR

At a city council session it was confirmed that 250,000 € will be spent on promoting the construction of the incinerator, which means that money will again be used that could have been devoted in Maribor to other, far more important projects than an incinerator.

MOM claims:

–   that the incinerator will lower heating costs.

All of us in Maribor will pay for the incinerator project through our utility bills (at the moment roughly 1,100 € per person, so a family of four will contribute roughly 4,400 € to the construction). These are the costs under the current construction estimates; in Maribor, and in Slovenia generally, the final price of such projects is, as we know, usually 2–3 times higher. On top of that come the operating costs of the incinerator.

–   that there will be less pollution from individual furnaces thanks to the expansion of the district-heating network.

The district-heating network in Maribor is poorly developed, and it should be stressed that connecting individual houses would be very expensive, while the apartment blocks in Tabor are already partly included in this system and, more importantly, do not burn wood. So this effect will be minimal.

–   that with the incinerator in operation there will be less rubbish.

I have already said that the problem has to be tackled at the source, not at the end.

–   that the incinerator's operation will not affect the health of people and animals or the environment.

If that were true, there would be no rubbish left anywhere in the world, because every country could turn to incinerators and the rubbish would be burned without any problems.

As we can see, the incinerator will only replace one problem with another. Pollution will not decrease; it will continue in another form that may be even more dangerous than the current one. We must remember that we have only borrowed the Earth from our descendants, we do not own it, and the politicians we elected to decide about our future should be aware of this too.

The author of this guest contribution is Fredi Magdič.

The post GUEST PEN: A WASTE INCINERATOR IN MARIBOR first appeared on Rdeča Pesa.

KINDERGARTENS AS A KEY TOOL OF WOMEN'S LIBERATION

HOW WE BUILT THE BEST CHILDCARE AND EDUCATION NETWORK IN THE WORLD UNDER SOCIALISM

In 1940 there were 60 kindergartens in Slovenia, attended by around 2,400 children. Many of them, given their meagre conditions, would not even have deserved the name. They were staffed by 79 teachers, of whom fewer than 20 had any education in preschool care and teaching. There was only one nursery, and even that was still under construction when the Second World War broke out. A good four decades later, in 1984, more than 76 thousand children were coming daily to kindergartens and nurseries in the then Socialist Republic of Slovenia. For them, 752 different buildings had been erected to serve educational and childcare activities, and already in 1961 more than 2,000 people, mostly women, were employed in kindergartens and nurseries in Slovenia.

How did this leap of historic proportions come about, a leap that strongly contributed to greater social equality and opened the way to self-fulfilment for women?

A key role was played by the ideological orientation of the popular masses and the communist party, which came together through victory over the occupier and the carrying out of a social revolution. This opened the door to the Yugoslav socialism of the following decades. One of socialism's central goals was the broadest possible inclusion of women in the workforce. They were thus to enter the public sphere of work and life and continue the trend set by the hundreds of thousands of Yugoslav women who had actively joined the national liberation movement during the Second World War. The heroines of resistance and war were becoming the heroines of labour and of building socialism.

The pace at which the number of women workers grew fluctuated between different periods of socialist Yugoslavia, but the curve kept rising throughout. While women made up around 28% of all employed people in Slovenia before the Second World War, their share rose to 41% by 1961 and reached 46.5% of the entire workforce in 1987. For comparison, West Germany, France, Austria and Great Britain had not, even by the mid-1980s, reached the share of employed women that socialist Slovenia had already reached in the early 1960s.

Such an expansion of women's employment and their full inclusion in the wider socialist community would not have been possible without the extensive socialisation, collectivisation and professionalisation of childcare and early education. The decision and political action to build a spatially branched and qualitatively rich network of kindergartens and nurseries did not stem only from the fact that tens of thousands of women were newly entering workplaces. The social action, led by progressive forces in party bodies, local communities and other self-management organs, was also based on overcoming the idea that the family is solely and primarily responsible for the development of society's youngest members.

As early as 1952, in her message for 8 March, Vida Tomšič set out a new definition of childcare and education as a task of the whole social community. She said: "The upbringing of children is of course an important part of motherhood, but it is at the same time a much broader problem that a mother cannot solve alone. A socialist society must, from its deepest foundations, treat the upbringing of children first and foremost as a social task and solve it as such."

Through the construction of ever more buildings, the hiring of new, appropriately qualified workers (for this purpose the first secondary school for preschool teachers in Slovenia was founded in Ljubljana in 1949), the enrolment of ever more children, and the modernisation of premises and teaching aids, an important segment of reproductive labour increasingly became part of the public sphere of work. Childcare and educational work, which women had once performed for free, hidden within the four walls of the family home, was now increasingly part of the everyday life of the whole community. It was carried out by professionally trained and properly paid workers in kindergartens and nurseries, who, through the mechanisms of socialist self-management, also took part in running these institutions and organising the work within them. Over the course of socialist development, the image of the dependent woman, pulled into the privacy of the home and harnessed to the privacy of reproductive labour, was increasingly replaced by the image of the woman as worker, independent personality and creator of social life.

Many of these women worked precisely in kindergartens and nurseries. By caring for the youngest children and educating them in a spirit of comradeship, they also contributed significantly to the humanisation of relations between people and to strengthening emancipatory currents in the youngest generations. At the same time, through their work, whether as cooks, cleaners, assistant teachers or teachers, they contributed decisively to an ever greater number of women being employed and liberated with each passing day. They relieved family environments of once-hidden reproductive labour and moved it ever more into the public sphere, where it was recognised and paid for by society as a whole.

It is precisely this emancipatory leap of historic proportions, both in the number of childcare and education institutions and in their quality (organised meals, the "little school" programme, the siting of the buildings, the way they were financed, etc.), that accounts for the vast majority of today's public network of kindergartens and nurseries. This is illustrated by the fact that the number of public kindergarten buildings grew by barely 68 between 1990 and 2015, whereas in the equally long 25-year period between 1945 and 1970 as many as 344 buildings for preschool care and education were built in Slovenia.

In contrast to the public network of kindergartens and nurseries, which has been withering ever since the capitalist counter-revolution and the breakup of Yugoslavia, private initiative is flourishing. In the 2024/2025 school year, 94 private nurseries and kindergartens were already operating here, financed to between 85% and 100% from the funds of all citizens. While many public kindergartens lack enrolment places, which leads directly back to the obscurantism of a patriarchal society, places in private kindergartens, which operate for profit, are multiplying. Last year as many as 7,130 preschool-age children were attending private kindergartens.

If we want to reverse the dangerous and socially deeply harmful trend of the re-patriarchalisation of society, which is again increasingly placing women in the role of dependent subjects, we must follow the examples of socialism presented here. We once replaced obscurantism with progress; it is up to us to do it again.

Long live 8 March!

The post KINDERGARTENS AS A KEY TOOL OF WOMEN'S LIBERATION first appeared on Rdeča Pesa.

PUBLIC LETTER FOR THE EIGHTH OF MARCH: FLOWERS AND WEAPONS!

This year we celebrate the eighth of March, the international day of working women, just before the parliamentary elections. We might try to imagine what the next term could mean from a feminist perspective. Opinion polls currently point to a right-wing government, probably a coalition of SDS, NSi and the Demokrati, all of whom oppose the right to safe and accessible abortion, advocate policies that reduce women to the role of mothers, and encourage the dismantling of the public childcare system. In the coming term we can therefore hardly hope for a (material) improvement in the position of women, but everything suggests that the army will welcome them with open arms.

Article 84 of the SDS programme includes "the introduction of compulsory military training", though it is not defined in any detail. The Demokrati plan two months of "civic solidarity", meaning training in the army or the police, or three months of civilian service, for both young men and women between the ages of 18 and 21. This would achieve the peak of gender equality that the shackles of capitalism allow: we will all be forced to serve military interests, regardless of sex.

We can see this kind of "progressive" policy best in the example of the Israeli army. Western media present Israel to us as an island of democracy and liberal values in the Middle East. In practice this means that Israeli women also serve a 24-month compulsory military term and have in recent years played a key role in the genocide in Palestine. In a similar way, Slovenian women will now have the "privilege" of defending Western imperialist interests, as the EU and the NATO pact demand of us.

Membership in these alliances does not guarantee our security, but it does entangle us in wars that the Western capitalist powers will wage ever more often. This is a consequence of the economic development of the Global South and the emergence of China as an alternative centre of power, which threatens the existing world order. Today we see this above all in the United States and the unpredictable politics of Donald Trump. Over the past year he has tried to subjugate Venezuela, Cuba, Iran, Colombia and even Greenland through military interventions and blackmail, which we must by no means understand as the "delusions" of one individual. The ruling European elites have similar interests, above all in Africa, Eastern Europe and the Middle East. Armament policies are therefore not the result of actors such as Russia; they stem from membership in NATO and the European Union, which will defend their privileged position in the world with every military means.

But this is not only a matter of the so-called right: the current coalition parties, the "left pole", frighten us with authoritarian Janšism, yet it was precisely these parties that in the last term submitted to NATO's dictate and imposed higher military spending on us in a similarly authoritarian way. They adopted a military budget of 5% of GDP, which by a rough estimate amounts to about a fifth of the entire state budget. This means that 133 euros a month from every average salary would go to the military! The cooperation between the two poles of the Slovenian political spectrum is best illustrated by the Resolution on the general long-term programme of development and equipping of the Slovenian Armed Forces until 2040, adopted by the government last year, which foresees increasing the peacetime strength of the Slovenian army to 10,000 members plus an additional 30,000 reservists. As we have seen, the parties of the right pole will immediately come to their aid with various forms of military service.

At the same time, the self-proclaimed "most left-wing government in the history of independent Slovenia" adopted, in the same year, a pension reform that will force many to work longer and lowers pensions on average. The increase in military spending made it necessary to adopt an austerity pension reform, defended most fiercely by Levica, even though it supposedly advocates leaving NATO and greater social welfare. It is clear that we, the citizens, will pay for higher military spending, first with our money and then with our decent living conditions, since more funds for the military also means fewer funds for public services. We can expect many more reforms that, in the style of the pension reform, will cut costs at the expense of the working class.

How, then, will the armament fever hit women? First through conscription into the army, but above all by eating into the money earmarked for public services: kindergartens, schools, care homes and so on. Two aspects of the worsening position of women can be highlighted here. Most public-service occupations are feminised, so cuts in kindergartens, education and care indirectly mean cuts at the expense of women workers: lower wages, worse working conditions and staff shortages. On the other hand, pruning the public network of childcare and care means that part of this burden shifts to individual households, and within them to women. Care for children, the elderly and the ill falls on mothers, sisters and daughters, who as a result have to take part-time or half-time jobs, resort to some other form of non-standard employment, or leave the active workforce entirely and become dependent on the men in the household.

With increased militarisation and the shrinking of the welfare state, women can, regardless of a "left" or "right" government, most likely expect only symbolic flowers and weapons this eighth of March. That is why the Iskra student society is again organising a protest for Women's Day this year!

The public letter was written by Društvo Iskra.

The post PUBLIC LETTER FOR THE EIGHTH OF MARCH: FLOWERS AND WEAPONS! first appeared on Rdeča Pesa.

WHO'S AFRAID OF FEMINISM?

"The question, then, is not whether we want equality. The question is whether we are prepared to organise for it, because without organising there will be no socialisation, without socialisation there will be no emancipation, and without class struggle history does not move."

We invite you to watch the video essay below on the struggle for the liberation and full inclusion of women in social life, prepared by Študentska Iskra. In it, the creators speak about the history of women's emancipation, reproductive labour, the welfare state, and why equality has never arisen from individual success but from organised struggle and systemic change.

At the same time, we invite you to the two protest marches with which we will mark International Women's Day on Sunday in Koper and Ljubljana. Links to the events are below. See you on the streets!

https://www.facebook.com/events/1475775824123066 (LJUBLJANA)

https://fb.me/e/jiy1o4zl4 (KOPER)

The post WHO'S AFRAID OF FEMINISM? first appeared on Rdeča Pesa.

PESA ANALYSES II: CARING FOR LIFE UNDER CAPITALISM

With the holiday of working women, 8 March, approaching, we have prepared a cycle of posts analysing the position of women under capitalism.

In the previous part we focused on the role of the family under capitalism. This time we turn to the process of re-patriarchalisation that accompanied the establishment of capitalism in Slovenia.

PART II: The double exploitation of working women

Capitalist production and its bourgeois ideology as a rule place the woman in society, within the family, primarily in the role of mother and housewife, yet women are also to a large extent included in the system of wage exploitation. The double exploitation of women is, on the one hand, a consequence of the historical gains of women's struggles for inclusion in the labour market (the woman partly frees herself from economic subordination to the family and the husband, both in the West and in the countries of real socialism); on the other hand, she still partly remains in the role of carer and housewife when she comes home from work.

The ever greater inclusion of women in the process of social production is characteristic of the period from the Second World War onwards, when the countries of real socialism (in our case Yugoslavia) implemented policies of equal inclusion of women and men in work processes, while the family was supposed to lose its economic role and become a space of solidary relations (which is not to say that patriarchal oppression no longer existed in socialist countries).

With Slovenia's transition to capitalism, both women and men became exploited wage workers, while the family, due to the ever greater privatisation of public social care (public kitchens, kindergartens, laundries), again increasingly became a space in which the woman has the primary role of mother and housewife. Yugoslav real socialism had intended to systematically abolish patriarchy and the division into gender roles through the nationalisation and socialisation of social reproduction and education, such as the mass establishment of public kindergartens, canteens, laundries and schools.

The ever more extensive privatisation following the restoration of capitalism here has severely eroded these socialist gains. Moreover, the already diminished public services that can still relieve women of domestic reproductive labour are today plagued by understaffing, precarisation, underpayment and the feminisation of these occupations.

By relocating to peripheral countries some parts of production where women were once employed in well-unionised manufacturing, the neoliberal capitalist economy has kept at home part of the more highly valued industrial production and the service sector (IT, insurance, banking). These mainly employ men, or women of the middle and upper classes.

Working-class women thus find employment in the underpaid public sector or in the increasingly precarised private sector that carries out certain care activities (carers, nurses, preschool teachers). Ever more often, the most devalued work is done by migrant women, who fill the care "gap" created in the countries of the capitalist core by the shrinking of the welfare state as a result of the neoliberal economic deregulation of the past 30 years. A class antagonism is emerging between women in well-paid positions, who can afford cleaners, domestic helpers or nannies, and migrant women workers.

The post PESA ANALYSES II: CARING FOR LIFE UNDER CAPITALISM first appeared on Rdeča Pesa.

Styrians and Carniolans on both banks of the Sava – let's organise for a better Posavje!

The members of the Network for a Just Transition invite you to a regional meeting of the network, where, together with the participants, they will discuss environmental and social problems in Posavje and beyond and organise to resolve them:

Where and when? This Thursday, 5. 3. 2026, 17:00–20:00, at the premises of Studio 44/VGC Posavje, Krško, Cesta Krških žrtev 44

At the meeting:

  • members will present the purpose, activities and demands of the network,
  • we will learn about the efforts of various local initiatives in environmental and social fields, and
  • we will discuss environmental and social problems in the region and beyond and try to find solutions together.

Special attention will be devoted to discussion and organising: for better public passenger transport; for solving environmental problems in the region; for access to healthy, sustainable and locally grown food; and perhaps for any other area that the participants particularly highlight.

The central purpose of the meetings is therefore that we organise around the areas that trouble you most, strengthen our joint efforts and win changes for the better!

A warm meal will also be provided at the meeting.

Get involved – let's join forces for a better Posavje, for a better society!

This is how we have already started organising in Maribor:

More information and registration:

https://www.facebook.com/events/1613985996470464

The post Styrians and Carniolans on both banks of the Sava – let's organise for a better Posavje! first appeared on Rdeča Pesa.

IZVOR PRIHODNOSTI – 50 LET RAZVOJA UMETNE INTELIGENCE V SLOVENIJI

Zgodba o razvoju umetne inteligence je zgodba o človeški radovednosti in želji, da bi razumeli razum sam. Od antičnih mitov o bronastih velikanih do literarnih vizij o robotih so sanje o mislečih strojih burile domišljijo, a resnična revolucija se je v zadnjih desetletjih tiho odvijala v raziskovalnih skupinah.

Slovenija ima v tem razvoju presenetljivo pomembno vlogo. Čeprav je svetovni preboj uporabniških orodij šele pred nekaj leti umetno inteligenco postavil v središče javnega zanimanja, domači razvoj poteka že pol stoletja in uživa mednarodni ugled od Stanforda do Tokia. Resnična zgodba najbolj prebojne tehnologije našega časa pa se piše v stotinah tisočih vrsticah programske kode, ki so bile skozi desetletja napisane v mnogih laboratorijih na Institutu Jožef Stefan, na Fakulteti za računalništvo in informatiko in razvojnih oddelkih pionirskih podjetij. 

With the exhibition Izvor prihodnosti (The Origin of the Future), the Računalniški muzej reveals this overlooked path of the Slovenian pioneers of computing's most transformative field. These are not just stories about processors but stories about people who wove their values and hopes for the future into the technology, stories that will shape modern Slovenian technological self-confidence.



The post THE ORIGIN OF THE FUTURE: 50 YEARS OF ARTIFICIAL INTELLIGENCE DEVELOPMENT IN SLOVENIA first appeared on Računalniški muzej.

How can the digital be physical?

The expert conference, which will be followed by a workshop suitable for everyone, will take place on Wednesday, 11 February, between 9.00 and 14.30 at the Računalniški muzej in Ljubljana.

Last year, Ekologi brez meja addressed the broader framework of the digital footprint; this time the aim is to make concrete the connection between digital practices, hardware, and tangible consequences, from e-waste to worse user experience and greater technological dependence. The goal of the event is to encourage sober, expert reflection on how to design digital solutions so that they are sustainable in the long term, technologically, environmentally, and socially, and to open space for an exchange of views across disciplines.

After the conference part, you are invited to join a digital clean-up with ambassadors of the European Climate Pact. The workshop is suitable for all ages.

To make organising the event easier, the organisers ask that you sign up via the registration form.

Programme of the expert conference

Time / Talk (Speakers)
9.30-10.00: Registration
10.00: Welcome address
10.05-10.25: Data centres as accelerators of e-waste generation (Assoc. Prof. Dr. Mojca Ciglarič, Dean of UL FRI and lecturer of the course Sustainable Aspects of Computing)
10.25-10.45: Digital sufficiency, joining remotely (Prof. Geoffrey Aerts, FARI Academic Director, Vrije Universiteit Brussel (VUB))
10.45-11.05: Over-digitalisation as a driver of e-waste generation (Assoc. Prof. Dr. Urban Sedlar, UL FE)
11.05-11.25: Over-digitalisation through the lens of consumer protection (Boštjan Okorn, Zveza potrošnikov Slovenije)
11.25-11.40: Coffee break
11.40-12.00: Environmentally conscious software design (software ecodesign) (Jaka Kranjc, Ekologi brez meja)
12.00-12.20: Hardware constraints as a driver of better software through the years (Računalniški muzej)
12.20-12.40: An overview of the activities and successes of the EndOf10 campaign (Lio Novelli, Kompot)
12.40-13.00: Open discussion
13.00: End of the conference part
13.15-14.30: Workshop: a digital clean-up with ambassadors of the European Climate Pact

The post How can the digital be physical? first appeared on Računalniški muzej.

Valentine's retro gaming

The event is a perfect idea for a date that is out of the ordinary. It is meant for everyone who holds love dear, whether romantic or friendly. Mario and Peach, Mario and Luigi, or any other unbeatable team: all that matters is that the players enjoy themselves and that new memories form amid laughter and small competitive moments.

The evening promises a relaxed atmosphere, a dash of competitiveness, and plenty of fun in pairs. Love of games will intertwine with love of good company: an ideal combination for Valentine's Day.

You can buy presale tickets here.

Note:

The event is part of LUV fest.

The event may be recorded and photographed. By attending, you allow Turizem Ljubljana to use your image in photographs or video for the commercial and non-commercial purposes (promotion) of the Ljubljana destination and the LUV fest festival.

The post Valentine's retro gaming first appeared on Računalniški muzej.

Civil society in defence of Andreja Slameršek

Last Friday, a press conference was held in front of the Ministry of the Environment, Climate and Energy in Ljubljana about the two SLAPP lawsuits against Andreja Slameršek, one of our greatest campaigners for nature and free-flowing rivers; many of her supporters attended.

The SLAPP lawsuits against Andreja Slameršek were filed by two state-owned energy companies, HSE Invest and HESS, not long after she received the national Rado Smerdu award for exceptional contributions to nature conservation last June. "Civil society nominated Andreja Slameršek for the national award, and civil society will stand by Andreja Slameršek now that state-owned companies are prosecuting her," wrote the Varuhinje rek (River Guardians).

At the press conference, six speakers addressed the crowd of supporters: Alja Bulič (Varuhinje rek), Polona Pengal (Revivo), Gaja Brecelj (Umanotera), Uroš Macerl (Eko krog, Vesna), Barbara Rajgl (Pravna mreža za varstvo demokracije), and Aljoša Petek (Pravni center za varstvo človekovih pravic in okolja).

We publish the statement of the Varuhinje rek:

“If you are alone, they can intimidate you.
If a community stands behind you, it is they who are afraid.

Andreja Slameršek is one of our most important guardians of nature; for her work she has also received the highest national conservation award.
Now that same state, or rather the state-owned energy companies HSE Invest and HESS, is suing her. These are so-called SLAPP lawsuits, filed by powerful actors to intimidate and silence those who defend the public interest. Andreja is being sued over her efforts against the construction of the Mokrice hydroelectric plant on the lower Sava; in this case the state has already lost in court three times!

In front of the Ministry of the Environment, numerous representatives of the professional community and of civil society from across the country gathered at the press conference in defence of Andreja Slameršek, with these demands:

  • those responsible at the companies in question must explain the background of the lawsuits against a national award winner
  • the lawsuits must be withdrawn immediately

If that does not happen, we will use the court proceedings for a very public and open confrontation of all the arguments, facts, and expert opinions.
At the end, we read out in one voice the two statements for which she is being sued. Andreja is not alone, and neither will anyone be who faces similar pressure.

Together for Andreja!
Together for the rivers!”

Photographs: Črt Piksi

The post Civil society in defence of Andreja Slameršek appeared first on Prelom.

Activism that changes cities

Two organisations, Changing Cities Berlin and Wir machen Wien, are coming to Ljubljana; both work mainly on bottom-up organising of city residents for sustainable mobility.

Prostorož invites you on Thursday, 5 March at 17.30 to Rastlinjak (Tržaška cesta 2) for a short public lecture and a presentation of campaigns, interventions, and initiatives for sustainable mobility that have achieved success in Vienna, Berlin, Hamburg, and other German cities.

The lecture will be followed by a discussion that will also shed light on the current situation in Slovenia.

You can read more about the event at this link.

The post Activism that changes cities appeared first on Prelom.

Assembly of the residents of Štepanjsko naselje

The Iniciativa za Štepanjsko naselje invites you to an assembly of residents:
Monday, 2 March 2026 at 18.00
OŠ Božidarja Jakca, Nusdorferjeva ulica 10

If you would like to present a proposal, send your first name, surname, home address, and a short description to info@stepanjsko.si by 28 February 2026, or call 069 865 570.

Contributions will be time-limited (around 5 minutes each) so that everyone has a chance to take part.

Come and bring your neighbours.
We decide together. Now is the time to connect.

The post Assembly of the residents of Štepanjsko naselje appeared first on Prelom.

The crisis of the Municipality of Ljubljana

In January, parking troubles in Štepanjsko naselje became a concern for all of Ljubljana. Truth be told, they reached beyond the city's borders: parking in Štepanjc was covered by national media, newspapers wrote about it, and heated debates flared up on social networks. The story is very simple and typical of other parts of Ljubljana as well, and of other towns here and in the region. A modernist neighbourhood designed with one parking space for every two flats cannot withstand the modern pressure of residents who own a car or two per household. The situation everyone was complaining about was cut short by a new parking regime, which did not actually fix it, and new, deeper problems came to light.

Worse still, fatal strategic mistakes came to the surface, mistakes typical not only of Ljubljana but of most Slovenian municipalities, indeed of the whole country.

Let us look first at why such a regime at all, and why right now. The parking regime in Štepanjsko naselje is part of a broader municipal project introducing paid-parking zones outside the city centre as well. In the same wave, parking in Šiška was regulated between 2023 and 2024, for instance. Such measures undoubtedly fill the municipal coffers, and there is nothing wrong with that; car ownership is, after all, a luxury of sorts. Of course, residents expect the money collected to be handled sensibly, for example by investing in the development of public transport. The answer to the question of why right now is that the parking market in the city centre has been exhausted. At the same time, ownership disputes between the municipality and the flat owners in apartment blocks over the adjoining land, which includes the parking lots (under the ZVEtL-1 act), have by now been resolved in many places. That is the case in Šiška, for example, where the parking spaces are marked out. It seemed the municipality knew where it could legally charge for parking and where it could not (yet). That changed with the regime introduced in Štepanjsko naselje. The ownership disputes there have not yet been resolved, but the municipality decided to intervene unilaterally. It is precisely on this basis that the residents of Štepanjsko naselje have filed a lawsuit against it.

One of the reasons repeatedly cited as making paid parking necessary is daily commuters, who allegedly take parking spaces away from local residents by occupying them unchecked. This explanation is thoroughly populist (we all know the grumbling about out-of-town licence plates in our neighbourhood) and has little basis. Even if these commuters parked wherever they pleased, they would do so in the morning, when locals leave home. By the afternoon they would already have commuted back to their own towns, at which point locals could park in the emptied lots without difficulty. This is of course a simplified picture, but even at a glance it shows that daily commuters are a fictitious enemy. In circumstances where the park-and-ride (P+R) facilities mostly stand empty (except for the one at Dolgi most), there is no reason to expect commuters to use random parking lots on the city's edges instead of dedicated infrastructure. And if daily commuters really are the problem, the municipality must ask itself, thoroughly, what on earth it is doing with its P+R facilities, why they do not work, and how to tackle the problem from that end. Daily commuters are, however, a useful pretext for quick consensus among the residents of an area where paid parking is being introduced.

Photo: Črt Piksi

That is exactly what proved true in Štepanjsko naselje. The mayor promised that after permits were introduced there would be enough parking spaces, because all the daily commuters would leave. What happened instead was that there were still too few parking spaces and, after some areas were staked out, even fewer than before. The anger of locals who expected a solution and got an even bigger problem is therefore understandable. That is when the rallies, the media interventions, and eventually even the mayor's visits began. These were extraordinarily unproductive, as the mayor merely confronted residents, arrogantly, with the fact that there simply are no parking spaces, there were none, and there will be none for a long time to come.

As a solution, both the local residents and the municipality mention building a parking garage. This measure is absurd and, if carried out, will bring even more misery and anger than the events so far. A parking garage is a financial and strategic nonsense. Financial, because the cost per parking space would amount to at least several tens of thousands of euros; strategic, because at the European, national, and municipal levels we have set sustainable, carbon-free mobility as our social ideal. Something many find unachievable is, regardless, our strategic goal. We therefore expect a society that will, in a few decades, have substantially fewer cars than today. All of which raises the question of what will then happen to the surplus car infrastructure. And parking garages are precisely the structures that will cause the most trouble, because they allow extremely limited alternative uses.

How, then, to solve the problem so that the solution is financially and strategically sustainable, fulfils the municipality's plans, and above all contributes to a better quality of life for residents?

The matter must be approached from a completely different starting point. If we cannot simply expand the infrastructure to match greater needs, we must reduce the needs to match the infrastructure that is available.

Of course, Štepanjsko naselje is not the only area with these problems. Nor is Ljubljana unique. Nor Slovenia. The same problems are faced across Europe, and in many places the response has been clever. In Vienna, for example, a solution was sought, and found, in a simple pilot project called Auto-Wette ('a bet on the car'). Households voluntarily gave up their car for three months and in exchange received a mobility subsidy of 500 euros per month per household. Thirty-seven households took part (as many as 3,000 applied), and they could spend the subsidy on various forms of mobility: monthly public transport passes, short-term car rental, taxi rides and so on. It turned out that they spent only 340 euros a month on average (leaving them 160 euros), and after three months two thirds of the households had decided to keep living this way, while a quarter of the households had already sold their cars.

Something like this would be worth trying in Štepanjsko naselje and perhaps later elsewhere. It would solve the parking problems, and the model is financially self-sustaining. All residents would be offered a monthly subsidy of a set amount if they give up their car. Even if only a few percent of households opted in, that would mean a noticeable relief of the parking lots. The subsidies could be financed directly from the money the municipal parking lots generate through the sale of parking tickets and permits. In fact, everyone would be better off. Those who can give up their car would receive a financial reward for it, and the Vienna example shows they could even come out ahead. Those who for one reason or another cannot give up their car would have parking spaces available, possibly even guaranteed, with a more expensive permit. At the same time, even for them this would be cheaper than investing in the construction of a parking garage, or than renting a space in one if the municipality covered the construction costs.
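
The self-financing claim can be sanity-checked with a back-of-the-envelope calculation. The sketch below is only an illustration: every figure in it (neighbourhood size, opt-in rate, permit count, permit price) is a hypothetical assumption, not data from this article or from Ljubljana; only the 340-euro monthly subsidy echoes the average spending reported in the Vienna pilot.

    # Back-of-the-envelope check of whether permit revenue could cover
    # opt-out subsidies. All inputs are hypothetical placeholders.
    def subsidy_balance(households, opt_in_rate, monthly_subsidy_eur,
                        permits_sold, permit_price_eur):
        """Monthly surplus (positive) or shortfall (negative) if parking
        permit revenue pays the mobility subsidies."""
        cost = households * opt_in_rate * monthly_subsidy_eur
        revenue = permits_sold * permit_price_eur
        return revenue - cost

    # Hypothetical neighbourhood: 4,000 households, 5 % give up a car for a
    # 340 EUR/month subsidy; 3,000 permits are sold at 30 EUR/month.
    print(subsidy_balance(4000, 0.05, 340, 3000, 30))  # 22000.0 EUR surplus

Under these made-up numbers the scheme runs a surplus; with different assumptions it may not, which is exactly why a pilot of the Vienna kind would be the way to find out.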

Photo: Črt Piksi

Of course, the prerequisite for carrying out such a solution is a functioning and robust public transport system, at the local and the national level. Unfortunately, we have that neither in Ljubljana nor in Slovenia. Even if the municipality looks for excuses in the poor national system, it is itself responsible for local public passenger transport. Ljubljanski potniški promet (LPP) has stood still in the space and time of late-20th-century Ljubljana. It is constrained by a route scheme with a single centralised bottleneck (Slovenska cesta) and by headways that even at peak times never drop below five minutes (for Štepanjsko naselje, the most densely populated area in Slovenia, not even below 12 minutes), and at night it does not run at all. This simply does not meet passengers' current needs: the most frequent riders, university students, secondary school pupils, and pensioners, simply have no other option, and under such conditions it is unreasonable to expect anyone else to give up their car. Adequate public passenger transport is a precondition for any sustainable and innovative solution to the parking problem. The municipality should therefore tackle the improvement of public transport immediately and ambitiously. Ljubljana needs a suburban railway. Ljubljana needs a tram. These are not extravagances; they are a necessity. Unfortunately, lofty ambitions too often remain mere words on the paper of strategic documents such as the Celostna prometna strategija MOL 2025–2032, the city's integrated transport strategy.

The experience of Štepanjsko naselje points to the critical state of the Ljubljana municipality, which is quite obviously incapable of tackling problems seriously, critically, and scientifically. Solutions exist; they only need to be recognised.

Even if the solution proposed here is poor or unworkable, the municipality should strive to find a better one, to examine all the options, to propose pilot projects and temporary measures, and not to operate only within its comfort zone. At a time when Ljubljana ranks among the European cities with the worst air quality and when the average resident spends more than 500 euros a month on a car, building parking garages should not be the answer. Environmentally and financially sustainable solutions should be planners' and decision-makers' first response, but unfortunately they are not. Evidently, we residents must demand them ourselves.

The post The crisis of the Municipality of Ljubljana appeared first on Prelom.
