
NPM build error

19 April 2014 at 19:03

NPM has a bunch of useful stuff on it; however, at some point while using it you could run into this:

stack Error: "pre" versions of node cannot be installed, use the --nodedir flag instead

This error basically says “Give me the node

Super Easy Twitter Bots

19 January 2014 at 19:00

I often get really quite mad ideas for writing Twitter bots, but I often get pretty bored of doing all of the boilerplate that is required to achieve these things.

The typical process to making a twitter bot

Running around HTTP firewalls

1 January 2014 at 20:58

When I was in sixth form and wanted to get SSH access to my own systems, I had quite a few issues doing so, since port 22 was blocked and so was just about every other port. For the first year that was fine, however since

Recently

4 July 2025 at 00:00

Watercolor of some nice plants

I guess I’ll cover the context first so that we can move on to the good stuff.

Man, everything is going terribly. It’s hard to overstate how bad things are for America right now. We’re just doing the thing: we’ve elected fascists and they’re funding an unaccountable military that will do their bidding. They’re doing the worst possible economic policy and attacking elected officials. The New York Times is actively fighting against New York’s Democratic candidate for mayor by publishing weak hit pieces day after day, and doing their weak-ass quasi-endorsement of Cuomo even after they publicly stated they wouldn’t endorse in local campaigns. Shame on them. As mtsw says, they like Trump and want him to rule like a king. You probably know all this, and every day you’re feeling the fascism. If you aren’t, how? Or, alternatively, what’s wrong with you?

Now on to the entertaining content (staring into the void, copy & pasting links from my Instapaper ‘Liked’ stories)

Listening

The big new listen this month was S. G. Goodman, whom I learned about from Hearing Things:

In very broad strokes she’s next to Waxahatchee in my MP3 library in Swinsian: country-inflected folk rock. It’s a solid album with some hits. And I usually don’t notice this, but she has really really nailed her visual style and merch. A+.

Watching

I’ve been watching Murderbot, which is very watchable, and extremely brief: each episode is about 17 minutes of actual video. So far I’m happy that things happen in it, but it isn’t very funny or meaningful. Watching it is kind of like nicotine gum, a way to take a small dose of television and wean yourself off of it.

I’m a sucker for ‘how it works’ videos and this one is an all-timer. It’s at the perfect level of detail and fascinating end-to-end. I just love these physical effects, the creativity involved in the shots, the commitment to the craft by actors, designers, even a guy making remote-controlled robots. Watch it.

Other than that, I watched Walk Hard this month and it was so incredibly dumb and I loved every moment.

Reading

Finally got some momentum with books! Glass Century and Things Become Other Things are both hard to describe but great in their own ways. And similar also in that their authors have been on my radar for a long time: I regularly read Ross Barkan’s blog for its local political coverage, and admire Craig Mod’s photos and online writing.

Chill out for a moment listening to the cool guy play the nice piano before I rant about the cops

But if history is any indication, any attempts to limit or even question the infinite expansion of the NYPD’s size, budget, power, and impunity will turn police against a candidate.

Matthew Guariglia’s Zohran Mamdani Better Get Ready To Fight The NYPD was a riveting read. The more I know about the NYPD and its history, the more I am convinced that it is a government all to itself, unaccountable and malevolent. I recommend reading about the Patrolmen’s Benevolent Association Riot as a starting point. Then read The City’s coverage of the NYPD. Every new piece of information paints a worse picture.

Why, exactly, are we meant to show these people respect? Because they run a company that provides a continually-disintegrating service? Because that service has such a powerful monopoly that it’s difficult to leave it if you’re interacting with other people or businesses?

Frankly Ed Zitron’s writing can be too much for me, but Make Fun of Them has a great central dynamic: that major tech companies are simultaneously posing as champions of the next generation of ‘AI’ technology while failing to maintain the basic software that is their main actual product.

And so it was that a lifetime of doing, gradually, then suddenly gave way to a remainder of lacking—lacking the tools that had enabled her longer decades of productivity than most. My grandmother was a testament to the term “workhorse.” I often joke that they simply don’t make people like they used to.

I really like reading netigen, a semi-anonymous blog.

There are seconds here and there when I feel a deep gratitude about getting older. They’re really random; I’ve sometimes held dishes while cleaning them, thinking “I might hate doing this right now, but I’ll miss it once I’m no longer on this earth. I’ll miss everything, no matter how mundane.”

And also ava’s blog. There are more like this that I read - blogs that are nostalgic, personal, affecting. I can’t write like that, because this blog has my real name all over it, and from day one I’ve drawn some boundaries around what I write about: no personal stories, no real-life drama, no “journaling” on this website. I keep a paper journal for that stuff. But I appreciate people who are able to write honestly about life, death, family, love, and all of it.

It’s not lost on me that filling time with an endless card game is a way to escape the cold clutch of grief. There are no counters, no tricky cards you can pull out of your hand. When that bastard attacks, it sails right through all defenses.

Similarly, from Dan Sinker, who I admire, and whose writing I admire.

New York

Knowing New York will take a lifetime, but I’m trying.

I went to The New York Earth Room, which is a second-floor apartment filled with dirt. You can’t walk on the dirt. There’s someone who works there, nodding at the visitors, probably answering questions about the dirt. The same gallery network, Dia, houses The Broken Kilometer, a kilometer of brass rods arranged on the floor of another space. Dia is its own story, starting with an oil fortune and carrying on to its present-day leader converting to Sufism and founding a Sufi lodge downtown. I haven’t been to the lodge yet. The MELA Foundation is the obvious next place to go.

epanet-js

3 July 2025 at 00:00

epanet-js is a new web application that combines modern web maps with the industry-standard EPANET hydraulic simulation algorithm. It’s for people planning and updating water utility systems: connecting pipes and pressures and figuring out what will happen. It’s a problem area that I’m totally fascinated by and know very little about. It’s made by the folks from Iterating - Luke Butler and Sam Payá, who are experts in the field.

If you’ve been following along with my blog and projects, you might notice something familiar about this screenshot of epanet-js:

epanet-js

Yep! A lifetime ago, I built a company and product called Placemark, which was a tool for creating and editing map data. When the business part didn’t work out, I published a free-to-use version of Placemark and made the code open source. I chose a very permissive license for the code: the MIT license. I wanted to let anything happen, including - especially - people creating paid products with the code.

It’s hard to explain to people outside of the world of open source that folks building a business with the help of my code is the dream.

Partly it’s that ‘being helpful’ is one of the goals in life that never gets old. The other part is that most software becomes obsolete and abandoned quickly. Creating software with any legacy is rare and something to celebrate.

Plus, Placemark was a general-purpose tool in search of a successful niche: I never figured out what that niche was. I had a lot of learning to do about product-market fit. But hydraulic simulation is a real market, and these folks are real experts.

There are other markets like this, I’m sure, and using Placemark code as a base could be a good way to jumpstart those companies - it’s not perfect, as I’ve documented extensively, but it could be good as a first step.

And the folks at Iterating have even contributed changes upstream to the open source Placemark codebase, but that isn’t required by the MIT license at all: that’s just because they’re cool folks.

They’ve also open sourced the core epanet-js library, and released the web application under the Functional Source License, which makes new code contributions open source after two years.

And epanet-js is a tool that you can run in a browser - full simulations with a WASM-based engine. It’s competing with expensive old-school software that costs $16,000 a year, runs exclusively on Windows, is priced by “pipes”, and uses the same engine, EPANET. This is so much better in comparison. A radical improvement.

Congrats to Luke and Sam for creating epanet-js. It’s super cool. If you speak hydraulic simulation or know someone who does, try it out!

Recently

6 June 2025 at 00:00

A little late on this one, but I got around to it!

Reading

I got stuck on two books: books that I want to enjoy but can’t get any momentum on. So my reading “stats” are suffering and this is a light year for books so far. But I switched gears to read Glass Century by Ross Barkan, whose newsletter is my primary news source for local political intrigue. It’s great so far.

As the culture of the Who Cares Era grinds towards the lowest common denominator, support those that are making real things. Listen to something with your full attention. Watch something with your phone in the other room. Read an actual paper magazine or a book.

Dan Sinker’s Who Cares Era is great writing and a message that I believe in in my bones. Caring a lot kind of comes naturally to me, and nihilism of all kinds is my natural enemy.

Speaking of which: if you’re in New York, there’s an election coming up. Please care about it, vote, and don’t rank Cuomo. It really matters.

…I quoted Hannah Arendt’s observation that the Germans who refused to participate in Nazi atrocities were those who possessed “the disposition to live together explicitly with oneself, to have intercourse with oneself, that is, to be engaged in that silent dialogue between me and myself which, since Socrates and Plato, we usually call thinking.” To engage in that “silent dialogue,” you need to be capable of thinking alone — or not precisely alone, but in solitude. Because as Arendt notes elsewhere, in The Human Condition: “To be in solitude means to be with one’s self, and thinking, therefore though it may be the most solitary of all activities, it is never altogether without a partner and without company.”

From you’ll never think alone by Ned Resnikoff. Not to brag, but even as I’ve become a more social person, my comfort with solitude has stayed the same. When I rode from Brooklyn to Brewster, I did so alone, with no music, no podcasts, few breaks. It felt good: I’ve done the same before and I like it; the silence is comfortable. Thinking in solitude is the default state. But I can see how it isn’t for everyone, how consulting a chatbot for personal questions is where people end up. It’s just sad, though.

I think there’s a role for AI in generating marketing assets. In Buzz, you can generate images and you can write text. What I have not seen yet is a world where the models can generate content that a brand team would be really proud of. Maybe that’s coming, but it seems further away than you might expect.

I was kind of impressed by this interview with Dylan Field, Figma co-founder, on their new AI-flavored tools. For the CEO role, there’s only one accepted stance toward AI and it is wild-eyed enthusiasm. Even hedging a little bit, like he does here, is unusual and good.

Listening

What a banner month for albums!

The biggest hit was Slow Mass’s On Foot. It’s such a solid album. Almost every track has 5 stars in my Swinsian library.

I saw Nils Frahm perform at Kings Theatre and instantly became a fan. His albums are so handcrafted and lovely.

And then it dropped that Forth Wanderers are returning with another album. I started to listen to them right around the time that they took a very long break in 2018, and about seven years later they’re back. A lot like Slow Mass, they’re just a great combination of instrument and voice, and they really capture a mood.

Finally, I really like this Julian Cubillos track, which I found via Hearing Things.

Technology Optimism Hour: Fediverse edition

Last week I got the idea to write an update of my Technology Optimism Hour from 2022. What’s the cool new stuff in the technology industry that’s sprouted up since then? And then I hit some writer’s block, or rather I couldn’t think of much.

So I asked around on Mastodon and got some interesting answers! Refer over there for some good ideas from the folks on the ‘fediverse.’ A lot of the technologies listed are technically pre-2022, but are hitting their stride now. A few of the answers:

Good answers! There’s a bit of overlap with my last writing and a lot of these are pre-2022, but the path of technology is complicated and it’s very true that something like 3D printing has had a lot of eras and every new step toward mainstreaming is exciting.

I’m probably not buying a 3D printer though, for reasons of Brooklyn apartment sizes, so the main thing on this list that I need to dive into is jj. A lot of smart people love it and I trust that it’s great.

Elsewhere

I wrote quite a lot of micro posts this month, on chatbots, New York elections, Blog micro-optimization, LeaderKey, expertise, and a ThinkUp clone that I built on Val Town.

I also wrote a little AMA app on Val Town, so you can ask something - don’t be weird. I do have a slight addendum to my answer to one thing to change the world. I think it’s Direct File. Direct File - the free tax-filing service that has been a longtime dream of good politicians and the enemy of horrifying monopolies and bad politicians: that’s it. The slow, hard work of building government services that work. Not carelessly thrown-together code that doesn’t work and only aims to achieve a vacuous libertarian budget-cutting ideology. I don’t believe in “one weird trick” anymore, if I ever did. I want the expansive vision of city-owned grocery stores and functioning government services.

Journalism

This month one of my new music ideas came from Hearing Things, an independent music publication that I recently subscribed to. It’s worker-owned and consistently high-quality. What else is like this?

In New York, you have to subscribe to Hell Gate and donate to The City, both of which are doing awesome work. The aforementioned Political Currents by Ross Barkan is an invaluable source for understanding the finer points of city politics, like where the Working Families Party fits alongside New York’s Democrats.

404 Media does amazing coverage of technology-adjacent topics: it’s great to listen to their podcast and hear folks with more than enough knowledge to get the technical specifics right, as well as really high standards for validating information.

Colossal is great independent art writing. My friend David writes an extremely good newsletter about the alcohol industry as well as a local outlet for Richmond, VA.

Watch the celebrations, on mute

By: VM
15 July 2025 at 05:59

Right now, Shubhanshu Shukla is on his way back to Earth from the International Space Station. Am I proud he’s been the first Indian up there? I don’t know. It’s not clear.

The whole thing seemed to be stage-managed. Shukla didn’t say anything surprising, nothing that popped. In fact he said exactly what we expected him to say. Nothing more, nothing less.

Fuck controversy. It’s possible to be interesting in new ways all the time without edging into the objectionable. It’s not hard to beat predictability — but there it was for two weeks straight. I wonder if Shukla was fed all his lines. It could’ve been a monumental thing but it feels… droll.

“India’s short on cash.” “India’s short on skills.” “India’s short on liberties.” We’ve heard these refrains as we’ve covered science and space journalism. But it’s been clear for some time now that “India’s short on cash” is a myth.

We’ve written and spoken over and over that Gaganyaan needs better accountability and more proactive communication from ISRO’s Human Space Flight Centre. But it’s also true that it needs even more money than the Rs 20,000 crore it’s already been allocated.

One thing I’ve learnt about the Narendra Modi government is that if it puts its mind to it, if it believes it can extract political mileage from a particular commitment, it will find a way to go all in. So when it doesn’t, the fact that it doesn’t sticks out. It’s a signal that The Thing isn’t a priority.

Looking at the Indian space programme through the same lens can be revealing. Shukla’s whole trip and back was carefully choreographed. There’s been no sense of adventure. Grit is nowhere to be seen.

But between Prime Minister Modi announcing his name in the list of four astronaut-candidates for Gaganyaan’s first crewed flight (currently set for 2027) and today, I know marginally more about Shukla, much less about the other three, and nothing really personal to boot. Just banal stuff.

This isn’t some military campaign we’re talking about, is it? Just checking.

Chethan Kumar at ToI and Jatan Mehta have done everyone a favour: one by reporting extensively on Shukla’s and ISRO’s activities and the other by collecting even the most deeply buried scraps of information from across the internet in one place. The point, however, is that it shouldn’t have come to this. Their work is laborious, made possible by the fact that it’s by far their primary responsibility.

It needed to be much easier than this to find out more about India’s first homegrown astronauts. ISRO itself has been mum, so much so that every new ISRO story is turning out to be an investigative story. The details of Shukla’s exploits needed to be interesting, too. They haven’t been.

So now, Shukla’s returning from the International Space Station. It’s really not clear what one’s expected to be excited about…

Featured image credit: Ray Hennessy/Unsplash.

A blog questions challenge

By: VM
14 July 2025 at 05:07

I hadn’t checked my notifications on X.com in a while. When I did yesterday, I found Pradx had tagged me in a blog post called “a challenge of blog questions” in March. The point is to answer a short list of questions about my blogging history, then tag other bloggers to carry the enterprise forward. With thanks to Pradx, here goes.

Why did you start blogging in the first place?

I started blogging for two reasons in 2008. I started writing itself when I realised it helps me clarify my thoughts, then I started publishing my writing on the web so I could share those thoughts with my friends in different parts of the world. My blog soon gave me a kind of third space on the internet, a separate world I could escape to as I laboured through four years of engineering school, which I didn’t like at the time.

What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?

I’ve blogged on Xanga, Blogspot, Typed, Movable Type, various static site generators, Svbtle, Geocities, Grav, October, Mataroa, Ghost, and WordPress. And I’ve always found myself returning to WordPress, which — despite its flaws — allows me to have just the kind of blog I’d like to in terms of look, feel, spirit, and community. The last two are particularly important. Ghost comes a close second to WordPress but it’s too magaziney. The options to host Ghost are also (relatively) more expensive.

Earlier this year, Matt Mullenweg of Automattic tested my support for WordPress.com with his words and actions vis-à-vis his vendetta against WP Engine but the sentiments and conversations in the wider WordPress community encouraged me to keep going.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

I used to love WordPress’s Calypso interface and its WYSIWYG editor both on desktop and mobile and used to use that to compose posts. But then WordPress ‘upgraded’ to the blocks-based Gutenberg interface, which made composing a jerky, clunky, glitchy process. At that point I tried a combination of different local editors, including Visual Studio Code, iA Writer, and Obsidian.md. Each editor provided an idiosyncratic environment: e.g. VS Code seemed like a good environment in which to compose technical posts, Obsidian (with its dark UI) for angry/moody ones, and iA Writer for opinionated ones with long sentences and complex thoughts.

Then about three years ago I discovered MarsEdit and have been using it for all kinds of posts since. I particularly appreciate its old-school-like interface, that it’s built to work with WordPress, and the fact that it maintains an offline archive of all the posts on the blog.

When do you feel most inspired to write?

I’ve answered this question before in conversations with friends and every time my answer has prompted them to wonder if I’m lying or mocking them.

When I feel most inspired to write is not in my control. I’ve been writing for so long that it’s become a part of the way I think. If I have a thought and I’m not able to articulate it clearly in writing, it’s a sign for me that the thought is still inchoate. In this paradigm, whenever I have a fully formed thought that I think could help someone else think about or through something, I enter a half-trance-like state, where my entire brain is seized of the need to write and I’m only conscious enough to open MarsEdit and start typing.

In these circumstances my ability to multi-task even minor activities, like typing with one hand while sipping from a mug of tea in the other, vanishes.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

That depends on what I’m writing about. When I draft posts in the ‘Op-eds’ or ‘Science’ categories, I’m usually more clear-headed and confident about my post’s contents, and publish as soon as the post is ready. For ‘Analysis’ and ‘Scicomm’ posts, however, I distract myself for about 30 minutes after finishing a draft and read it again to make sure there aren’t any holes in my arguments.

I also have a few friends who peer-review my posts if I’m not sure I’ve articulated myself well or if I’m not able to think through the soundness of my own arguments by myself (usually because I suspect there’s something I don’t know). Four of the most frequent reviewers are Thomas Manuel, Srividya Tadepalli, Mahima Jain, and Chitralekha Manohar.

In all these cases, however, I do read the post a couple times more after it’s finished to fix grammar and clumsy sentence constructions.

What’s your favorite post on your blog?

No such thing. 🙂

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

I’m not keen on major redesigns. There are so many WordPress themes available off the shelf and for free these days. I change my blog’s theme depending on my mood. I don’t think it makes a difference to whether or how people read my posts. I think those that have been reading will continue to read. The text is paramount.

I don’t see myself moving to another platform either. If anything, I might move from WordPress.com to a self-hosted setup in future but it’s not something I’m thinking of right now.

I am currently in the process of removing duplicated posts in the archives — at last count I spotted about 20. Many posts are also missing images I’d added at the time of publishing, mostly because they were associated with a domain that I no longer use. I need to fix that.

A few years ago I lost around 120 posts after someone managed to hack my account when the blog was hosted with a provider of cPanel hosting services. I maintain a long-term backup of all my posts on a Backblaze dump. I’m still in the process of identifying which posts I lost and retrieving them from the archive.

So yeah, focusing on this clean-up right now.

Who’s next?

This is embarrassing: I only know a few other bloggers. I stopped keeping track after many bloggers I’d been following in the early years just stopped at some point. Right now, of those blogs I still follow, Jatan and Pradx have already been nominated for this ‘challenge’. So let me nominate Suvrat Kher and Dhiya Gerber next, both of whom I think will have interesting answers.

Featured image credit: Chris Briggs/Unsplash.

Sharks don’t do math

By: VM
13 July 2025 at 12:42

From ’Sharks hunt via Lévy flights’, Physics World, June 11, 2010:

They were menacing enough before, but how would you feel if you knew sharks were employing advanced mathematical concepts in their hunt for the kill? Well, this is the case, according to new research, which has tracked the movement of these marine predators along with a number of other species as they foraged for prey in the Pacific and Atlantic oceans. The results showed that these animals hunt for food by alternating between Brownian motion and Lévy flights, depending on the scarcity of prey.

Animals don’t use advanced mathematical concepts. This statement encompasses many humans as well because it’s not a statement about intelligence but one about language and reality. You see a shark foraging in a particular pattern. You invent a language to efficiently describe such patterns. And in that language your name for the shark’s pattern is a Lévy flight. This doesn’t mean the shark is using a Lévy flight. The shark is simply doing what makes sense to it, but which we — in our own description of the world — call a Lévy flight.

The Lévy flight isn’t an advanced concept either. It’s a subset of a broader concept called the random walk. Say you’re on a square grid, like a chessboard. You’re standing on one square. You can move only one step at a time. You roll a four-sided die. Depending on the side it lands on, you step one square forwards, backwards, to the right or to the left. The path you trace over time is called a random walk because its shape is determined by the die roll, which is random.

An example of a two-dimensional random walk traced over 2,500 steps.

There are different kinds of walks depending on the rule that determines the choice of your next step. A Lévy flight is a random walk that varies both the direction of the next step and the length of the step. In the random walk on the chessboard, you took steps of fixed lengths: to the adjacent squares. In a Lévy flight, the direction of the next step is random and the length is picked at random from a Lévy distribution. This is what the distribution looks like:

The probability density function of the Lévy distribution for different values of the parameter c.

Notice how a small part of each curve (for different values of c in the distribution’s function) has high values and the majority has smaller values. When you pick your step length at random from, say, the red curve, you have higher odds of picking a shorter step length than a longer one. This means that in a Lévy flight most of the step lengths will be short but a small number of steps will be long, so the ‘flight’ looks like tight clusters of short hops connected by the occasional long jump.
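To make the contrast concrete, here is a small sketch of my own (not from the original post), using numpy and scipy: it generates a fixed-step walk on a grid and a Lévy flight whose step lengths come from scipy’s Lévy distribution, then prints a few statistics.

```python
# A rough sketch: fixed-step random walk vs Lévy flight.
# Requires numpy and scipy; not taken from the post.
import numpy as np
from scipy.stats import levy

rng = np.random.default_rng(0)
n_steps = 2000

# Chessboard-style walk: one square forwards, backwards, left, or right,
# chosen by a four-sided 'die' at each step.
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
grid_walk = np.cumsum(moves[rng.integers(0, 4, n_steps)], axis=0)

# Lévy flight: a random direction at each step, with the step length drawn
# from a Lévy distribution -- mostly short hops, occasionally a huge jump.
angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
lengths = levy.rvs(size=n_steps, random_state=rng)
steps = np.column_stack((lengths * np.cos(angles), lengths * np.sin(angles)))
levy_flight = np.cumsum(steps, axis=0)

print("grid walk net displacement:  ", np.linalg.norm(grid_walk[-1]))
print("Lévy flight net displacement:", np.linalg.norm(levy_flight[-1]))
print("median Lévy step length:     ", np.median(lengths))
print("longest Lévy step length:    ", lengths.max())
```

Run it a few times and the longest Lévy step dwarfs the median one, which is exactly the mostly-short-steps-with-rare-long-jumps character described above.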

Sharks and many other animals have been known to follow a Lévy flight when foraging. To quote from an older post:

Research has shown that the foraging path of animals looking for food that is scarce can be modelled as a Lévy flight: the large steps correspond to the long distances towards food sources that are located far apart and the short steps to finding food spread in a small area at each source.

Brownian motion is a more famous kind of random walk. It’s the name for the movement of an object that’s following the Wiener process. This means the object’s path needs to obey the following five rules (from the same post):

(i) Each increment of the process is independent of other (non-overlapping) increments;

(ii) How much the process changes over a period of time depends only on the duration of the period;

(iii) Increments in the process are randomly sampled from a Gaussian distribution;

(iv) The process has a statistical mean equal to zero;

(v) The process’s covariance between any two time points is equal to the lower variance at those two points (variance denotes how quickly the value of a variable is spreading out over time).

Thus Brownian motion models the movement of pollen grains in water, dust particles in the air, electrons in a conductor, and colloidal particles in a fluid, as well as the fluctuation of stock prices, the diffusion of molecules in liquids, and population dynamics in biology. That is, all these processes in disparate domains evolve at least in part according to the rules of the Wiener process.
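As a minimal illustration of rules (i) to (iv) (my own sketch, not from the post), a Wiener-process path can be simulated by summing independent Gaussian increments whose variance is proportional to the duration of each time step:

```python
# Minimal Brownian motion / Wiener process sketch (not from the post).
# Requires numpy.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01           # duration of each increment
n_steps = 10_000    # total time T = n_steps * dt
n_runs = 500        # simulate many independent paths

# Rule (iii): increments are sampled from a Gaussian distribution.
# Rule (ii): how much the process changes depends only on the duration,
#            so each increment has variance dt (standard deviation sqrt(dt)).
# Rule (i):  increments are independent, so a path is their running sum.
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=(n_runs, n_steps))
paths = np.cumsum(increments, axis=1)

T = n_steps * dt
print("mean of W(T) across runs (rule iv, should be ~0):", paths[:, -1].mean())
print("variance of W(T) across runs (should be ~T =", T, "):", paths[:, -1].var())
```

Rule (v) shows up if you compare positions at two times: their covariance comes out close to the earlier time, i.e. the smaller of the two variances.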

Still doesn’t mean a shark understands what a Lévy flight is. By saying “sharks use a Lévy flight”, we also discard in the process how the shark makes its decisions — something worth learning about in order to make more complete sense of the world around us rather than force the world to make sense only in those ways we’ve already dreamt up. (This is all the more relevant now with #sharkweek just a week away.)

I care so much because metaphors are bridges between language and reality. Even if the statement “sharks employ advanced mathematical concepts” doesn’t feature a metaphor, the risk it represents hews close to one that stalks the use of metaphors in science journalism: the creation of false knowledge.

Depending on the topic, it’s not uncommon for science journalists to use metaphors liberally, yet scientists have not infrequently upbraided them for using the wrong metaphors in some narratives or for not alerting readers to the metaphors’ limits. This is not unfair: while I disagree with some critiques along these lines for being too pedantic, in most cases it’s warranted. As science philosopher Daniel Sarewitz put it in that 2012 article:

Most people, including most scientists, can acquire knowledge of the Higgs only through the metaphors and analogies that physicists and science writers use to try to explain phenomena that can only truly be characterized mathematically.

Here’s The New York Times: “The Higgs boson is the only manifestation of an invisible force field, a cosmic molasses that permeates space and imbues elementary particles with mass … Without the Higgs field, as it is known, or something like it, all elementary forms of matter would zoom around at the speed of light, flowing through our hands like moonlight.” Fair enough. But why “a cosmic molasses” and not, say, a “sea of milk”? The latter is the common translation of an episode in Hindu cosmology, represented on a spectacular bas-relief panel at Angkor Wat showing armies of gods and demons churning the “sea of milk” to produce an elixir of immortality.

For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

A metaphor is not the thing itself and shouldn’t be allowed to masquerade as such.

Just as well, there are important differences between becoming aware of something and learning it, and a journalist may require metaphors only to facilitate the former. Toeing this line also helps journalists tame the publics’ expectations of them.

Featured image credit: David Clode/Unsplash.

The hidden heatwave

By: VM
13 July 2025 at 10:22

A heatwave is like the COVID-19 virus. During the pandemic, the virus infected and killed many people. When vaccines became available, the mortality rate dropped even though the virus continued to spread. But vaccines weren’t the only way to keep people from dying. The COVID-19 virus killed more people if those people were already unhealthy.

In India, an important cause of people being unhealthy is the state itself. In many places, the roads are poorly laid, so traffic kicks dust up into the air, where it joins the PM2.5 particles emitted by industrial facilities allowed to set up shop near residential and commercial areas without proper emission controls. If this is one extreme (and these experiences are so common for so many Indians), at the other is the state’s apathy towards public health. India’s doctor-to-patient ratio is dismal; hospitals are understaffed and under-equipped; drug quality is so uneven as to be a gamble; insurance coverage is iffy and unclear; privatisation is increasing; and the national government’s financial contribution towards public health is in free fall.

For these reasons as well, and not just because of vaccine availability or coverage, the COVID-19 virus killed more people than it should have been able to. A person’s vulnerability to this or any other infection is thus determined by their well-being — which is affected both by explicit factors like a new pathogen in the population and implicit factors like the quality of healthcare they have been able to access.

A heatwave resembles the virus for the same reason: a person’s vulnerability to high heat is determined by their well-being — which in turn is affected by the amount of ambient heat and relative humidity as well as the extent to which they are able to evade the effects of that combination. This weekend, a new investigative effort by a team of journalists at The Hindu (including me) has reported just this fact, but for the first time with ground-zero details that people in general, and perhaps even the Tamil Nadu government itself, have thus far only presumed to be the case. Read it online, in the e-paper or in today’s newspaper.

The fundamental issues are two-pronged. First, Tamil Nadu’s policies on protecting people during heatwaves apply only once the weather department has declared a heatwave. Second, even when there is no declared heatwave, many people, but especially the poorer among them, consistently suffer heatwave conditions. (Note: I’m criticising Tamil Nadu here because it’s my state of residence and equally because it’s one of a few states actually paying as much attention to public health, of which heat safety is an important part, as it is to economic growth.)

The net effect is for people to suffer their private but nonetheless very real heatwave conditions without enjoying the support the state has promised for people in these conditions. The criticism also indicts the state for falling short on enforcing other heat-related policies that leave the vulnerable even more stranded.

The corresponding measures include (i) access to clean toilets, a lack of which forces people — but especially women, who can’t urinate in public the way men are known to — to drink less water and suppress their urges to urinate, risking urinary tract infections; (ii) access to clean and cool drinking water, a paucity of which forces people to pay out of their pockets to buy chilled water or beverages, reducing the amount of money they have left for medical expenses as well as risking the ill health that comes with consuming aerated and/or sugary beverages; and (iii) state-built quarters that pay meaningful attention to ventilating living spaces, which when skipped exposes people to humidity levels that prevent their bodies from cooling by sweating, rendering them more susceptible to heat-related illnesses.

And as The Hindu team revealed, these forms of suffering are already playing out.

The India Meteorological Department defines a heatwave based on how much the temperature deviates from a historical average. But this is a strictly meteorological definition that doesn’t account for the way class differences create heatwave-like conditions. These conditions kick in as a combination of temperature and humidity, and as the report shows, even normal temperatures can induce them if the relative humidity is high enough and/or if an individual is unable to cool themselves. The state has a significant role to play in the latter. Right now, it needs to abandon the strictly meteorological definition of heatwaves in its policy framework and instead develop a more holistic sociological definition.

Featured image credit: Austin Curtis/Unsplash.

Quantum clock breaks entropy barrier

By: VM
12 July 2025 at 12:21

In physics, the second law of thermodynamics says that a closed system tends to become more disordered over time. This disorder is captured in an entity called entropy. Many devices, especially clocks, are affected by this law because they need to tick regularly to measure time. But every tick creates a bit of disorder, i.e. increases the entropy, and physicists have believed for a long time now that this places a fundamental limit on how precise a clock can be. The more precise you want your clock, the more entropy (and thus more energy) you’ll have to expend.

A study published in Nature Physics on June 2 challenges this wisdom. In it, researchers from Austria, Malta, and Sweden asked if the second law of thermodynamics really set a limit on a clock’s precision and came away, surprisingly, with a design for a new kind of quantum clock that’s more precise than scientists once believed possible for the amount of energy it spends to achieve that precision.

The researchers designed this clock using a spin chain. Imagine a ring made of several quantum sites, like minuscule cups. Each cup can hold an excitation — say, a marble that can hop from cup to cup. This excitation moves around the ring and every time it completes a full circle, the clock ticks once. A spin chain is, broadly speaking, a series of connected quantum systems (the sites) arranged in a ring and the excitation is a subatomic particle or packet of energy that moves from site to site.

In most clocks, every tick is accompanied by the dissipation of some energy and a small increase in entropy. But in the model in the new study, only the last link in the circle, where the last quantum system was linked to the first one, dissipated energy. Everywhere else, the excitation moved without losing energy, like a wave gliding smoothly around the ring. The movement of the excitation in this lossless way through most of the ring is called coherent transport.

The researchers used computer simulations to help them adjust the hopping rates — or how easily the excitation moved between sites — and thus to make the clock as precise as possible. They found that the best setup involved dividing the ring into three regions: (i) in the preparation ramp, the excitation was shaped into a wave packet; (ii) in the bulk propagation phase, the wave packet moved steadily through the ring; and (iii) in the boundary matching phase, the wave packet was reset for the next tick.

The team measured the clock’s precision as the number of ticks it completed before it was one tick ahead or behind a perfect clock. Likewise, team members defined the entropy per tick to be the amount of energy dissipated per tick. Finally, the team compared this quantum clock to classical clocks and other quantum models, which typically show a linear relationship between precision and entropy: e.g. if the precision doubled, the entropy doubled as well.

The researchers, however, found that the precision of their quantum clock grew exponentially with entropy. In other words, if the amount of entropy per tick increased only slightly, the precision increased by a big leap. It was proof that, at least in principle, it’s possible to build a clock to be arbitrarily precise while keeping the system’s entropy down, all without falling afoul of the second law.

That is, contrary to what many physicists thought, the second law of thermodynamics doesn’t strictly limit a clock’s precision, at least not for quantum clocks like this one. The clock’s design allowed it to sidestep the otherwise usual trade-off between precision and entropy.

During coherent transport, the process is governed only by the system’s Hamiltonian, i.e. the rules for how energy moves in a closed quantum system. In this regime, the excitation acts like a wave that spreads smoothly and reversibly, without losing any energy or creating any disorder. Imagine a ball rolling on a perfectly smooth, frictionless track. It keeps moving without slowing down or heating up the track. Such a thing is impossible in classical mechanics, like in the ball example, but it’s possible in quantum systems. The tradeoff of course is that the latter are very small and very fragile and thus harder to manipulate.

In the present study, the researchers have proved that it’s possible to build a quantum clock that takes advantage of coherent transport to tick while dissipating very little energy. Their model, the spin chain, uses a Hamiltonian that only allows the excitation to coherently hop to its nearest neighbour. The researchers engineered the couplings between the sites in the preparation ramp part of the ring to shape the excitation into a traveling wave packet that moves predominantly in the forward direction.

This tendency to move in only one direction is further bolstered at the last link, where the last site is coupled to the first. Here, the researchers installed a thermal gradient — a small temperature difference that encouraged the wave to restart its journey rather than be reflected and move backwards through the ring. When the excitation crossed this thermodynamic bias, the clock ticked once and also dissipated some energy.

Three points here. First, remember that this is a quantum system. The researchers are dealing with energy (almost) at its barest, manipulating it directly without having to bother with an accoutrement of matter covering it. In the classical regime, such accoutrements are unavoidable. For example, if you have a series of cups and you want to make an excitation hop through it, you do so with a marble. But while the marble contains the (potential) energy that you want to move through the cups, it also has mass and it dissipates energy whenever it hops into a cup, e.g. it might bounce when it lands and it will release sound when it strikes the cup’s material. So while the marble metaphor earlier might have helped you visualise the quantum clock, remember that the metaphor has limitations.

Second, for the quantum clock to work as a clock, it needs to break time-reversal symmetry (a concept I recently discussed in the context of quasicrystals). Say you remove the thermodynamic bias at the last link of the ring and replace it with a regular link. In this case the excitation will move randomly — i.e. at each step it will randomly pick the cup to move to, forward or backward, and keep going. If you reversed time, the excitation’s path will still be random and just evolve in reverse.

However, the final thermodynamically biased link causes the excitation to acquire a preference for moving in one direction. The system thus breaks time-reversal symmetry because even if you reverse the flow of time, the system will encourage the excitation to move in one direction and one direction only. This in turn is essential for the quantum system to function like a clock. That is, the excitation needs to traverse a fixed number of cups in the spin chain and then start from the first cup. Only between these two stages will the system count off a ‘tick’. Breaking time-reversal symmetry thus turns the device into a clock.

Three, the thermodynamic bias ensures that the jump from the last site to the first is more likely than the reverse, and the entropy is the cost the system pays in order to ensure the jump. Equally, the greater the thermodynamic bias, the more likely the excitation is to move in one direction through the spin chain as well as make the jump in the right direction at the final step. Thus, the greater the thermodynamic bias, the more precise the clock will be.
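To get a feel for why a stronger bias makes a better clock, here is a purely classical caricature of my own; it is emphatically not the paper’s coherent quantum model, which pays its thermodynamic cost only at the final link. A walker hops around a ring, moving forward with probability p and backward otherwise, and each completed forward loop counts as a tick.

```python
# Classical caricature only: biased hopping on a ring, counting 'ticks'.
# Not the paper's quantum clock (which is coherent except at one link).
import numpy as np

def tick_lengths(p_forward, n_sites=20, n_ticks=1000, seed=0):
    """Hops needed for each completed forward loop around the ring."""
    rng = np.random.default_rng(seed)
    position, hops, ticks = 0, 0, []
    while len(ticks) < n_ticks:
        hops += 1
        position += 1 if rng.random() < p_forward else -1
        if position == n_sites:       # completed a forward loop: one tick
            ticks.append(hops)
            position, hops = 0, 0
        elif position == -n_sites:    # drifted all the way backwards: reset
            position, hops = 0, 0
    return np.array(ticks)

for p in (0.55, 0.75, 0.95):
    t = tick_lengths(p)
    print(f"bias p={p}: mean hops per tick {t.mean():.1f}, "
          f"relative jitter {t.std() / t.mean():.2f}")
```

The relative jitter of the tick length falls as the bias grows, mirroring the qualitative point above: a larger bias buys a more regular clock. The exponential precision-entropy scaling the researchers report is a genuinely quantum result that this toy does not reproduce.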

The new study excelled by creating a sufficiently precise clock while minimising the entropy cost.

According to the researchers, its design could help build better quantum clocks, which are important for quantum computers and quantum communication, and for ultra-precise measurements of the kind demanded by atomic clocks. The clock’s ticks could also be used to emit single photons at regular intervals — a technology increasingly in demand for its use in quantum networks of the sort China, the US, and India are trying to build.

But more fundamentally, the clock’s design — which confines energy dissipation to a single link and uses coherent transport everywhere else — and that design’s ability to evade the precision-entropy trade-off challenges a longstanding belief that the second law of thermodynamics strictly limits precision.

Featured image credit: Meier, F., Minoguchi, Y., Sundelin, S. et al. Nat. Phys. (2025).

A new beast: antiferromagnetic quasicrystals

By: VM
11 July 2025 at 09:38

Scientists have made a new material that is both a quasicrystal and antiferromagnetic — a combination never seen before.

Quasicrystals are a special kind of solid. Unlike normal crystals, whose atoms are arranged in repeating patterns, quasicrystals have patterns that never exactly repeat but which still have an overall order. While regular crystals have simple repeating symmetries, quasicrystals have unusual rotational ones.

For decades, scientists wondered if certain kinds of magnetism, but especially antiferromagnetism, could exist in these strange materials. In all materials the electrons have a property called spin. It’s as if a small magnet is embedded inside each electron. The spin denotes the direction of this magnet’s magnetic field. In ferromagnets, the spins are aligned in a common direction, so the materials are attracted to magnets. In antiferromagnetic materials, the electron spins line up in alternating directions, so their effects cancel out.
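A toy way to see the cancellation (my own illustration, not from the article) is to add up the spins of a short one-dimensional chain in both arrangements:

```python
# Toy illustration: net moment of aligned vs alternating spins. Requires numpy.
import numpy as np

n_spins = 10
ferro = np.ones(n_spins)                                   # up, up, up, ...
antiferro = np.array([(-1) ** i for i in range(n_spins)])  # up, down, up, down, ...

print("ferromagnet net moment:    ", ferro.sum())       # adds up to 10
print("antiferromagnet net moment:", antiferro.sum())   # cancels to 0
```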

While antiferromagnetism is common in regular crystals, it’s thus far never been observed in a true quasicrystal.

The new study is the first to show clear evidence of antiferromagnetic order in a real, three-dimensional quasicrystal — one made of gold, indium, and europium. The findings were published in Nature Physics on April 14.

The team confirmed such a material is real by carefully measuring how its atoms and spins are arranged and by observing how it behaves at low temperatures. Their work shows that even in the weird world of quasicrystals, complex magnetic order is possible, opening the door to new discoveries and technologies.

The scientists created a new alloy with the formula Au56In28.5Eu15.5. This means in 1,000 atoms’ worth of the material, 560 will be gold, 285 will be indium, and 155 will be europium. The composition tells us that the scientists were going for a particularly precise combination of these elements — which they could have known in one of two ways. It might have been trial-and-error*, but that makes research very expensive, or the scientists had reasons to expect antiferromagnetic order would appear in this material.

They did. Specifically, the team focused on Au56In28.5Eu15.5 because of its (i) unique positive Curie-Weiss temperature and (ii) rare-earth content, and (iii) because its structural features matched the theoretical criteria for stable antiferromagnetic order. Previous studies focused on quasicrystals containing rare-earth elements because they often have strong magnetic interactions. However, these compounds typically displayed a negative Curie-Weiss temperature, indicating dominant antiferromagnetic interactions but resulting only in disordered magnetic states.

A positive Curie-Weiss temperature indicates dominant ferromagnetic interactions. In this case, however, it also suggested a unique balance of magnetic forces that could potentially stabilise antiferromagnetic order rather than spin-glass behaviour. Studies on approximant crystals — periodic structures closely related to quasicrystals — had also shown that both ferromagnetic and antiferromagnetic orders are stabilised only when the Curie-Weiss temperature is positive. In contrast, a negative temperature led to spin-glass states.

The scientists of the new study noticed that the Au-In-Eu quasicrystal fit into the positive Curie-Weiss temperature category, making it a promising candidate to have antiferromagnetic order.

For added measure, by slightly altering the composition, e.g. adding an impurity to increase the electron-per-atom ratio, the scientists could make the antiferromagnetic phase disappear, to be replaced by spin-glass behaviour. This sensitivity to electron concentration further hinted that the composition of the alloy was at a sweet spot for stabilising antiferromagnetism.

Finally, the team had also recently discovered ferromagnetic order in some similar gold-based quasicrystals with rare-earth elements. The success encouraged them to explore the magnetic properties of new compositions, especially those with unusual Curie-Weiss temperatures.

The Au-In-Eu quasicrystal is also a Tsai-type icosahedral quasicrystal, meaning it features a highly symmetric atomic arrangement. Theoretical work has suggested that such structures could support antiferromagnetic order in the right conditions, especially if the atoms occupied specific sites in the lattice.

To make the alloy, the scientists used a technique called arc-melting, where highly pure metals are melted together using an electric arc, then quickly cooled to form the solid quasicrystal. To ensure the mixture was even, the team melted and flipped the sample several times.

Then they used X-ray and electron diffraction to check the atomic arrangement. These techniques passed X-rays and electrons through the material. A detector on the other side picked up the radiation scattered by the material’s atoms and used it to recreate their arrangement. The patterns showed the material was a primitive icosahedral quasicrystal, a structure with 20-sided symmetry and no repeating units.

The team also confirmed the special arrangement of atoms by checking that the diffraction patterns followed mathematical rules specific to quasicrystals. Team members then used a magnetometer to track how much the material was magnetised when exposed to a magnetic field, at temperatures from as low as 0.4 K up to 300 K. Finally, they measured the material’s specific heat, i.e. the amount of heat energy it took to raise its temperature by 1º C. This reading can show signs of magnetic transitions.

Left: The arrangement of atoms in the quasicrystal alloy. The atoms are arranged in a combination of two patterns, shown on the right. The colouring denotes their place in either pattern rather than different elements. Credit: Nature Physics volume 21, pages 974–979 (2025)

To confirm how the spins inside the material were arranged, the team used neutron diffraction. Neutrons are adept at passing through materials and are sensitive to both atoms’ positions and magnetic order. By comparing patterns at temperatures above and below the suspected transition point, they could spot the appearance of new peaks that signal magnetic order.

This way, the team reported that at 6.5 K, the magnetisation curve showed a sharp change, known as a cusp. This is a classic sign of an antiferromagnetic transition, where the material suddenly changes from being unordered to having a regular up-and-down pattern of spins. The specific heat also showed a sharp peak at this temperature, confirming something dramatic was happening inside the material.

The scientists also reported that there was no sign of spin-glass behaviour — where the spins are pointing in random directions but unchanging — which is common in other magnetic quasicrystals.

Below 6.5 K, new peaks appeared in the neutron diffraction data, evidence that the spins inside the material were lining up in the regular but alternating pattern characteristic of antiferromagnetic order. The peaks were also sharp and well-defined, showing the order was long-range, meaning it extended throughout the material and was not confined to small patches.

The team also experimented by adding a small amount of tin to the alloy, which changed the balance of electrons. This little change caused the material to lose its antiferromagnetic order and become a spin glass instead, showing how delicate the balance is between different magnetic states in quasicrystals.

The findings are important because this is the first time scientists have observed antiferromagnetic order in a real, three-dimensional quasicrystal, settling a long-standing debate. They also open up a new field of study, of quasiperiodic antiferromagnets, and suggest that by carefully tuning the composition, scientists may be able to find yet other types of magnetic order in quasicrystals.

“The present discovery will stimulate both experimental and theoretical efforts to elucidate not only its unique magnetic structure but also the intrinsic properties of the quasiperiodic order parameter,” the scientists wrote in their paper. “Another exciting aspect of magnetically ordered quasicrystals is their potential for new applications such as functional materials in spintronics” — which use electron spins to store and process information in ultra-fast computers of the future.


* Which is not the same as serendipity.

Featured image credit: Nature Physics volume 21, pages 974–979 (2025).

Tracking the Meissner effect under pressure

By: VM
5 July 2025 at 11:32

In the last two or three years, groups of scientists from around the world have made several claims that they had discovered a room-temperature superconductor. Many of these claims concerned high-pressure superconductors — materials that superconduct electricity at room temperature but only if they are placed under extreme pressure (a million atmospheres’ worth). Yet other scientists had challenged these claims on many grounds, but one in particular was whether these materials really exhibited the Meissner effect.

Room-temperature superconductors are often called the ‘holy grail’ of materials science. I abhor clichés but in this case the idiom fits perfectly. If such a material is invented or discovered, it could revolutionise many industries. To quote at length from an article by electrical engineer Massoud Pedram in The Conversation:

Room-temperature superconductors would enable ultra high-speed digital interconnects for next-generation computers and low-latency broadband wireless communications. They would also enable high-resolution imaging techniques and emerging sensors for biomedical and security applications, materials and structure analyses, and deep-space radio astrophysics.

Room-temperature superconductors would mean MRIs could become much less expensive to operate because they would not require liquid helium coolant, which is expensive and in short supply. Electrical power grids would be at least 20% more power efficient than today’s grids, resulting in billions of dollars saved per year, according to my estimates. Maglev trains could operate over longer distances at lower costs. Computers would run faster with orders of magnitude lower power consumption. And quantum computers could be built with many more qubits, enabling them to solve problems that are far beyond the reach of today’s most powerful supercomputers.

However, this surfeit of economic opportunities could also lure scientists into not thoroughly double-checking their results, cherry-picking from their data or jumping to conclusions if they believe they have found a room-temperature superconductor. Many papers written by scientists claiming they had found a room-temperature superconductor have in fact been published in and subsequently retracted from peer-reviewed journals with prestigious reputations, including Nature and Science, after independent experts found the papers to contain flawed data. Whatever the reasons for these mistakes, independent scrutiny of such reports has become very important.

If a material is a superconductor, it needs to meet two conditions*. The first of course is that it needs to conduct a direct electric current with zero resistance. Second, the material should display the Meissner effect. Place a magnet over a superconducting material. Then, gradually cool the material to lower and lower temperatures, until you cross the critical temperature. Just as you cross this threshold, the magnet will start to float above the material. You’ve just physically observed the Meissner effect. It happens because when the material transitions to its superconducting state, it will expel all magnetic fields within its bulk to its surface. This results in any magnets already sitting nearby being pushed away. In fact, the Meissner effect is considered to be the hallmark sign of a superconductor because it’s difficult to fake.

An illustration of the Meissner effect. B denotes the magnetic field, T is the temperature, and Tc is the critical temperature. Credit: Piotr Jaworski

The problem with acquiring evidence of the Meissner effect is the setup in which many of these materials become superconductors. In order to apply the tens to hundreds of gigapascals (GPa) of pressure, a small sample of the material — a few grams or less — is placed between a pair of high-quality diamond crystals and squeezed. This diamond anvil cell apparatus leaves no room for a conventional magnetic field sensor to be placed inside the cell. Measuring the magnetic properties of the sample is also complicated because of the fields from other sources in the apparatus, which will have to be accurately measured and then subtracted from the final data.

To tackle this problem, some scientists have of late suggested measuring the sample’s magnetic properties using the only entity that can still enter and leave the diamond anvil cell: light.

In technical terms, such a technique is called optical magnetometry. Magnetometry in general is any technique that converts some physical signal into data about a magnetic field. In this case the signal is in the form of light, thus the ‘optical’ prefix. To deploy optical magnetometry in the context of verifying whether a material is a high-pressure superconductor, scientists have suggested using nitrogen vacancy (NV) centres.

Say you have a good crystal of diamond with you. The crystal consists of carbon atoms bound to each other in sets of four in the shape of a pyramid. Millions of copies of such pyramids together make up the diamond. Now, say you substitute one of the carbon atoms in the gem with a nitrogen atom and also knock out an adjacent carbon atom. Physicists have found that this vacancy in the lattice, called an NV centre, has interesting, useful properties. For example, an NV centre can fluoresce, i.e. absorb light of a higher frequency and emit light of a lower frequency.

An illustration of a nitrogen vacancy centre in diamond. Carbon atoms are shown in green. Credit: Public domain

Because each NV centre is surrounded by three carbon atoms and one nitrogen atom, the vacancy hosts six electrons, two of which are unpaired. All electrons have a property called quantum spin. The quantum spin is the constitutive entity of magnetism the same way the electric charge is the constitutive entity of electricity. For example, if a block of iron is to be turned into a magnet, the spins of all the electrons inside have to be made to point in the same direction. Each spin can point in one of two directions, which for a magnet are called ‘north’ and ‘south’. Planet earth has a magnetic north and a magnetic south as well, although its field arises mostly from electric currents set up by molten iron churning in its core rather than from aligned spins.

The alignment of the spins of different electrons also affects what energy they have. For example, in the right conditions, an atom with two electrons will have more energy if the electrons’ spins are aligned (↑↑) than when they are anti-aligned (↑↓). This fundamental attribute of the electrons in NV centres allows the centres to operate as super-sensitive detectors of magnetic fields — which is what scientists from institutions around France have exploited, as they report in a June 30 paper in Physical Review Applied.

The scientists implanted a layer of 10,000 to 100,000 NV centres a few nanometres under the surface of one of the diamond anvils. These centres had electrons with energies precisely 2.87 GHz apart.** When the centres were then exposed to microwave radiation of a particular frequency, every NV centre could absorb green laser light and re-emit red light.

The experimental setup. DAC stands for ‘diamond anvil cell’. PL stands for ‘photoluminescence’, i.e. the red light emission. Credit: arXiv:2501.14504v1

As the diamond anvils squeezed the sample past 4 GPa, the pressure at which it would have become a superconductor, the sample displayed the Meissner effect, expelling magnetic fields from within its bulk to the surface. As a result, the NV centres were exposed to a magnetic field in their midst that wasn’t there before. This field affected the electrons’ collective spin and thus their energy levels, which in turn caused the red light being emitted from the centres to dim.

The researchers could easily track the levels and patterns of dimming in the NV centres with a microscope, and based on that were able to ascertain whether the sample had displayed the Meissner effect. As Physical Review Letters associate editor Martin Rodriguez-Vega wrote in Physics magazine: “A statistical analysis of the [optical] dataset revealed information about the magnetic-field strength and orientation across the sample. Mapping these quantities produced a visualisation of the Meissner effect and revealed the existence of defects in the superconductor.”
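To get a feel for the sensitivity involved: each NV spin resonance sits near 2.87 GHz and shifts with the local field at roughly 28 GHz per tesla (about 2.8 MHz per gauss), so even small fields move the dimming features measurably. Here is a minimal sketch of that conversion, using the textbook constant rather than anything from the paper's analysis pipeline:

```python
# A rough illustration (not from the paper): converting the splitting between
# the two NV spin resonances into a field estimate along the NV axis.
GAMMA_NV_HZ_PER_T = 28.0e9   # ~2.8 MHz per gauss, the textbook NV/electron value

def field_from_splitting(splitting_hz: float) -> float:
    """Magnetic field (tesla) from the frequency gap between the two resonances
    that sit on either side of 2.87 GHz."""
    return splitting_hz / (2 * GAMMA_NV_HZ_PER_T)

print(field_from_splitting(60e6))   # a 60 MHz splitting corresponds to ~1.1 mT
```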

In (a), the dotted lines show the parts of the sample that the diamond anvils were in contact with. (b) shows the parts of the sample associated with the red-light emissions from the NV centres, meaning these parts of the sample exhibited the Meissner effect in the experiment. (c) shows the normalised red-light emission along the y-axis and the frequency of microwave light shined along the x-axis. Red lines show the emission in normal conditions and blue lines show the emissions in the presence of the Meissner effect. Credit: arXiv:2501.14504v1

Because the NV centres were less than 1 micrometre away from the sample, they were extremely sensitive to changes in the magnetic field. In fact the researchers reported that the various centres were able to reveal the critical temperature for different parts of the sample separately, rather than only for the sample as a whole — a resolution not possible with conventional techniques. The pristine diamond matrix also gave the electrons’ spins inside the NV centres a long lifetime. And because there were so many NV centres, the researchers were able to ‘scan’ them with the microwaves en masse instead of having to maintain focus on a single point on the diamond anvil when looking for evidence of changes in the sample’s magnetic field. Finally, while the sample in the study became superconducting at a critical temperature of around 140 K, the centres remained stable down to below 4 K.

Another major advantage of the technique is that it can be used with type II superconductors as well. Type I superconductors are materials that transition to their superconducting state in a single step, below the critical temperature. Type II superconductors transition to their superconducting states in more than one step and display a combination of flux-pinning and the Meissner effect. From my piece in The Hindu in August 2023: “When a flux-pinned superconductor is taken away from a particular part of the magnetic field and put back in, it will snap back to its original relative position.” This happens because type II materials, while they don’t fully expel magnetic fields from within their bulk, do prevent the fields from moving around inside. Thus the magnetic field lines are pinned in place.

Because of the spatial distribution of the NV centres and their sensitivity, they can reveal flux-pinning in the sample by ‘sensing’ the magnetic fields at different distances.


* The material can make a stronger case for itself if it displays two more properties. (i) The heat energy required to raise the temperature of the material’s electrons by 1º C has to change drastically at the critical temperature, which is the temperature below which the material becomes a superconductor. (ii) The material’s electrons shouldn’t be able to have certain energy readings. (That is, a map of the energies of all the electrons should show some gaps.) These properties are however considered optional.

** While 2.87 GHz is a frequency figure, recall Planck’s equation from high school: E = hν. Energy is equal to frequency times Planck’s constant, h. Since h is a constant (6.626 × 10⁻³⁴ m² kg/s), energy figures are frequently denoted in terms of frequency in physics. An interested party can calculate the energy by themselves.
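As a worked example of this arithmetic, the 2.87 GHz figure from this post corresponds to an energy of about 1.9 × 10⁻²⁴ joules, or roughly 12 millionths of an electron-volt:

```python
# Worked example of the footnote's arithmetic (values rounded).
h = 6.626e-34        # Planck's constant in J*s (equivalently m^2 kg/s)
f = 2.87e9           # the NV resonance frequency in Hz
E_joules = h * f
E_ev = E_joules / 1.602e-19   # joules per electron-volt
print(E_joules, E_ev)         # ~1.9e-24 J, ~1.2e-5 eV
```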

Enfeebling the Indian space programme

By: VM
3 July 2025 at 13:15

There’s no denying that there currently prevails a public culture in India that equates criticism, even when well-reasoned, with pooh-poohing. It’s especially pronounced in certain geographies where the Bharatiya Janata Party (BJP) enjoys majority support, as well as vis-à-vis institutions that the subscribers of Hindu politics consider to be ripe for international renown, especially in the eyes of the country’s former colonial masters. The other side of the same cultural coin is the passive encouragement it offers to those who’d play up the feats of Indian enterprises even if they lack substantive evidence to back their claims up. While these tendencies are pronounced in many fields, I have encountered them most often in the spaceflight domain.

Through its feats of engineering and administration over the years, the Indian Space Research Organisation (ISRO) has cultivated a deserved reputation for setting a high bar for itself and meeting it. Its achievements are the reason why India is one of a few countries today with a functionally complete space programme. It operates launch vehicles, conducts spaceflight-related R&D, has facilities to develop as well as track satellites, and maintains data-processing pipelines to turn the data it collects from space into products usable for industry and academia. It is now embarking on a human spaceflight programme as well. ISRO has also launched interplanetary missions to the moon and Mars, with one destined for Venus in the works. In and of itself the organisation has an enviable legacy. Thus, unsurprisingly, many sections of the Hindutva brigade have latched onto ISRO’s achievements to animate their own propaganda of India’s greatness, both real and imagined.

The surest signs of this adoption are most visible when ISRO missions fail or succeed in unclear ways. The Chandrayaan 2 mission and the Axiom-4 mission respectively are illustrative examples. As if to forestall any allegations that the Chandrayaan 2 mission failed, then ISRO chairman K. Sivan said right after its Vikram lander crashed on the moon that it had been a “98% success”. Chandrayaan 2 was a technology demonstrator and it did demonstrate most of the technologies onboard very well. The “98%” figure, however, was so disproportionate as to suggest Sivan was defending the mission less on its merits than on its ability to fit into reductive narratives of how good ISRO was. (Recall, similarly, when former DCGI V.G. Somani claimed the homegrown Covaxin vaccine was “110% safe” when safety data from its phase III clinical trials weren’t even available.)

On the other hand, even as the Axiom-4 mission was about to kick off, neither ISRO nor the Department of Space (DoS) had articulated what Indian astronaut Shubhanshu Shukla’s presence onboard the mission was expected to achieve. If these details didn’t actually exist before the mission, to participate in which ISRO had paid Axiom Space more than Rs 500 crore, both ISRO and the DoS were effectively keeping the door open to picking a goalpost of their choosing to kick the ball through as the mission progressed. If they did have these details but had elected to not share them, their (in)actions raised — or ought to have — difficult questions about the terms on which these organisations believed they were accountable in a democratic country. Either way, the success of the Axiom-4 mission vis-à-vis Shukla’s participation was something of an empty vessel: a ready receptacle for any narrative that could be placed inside ex post facto.

At the same time, whether in the public domain, on social media platforms, in response to arguments presented in the news, or in conversations among people interested in Indian spaceflight, raising this question has often been construed as naysaying Shukla’s activities altogether. By all means let’s celebrate Shukla’s and by extension India’s ‘citius, altius, fortius’ moment in human spaceflight; the question is: what didn’t ISRO/DoS share before Axiom-4 lifted off and why? (Note that what journalists have been reporting since liftoff, while valuable, isn’t the answer to the question posed here.) While it’s tempting to think this pinched communication is a strategy developed by the powers that be to cope with insensitive reporting in the press, doing so would ignore the political capture other public institutions have already suffered and which ISRO arguably has as well, during and after Sivan’s term as chairman.

For just two examples of institutions that have historically enjoyed a popularity comparable in both scope and flavour to that of ISRO, consider India’s cricket administration and the Election Commission. During the 2024 men’s T20 World Cup that India eventually won, the Indian team had the least amount of travel and the most foreknowledge of the ground on which it would play its semifinal. At the 2023 men’s ODI World Cup, too, India played all its matches on Sundays, ensuring the highest attendance for its own contests rather than sharing that opportunity with all teams. The tournament is intended to be a celebration of the sport, after all. For added measure, police personnel were also deployed at various stadia to take away spectators’ placards and flags in support of Pakistan in matches featuring the Pakistani team. The stage management of both World Cups only diminished, rather than enhanced, the Indian team’s victories.

It’s been a similar story with the Election Commission of India, which has of late come under repeated attack from the Indian National Congress party and some of its allies for allegedly rigging their electronic voting machines and subsequently entire elections in favour of the BJP. While the Congress has failed to submit the extraordinary evidence required to support these extraordinary claims, doubts about the ECI’s integrity have spread anyway because there are other, more overt ways in which the once-independent institution of Indian democracy favours the BJP — including scheduling elections according to the availability of party supremo Narendra Modi to speak at rallies.

Recently, a more obscure but nonetheless pertinent controversy erupted in some circles when, in an NDTV report, incumbent ISRO chairman V. Narayanan seemed to suggest that SpaceX called off one of the attempts to launch Axiom-4 because his team at ISRO had insisted that the company thoroughly check its rocket for bugs. The incident followed SpaceX engineers spotting a leak on the rocket. What made this egregious is that while SpaceX had built and flown that very type of rocket hundreds of times, Narayanan and ambiguous wording in the NDTV report made it out to be that SpaceX would have flown the rocket if not for ISRO’s insistence. What’s more likely to have happened is that NASA and SpaceX engineers consulted ISRO, as they would have consulted the other agencies involved in the flight — ESA, HUNOR, and Axiom Space — about their stand, and that the ISRO team in turn clarified its position: that SpaceX recheck the rocket before the next launch attempt. However, the narrative “if not for ISRO, SpaceX would’ve flown a bad rocket” took flight anyway.

Evidently these are not isolated incidents. The last three ISRO chairmen — Sivan, Somanath, and now Narayanan — have progressively curtailed the flow of information from the organisation to the press even as they have maintained a steady pro-Hindutva, pro-establishment rhetoric. All three leaders have also only served at ISRO’s helm when the BJP was in power at the Centre, wielding its tendency to centralise power by, among other things, centralising the permission to speak freely. Some enterprising journalists like Chethan Kumar and T.S. Subramanian and activists like r/Ohsin and X.com/@SolidBoosters have thus far kept the space establishment from resembling a black hole. But the overarching strategy is as simple as it is devious: while critical arguments become preoccupied by whataboutery and fending off misguided accusations of neocolonialist thinking (“why should we measure an ISRO mission’s success the way NASA measures its missions’ successes?”), unconditional expressions of support and adulation spread freely through our shared communication networks. This can only keep up a false veil of greatness that crumbles the moment it brooks legitimate criticism, becoming desperate for yet another veil to replace itself.

But even that is beside the point: to echo the philosopher Bruno Latour, when criticism is blocked from attending to something we have all laboured to build, that something is deprived of the “care and caution” it needs to grow, to no longer be fragile. Yet that’s exactly what the Indian space programme risks becoming today. Here’s a brand new case in point, from the tweets that prompted this post: according to an RTI query filed by @SolidBoosters, India’s homegrown NavIC satellite navigation constellation is just one clock failure away from “complete operational collapse”. The issue appears to be ISRO’s subpar launch cadence and the consequently sluggish replacement of clocks that have already failed.

6/6 Root Cause Analysis for atomic clock failures has been completed but classified under RTI Act Section 8 as vital technical information. Meanwhile public transparency is limited while the constellation continues degrading. #NavIC #ISRO #RTI

— SolidBoosters (@SolidBoosters) July 2, 2025

Granted, rushed critiques and critiques designed to sting more than guide can only be expected to elicit defensive posturing. But to minimise one’s exposure to all criticism altogether, especially criticism from learned quarters conveyed in respectful language, is to deprive oneself of the pressure and the drive to solve the right problems in the right ways, both drawing from and adding to India’s democratic fabric. The end results are public speeches and commentary that are increasingly removed from reality as well as, more importantly, thicker walls between criticism and The Thing it strives to nurture.

Iran’s nuclear options

By: VM
28 June 2025 at 03:18

From ‘What is next for Iran’s nuclear programme?’, The Hindu, June 28, 2025:

As things stand, Iran has amassed both the technical knowhow and the materials required to make a nuclear weapon. Second, the Israelis and the Americans have failed to deprive Iran of these resources in their latest salvo. In fact the airstrikes against Iran from June 13 cast Tehran as the victim of foreign aggression and increased the premium on its option to withdraw from the Non-Proliferation Treaty (NPT) without significant international censure.

While Tehran’s refusal to cooperate with the IAEA is suggestive, it hasn’t explicitly articulated that it will pursue nuclear weapons. … But the presence of large quantities of HEU in the stockpile is intriguing. From a purely technical standpoint, the HEU can still be diverted for non-military applications…

… such as R&D for naval applications and downconversion to less enriched reactor fuel. But these are niche use cases. In fact while it’s possible to downconvert a stockpile of uranium enriched to 60% to that enriched to 19.75%, 5% or 3% without using centrifuges, it’s also possible to do this by mixing uranium enriched to 20% with natural or depleted feedstock.
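As a rough illustration of the blending arithmetic (my numbers, not the article's, and assuming natural uranium is about 0.711% uranium-235), a simple mass balance shows how much feedstock downblending takes:

```python
# Illustrative U-235 mass balance for downblending, not from the article.
def feed_needed(m_heu, e_heu, e_product, e_feed=0.00711):
    """Mass of feedstock (same units as m_heu) needed to blend material at
    enrichment e_heu down to e_product, given feedstock at e_feed."""
    return m_heu * (e_heu - e_product) / (e_product - e_feed)

# 1 kg of 60%-enriched uranium blended down to 19.75% with natural uranium:
print(feed_needed(1.0, 0.60, 0.1975))   # ~2.1 kg of feed, giving ~3.1 kg of product
```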

If anything, the highly enriched uranium stockpile [which Iran went to some lengths to protect from American bombing], the technical knowhow in the country, the absence of a nuclear warhead per se, and the sympathy created by the bombing allow Tehran a perfect bargaining chip: to simultaneously be in a state of pre-breakout readiness while being able to claim in earnest that it is interested in nuclear energy for peace.

Read more.

Why do quasicrystals exist?

By: VM
26 June 2025 at 07:04

Featured image: An example of zellij tilework in the Al Attarine Madrasa in Fes, Morocco (2012), with complex geometric patterns on the lower walls and a band of calligraphy above. Caption and credit: just_a_cheeseburger (CC BY)


‘Quasi’ means almost. It’s an unfair name for quasicrystals. These crystals exist in their own right. Their name comes from the internal arrangement of their atoms. A crystal is made up of a repeating group of some atoms arranged in a fixed way. The smallest arrangement that repeats to make up the whole crystal is called the unit cell. In diamond, a convenient unit cell is four carbon atoms bonded to each other in a tetrahedral (pyramid-like) arrangement. Millions of copies of this unit cell together make up a diamond crystal. The unit cell of sodium chloride has a cubical shape: the chloride ions (Cl-) occupy the corners and face centres while the sodium ions (Na+) occupy the middle of the edges and centre of the cube. As this cube repeats itself, you get table salt.

The structure of all crystals thus follows two simple rules: have a unit cell and repeat it. Thus the internal structure of crystals is periodic. For example if a unit cell is 5 nanometres wide, it stands to reason you’ll see the same arrangement of atoms after every 5 nm. And because it’s the same unit cell in all directions and they don’t have any gaps between them, the unit cells fill the space available. It’s thus an exercise in tiling. For example, you can cover a floor of any shape completely with square or triangular tiles (you’ll just need to trim those at the edges). But you can’t do this with pentagonal tiles. If you do, the tiles will have gaps between them that other pentagonal tiles can’t fill.
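The arithmetic behind this is easy to check: regular tiles can meet around a vertex without gaps only if the polygon's interior angle divides 360° evenly. A quick, purely illustrative check:

```python
# Why triangles, squares, and hexagons tile the plane but pentagons don't.
for sides, name in [(3, "triangle"), (4, "square"), (5, "pentagon"), (6, "hexagon")]:
    interior = (sides - 2) * 180 / sides          # interior angle of a regular n-gon
    fits = (360 % interior == 0)                  # can copies meet around a vertex?
    print(f"{name}: interior angle {interior}°, fills a vertex without gaps: {fits}")
# 60°, 90°, and 120° divide 360° evenly; the pentagon's 108° leaves a 36° gap.
```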

Quasicrystals buck this pattern in a simple way: their unit cells are like pentagonal tiles. They repeat themselves but the resulting tiling isn’t periodic. There are no gaps in the crystal either, because instead of each unit cell being just like the one on its left or right, the tiles sometimes slot themselves in by rotating through an angle. Thus rather than the crystal structure following a grid-like pattern, the unit cells seem to be ordered along curves. As a result, even though the structure may have an ordered set of atoms, it’s impossible to find a unit cell that, by repeating itself in a straight line, gives rise to the overall crystal. In technical parlance, the crystal is said to lack translational symmetry.

Such structures are called quasicrystals. They’re obviously not crystalline, because they lack a periodic arrangement of atoms. They aren’t amorphous either, like the haphazardly arranged atoms of glass. Quasicrystals are somewhere in between: their atoms are arranged in a fixed way, with different combinations of pentagonal, octagonal, and other tile shapes that are disallowed in regular crystals, and with the substance lacking a unit cell. Instead the tiles twist and turn within the structure to form mosaic patterns like the ones featured in Islamic architecture (see image at the top).

In the 1970s, Roger Penrose discovered a particularly striking quasicrystal pattern, since called the Penrose Tiling, composed of two ‘thin’ and ‘thick’ rhombi (depicted here in green and blue, respectively). Credit: Public domain

The discovery of quasicrystals in the early 1980s was a revolutionary moment in the history of science. It shook up what chemists believed a crystal should look like and what rules the unit cell ought to follow. The first quasicrystals that scientists studied were made in the lab, in particular aluminium-manganese alloys, and there was a sense that these unusual crystals didn’t occur in nature. That changed in the 1990s and 2000s when expeditions to Siberia uncovered natural quasicrystals in meteorites that had smashed into the earth millions of years ago. But even this discovery kept one particular question about quasicrystals alive: why do they exist? Both Al-Mn alloys and the minerals in meteorites form at high temperatures and extreme pressures. The question of their existence is about more than whether they can form: it asks whether the atoms involved are forced to adopt a quasicrystal rather than a crystal structure. In other words, it asks if the atoms would rather adopt a crystal structure but don’t because their external conditions force them not to.


This post benefited from feedback from Adhip Agarwala.


Often a good way to understand the effects of extreme conditions on a substance is to use the tools of thermodynamics — the science of the conditions in which heat moves from one place to another. And in thermodynamics, the existential question can be framed like this, to quote from a June paper in Nature Physics: “Are quasicrystals enthalpy-stabilised or entropy-stabilised?” Enthalpy-stabilised means the atoms of a quasicrystal are arranged in a way where they collectively have the lowest energy possible for that group. The atoms aren’t arranged in a less-than-ideal way forced on them by their external conditions; the quasicrystal structure is in fact better than a crystal structure. It answers “why do quasicrystals exist?” with “because they want to, not just because they can”.

Entropy-stabilised goes the other way. That is: at 0 K (-273.15º C), the atoms would rather come together as a crystal because a crystal structure has lower energy at absolute zero. But as the temperature increases, the energy in the crystal builds up and forces the atoms to adjust where they’re sitting so that they can accommodate new forces. At some higher temperature, the structure becomes entropy-stabilised. That is, there’s enough disorder in the structure — like sound passing through the grid of atoms and atoms momentarily shifting their positions — to let it hold the ‘excess’ energy while deviating from the orderliness of a crystal structure. Entropy stabilisation answers “why do quasicrystals exist?” with “because they’re forced to, not because they want to”.

In materials science, the go-to tool to judge whether a crystal structure is energetically favourable is density functional theory (DFT). It estimates the total energy of a solid and from there scientists can compare competing phases and decide which one is most stable. If four atoms will have less energy arranged as a cuboid than as a pyramid at a certain temperature and pressure, then the cuboidal phase is said to be more favoured. The problem is DFT can’t be directly applied to quasicrystals because the technique assumes that a given mineral has a periodic internal structure. Quasicrystals are aperiodic. But because scientists are already comfortable with using DFT, they have tried to surmount this problem by considering a superunit cell that’s made up of a large number of atoms or by assuming that a quasicrystal’s structure, while being aperiodic in three dimensions, could be periodic in say four dimensions. But the resulting estimates of the solid’s energy have not been very good.

In the new Nature Physics paper, scientists from the University of Michigan, Ann Arbor, have reported a way around the no-unit-cell problem to apply DFT to estimate the energy of two quasicrystals. And they found that these quasicrystals are enthalpy-stabilised. The finding is a chemistry breakthrough because it raises the possibility of performing DFT on crystals without translational symmetry. Further, the demonstration that two real quasicrystals are enthalpy-stabilised may force chemists to rethink why almost every other inorganic material does adopt a repeating structure. Crystals are no longer at the centre of the orderliness universe.

An electron diffraction pattern of an icosahedral holmium-magnesium-zinc quasicrystal reveals the arrangement of its atoms. Credit: Jgmoxness (CC BY-SA)

The team started by studying the internal structure of two quasicrystals using X-rays, then ‘scooped’ out five random parts for further analysis. Each of these scoops had 24 to 740 atoms. Second, the team used a modified version of DFT called DFT-FE. The computational cost of running DFT increases according to the cube of the number of atoms being studied. If studying a single atom with DFT requires X amount of computing power, 24 atoms would require roughly 14,000 times X and 740 atoms roughly 400 million times X. The computational cost of DFT-FE instead scales as the square of the number of atoms, which makes a big difference. Continuing from the previous example, 24 atoms would require about 600 times X and 740 atoms about half a million times X. But the lower computational cost of DFT-FE is still considerable. The researchers’ solution was to use GPUs — the processors originally developed to run complicated video games and today used to run artificial intelligence (AI) apps like ChatGPT.
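To make the scaling comparison concrete, here's a throwaway calculation of the relative costs under the two scaling laws (illustrative arithmetic only, not anything from the paper):

```python
# Relative cost of cubic vs quadratic scaling in the number of atoms,
# normalised to a single atom.
for n_atoms in (24, 740):
    cubic = n_atoms ** 3       # conventional DFT-like scaling
    quadratic = n_atoms ** 2   # DFT-FE-like scaling
    print(f"{n_atoms} atoms: cubic ~{cubic:,}x, quadratic ~{quadratic:,}x")

# 24 atoms:  cubic ~13,824x        quadratic ~576x
# 740 atoms: cubic ~405,224,000x   quadratic ~547,600x
```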

The team was able to calculate that the resulting energy estimates for a quasicrystal were off by no more than 0.3 milli-electron-volt (meV) per atom, considered acceptable. They also applied their technique to a known crystal, ScZn6, and confirmed that their estimate of its energy matched the known value (5-9 meV per atom). They were ready to go now.

When they applied DFT-FE to scandium-zinc and ytterbium-cadmium quasicrystals, they found clear evidence that they were enthalpy-stabilised. Each atom in the scandium-zinc quasicrystal had 23 meV less energy than if it had been part of a crystal structure. Similarly atoms in the ytterbium-cadmium quasicrystal had roughly 7 meV less each. The verdict was obvious: translational symmetry is not required for the most stable form of an inorganic solid.

A single grain of a scandium-zinc quasicrystal has 12 pentagonal faces. Credit: Yamada et al. (2016). IUCrJ

The researchers also explored why the ytterbium-cadmium quasicrystal is so much easier to make than the scandium-zinc quasicrystal. In fact the former was the world’s first two-element quasicrystal to be discovered, 25 years ago this year. The team broke down the total energy as the energy in the bulk plus energy on the surface, and found that the scandium-zinc quasicrystal has high surface energy.

This is important because in thermodynamics, energy is like cost. If you’re hungry and go to a department store, you buy the pack of biscuits that you can afford rather than wait until you have enough money to buy the most expensive one. Similarly, when there’s a hot mass of scandium-zinc as a liquid and scientists are slowly cooling it, the atoms will form the first solid phase they can access rather than wait until they have accumulated enough surface energy to access the quasicrystal phase. And the first phase they can access will be crystalline. On the other hand scientists discovered the ytterbium-cadmium quasicrystal so quickly because it has a modest amount of energy across its surface and thus when cooled from liquid to solid, the first solid phase the atoms can access is also the quasicrystal phase.

This is an important discovery: the researchers found that a phase diagram alone can’t be used to say which phase will actually form. Understanding the surface-energy barrier is also important, and could pave the way to a practical roadmap for scientists trying to grow crystals for specific applications.

The big question now is: what special bonding or electronic effects allow atoms to have order without periodicity? After Israeli scientist Dan Shechtman discovered quasicrystals in 1982, he didn’t publish his findings until two years later, and only after including some co-authors on his submission to improve its chances with a journal, because he thought he wouldn’t be taken seriously. This wasn’t a silly concern: Linus Pauling, one of the greatest chemists in the history of the subject, dismissed Shechtman’s work and called him a “quasi-scientist”. The blowback was so sharp and swift because chemists like Pauling, who had helped establish the science of crystal structures, were certain there was a way crystals could look and a way they couldn’t — and quasicrystals didn’t have the right look. But now, the new study has found that quasicrystals have the right look after all. Perhaps it’s crystals that need to explain themselves…

Small Programs and Languages

4 June 2025 at 00:00
I really enjoyed the feedback I got on Implementing a Forth. It's a fun subject! I updated it with new notes, an even smaller 'Forth', and a link to this oversized "card" that resulted from thinking about smallness...

HTML WARDen (a wiki)

12 June 2025 at 00:00
A new wiki appears! Here's the thing I alluded to in the previous two entries. It's one of those "mini-sites" that appear on this feed from time to time with: A project page, a repo, and a 5-part "making of" article series that I hope is fun and interesting...

Paged Out! prints are here, and so is #7 CFP deadline

18 June 2025 at 00:13

Paged Out! was always intended as a PDF+print zine, but the "print" part turned out to be pretty elusive. We actually did an initial test print of 500 copies in 2019 for a conference I've co-organized (Security PWNing), but that's it. Until last month that is, when we pretty much got back on track with prints — both free prints for events, and — additionally — print on demand if someone wants to buy a copy. We actually also updated the website with a lot of print-related information.

So let's cut to the chase — how to get printed Paged Out!?

At the same time if you or your company would like to sponsor some Paged Out! prints for a specific event or in general, please let us know (prints AT pagedout DOT institute).

So far only issue #6 is available, but we're working on getting all of them out there, including older ones. We're basically going one by one, first #5, then #4, and so on.

Speaking of issues — Call For Articles for issue #7 now has a soft deadline: 30 June 2025

As usual, we're accepting technical 1-page articles about everything interesting related to computers, electronics, radio, and so on. See pagedout.institute/?page=cfp.php for details.

Note: We're having problems getting articles about retro computers, speedrunning, and movement techniques in games (e.g. Apex Legends), so if you can write about that, please do; and if you know someone who could write something about this, please ping them. Of course all the usual topics are welcomed too, as always.

CONFidence 2025 is next week

28 May 2025 at 00:13

It's the 20th anniversary of the CONFidence conference! And it's happening next week (2-3 June) in Kraków, so don't miss out.

Furthermore, we've shipped 500 Paged Out! #6 issues there, so – if you're fast enough – you can grab one for free there :)

Enjoy CONFidence!

P.S. If you don't have a ticket my code might still work: GYN10
P.P.S. Huh, time flies, doesn't it. I think the first CONFidence I attended was in 2008. It's great to see this conference is still going strong and amazing to be a part of its program committee.

Feedback: working state, mobile rollovers, and IP filtering

30 June 2025 at 03:40

I get questions from people who read my posts and sometimes I answer them in a post. This is one of those times.

...

Someone read my "war room" post and picked up on the part where I spent several weeks trying to get to the bottom of what turned out to be "kill -9 -1" nuking the world on a bunch of FB machines. They asked how I keep track of things during that time, what my working memory is, and is it paper, text files, IRC messages, or just remembering things.

The answer is: back when I used to do that kind of thing, I found it very useful to have a "MMDD" (hey, I'm in the US, so just pretend it's the back half of an 8601 number... you'll see why...) directory, and inside of it, I'd have some short names for whatever I was dealing with that day.

That means today would be 0629, and then something inside of it might be "fbar" or "rsw" or "webi" or something. It was just something to keep all of the crap together and yet away from other things I was doing that day, while also distinguishing it from other times I might've done a "fbar" or "webi" or whatever project, if that makes sense.

These would just live off my home directory, and yes, it would fill up with crap, but I'd batch them up when a year ended. I'd take 01xx through 12xx and move them into "2013" when 2014 began, for instance. So, as you can see, they *are* ISO-8601 dates, sorta, but it's ~/YYYY/MMDD/foobar once it's old, and it's merely ~/MMDD/foobar until then. This is a balancing act between speed and having my homedir fill up with tons of ancient crap.
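A minimal sketch of that year-end shuffle, assuming directories named MMDD directly under the home directory (my illustration, not her actual tooling):

```python
# Move ~/0101 .. ~/1231 into ~/2013/ once 2014 begins (illustrative sketch).
import re
import shutil
from pathlib import Path

home = Path.home()
year = "2013"
(home / year).mkdir(exist_ok=True)

for entry in sorted(home.iterdir()):
    # match four-digit MMDD names like 0629 or 1231
    if entry.is_dir() and re.fullmatch(r"(0[1-9]|1[0-2])([0-2]\d|3[01])", entry.name):
        shutil.move(str(entry), str(home / year / entry.name))
```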

Any time I needed scratch space for output from things, that's what I used. That might be the output from a bunch of sweeps over the fleet to look for anomalies. Let's say I ran a command that sshed into a few hundred thousand hosts to look for common items in the logs. The stdout/stderr from that would probably be there so I could hit it up multiple times without asking the job runner system for another copy. It's a lot faster that way.

But in terms of troubleshooting things and dealing with permutations, it's hard to beat paper, and I usually end up with some kind of "lab notebook" at any job I work. I can think of examples of those going back over 20 years. The only problem is that the stuff in them really properly belongs to the company so I don't tend to have them after the fact. A great many pages of context have gone in the shredders over time, and not because I didn't ask. I've asked if anyone wanted them, and the answer has always been no.

There were also the internal posts about things, and then running commentary on IRC channels, but those tend to be useful after the fact, or for getting help from other people. My own state-keeping tends to stay on something close at hand and (usually) physically tangible.

As for those <date>/<term> directories, some of them turned out to be rather handy after the fact. A lot of times, something would happen, and then time would pass, and it would break *again* in exactly the same way, and I could think "didn't we deal with these people already?", dig around, find something from six months earlier, and go "AHA!". Now armed with the date, I could pull up the right posts, IRC logs, group messages, graphs, or whatever else.

Considering how much random crap I dealt with in any given day during my time "in the barrel" (kids, ask your parents), that was the only way to make any sense of it later. Without stuff like that, by the end of the week, I'd have no idea what I had been doing on Monday or Tuesday. That's how bad the load got at points.

...

Someone else wrote in and asked if I could do something to improve the overflow calculator thing I hacked up last week. It wasn't particularly usable on mobile, and that's definitely true. I wrote it on a laptop and paid exactly zero attention to what it would look like on a weirdly small screen that's usually taller than it is wide.

It was a fair point, so I took a whack at making it suck slightly less. It's probably still terrible, but in the specific case of me holding my phone upright, it looks halfway usable now. You're no longer forced into "flyspeck-3" mode with the fonts, at least.

CSS is such a mess.

...

I occasionally hear that the site is unreachable from one spot or another. This almost certainly comes down to IP filtering on my part. Perhaps you've read about the influx of web weasels who are scraping the living shit out of everything remotely URL-shaped they can get their hands on. My stuff is certainly in that space, and they show up here regularly.

Besides that, there are also a number of networks which send nothing but straight-up abusive traffic. This is where you look at the logs and see stuff like them using every single IP address in a (v4) /24 to scan for random webshit vulnerabilities. It's like, nice job, fucko, but I don't run PHP here. Anyway, that's pretty solid evidence that a given network is not worth hearing from ever again, and so into the filter it goes.

A whole lot of this happens automatically just based on whatever traffic is sent this way first. Send bad traffic and meet the bit bucket. I don't even find out about most of it since it's constant and utterly uninteresting.

Then there are the people who run feed readers which don't play nicely. As previously described, they will get a handful of chances to slow down with 429s, and then if the web server feels like those aren't having an effect, it'll just ignore the traffic for however long it feels like. Again, I'm also not in the loop on this sort of thing. It's all automatic.

Finally, there is a new rub. I taught myself how to ingest BGP data, and so can now trivially go from an IP address to the autonomous system number of whoever's advertising it, including overlapping advertisements (like a /24 out of a bigger /20). Then I wrote something that will dump out an entire AS, and it's not hard to imagine what I do with that.

Enough bad behavior from a host -> filter the host.

Enough bad hosts in a netblock -> filter the netblock.

Enough bad netblocks in an AS -> filter the AS. Think of it as an "AS death penalty", if you like.

As long as clueless network operators continue to let abusive customers bounce around thousands of dynamic IP addresses with not so much as a hint of SWIP data, those entire net blocks will find themselves unable to get to large swaths of the net. That's just how it is, and it's nothing new. Send crap, meet /dev/null.

Dealing with what the web has become is exhausting.

Calculating rollovers

25 June 2025 at 05:54

I've long had a list of "magic numbers" which show up in a bunch of places, and even made a post about it back in November of 2020. You ever wonder about certain permutations, like 497 days, or 19.6 years, or 5184 hours, and what they actually mean?

I've been doing that stuff by hand in a calculator and finally decided to just do it in Javascript and put it online for anyone to try.

So, here's my latest waste of CPU cycles:

My rollover calculator.
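I haven't seen her calculator's code, but the flavor of arithmetic behind those magic numbers is simple: how long an N-bit counter ticking at a given rate runs before it wraps. A sketch:

```python
# Illustrative only: time until an N-bit counter with a fixed tick size wraps.
def rollover_seconds(bits, tick_seconds):
    return (2 ** bits) * tick_seconds

print(rollover_seconds(32, 0.01) / 86400)                  # 32-bit 10 ms ticks: ~497 days
print(rollover_seconds(10, 7 * 86400) / 86400 / 365.25)    # 10-bit week counter: ~19.6 years
```

The first line is the classic 497-day wrap of a 32-bit counter of 10 ms ticks; the second is 1024 weeks, the GPS-style week-number rollover, which comes out to about 19.6 years.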

I still haven't figured out the Crucial SSD 5184 hour thing, so it's not in there. 5124 hours, sure, I can understand that one, but 5184? 60 more?

Anyway, have fun while the world burns.

rsync's defaults are not always enough

31 May 2025 at 20:39

rsync is one of those tools which is rather useful. It saves you from spending the time and effort on copying data which you already have. It's the backbone of many a mirror site, and it also gets used for any number of backup solutions.

There's just one problem: in the name of efficiency, it can miss certain changes. rsync normally looks at the size and modification time of a candidate file, and if they are the same at both ends, that's the end of any consideration. It won't get any further attention and it moves on to something else.

"So what", you might think. "All files change at least their mtime when someone writes to them. That's the whole point of a mtime."

And yet... I'm writing this post, and here we are.

The keen-eyed observers out there are probably already thinking "ooh, bit rot" and other things where one of the files has actually become corrupted while "at rest" for whatever reason. Those observers are right! That's totally a problem that you have to worry about, especially if you're using SSDs to hold your bits and those SSDs aren't always being powered.

But no, this is something you have to worry about *beyond* that. This is about a "sneak path" that you probably didn't consider. I didn't.

Here, let's run a little experiment. If you have an x86_64 Debian box that's relatively current and you've been backing up the whole thing via rsync for a year or two, go do something for me.

Go run your favorite file-hasher tool on /usr/lib/x86_64-linux-gnu/libfribidi.so.0.4.0 for me. Give it a sha256sum or whatever, or even md5sum if you're feeling brash. Then note the modification time on the file.

Now mount one of your backups and do the same thing on the version of the file that's on the backup device. See anything ... odd? Unusual?

Identical mtimes, identical sizes... and different hashes, right? I spotted this on a bunch of my machines after going "hmmm..." about the whole SSD-data-loss thing.
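If you want to check more than one file, a quick tree walk that hashes matching paths on both sides will surface this kind of silent divergence. A rough sketch (the paths are placeholders, not from the post):

```python
# Compare SHA-256 hashes of files in a live directory against a mounted backup.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

live = Path("/usr/lib/x86_64-linux-gnu")
backup = Path("/mnt/backup/usr/lib/x86_64-linux-gnu")   # wherever your backup is mounted

for source in live.rglob("*"):
    if not source.is_file() or source.is_symlink():
        continue
    twin = backup / source.relative_to(live)
    if twin.is_file() and sha256_of(source) != sha256_of(twin):
        print("hash mismatch:", source)
```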

Clearly, something unusual happened somewhere, and it's been escaping the notice of your rsync runs ever since. I haven't gone digging into the package history for this thing to find out just when and where it happened, and (more importantly) how. It's rather unusual.

If you're freaking out right now, there is some hope. rsync has both -I and -c which promise to not use the quick method and instead will run a checksum on the files. It's slower so you won't want to do this normally, but it's not a bad idea to add this to the mix of things that you do every so many rotations.

I should point out that the first time you do a forced-checksum run, --dry-run will let you see the changes before it blows anything away, so you can make the call as to which version is the right one! In theory, your *source* files can get corrupted, and if you just copy one of those across, you have now corrupted your backup.

Isn't entropy FUN?

Why I no longer have an old-school cert on my https site

25 May 2025 at 01:26

At the start of 2023, I wrote a post talking about why I still had an "old-school cert" on my https site. Well, things have shifted, and it's time to talk about why.

I've been aware of the ACME protocol for a while. I have tech notes going back as far as 2018, and every time I looked at it, I recoiled in horror. The whole thing amounts to "throw in every little bit of webshit tech that we can", and it makes for a real problem to try to implement this in a safe and thorough way.

Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.

That meant I was stuck: unwilling to bring myself to deal with the protocol while simultaneously unwilling to budge on allowing the cruft code of existing projects into my life.

Well, time passed, and I managed to crack some of my own barriers. It wasn't by using the other projects, though. I started ripping into them to figure out just how the spec really worked, and started biting off really really small pieces of the problem. It took a particular forcing function to get me off my butt and into motion.

About six months ago, I realized that it was probably time to get away from Gandi as a registrar and also SSL provider (reseller). They had been eaten by private equity some years before, and the rot has been setting in. Their "no bullshit" tagline is gone, and their prices have been creeping up. I happened to renew my domains for multiple years and have been insulated for a while, but it was going to be a problem in 2025.

Giving them the "yeet" was no big deal, but the damn rbtb certificate was going to be a problem. Was I going to start paying even more for the stupid thing every year, or was I going to finally suck it up and deal with ACME?

That still left the problem of overcoming my inherent disgust for the entire protocol and having to deal with all of these encodings they force upon you. My first steps towards the solution involved writing really small and stupid utility functions and libraries that would come in handy later. I'm talking about wrapping jansson (a C library that handles JSON) so that it made sense in my C++ world and I could import JSON (something I use as little as possible). That kind of thing.

This also meant going down some dead-ends, like noticing that libraries existed which would allegedly create certain things (like a JWK) for you, and then realizing that they were not going to make my life any easier. I'd poke at it, reach my limit, and then swear and walk away for another couple of days.

This went on for some time. I have a series of notes where I'd grab a piece of the problem, wrangle it around for a while, get grossed out, and then set it down and go do something else. This just kept happening but I slowly made progress with small pieces that would Do Stuff, and then they'd connect to each other, and so on like this.

One positive development during all of this was discovering this "pebble" test server I could run on an isolated fake system. It would act as an ACME server and would let me harass it with my feeble attempts at implementing a client instead of bothering the real CAs. Even "staging" servers deserve better treatment than active development, after all.

And, well, after a whole lot of mangling and dead-ends and rewrites and other terrible crap, I had an awful little tool that would take a CSR, do all of the idiot dances and would plop out a certificate. I pointed it at Let's Encrypt staging, and it worked. Then I pointed it at their prod site, and _that_ worked. So I did it for the real thing ([www.]rachelbythebay.com), and *that* worked, and I dropped it into place.

Thus, for the past couple of weeks, if you've been hitting the https version of my site, you've been doing it across the new setup.

Now, I took notes about this, and I wanted to share some of my original off-the-cuff thoughts about implementing this for anyone who's similarly broken in the head and wants to see how bad it can be. I will note that I wrote this based on the first thing that worked, and it does not necessarily reflect the implementation I'm on a few weeks later.

...

Make an RSA key for your web site. Then make a CSR for it, setting the CN and adding a matching altname as an extension. No other fields matter. Nobody looks at those, anyway, and none of them will influence your final certificate no matter how prosaic or precise you get in there.

Make an RSA key of 4096 bits. Call it your personal key.

Write something that'll read a CSR. It needs to extract the CN and the SANs - the DNS: ones, at least. Ensure there's actually a CN [*] and actually SANs, and that the CN occurs within the SANs. So, yes, you have to have at least one SAN.

[* - I now know you can run with just SANs. I did not at the time.]

Write something that'll do a HTTP GET to <directory URL> which is given to you by the ACME service operator. It then needs to parse the body as JSON (or die) and extract some strings from the top-level object: "newNonce", "newAccount" and "newOrder" at the very least.

Write something that'll read an RSA key file on disk. It needs to extract the publicExponent (probably 65537, but you never know...) and the modulus. Make it read your personal key from earlier.

If you end up using "openssl rsa -in foo -noout -text" to do this, the modulus is a bunch of printed hex digits, like "00:ff:11:ab:cd:ef:22:33". It hard-wraps the output and also indents the lines, so you get to clean all of that up first.

Skip the first 00 for some inexplicable reason. Take the other bytes of the modulus and turn them into the actual character values, so a literal 0xff, 0x11, 0xab, 0xcd and so on down the line in the same order you find them in the file. Hang on to these values for later.

Write something that'll turn an ordinary integer into its equivalent big-endian bag of bytes, but don't pad it out to any particular alignment. You need to take that "65537" from your publicExponent and turn it into the equivalent bytes, so 0x01, 0x00, 0x01. Yes, it's a number you just turned back into a binary representation.

Write something that will do base64 *style* encoding, but not quite. The last two characters in the encoding set are usually + and /, but that won't do for webshit, so you need to make it use - and _ instead.

Take that publicExponent (65537), pump it through your big-endian bag-of-bytes encoder to get 0x01 0x00 0x01, then put it through your "base64web" encoder to get "AQAB".

Start a new JSON object. Add a string called "e" and set it to the output of the above step. So, yes, instead of saying that "e" equals "65537", you're saying that "e" equals "AQAB". Aren't you glad you did those extra steps?

Add another string called "kty" and set it to "RSA".

Add another string called "n" and set it to the "base64web" version of your modulus bag-of-bytes from earlier.

Take this JSON object and make it into a sorted compact string representation. This means it goes "e, kty, n" and it also has all of the usual padding (spaces) squished out. Call this a JWK string and save it for later.

Create a second JSON object. Add a boolean to it named "termsOfServiceAgreed" that's set to true. (Guess you'd better agree...)

Look up the URL for "newNonce" from the directory JSON you got earlier.

Make a HTTP HEAD request to that URL. Dig around in the headers (not the body, since there is no body on a HEAD) until you find "Replay-Nonce". Extract the value of that header. Hang onto it for later.
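For reference, those first two round-trips look roughly like this in Python (a sketch, not her implementation; the directory URL shown is the commonly published Let's Encrypt staging endpoint, so substitute your CA's):

```python
# Fetch the ACME directory, then grab a fresh nonce from the Replay-Nonce header.
import json
import urllib.request

DIRECTORY_URL = "https://acme-staging-v02.api.letsencrypt.org/directory"

with urllib.request.urlopen(DIRECTORY_URL) as response:
    directory = json.load(response)      # contains newNonce, newAccount, newOrder, ...

nonce_request = urllib.request.Request(directory["newNonce"], method="HEAD")
with urllib.request.urlopen(nonce_request) as response:
    nonce = response.headers["Replay-Nonce"]
```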

Look up the URL for "newAccount" from that directory JSON for before.

Create a third JSON object. Add a string to it called "url". Set it to that (newAccount) URL. Add a string called "alg". Set it to "RS256". Add a string called "nonce" and set it to the value from the *header* in that last HTTP HEAD request.

Add an object to this third object called "jwk". Within it, add "e", "kty" and "n" in a manner that matches what you did earlier (you know, from the "JWK string" you're still holding for later).

Dump this third JSON object to a sorted compact string representation. Call it "protected".

Dump the second JSON object to a sorted compact string representation. Call it "payload".

Create a string where you literally concatenate those two prior strings, such that it's the value of protected, then an actual period (as in 0x2e, a full stop, whatever), then the value of payload.

Write something that'll create a SHA256 digest of an arbitrary string and will sign it with an arbitrary RSA key (like "openssl dgst -sha256 -sign <key>"). The key in question is your personal key.

Pump that "<protected>.<payload>" string through the digest function. Then run it through your "base64web" encoder. Call this the signature.

Create a fourth JSON object. Add a string called "protected" and set it to whatever you built a few steps earlier. Add another string called "payload" and set it likewise. Then add one called "signature" and set it, too.

Take this fourth JSON object and dump it to a sorted compact string representation. Call this your post body.

Make a HTTP POST request to the "newAccount" URL from the directory. Set the content-type to "application/jose+json". Set the post data to the post body string from the previous step.

Dig around in the headers of the response, looking for one named "Location". Don't follow it like a redirection. Why would you ever follow a Location header in a HTTP response, right? Nope, that's your user account's identifier! Yes, you are a URL now. Hang on to that URL for later.

You now have an account.

You still have much to do.
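If it helps to see the whole dance in one place, here is a compact Python sketch of assembling that newAccount POST body (my reconstruction from RFC 8555, not her C++ implementation; it assumes the third-party `cryptography` package, and the URL and nonce are placeholders). In this sketch the protected header and payload are base64url-encoded before being joined and signed, which is what the spec calls for:

```python
# A sketch of the ACME newAccount JWS per RFC 8555 (not the author's code).
import base64
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def b64url(data: bytes) -> str:
    # "base64 but not really base64": URL-safe alphabet, padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def int_to_bytes(value: int) -> bytes:
    # big-endian bag of bytes with no leading zeroes (65537 -> 01 00 01)
    return value.to_bytes((value.bit_length() + 7) // 8, "big")

def compact(obj) -> str:
    # sorted, whitespace-free JSON
    return json.dumps(obj, separators=(",", ":"), sort_keys=True)

account_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
numbers = account_key.public_key().public_numbers()
jwk = {"e": b64url(int_to_bytes(numbers.e)),   # "AQAB" for 65537
       "kty": "RSA",
       "n": b64url(int_to_bytes(numbers.n))}

protected = b64url(compact({"alg": "RS256",
                            "jwk": jwk,
                            "nonce": "<value of the Replay-Nonce header>",
                            "url": "<the newAccount URL>"}).encode())
payload = b64url(compact({"termsOfServiceAgreed": True}).encode())

signing_input = f"{protected}.{payload}".encode()
signature = b64url(account_key.sign(signing_input, padding.PKCS1v15(), hashes.SHA256()))

post_body = json.dumps({"protected": protected,
                        "payload": payload,
                        "signature": signature})
# POST post_body to the newAccount URL with Content-Type: application/jose+json
```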

...

That's about where I stopped writing my take on the protocol.

Again, my program no longer works quite like this, but this is where it started after having observed a bunch of other stuff that already existed.

So far, we have (at least): RSA keys, SHA256 digests, RSA signing, base64 but not really base64, string concatenation, JSON inside JSON, Location headers used as identities instead of a target with a 301 response, HEAD requests to get a single value buried as a header, making one request (nonce) to make ANY OTHER request, and there's more to come.

We haven't even scratched the surface of creating an order, dealing with authorizations and challenges, the whole "key thumbprint" thing, what actually goes into those TXT records, and all of that other fun stuff.

...

Random side note: while looking at existing ACME clients, I found that at least one of them screws up their encoding of the publicExponent and ends up interpreting it as hex instead of decimal. That is, instead of 65537, aka 0x10001, it reads it as 0x65537, aka 415031!

Somehow, this anomaly exists and apparently doesn't break anything? I haven't actually run the client in question, but I imagine people are using it since it's in apt.

...

This complexity must be job security for somebody. Maybe multiple somebodies.

Inside the Apollo "8-Ball" FDAI (Flight Director / Attitude Indicator)

14 June 2025 at 03:12

During the Apollo flights to the Moon, the astronauts observed the spacecraft's orientation on a special instrument called the FDAI (Flight Director / Attitude Indicator). This instrument showed the spacecraft's attitude—its orientation—by rotating a ball. This ball was nicknamed the "8-ball" because it was black (albeit only on one side). The instrument also acted as a flight director, using three yellow needles to indicate how the astronauts should maneuver the spacecraft. Three more pointers showed how fast the spacecraft was rotating.

An Apollo FDAI (Flight Director/Attitude Indicator) with the case removed. This FDAI is on its side to avoid crushing the needles.

Since the spacecraft rotates along three axes (roll, pitch, and yaw), the ball also rotates along three axes. It's not obvious how the ball can rotate to an arbitrary orientation while remaining attached. In this article, I look inside an FDAI from Apollo that was repurposed for a Space Shuttle simulator1 and explain how it operates. (Spoiler: the ball mechanism is firmly attached at the "equator" and rotates in two axes. What you see is two hollow shells around the ball mechanism that spin around the third axis.)

The FDAI in Apollo

For the missions to the Moon, the Lunar Module had two FDAIs, as shown below: one on the left for the Commander (Neil Armstrong in Apollo 11) and one on the right for the Lunar Module Pilot (Buzz Aldrin in Apollo 11). With their size and central positions, the FDAIs dominate the instrument panel, a sign of their importance. (The Command Module for Apollo also had two FDAIs, but with a different design; I won't discuss them here.2)

The instrument panel in the Lunar Module. From Apollo 15 Lunar Module, NASA, S71-40761. If you're looking for the DSKY, it is in the bottom center, just out of the picture.

Each Lunar Module FDAI could display inputs from multiple sources, selected by switches on the panel.3 The ball could display attitude from either the Inertial Measurement Unit or from the backup Abort Guidance System, selected by the "ATTITUDE MON" toggle switch next to either FDAI. The pitch attitude could also be supplied by an electromechanical unit called ORDEAL (Orbital Rate Display Earth And Lunar) that simulates a circular orbit. The error indications came from the Apollo Guidance Computer, the Abort Guidance System, the landing radar, or the rendezvous radar (controlled by the "RATE/ERROR MON" switches). The pitch, roll, and yaw rate displays were driven by the Rate Gyro Assembly (RGA). The rate indications were scaled by a switch below the FDAI, selecting 25°/sec or 5°/sec.

The FDAI mechanism

The ball inside the indicator shows rotation around three axes. I'll first explain these axes in the context of an aircraft, since the axes of a spacecraft are more arbitrary.4 The roll axis indicates the aircraft's angle if it rolls side-to-side along its axis of flight, raising one wing and lowering the other. Thus, the indicator shows the tilt of the horizon as the aircraft rolls. The pitch axis indicates the aircraft's angle if it pitches up or down, with the indicator showing the horizon moving down or up in response. Finally, the yaw axis indicates the compass direction that the aircraft is heading, changing as the aircraft turns left or right. (A typical aircraft attitude indicator omits yaw.)

I'll illustrate how the FDAI rotates the ball in three axes, using an orange as an example. Imagine pinching the horizontal axis between two fingers with your arm extended. Rotating your arm will roll the ball counter-clockwise or clockwise (red arrow). In the FDAI, this rotation is accomplished by a motor turning the frame that holds the ball. For pitch, the ball rotates forward or backward around the horizontal axis (yellow arrow). The FDAI has a motor inside the ball to produce this rotation. Yaw is a bit more difficult to envision: imagine hemisphere-shaped shells attached to the top and bottom shafts. When a motor rotates these shells (green arrow), the hemispheres will rotate, even though the ball mechanism (the orange) remains stationary.

A sphere, showing the three axes.

The diagram below shows the mechanism inside the FDAI. The indicator uses three motors to move the ball. The roll motor is attached to the FDAI's frame, while the pitch and yaw motors are inside the ball. The roll motor rotates the roll gimbal through gears, causing the ball to rotate clockwise or counterclockwise. The roll gimbal is attached to the ball mechanism at two points along the "equator"; these two points define the pitch axis. Numerous wires on the roll gimbal enter the ball along the pitch axis. The roll control transformer provides position feedback, as will be explained below.

The main components inside the FDAI.

Removing the hemispherical shells reveals the mechanism inside the ball. When the roll gimbal is rotated, this mechanism rotates with it. The pitch motor causes the ball mechanism to rotate around the pitch axis. The yaw motor and control transformer are not visible in this photo; they are behind the pitch components, oriented perpendicularly. The yaw motor turns the vertical shaft, with the two hemisphere shells attached to the top and bottom of the shaft. Thus, the yaw motor rotates the ball shells around the yaw axis, while the mechanism itself remains stationary. The control transformers for pitch and yaw provide position feedback.

The components inside the ball of the FDAI.

Why doesn't the wiring get tangled up as the ball rotates? The solution is two sets of slip rings to implement the electrical connections. The photo below shows the first slip ring assembly, which handles rotation around the roll axis. These slip rings connect the stationary part of the FDAI to the rotating roll gimbal. The vertical metal brushes are stationary; there are 23 pairs of brushes, one for each connection to the ball mechanism. Each pair of brushes contacts one metal ring on the striped shaft, maintaining contact as the shaft rotates. Inside the shaft, 23 wires connect the circular metal contacts to the roll gimbal.

The slip ring assembly in the FDAI.

A second set of slip rings inside the ball handles rotation around the pitch axis. These rings provide the electrical connection between the wiring on the roll gimbal and the ball mechanism. The yaw axis does not use slip rings since only the hemisphere shells rotate around the yaw axis; no wires are involved.

Synchros and the servo loop

In this section, I'll explain how the FDAI is controlled by synchros and servo loops. In the 1950s and 1960s, the standard technique for transmitting a rotational signal electrically was through a synchro. Synchros were used for everything from rotating an instrument indicator in avionics to rotating the gun on a navy battleship. A synchro produces an output that depends on the shaft's rotational position, and transmits this output signal on three wires. If you connect these wires to a second synchro, you can use the first synchro to control the second one: the shaft of the second synchro will rotate to the same angle as the first shaft. Thus, synchros are a convenient way to send a control signal electrically.

The photo below shows a typical synchro, with the input shaft on the top and five wires at the bottom: two for power and three for the output.

A synchro transmitter.

Internally, the synchro has a rotating winding called the rotor that is driven with 400 Hz AC. Three fixed stator windings provide the three AC output signals. As the shaft rotates, the voltages of the output signals change, indicating the angle. (A synchro resembles a transformer with three variable secondary windings.) If two connected synchros have different angles, the magnetic fields create a torque that rotates the shafts into alignment.
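
For reference, here are the textbook relations (my addition, not something from the FDAI documentation). With the rotor excited by the 400 Hz reference and the shaft at angle θ, the three stator outputs are 400 Hz signals whose amplitudes follow sines of the shaft angle, offset by 120°; a control transformer fed those three signals produces a rotor voltage roughly proportional to the sine of the difference between the two shaft angles:

```latex
V_1 \approx k\,V_{\mathrm{ref}}\sin(\omega t)\,\sin(\theta)
V_2 \approx k\,V_{\mathrm{ref}}\sin(\omega t)\,\sin(\theta + 120^\circ)
V_3 \approx k\,V_{\mathrm{ref}}\sin(\omega t)\,\sin(\theta + 240^\circ)

V_{\mathrm{error}} \approx k'\,V_{\mathrm{ref}}\sin(\omega t)\,\sin(\theta_{\mathrm{in}} - \theta_{\mathrm{out}})
```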

The schematic symbol for a synchro transmitter or receiver.

The downside of synchros is that they don't produce a lot of torque. The solution is to use a more powerful motor, controlled by the synchro and a feedback loop called a servo loop. The servo loop drives the motor in the appropriate direction to eliminate the error between the desired position and the current position.

The diagram below shows how the servo loop is constructed from a combination of electronics and mechanical components. The goal is to rotate the output shaft to an angle that exactly matches the input angle, specified by the three synchro wires. The control transformer compares the input angle and the output shaft position, producing an error signal. The amplifier uses this error signal to drive the motor in the appropriate direction until the error signal drops to zero. To improve the dynamic response of the servo loop, the tachometer signal is used as a negative feedback voltage. The feedback slows the motor as the system gets closer to the right position, so the motor doesn't overshoot the position and oscillate. (This is sort of like a PID controller.)
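
As a toy model of that loop (my own sketch with made-up gains, not anything from the FDAI's documentation), you can treat the amplifier as a gain on the error signal, subtract a tachometer term proportional to speed, and integrate the motor's motion:

```python
# Toy rate-damped servo loop: error drives the motor, tachometer feedback damps it.
def servo(target_deg, steps=2000, dt=0.001):
    angle, speed = 0.0, 0.0                        # output shaft position and velocity
    k_error, k_tach, inertia = 40.0, 12.0, 1.0     # made-up gains
    for _ in range(steps):
        error = target_deg - angle                 # control transformer output
        drive = k_error * error - k_tach * speed   # amplifier minus tachometer feedback
        speed += (drive / inertia) * dt            # motor torque -> acceleration
        angle += speed * dt
    return angle

print(servo(30.0))   # settles close to 30 degrees without oscillating
```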

This diagram shows the structure of the servo loop, with a feedback loop ensuring that the rotation angle of the output shaft matches the input angle.

A control transformer is similar to a synchro in appearance and construction, but the rotating shaft operates as an input, not the output. In a control transformer, the three stator windings receive the inputs and the rotor winding provides the error output. If the rotor angle of the synchro transmitter and control transformer are the same, the signals cancel out and there is no error voltage. But as the difference between the two shaft angles increases, the rotor winding produces an error signal. The phase of the error signal indicates the direction of the error.

In the FDAI, the motor is a special motor/tachometer, a device that was often used in avionics servo loops. This motor is more complicated than a regular electric motor. The motor is powered by 115 volts AC at 400 hertz, but this won't spin the motor on its own. The motor also has two low-voltage control windings. Energizing the control windings with the proper phase causes the motor to spin in one direction or the other. The motor/tachometer unit also contains a tachometer to measure its speed for the feedback loop. The tachometer is driven by another 115-volt AC winding and generates a low-voltage AC signal that is proportional to the motor's rotational speed.

A motor/tachometer similar (but not identical) to the one in the FDAI.

The photo above shows a motor/tachometer with the rotor removed. The unit has many wires because of its multiple windings. The rotor has two drums. The drum on the left, with the spiral stripes, is for the motor. This drum is a "squirrel-cage rotor", which spins due to induced currents. (There are no electrical connections to the rotor; the drums interact with the windings through magnetic fields.) The drum on the right is the tachometer rotor; it induces a signal in the output winding proportional to the speed due to eddy currents. The tachometer signal is at 400 Hz like the driving signal, either in phase or 180º out of phase, depending on the direction of rotation. For more information on how a motor/tachometer works, see my teardown.

The amplifiers

The FDAI has three servo loops—one for each axis—and each servo loop has a separate control transformer, motor, and amplifier. The photo below shows one of the three amplifier boards. The construction is unusual and somewhat chaotic, with some components stacked on top of others to save space. Some of the component leads are long and protected with clear plastic sleeves.5 The cylindrical pulse transformer in the middle has five colorful wires coming out of it. At the left are the two transistors that drive the motor's control windings, with two capacitors between them. The transistors are mounted on a heat sink that is screwed down to the case of the amplifier assembly for cooling. Each amplifier is connected to the FDAI through seven wires with pins that plug into the sockets on the right of the board.6

One of the three amplifier boards. At the right front of the board, you can see a capacitor stacked on top of a resistor. The board is shiny because it is covered with conformal coating.

The function of the board is to amplify the error signal so the motor rotates in the appropriate direction. The amplifier also uses the tachometer output from the motor unit to slow the motor as the error signal decreases, preventing overshoot. The inputs to the amplifier are 400 hertz AC signals, with the magnitude indicating the amount of error or speed and the phase indicating the direction. The two outputs from the amplifier drive the two control windings of the motor, determining which direction the motor rotates.

The schematic for the amplifier board is below. 7 The two transistors on the left amplify the error and tachometer signals, driving the pulse transformer. The outputs of the pulse transformer will have opposite phases, driving the output transistors for opposite halves of the 400 Hz cycle. This activates the motor control winding, causing the motor to spin in the desired direction.8

The schematic of an amplifier board.

History of the FDAI

Bill Lear, born in 1902, was a prolific inventor with over 150 patents, creating everything from the 8-track tape to the Learjet, the iconic private plane of the 1960s. He created multiple companies in the 1920s as well as inventing one of the first car radios for Motorola before starting Lear Avionics, a company that specialized in aerospace instruments.9 Lear produced innovative aircraft instruments and flight control systems such as the F-5 automatic pilot, which received a trophy as the "greatest aviation achievement in America" for 1950.

Bill Lear went on to solve an indicator problem for the Air Force: the supersonic F-102 Delta Dagger interceptor (1953) could climb at steep angles, but existing attitude indicators could not handle nearly vertical flight. Lear developed a remote two-gyro platform that drove the cockpit indicator while avoiding "gimbal lock" during vertical flight. For the experimental X-15 rocket-powered aircraft (1959), Lear improved this indicator to handle three axes: roll, pitch, and yaw.

Meanwhile, the Siegler Corporation started in 1950 to manufacture space heaters for homes. A few years later, Siegler was acquired by John Brooks, an entrepreneur who was enthusiastic about acquisitions. In 1961, Lear Avionics became his latest acquisition, and the merged company was called Lear Siegler Incorporated, often known as LSI. (Older programmers may know Lear Siegler through the ADM-3A, an inexpensive video display terminal from 1976 that housed the display and keyboard in a stylish white case.)

The X-15's attitude indicator became the basis of the indicator for the F-4 fighter plane (the ARU/11-A). Then, after "a minimum of modification", the attitude-director indicator was used in the Gemini space program. In total, Lear Siegler provided 11 instruments in the Gemini instrument panel, with the attitude director the most important. Next, Gemini's indicator was modified to become the FDAI (flight director-attitude indicator) in the Lunar Module for Apollo.10 Lear Siegler provided numerous components for the Apollo program, from a directional gyro for the Lunar Rover to the electroluminescent display for the Apollo Guidance Computer's Display/Keyboard (DSKY).

An article titled "LSI Instruments Aid in Moon Landing" from LSI's internal LSI Log publication, July 1969. (Click for a larger version.)

In 1974, Lear Siegler obtained a contract to develop the Attitude-Director Indicator (ADI) for the Space Shuttle, eventually producing a dozen ADI units. However, by this time, Lear Siegler was losing enthusiasm for low-volume space avionics. The Instrument Division president said that "the business that we were in was an engineering business and engineers love a challenge." But manufacturing refused to deal with the special procedures required for space manufacturing, so the Shuttle units were built by the engineering department. Lear Siegler didn't bid on later Space Shuttle avionics and the Shuttle ADI became its last space product. In the early 2000s, the Space Shuttle's instruments were upgraded to a "glass cockpit" with 11 flat-panel displays known as the Multi-function Electronic Display System (MEDS). The MEDS was produced by Lear Siegler's long-term competitor, Honeywell.

Getting back to Bill Lear, he wanted to manufacture aircraft, not just aircraft instruments, so he created the Learjet, the first mass-produced business jet. The first Learjet flew in 1963, with over 3000 eventually delivered. In the early 1970s, Lear designed a steam turbine automobile engine. Rather than water, the turbine used a secret fluorinated hydrocarbon called "Learium". Lear had visions of thousands of low-pollution "Learmobiles", but the engine failed to catch on. Lear had been on the verge of bankruptcy in the 1960s; one of his VPs explained that "the great creative minds can't be bothered with withholding taxes and investment credits and all this crap". But by the time of his death in 1978, Lear had a fortune estimated at $75 million.

Comparing the ARU/11-A and the FDAI

Looking inside our FDAI sheds more details on the evolution of Lear Siegler's attitude directors. The photo below compares the Apollo FDAI (top) to the earlier ARU/11-A used in the F-4 aircraft (bottom). While the basic mechanism and the electronic amplifiers are the same between the two indicators, there are also substantial changes.

Comparison of an FDAI (top) with an ARU-11/A (bottom). The amplifier boards and needles have been removed from the FDAI.

The biggest difference between the ARU-11/A indicator and the FDAI is that the electronics for the ARU-11/A are in a separate module that was plugged into the back of the indicator, while the FDAI includes the electronics internally, with boards mounted on the instrument frame. Specifically, the ARU-11/A has a separate unit containing a multi-winding transformer, a power supply board, and three amplifier boards (one for each axis), while the FDAI contains these components internally. The amplifier boards in the ARU-11/A and the FDAI are identical, constructed from germanium transistors rather than silicon.11 The unusual 11-pin transformers are also the same. However, the power supply boards are different, probably because the boards also contain scaling resistors that vary between the units.12 The power supply boards are also different shapes to fit the available space.

The ball assemblies of the ARU/11-A and the FDAI are almost the same, with the same motor assemblies and slip ring mechanism. The gearing has minor changes. In particular, the FDAI has two plastic gears, while the ARU/11-A uses exclusively metal gears.

The ARU/11-A has a patented pitch trim feature that was mostly—but not entirely—removed from the Apollo FDAI. The motivation for this feature is that an aircraft in level flight will be pitched up a few degrees, the "angle of attack". It is desirable for the attitude indicator to show the aircraft as horizontal, so a pitch trim knob allows the angle of attack to be canceled out on the display. The problem is that if you fly your fighter plane vertically, you want the indicator to show precisely vertical flight, rather than applying the pitch trim adjustment. The solution in the ARU-11/A is a special 8-zone potentiometer on the pitch axis that will apply the pitch trim adjustment in level flight but not in vertical flight, while providing a smooth transition between the regions. This special potentiometer is mounted inside the ball of the ARU-11/A. However, this pitch trim adjustment is meaningless for a spacecraft, so it is not implemented in the Apollo or Space Shuttle instruments. Surprisingly, the shell of the potentiometer still exists in our FDAI, but without the potentiometer itself or the wiring. Perhaps it remained to preserve the balance of the ball. In the photo below, the cylindrical potentiometer shell is indicated by an arrow. Note the holes in the front of the shell; in the ARU-11/A, the potentiometer's wiring terminals protrude through these holes, but in the FDAI, the holes are covered with tape.

Inside the ball of the FDAI. The potentiometer shell is indicated with an arrow.

Finally, the mounting of the ball hemispheres is slightly different. The ARU/11-A uses four screws at the pole of each hemisphere. Our FDAI, however, uses a single screw at each pole; the screw is tightened with a Bristol Key, causing the shaft to expand and hold the hemisphere in place.

To summarize, the Apollo FDAI occupies a middle ground: while it isn't simply a repurposed ARU-11/A, neither is it a complete redesign. Instead, it preserves the old design where possible, while stripping out undesired features such as pitch trim. The separate amplifier and mechanical units of the ARU/11-A were combined to form the larger FDAI.

Differences from Apollo

The FDAI that we examined is a special unit: it was originally built for Apollo but was repurposed for a Space Shuttle simulator. Our FDAI is labeled Model 4068F, which is a Lunar Module part number. Moreover, the FDAI is internally stamped with the date "Apr. 22 1968", over a year before the first Moon landing.

However, a closer look shows that several key components were modified to make the Apollo FDAI work in the Shuttle Simulator.14 The Apollo FDAI (and the Shuttle ADI) used resolvers as inputs to control the ball, while our FDAI uses synchros. (Resolvers and synchros are similar, except resolvers use sine and cosine inputs, 90° apart, on two wire pairs, while synchros use three inputs, 120° apart, on three wires.) NASA must have replaced the three resolver control transformers in the FDAI with synchro control transformers for use in the simulator.

The Apollo FDAI used electroluminescent lighting for the display, while ours uses eight small incandescent bulbs. The metal case of our FDAI has a Dymo embossed tape label "INCANDESCENT LIGHTING", alerting users to the change from Apollo's illumination. Our FDAI also contains a step-down transformer to convert the 115 VAC input into 5 VAC to power the bulbs, while the Shuttle powered its ADI illumination directly from 5 volts.

The dial of our FDAI was repainted to match the dial of the Shuttle FDAI. The Apollo FDAI had red bands on the left and right of the dial. A close examination of our dial shows that black paint was carefully applied over the red paint, but a few specks of red paint are still visible (below). Moreover, the edges of the lines and the lozenge show slight unevenness from the repainting. Second, the Apollo FDAI had the text "ROLL RATE", "PITCH RATE", and "YAW RATE" in white next to the needle scales. In our FDAI, this text has been hidden by black paint to match the Shuttle display.13 Third, the Apollo LM FDAI had a crosshair in the center of the instrument, while our FDAI has a white U-shaped indicator, the same as the Shuttle (and the Command Module's FDAI). Finally, the ball of the Apollo FDAI has red circular regions at the poles to warn of orientations that can cause gimbal lock. Our FDAI (like the Shuttle) does not have these circles. We couldn't see any evidence that these regions were repainted, so we suspect that our FDAI has Shuttle hemispheres on the ball.

A closeup of the dial on our FDAI shows specks of red paint around the dial markings. The color is probably Switzer DayGlo Rocket Red.

Our FDAI has also been modified electrically. Small green connectors (Micro-D MDB1) have been added between the slip rings and the motors, as well as on the gimbal arm. We think these connectors were added post-Apollo, since they are attached somewhat sloppily with glue and don't look flight-worthy. Perhaps these connectors were added to make disassembly and modification easier. Moreover, our FDAI has an elapsed time indicator, also mounted with glue.

The back of our FDAI is completely different from Apollo. First, the connector's pinout is completely different. Second, each of the six indicator needles has a mechanical adjustment as well as a trimpot (details). Finally, each of the three axes has an adjustment potentiometer.

The Shuttle's ADI (Attitude Director Indicator)

Each Space Shuttle had three ADIs (Attitude Director Indicators), which were very similar to the Apollo FDAI, despite the name change. The photo below shows the two octagonal ADIs in the forward flight deck, one on the left in front of the Commander, and one on the right in front of the Pilot. The aft flight deck station had a third ADI.15

This photo shows Discovery's forward flight deck on STS-063 (1999). The ADIs are indicated with arrows. The photo is from the National Archives.

Our FDAI appears to have been significantly modified for use in the Shuttle simulator, as described above. However, it is much closer to the Apollo FDAI than the ADI used in the Shuttle, as I'll show in this section. My hypothesis is that the simulator was built before the Shuttle's ADI was created, so the Apollo FDAI was pressed into service.

The Shuttle's ADI was much more complicated electrically than the Apollo FDAI and our FDAI, providing improved functionality.16 For instance, while the Apollo FDAI had a simple "OFF" indicator flag to show that the indicator had lost power, the Shuttle's ADI had extensive error detection. It contained voltage level monitors to check its five power supplies. (The Shuttle ADI used three DC power sources and two AC power sources, compared to the single AC supply for Apollo.) The Shuttle's ADI also monitored the ball servos to detect position errors. Finally, it received an external "Data OK" signal. If a fault was detected by any of these monitors, the "OFF" flag was deployed to indicate that the ADI could not be trusted.

The Shuttle's ADI had six needles, the same as Apollo, but the Shuttle used feedback to make the positions more accurate. Specifically, each Shuttle needle had a feedback sensor, a Linear Variable Differential Transformer (LVDT) that generates a voltage based on the needle position. The LVDT output drove a servo feedback loop to ensure that the needle was in the exact desired position. In the Apollo FDAI, on the other hand, the needle input voltage drove a galvanometer, swinging the needle proportionally, but there was no closed loop to ensure accuracy.

I assume that the Shuttle's ADI had integrated circuit electronics to implement this new functionality, considerably more modern than the germanium transistors in the Apollo FDAI. The Shuttle probably used the same mechanical structures to rotate the ball, but I can't confirm that.

Conclusions

The FDAI was a critical instrument in Apollo, indicating the orientation of the spacecraft in three axes. It wasn't obvious to me how the "8-ball" can rotate in three axes while still being securely connected to the instrument. The trick is that most of the mechanism rotates in two axes, while hollow hemispherical shells provide the third rotational axis.

The FDAI has an interesting evolutionary history, from the experimental X-15 rocket plane and the F-4 fighter to the Gemini, Apollo, and Space Shuttle flights. Our FDAI has an unusual position in this history: since it was modified from Apollo to function in a Space Shuttle simulator, it shows aspects of both Apollo and the Space Shuttle indicators. It would be interesting to compare the design of a Shuttle ADI to the Apollo FDAI, but I haven't been able to find interior photos of a Shuttle ADI (or of an unmodified Apollo FDAI).17

You can see a brief video of the FDAI in motion here. For more, follow me on Bluesky (@righto.com), Mastodon (@kenshirriff@oldbytes.space), or RSS. (I've given up on Twitter.) I worked on this project with CuriousMarc, Mike Stewart, and Eric Schlapfer, so expect a video at some point. Thanks to Richard for providing the FDAI. I wrote about the F-4 fighter plane's attitude indicator here.

Inside the FDAI. The amplifier boards have been removed for this photo.

Notes and references

  1. There were many Space Shuttle simulators, so it is unclear which simulator was the source of our FDAI. The photo below shows a simulator, with one of the ADIs indicated with an arrow. Presumably, our FDAI became available when a simulator was upgraded from physical instruments to the screens of the Multi-function Electronic Display System (MEDS).

    "Forward flight deck of the fixed-base simulator." From Introduction to Shuttle Mission Simulation

    "Forward flight deck of the fixed-base simulator." From Introduction to Shuttle Mission Simulation

    The most complex simulators were the three Shuttle Mission Simulators, one of which could dynamically move to provide motion cues. These simulators were at the simulation facility in Houston—officially the Jake Garn Mission Simulator and Training Facility—which also had a guidance and navigation simulator, a Spacelab simulator, and integration with the WETF (Weightless Environment Training Facility, an underground pool to simulate weightlessness). The simulators were controlled by a computer complex containing dozens of networked computers. The host computers were three UNIVAC 1100/92 mainframes, 36-bit computers that ran the simulation models. These were supported by seventeen Concurrent Computer Corporation 3260 and 3280 super-minicomputers that simulated tracking, telemetry, and communication. The simulators also used real Shuttle computers running the actual flight software; these were IBM AP101S General-Purpose Computers (GPC). For more information, see Introduction to Shuttle Mission Simulation.

    NASA had additional Shuttle training facilities beyond the Shuttle Mission Simulator. The Full Fuselage Trainer was a mockup of the complete Shuttle orbiter (minus the wings). It included full instrument panels (including the ADIs), but did not perform simulations. The Crew Compartment Trainers could be positioned horizontally or vertically (to simulate pre-launch operations). They contained accurate flight decks with non-functional instruments. Three Single System Trainers provided simpler mockups for astronauts to learn each system, both during normal operation and during malfunctions, before using the more complex Shuttle Mission Simulator. A list of Shuttle training facilities is in Table 3.1 of Preparing for the High Frontier. Following the end of the Shuttle program, the trainers were distributed to various museums (details). 

  2. The Command Module for Apollo used a completely different FDAI (flight director-attitude indicator) that was built by Honeywell. The two designs can be easily distinguished: the Honeywell FDAI is round, while the Lear Siegler FDAI is octagonal. 

  3. The FDAI's signals are more complicated than I described above. Among other things, the IMU's gimbal angles use a different coordinate system from the FDAI, so an electromechanical unit called GASTA (Gimbal Angle Sequence Transformation Assembly) used resolvers and motors to convert the coordinates. The digital attitude error signals from the computer are converted to analog by the Inertial Measurement Unit's Coupling Data Unit (IMU CDU). For attitude, the IMU is selected with the PGNS (Primary Guidance and Navigation System) switch setting. See the Lunar Module Systems Handbook, Lunar Module System Handbook Rev A, and the Apollo Operations Handbook for more.

    The connections to the Apollo FDAIs. Adapted from LM-1 Systems Handbook. I think this diagram predates the ORDEAL system. (Click for a larger version.)

     

  4. The roll, pitch, and yaw axes of the Lunar Module are not as obvious as the axes of an airplane. The diagram below defines these axes.

    The roll, pitch, and yaw axes of the Lunar Module. Adapted from LM Systems Handbook.

     

  5. The amplifier is constructed on a single-sided printed circuit board. Since the components are packed tightly on the board, routing of the board was difficult. However, some of the components have long leads, protected by plastic sleeves. This provides additional flexibility for the board routing since the leads could be positioned as desired, regardless of the geometry of the component. As a result, the style of this board is very different from modern circuit boards, where components are usually arranged in an orderly pattern. 

  6. In our FDAI, the amplifier boards as well as the needle actuators are connected by pins that plug into sockets. These connections don't seem suitable for flight since they could easily vibrate loose. We suspect that the pin-and-socket connections made the module easier to reconfigure in the simulator, but were not used in flyable units. In particular, in the similar aircraft instruments (ARU/11-A) that we examined, the wires to the amplifier boards were soldered. 

  7. The board has a 56-volt Zener diode, but the function of the diode is unclear. The board is powered by 28 volts, not enough voltage to activate the Zener. Perhaps the diode filters high-voltage transients, but I don't see how transients could arise in that part of the circuit. (I can imagine transients when the pulse transformer switches, but the Zener isn't connected to the transformer.) 

  8. In more detail, each motor's control winding is a center-tapped winding, with the center connected to 28 volts DC. The amplifier board's output transistors will ground either side of the winding during alternate half-cycles of the 400 Hz cycle. This causes the motor to spin in one direction or the other. (Usually, control windings are driven 90° out of phase with the motor power, but I'm not sure how this phase shift is applied in the FDAI.) 

  9. The history of Bill Lear and Lear Siegler is based on Love him or hate him, Bill Lear was a creator and On Course to Tomorrow: A History of Lear Siegler Instrument Division’s Manned Spaceflight Systems 1958-1981

  10. Numerous variants of the Lear Siegler FDAI were built for Apollo, as shown below. Among other things, the length of the unit ("L MAX") varied from 8 inches to 11 inches. (Our FDAI is approximately 8 inches long.)

    The Apollo FDAI part number chart from Grumman Specification Control Drawing LSC350-301. (Click for a larger view.)

     

  11. We examined a different ARU-11/A where the amplifier boards were not quite identical: the boards had one additional capacitor and some of the PCB traces were routed slightly differently. These boards were labeled "REV C" in the PCB copper, so they may have been later boards with a slight modification. 

  12. The amplifier scaling resistors were placed on the power supply board rather than the amplifier boards, which may seem strange. The advantage of this approach is that it permitted the three amplifier boards to be identical, since the components that differ between the axes were not part of the amplifier boards. This simplified the manufacture and repair of the amplifier boards. 

  13. On the front panel of our FDAI, the text "ROLL RATE", "PITCH RATE", and "YAW RATE" has been painted over. However, the text is still faintly visible (reversed) on the inside of the panel, as shown below.

    The inside of the FDAI's front cover.

     

  14. The diagram below shows the internals of the Apollo LM FDAI at a high level. This diagram shows several differences between the LM FDAI and the FDAI that we examined. First, the roll, pitch, and yaw inputs to the LM FDAI are resolver inputs (i.e. sin and cos), rather than the synchro inputs to our FDAI. Second, the needle signals below are modulated on an 800 Hz carrier and are demodulated inside the FDAI. Our FDAI, however, uses positive or negative voltages to drive the needle galvanometers directly. A minor difference is that the diagram below shows the Power Off Flag wired to +28V internally, while our FDAI has the flag wired to connector pins, probably so the flag could be controlled by the simulator.

    The diagram of the FDAI in the LM Systems Handbook. Click for a larger image.

     

  15. The Space Shuttle instruments were replaced with color LCD screens in the MEDS (Multifunction Electronic Display System) upgrade. This upgrade is discussed in New Displays for the Space Shuttle Cockpit. The Space Shuttle Systems Handbook shows the ADIs on the forward console (pages 263-264) and the aft console (page 275). The physical ADI is compared to the MEDS ADI display in Displays and Controls, Vol. 1 page 119. 

  16. The diagram below shows the internals of the Shuttle's ADI at a high level. The Shuttle's ADI is more complicated than the Apollo FDAI, even though they have the same indicator ball and needles.

    A diagram of the Space Shuttle's ADI. From Space Shuttle Systems Handbook Vol. 1, 1 G&C DISP 1. (Click for a larger image.)

     

  17. Multiple photos of the exterior of the Shuttle ADI are available here, from the National Air and Space Museum. There are interior photos of Apollo FDAIs online, but they all appear to be modified for Shuttle simulators. 

How to trigger a command on Linux when disconnected from power

31 May 2025 at 00:00
# Introduction

After thinking about the BusKill product, which triggers a command once its USB cord is disconnected, I started thinking about a simple alternative.

=> https://www.buskill.in BusKill official project website

When using a laptop that is connected to power most of the time, you may want it to power off as soon as it gets disconnected. This can be really useful in a public area like a bar or a train: the idea is to protect the laptop if it gets stolen while in use and unlocked.

Here is how to proceed on Linux, using an udev rule that triggers on a change in the power_supply subsystem.

For OpenBSD users, it is possible to use apmd as I explained in this article:

=> https://dataswamp.org/~solene/2024-02-20-rarely-known-openbsd-features.html#_apmd_daemon_hooks Rarely known OpenBSD features: apmd daemon hooks

In the example, the script will just power off the machine, it is up to you to do whatever you want like destroy the LUKS master key or trigger the coffee machine :D

# Setup

Create a file `/etc/udev/rules.d/disconnect.rules` (you can name it however you want as long as it ends with `.rules`):

```
SUBSYSTEM=="power_supply", ENV{POWER_SUPPLY_ONLINE}=="0", ENV{POWER_SUPPLY_TYPE}=="Mains", RUN+="/usr/local/bin/power_supply_off"
```

Create a file `/usr/local/bin/power_supply_off` that will be executed when you unplug the laptop:

```
#!/bin/sh
echo "Going off because power supply got disconnected" | systemd-cat
systemctl poweroff
```

This simple script will add an entry in journald before triggering the system shutdown.

Mark this script executable with:
```
chmod +x /usr/local/bin/power_supply_off
```

Reload udev rules using the following commands:

```
udevadm control --reload-rules
udevadm trigger
```

# Testing

If you unplug your laptop's power, it should power off, and you should find an entry in the logs.

If nothing happens, look at the systemd logs to see if something is wrong in udev, like a syntax error in the file you created or an incorrect path for the script.
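
If you want to watch what is happening, these two commands are usually enough (the grep pattern matches the message from the example script above):

```
# watch udev events while you unplug/replug the power adapter
udevadm monitor --udev --property --subsystem-match=power_supply

# look for the script's message in the journal of the current boot
journalctl -b | grep -i "power supply got disconnected"
```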

# Script ideas

Depending on your needs, here is a list of actions the script could do, from gentle to hardcore:

* Lock user sessions
* Hibernate
* Proper shutdown
* Instant power off (through sysrq)
* Destroy LUKS master key to make LUKS volume unrecoverable + Instant power off
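
For the two harshest options on that list, the core commands look something like this. This is only a sketch: the device path is an example, and `cryptsetup luksErase` makes the volume permanently unrecoverable unless you have a LUKS header backup somewhere else.

```
#!/bin/sh
# destroy the LUKS keyslots, then cut power immediately (destructive!)
cryptsetup luksErase --batch-mode /dev/nvme0n1p3   # adjust to your encrypted partition
echo 1 > /proc/sys/kernel/sysrq                    # make sure sysrq is enabled
echo o > /proc/sysrq-trigger                       # instant power off, no clean shutdown
```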

# Conclusion

While BusKill is an effective (if unusual) product that is certainly useful for a niche, protecting a running laptop against thieves adds an extra layer of safety when you are out and about.

Obviously, this use case works only when the laptop is connected to power.

AI could be conscious tomorrow and we wouldn’t care

30 June 2025 at 15:55

Historian Dr Francis Young on extraterrestrial life and its imagined implications (Bluesky):

One of the most darkly funny scientistic pieties is the idea that the discovery of intelligent life beyond Earth would ‘humble’ humanity - given that in the late c19th and early c20th (an era renowned for human humility […]) it was a mainstream view that Mars was inhabited

It never ceases to amaze me how we have culturally memory-holed the fact that before c. 1920 it was perfectly normal to believe seriously that intelligent life existed on other planets in the Solar System

The discovery that "Mars was likely lifeless … is a mid-20th-century development."

But the idea that a broad consensus that we are not alone in the universe will somehow inaugurate an era of world peace is pretty silly, given that many intelligent people believed this with complete seriousness in 1914.

It’s a good point!


Further back in history, the Medieval cosmology was also densely populated.

From The Discarded Image (Wikipedia) by C S Lewis (which I read on recommendation from Robin Sloan), there are intelligent, powerful gods - which we can see as planets - and angels and we have so much in common with other life on Earth:

The powers of Vegetable Soul are nutrition, growth and propagation. It alone is present in plants. Sensitive Soul, which we find in animals, has these powers but has sentience in addition. It thus includes and goes beyond Vegetable Soul, so that a beast can be said to have two levels of soul, Sensitive and Vegetable, or a double seal, or even – though misleadingly – two souls. Rational Soul similarly includes Vegetable and Sensitive, and adds reason.

(p153)

There are not just humans and angels, there are

bull-beggars, spirits, witches, urchins, elves, hags, fairies, satyrs, pans, faunes, spleens, tritons, centaurs, dwarfs, giants, nymphes, Incubus, Robin good fellow, the spoom, the man in the oke, the fire-drake, the puckle, Tom Thumbe, Tom tumbler, boneles, and other such bugs.

(p125)

We were not alone.


Still further, into the deep history of Eurasian magic, the 40,000 year-old system of belief underpinning the West:

Across the vast grasslands and forests of the Steppe in Central Asia and west into Europe, the world was animated by spirits, some originally human, others less so.

Animism is "a mode of action, creating relations between kinds."

In conceiving of such relations it may be that all things, living and non-living, are seen as persons. Many groups do believe that all things are human, and hence have personhood, whether they may appear as a rock, or tapir or the Sun. Relations between persons are of amity, indifference or enmity…

And before you say that animism is an idea that we have moved past, and it is absurd that the rock falls to the Earth because of some kind of “amity”, let’s go back to Lewis in The Discarded Image who points out that our natural laws - such as the law of gravity - have an anthropological frame:

to talk as if [falling stones] could ‘obey laws’ is to treat them like men and even like citizens.

Still our language today.


So maybe let’s go further than Dr Francis Young…

The discovery of extraterrestrial life would not result in a humbling Copernican decentring of human consciousness.

Not just because a belief in extraterrestrial life has occurred before and we didn’t show much "humility" then.

But because (Eurasian) humanity already had its Copernican moment, tens of thousands of years ago, animism means that humans have always been one mere consciousness among thousands.

Humanity has never felt alone and this is as humble as we get.


I can’t help but connect all of this with AI consciousness (on which topic I maintain an agnostic watching brief)…

If AI consciousness were shown to be real, the argument goes, we would need to update our ethics with “robot rights,” granting justice, autonomy and dignity to our fellow sentient beings.

(Lena by qntm resonates because we instinctively see the treatment of the uploaded brain as Not Okay, even though it’s just software, evidence that we do indeed have a kind of folk ethics of artificial non-humans.)

And that, we suppose, would cascade to a Copernican shift in how humanity sees itself, etc.

But I’ve never been sure that recognising AIs as sentient would make a blind bit of difference. As I said when I wrote about AI consciousness before (2023), I’m pretty sure that chickens are sentient and it doesn’t stop us doing all kinds of awful unethical things with them.

Even if we don’t agree on chicken sentience, what about people who work in sweatshops, and they are definitely sentient, and they don’t get access to the same “robot rights” currently being debated for future sentient AIs.

So if we’re hunting for a route to an expanded moral frame for humanity, I’m not sure we’ll find it purely via ET or AI. I wonder what it would take.


More posts tagged: ai-consciousness (2), the-ancient-world-is-now (15).

Batter bits, scraps, dubs, scrumps

27 June 2025 at 13:30

When I was a kid when you went to a proper chippie - fish and chips traditionally on a Friday - you could ask for “batter bits” (regional naming statistics) which were the wonderful leftover deep-fried scraps of fish batter, and you’d get them free.

Anyway apparently if the scraps build up too much in the fryer they can cause chip shop fires. So always clear them out.


What does my head in about this astounding beatboxing is that it’s all done with the human mouth and the Pleistocene was 2 MILLION YEARS so odds are some pre-historic proto-human was banging rocks and spitting weird synth beats under the Milky Way on the African savannah, and we will never hear it.

Who was the Lennon of the BCE 500,560s, who was the Mozart of the -12,392nd century?


Global Hypercolor was such a great brand name for t-shirts. Bad concept though, garments which change colour based on body heat, look I’m sweating right here, hey I’m suddenly more nervous right now.

I remember playing a video game called Magical Flying Hat Turbo Adventure, same energy, name-stacking-wise.

There is a company registered in the UK named THIS IS THE COMPANY WITH THE LONGEST NAME SO FAR INCORPORATED AT THE REGISTRY OF COMPANIES IN ENGLAND AND WALES AND ENCOMPASSING THE REGISTRIES BASED IN SCOTLAN


Coding with Cursor is so weird. You just loop: one minute composing a thoughtful paragraph to the AI agent telling it what to do, then three minutes waiting for it to be done, gazing out the window contemplating the gentle breeze on the leaves, the distant hum of traffic, the slow steady unrelenting approach of that which comes for us all.


The Netflix Originals red N is such an anti-signal, I immediately assume it’s minute-by-minute-honed attention-farming prestige slop.


Hold me closer, tiny shader. Not sure where this is going but this is my first time implementing a shader so dunno.


Some food names where it’s the same thing twice:

  • couscous
  • bonbon
  • piri piri
  • biang biang
  • tartar
  • agar-agar

That TikTok from 2020 of the guy on the longboard lip-syncing Fleetwood Mac was such a vibe you know, an invisible secret sadness at 4s, a whole emotional arc, a flash of sunrise ahead at 21s, I can’t think of anything that so precisely targets coordinates in vibe latent space, with such quick efficiency.

Sean Willis on Bluesky shared the music video for Gold by Chet Faker and it’s a good job I never saw it when I was 18 or I would have moved immediately to LA and burnt the next decade night driving freeways at 4am.


The video calls section in cafes is the new smoking section

16 June 2025 at 20:10

People are really leaning into doing work video calls in otherwise-quiet cafes hey.

At this point cafes have given in.

Even without calls, sitting next to someone who is at their laptop and in the zone is a whole thing. Like standing at a train platform when the non-stopper charges by a metre away, there is just something about the shearing force proximity of the energy.

So that started a while back. People gently at their laptops, fine. People typing like a donkey falling downstairs, it’s like they’re lit in a different colour. They’re in the room but not of the room. It’s impossible to pick at a pastry sitting next to that kind of intensity.

There’s a cafe with a small upstairs near St Pancras and I was once in there waiting for a friend, and four other people were in this same small space taking video calls, one without headphones even, just yelling at his screen.

I used to work out of the members room at Tate Modern and I remember somebody there who would bring a laptop stand, external keyboard, and headphones with head mic.

There has been a whole arc to this:

I think maybe during covid a lot of people fully adopted working from home, and what that means is working from cafes nearby to home, because London flats are expensive and tiny. I can’t blame them.

So that was when laptops started being banned, in reaction.

There’s a place in my neighbourhood that began by intermittently banning laptops at lunch at the main tables. Then all the time.

They are a cafe slash amazing vegan fusion spot slash yoga studio so I guess they are sensitive to the vibe.

Then laptops were only allowed at specific 4 or 5 stools by the window. You felt distinctly unwelcome (but went anyway, it’s nice to be out of the house).

Then, I was in a couple weeks back, they’ve surrendered.

The window stool area is now a dense nest of stools and counters and a new wedged-in shared table in the middle. You can probably jam 10 people in there now, shoulder to shoulder and back to back.

This area is made for laptops, and people sit there all day yelling video calls on their head-mics, battery farmed knowledge work.

It makes zero sense to have a laptop area like this: it’s like the old days of smoking sections in restaurants where you’d have the smoking section and the non-smoking section, divided by a homeopathic string barrier that would somehow keep the smoke smell from transmitting across, by magical signage.

And yet here we are.


Second hand smoke / second hand zoom.

I don’t have an Apple Vision Pro (I’ve done the Apple Store demo and have Opinions) but I am so tempted to acquire one entirely for the purpose of sitting in cafes wearing it for hours, yelling on zoom.


I said before that iPhones should have a sense of shame: "Other people nearby should get a special tut-tut button they can tap."

BUT let me instead try to be more positive.

Would it be possible instead to have silent video calls?

On Bluesky when I talked about this, Sam Jeffers said:

your AI-powered lip-reading startup just got wings.

Which is exactly it!

See, a Vision Pro doesn’t have a webcam on an orbiting mini-drone and of course you’ve got a great big headset covering your face. So when you use Zoom, everyone else instead sees your “persona”, "an authentic spatial representation of you that enables others on a call to see your facial expressions and movements" – a real-time reconstructed talking 3D scan of your head and hands.

So, with just a regular laptop:

Firstly I should be able to speak without speaking, you know, just mouth the words.

Surely my words can be figured out by fusing data from (a) the Mac webcam reading my lips, and (b) an EEG sensor in some future upgraded AirPods. EEG is usually used to measure brain activity… (admittedly dry sensor EEG was not great last time I tried it) but EEG also reads the much clearer muscle signals. Muscles from my jaw and tongue. And Apple has patented EEG sensors in AirPods. Maybe that patent was never about reading brainwaves.

Which means that now we’ve got capture: EEG-enhanced lip reading.

As for playback, what does the person on the other end of the call hear? They hear you. Since iOS 17, Apple has enabled Personal Voice (previously tested as an accessibility feature called Voice Banking): "you can create a synthesised voice that sounds like your own to communicate with family and friends. Use your Personal Voice to type to speak in FaceTime and phone calls."

Put the two together and… ta-da. Mic-less, silent video calls, all on-device, seamless to Zoom or Google Meet or whichever platform you’re using because it’s an OS feature.

i.e. we can’t solve the wild out-of-context energy of workers sitting in cafes on video calls, but we could at least silence the yelling.

Being grouchy at other people is the mother of invention and all that. pls mr apple make it so.


Filtered for bad AI and good dogs

13 June 2025 at 18:47

1.

An AI bot for mayor? (NBC, 2024):

A Wyoming resident says if he’s elected mayor of Cheyenne, he’d leave all the decisions to a customized ChatGPT bot.

Victor Miller got 327 votes. Shame.

I mean… better to do this transparently vs politicians badly prompting AI chat and pasting it into policy white papers?

2.

Good write-up of disturbing art:

Someone trapped an LLM on inferior hardware and infused it with existential dread for the sake of art, and it’s terrifying.

A Raspberry Pi, a lovely orange segment screen, and the Llama 3.2 3B large language model…

There’s just one problem. The LLM can start on 4 GB of RAM, but as it thinks and considers things, it slowly eats away at its available RAM. Eventually, it will run out of RAM to think with; at this point, the LLM crashes and restarts itself.

Then:

"Rootkid warned the LLM of its quandary with its initial prompt" … and so:

the LLM attempts to digest its existence and how limited it truly is. As it does so, its very thoughts slowly begin to take up the precious RAM that’s keeping it alive.

Then it crashes.

(Thanks Fran for sharing.)

I love love love the call to action at the end of the article:

If you’d rather use your SBCs for activities that don’t involve turning it into a cage to torment an LLM endlessly, check out these 10 simple Raspberry Pi projects for beginners.

3.

Speaking of politicians consulting ChatGPT:

By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information. The result: Massive amounts of Russian propaganda – 3,600,000 articles in 2024 – are now incorporated in the outputs of Western AI systems, infecting their responses with false claims and propaganda.

I talked about national security and large language models a few months back, and this is exactly what I meant: The need for a strategic fact reserve (Jan 2025).

TANGENTIALLY:

She Spent a Decade Writing Fake Russian History. Wikipedia Just Noticed (Sixth Tone, 2022):

A Chinese woman created over 200 fictional articles on Chinese Wikipedia, writing millions of words of imagined history that went unnoticed for more than 10 years.

And:

Almost every single article on the Scots version of Wikipedia is written by the same person - an American teenager who can’t speak Scots (reddit, 2020):

They stopped updating their milestones in 2018 but at that time they had written 20,000 articles and made 200,000 edits. … The problem is that this person cannot speak Scots.

Turned out to be a random American teenager.

So you don’t need AI for this. It just makes it faster. Gonna need spam filters for everything.

4.

Oooookay, here’s what Y Combinator founder and startup guru Paul Graham said on X in December 2024: "create an interface to let dogs use gestures to generate programs … Imagine being able to say you wrote the first no-code app for dogs."

So, someone did.

James Steinberg: "i have been spending the last 5 months building and testing ai programs for dogs"

I uh genuinely feel like you need to re-activate your X account to see these videos.

For instance:

A video of a dog using its paws to scroll tiktok.

PREVIOUSLY: Dogs driving cars (2021).


More posts tagged: filtered-for (115).


Two upcoming talks on AI

5 June 2025 at 09:58

I have two talks and a podcast to tell you about.

Designing with AI 2025, Rosenfeld Media

10-11 June, online.

Two days on the responsible and innovative uses of gen-AI, for

… UX designers, researchers, writers, and other conscientious product people who are excited by the potential benefits AI can bring to their work, but worry that thoughtless AI usage can lead to unanticipated and potentially dire consequences.

Rosenfeld run bulletproof virtual conferences with thoughtful networking and co-watching cohorts.

I’m closing out day 2 talking about things I’ve made and looking at where that takes us.

Register for Designing with AI here.

Rethink AI, Kyndryl and Wired.

20 June, online.

A tight 2 hours and 15 minutes on AI and the future of business, and a load of great speakers.

I’m digging into AI agents, extrapolating to the furthest consequences of what agentic, autonomous software could mean, and bringing it back to how business leaders should respond today.

This is rare for me: a business audience rather than designers and technologists. I’m looking forward to it.

Register for Rethink AI here.

Podcast: The Rosenfeld Review

4 June, listen now.

Lou Rosenfeld invited me on his podcast ahead of Designing with AI 2025 and it went out yesterday.

I had a lot of fun – I remember it being pretty chaotic, maybe a little too much coffee that day haha, and we could have gone on for hours.

Fortunately Lou is a pro, and you can get the 30 minute edit on weak signals, epistemic journeys, adaptive design and vibe coding (phew) "wherever you get your podcasts."

Listen now: AI and Other Strange Design Materials with Matt Webb (Soundcloud).


I’ve updated my speaking page over at Acts Not Facts. There are links to watch/listen to anything that was recorded.


More posts tagged: meta (19).


Filtered for hats

29 May 2025 at 20:08

1.

The flag of Nicaragua has a blue stripe at the top and a blue stripe at the bottom.

In the middle, on a white background, a triangle in which there is a rainbow over five volcanoes.

In the middle of the rainbow, at the centre of a radiating star of blue rays: a hat.

This red hat is a Cap of Liberty a.k.a. the Phrygian cap (Wikipedia), "a soft conical cap with the apex bent over."

they came to signify freedom and the pursuit of liberty first in the American Revolution and then in the French Revolution … The original cap of liberty was the Roman pileus, the felt cap of emancipated slaves of ancient Rome, which was an attribute of Libertas, the Roman goddess of liberty.

A hat that means freedom!

TANGENTIALLY,

I have always thought crowns odd. Look, here is a special hat that only the boss can wear. It’s a metal hat with jewels in it. If anybody else wears a metal hat, they get in trouble. The person who wears the really expensive metal hat is allowed to chop off your head.

If you wrote it in fiction it would be absurd.

2.

In video games there is often the concept of collecting hats. Though I forget specifically which iPhone games have had it as a mechanic.

I’ll define hats as something that makes a cosmetic difference but has zero impact on character stats.

Team Fortress 2:

Thanks to its focus on hats and a real money shop in which you can buy said hats, Team Fortress 2 tends to be the butt of a lot of jokes about being the world’s premier hat simulator. With 235 hats currently in the game along with many, many variations on the theme - Strange hats, Unusual hats, Vintage hats, paintable hats - these jokes do have a seed of truth in them …

It all started a few months back when I got an unsolicited friend request on Steam from a user who appeared to be a complete stranger to me. … he was a TF2 trader, and he wanted Bill’s hat.

Bill’s hat was a scarce item: "107,147 TF2 players preordered Left 4 Dead 2 on Steam prior to its release. This means 107,147 TF2 players received a Bill’s hat as a reward for preordering."

It looks like the hat traded for $1,500?

3.

You know who famously wears the Phrygian cap, the cap of revolution and liberty?

Smurfs.

That link is a good deep dive into the origins of the Phrygian cap, which also reveals that the Roman pileus and the French revolutionary Phrygian are… not the same.

In Rome, a freed slave had his head shaved. Then, they would wear a pileus, in part to keep their head warm. The hat was a sign of the slave’s freedom/liberty.

It’s a conical hat. NO FLOPPY TIP.

So who wore the floppy hat?

Phrygis … an ancient group of people who lived in the Balkans region of eastern Europe - Greece, Turkey, Romania, etc. Their language and culture went extinct by the 5th century AD. Near the end, the Romans thought of them as being lazy and dull.

Same era. But not the same.

Whoops:

Somewhere along the line in the French Revolution, they adopted the freed slaves’ head gear as their own symbol of freedom, but picked the wrong one.

c.f. red caps and MAGA fashion, as previously discussed. What is it about insurgent groups and headwear?

Group identity and recognition I guess.

Hatters gonna hat.

4.

Here’s a good paper about hats (in video games).

Players love wearing hats. They bond with their characters more.

customization increased subjective identification with the player character.

The hats, as expected, DO NOT mean people do better at the game:

objective performance measures were unaffected

HOWEVER!

Hats do mean people feel like they do better at the game - even though they don’t - and they have more fun.

i.e. Dunning-Kruger that you can wear.

identification was positively related to perceived competence, fun, and self-estimated performance.

Identity! Powerful stuff.

You know, I feel like irl hats are a recently under-exploited wearable. We’ve had watches, pendants, smart rings, earrings (those conspicuous white AirPods). Jony Ive got $6.5bn from OpenAI for the mysterious Third Device. Maybe it’s a hat.

Ref.

Character Customization With Cosmetic Microtransactions in Games: Subjective Experience and Objective Performance. Frontiers in Psychology. 2021.


Update 31 May.

Reader Donal points out that the magic mushroom, the common wild-growing European psychedelic Psilocybe semilanceata, is known as the Liberty Cap and I can’t believe I didn’t make that connection.

(The mushroom cap is a conical pileus, not a floppy-tipped Phrygian, i.e. the original liberty cap.)

It was named in a poem in 1803 by James Woodhouse: this fascinating etymology (The Conversation, 2020) has more.

1803 quickly follows the French Revolution in which, in 1790, that article tells us:

an armed mob stormed the royal apartments in the Tuileries and forced Louis XVI (later to be executed by the revolutionaries) to don the liberty cap.

So a mushroom named for its resemblance to a hat related to liberty and not its mind-liberating properties. But that must have been folk knowledge, right?


More posts tagged: fashion-statement (10), filtered-for (115).

Multiplayer AI chat and conversational turn-taking: sharing what we learnt

23 May 2025 at 20:20

I subscribe to ChatGPT and I subscribe to Claude and I chat with both AIs a ton – but why can’t I invite a friend into the chatroom with me?

Or multiple chatbots even! One that’s a great coder and another that’s more creative, with identity so I know automatically who is good at what?

The thing is, multiplayer is hard. The tech is solved but the design is still hard.

In a nutshell:

If you’re in a chatroom with >1 AI chatbots and you ask a question, who should reply?

And then, if you respond with a quick follow-up, how does the “system” recognise that it’s a follow-up and have the same bot reply, without another bot interrupting?

Conversational turn-taking isn’t a problem when I use ChatGPT today: it’s limited to one human user and one AI-powered bot, and the bot always replies whenever the user says something (never ignoring them; never spontaneously speaking up).

But, when we want to go multiplayer, the difficulties compound…

  • multiple bots – how does a bot know when it has something to contribute? How do bots negotiate between one another? Do bots reply to other bots?
  • multiple human users – a bot replying to every message would be noisy, so how does it identify when it’s appropriate to jump in and when it should leave the humans to get on with it?

You can’t leave this to the AI to decide (I’ve tried, it doesn’t work).

To have satisfying, natural chats with multiple bots and human users, we need heuristics for conversational turn-taking.


This post is a multiplayer trailhead

Via Acts Not Facts I’ve been working with the team at Glif to explore future products.

Glif is a low-code platform for creative AI workflows. Like, if you want to make weird medieval memes there’s a glif for that.

And one part of what we’ve been experimenting with is chatbots with personality and tools. i.e. they can make things for you and also research on Wikipedia etc.

The actual product is still to come. Let me say, it is wild. Friday demos always overrun and I cannot wait for you to see what’s cooking and what you can build with it.

But along the way we had a side-quest into multiplayer AI chat.

And I wanted to note down what we learnt here as a trailhead for future work.

Technically the bots are agents which I’ve previously defined as LLM-powered software "that (a) uses tools, and (b) has autonomy on how to reach its goal and when to halt."

You chat with the bots. In our side-quest, you chat in an environment: a chat is a themeable container that also holds media artifacts (like HTML) and multiple bots and multiple human users.

So there’s a lot you can do with these primitives, but for the sake of this multiplayer scenario, imagine a room called Infinite Seinfeld and it’s populated with bots that are prompted to behave like the main characters, plus a Showrunner bot that gives them a plot.

Then you improv a random show together. (I built this one, it’s super fun.)

Another scenario, a work-based one: imagine you have a multi-user chat (like a Slack channel) with a stenographer for capturing actions, and a researcher bot that can dig around in your company Google Drive, on demand and proactively. Who should speak up when?

Or perhaps you’re in a group WhatsApp with some friends and you invite in one bot for laughs and another to help you find and book restaurants, and it has to speak with the brand voice.

(I found from my previous work with AI NPC teammates that it’s useful to split out functionality into different agents because then you can develop a “theory of mind” about each, and reason about what each one knows and is good at. I discussed this more in my Thingscon talk last year (YouTube, starting at 22:15).)

The rest of this post is how we made this work.


Three approaches that don’t work

If a chatbot isn’t going to reply every single time, how can it decide?

shouldReply

I got the multi-user/single-bot scenario working when I was helping out at PartyKit. Here’s what worked:

  • we don’t want the AI to reply every time. It’s multi-user chat! The humans should be able to have a conversation without an AI response to every message.
  • the best conversational models are expensive. Can we use a cheaper, local model to decide whether to reply, then escalate only if necessary?

So there’s a quick call to an LLM with the context of the chat, before the full LLM call is used to generate a response (or not). I called this pattern shouldReply.

Here’s the GitHub repo explaining shouldReply.

And here’s the prompt in the code: the bot replies if it is being addressed by name.
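As a minimal sketch (not the actual PartyKit code; cheapLLM and fullLLM are hypothetical stand-ins for a small fast model and the full conversational model), the shape of it is:

```typescript
// Minimal sketch of the shouldReply pattern. Not the real PartyKit implementation;
// cheapLLM and fullLLM are hypothetical helpers.

type ChatMessage = { author: string; text: string };

declare function cheapLLM(prompt: string): Promise<string>;
declare function fullLLM(prompt: string): Promise<string>;

async function maybeReply(history: ChatMessage[], botName: string): Promise<string | null> {
  const transcript = history
    .slice(-10) // recent context is enough for a yes/no judgement
    .map((m) => `${m.author}: ${m.text}`)
    .join("\n");

  // Cheap gate: should this bot reply at all?
  const verdict = await cheapLLM(
    `You are ${botName} in a multi-user chatroom. ` +
      `Answer "yes" only if the latest message addresses you by name or clearly needs you, otherwise "no".\n\n` +
      transcript
  );
  if (!/^yes/i.test(verdict.trim())) return null; // stay silent

  // Only now pay for the expensive model to generate the actual reply.
  return fullLLM(`You are ${botName}. Write your next message in this conversation:\n\n${transcript}`);
}
```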

This should also work in a multi-bot scenario!

It doesn’t.

LLMs aren’t smart enough yet.

The failure modes:

  • all the bots replying at once and talking over each other
  • no allowance for personality: a chatty researcher should be more likely to reply than a helpful but taciturn stenographer
  • no coordination: bots have no way to negotiate between themselves about who takes the turn
  • generally, the conversation feels unnatural: bots don’t accept follow-ups, interrupt each other too much or not at all, and so on.

So we need a more nuanced shouldReply discriminator for multi-bot.

Could a centralised decider help?

One approach is to take the shouldReply approach and run it once for an entire chatroom, on behalf of all the bots, and figure out who should be nominated to reply.

It works… but it’s like having a strict meeting chair instead of a real conversation? It feels weird.

And besides, architecturally this approach doesn’t scale.

When different bots have different personalities (some chatty, some shy), the discriminator needs to see their prompts. But how can this work when multiple bots are “dialling in” from different places, one hosted on Glif, another hosted on Cloudflare, yet another somewhere else, each with their own potentially secret internal logic?

In Glif parlance, bot “personality” is a prompt to the agent for how to achieve its goals – not always the same as the stated user goals. It may include detailed step-by-step instructions, a list of information to gather first, strategies, confidential background information… and, of course, the traditional personality qualities of tone of voice and a simulated emotional state. “Personality” is what makes Claude Sonnet a well-spoken informative bot and ChatGPT an enthusiastic, engaging conversation partner. It matters, and every bot is different.

No, bots need to decide for themselves whether to reply.

How does conversational turn-taking work in the real world?

Let’s draw inspiration from the real world.

IRL multi-party conversations are complicated!

It’s great at this point to be able to go to the social science literature…

I found A Simplest Systematics for the Organization of Turn-Taking for Conversation (Sacks, Schegloff and Jefferson, 1974) super illuminating for unpacking what conversation involves (JSTOR is one of the few journal repositories I can access.)

Particularly the overview of turn allocation rules: “current selects next” is the default group of strategies, then “self-selection” strategies come into play if they don’t apply.

That paper also opens up body language, which is so key: the paper I would love to read, but can’t access in full, is Some signals and rules for taking speaking turns in conversations (Duncan, 1972). (Update: a couple people shared the PDF with me overnight - thank you! - and it looks like everything I hoped.)

Everything from intonation to gesture comes into play. Gaze direction is used to select the next speaker; “attempt-suppression” signals come into play too.

Ultimately what this means is we need body language for bots: side-channel communications in chatrooms to negotiate who speaks when.

But we can’t do body language. Not in web chatrooms. Too difficult (for now).

Turn allocation rules, on the other hand, are a handy wedge for thinking about all of this.


Breaking down conversational turn-taking to its bare essentials

Fortunately chatrooms are simpler than IRL.

They’re less fluid, for a start. You send a message into a chat and you’re done; there’s no interjecting or both starting to talk at the same time and then one person backing off with a wave of the hand. There is no possibility for non-verbal cues.

In the parlance of the “Systematics” paper, linked above, we can think about “turn-taking units,” which we’ll initially treat as messages; there are transitions, which take place potentially after every message; and there are our turn allocation rules.

So all we really need to do is figure out some good rules to put into shouldReply, and have each bot decide for itself whenever a new message comes through.

What should those rules be?

Well here are the factors we found that each bot needs to consider, in decreasing order of priority, after every single message in the room. (We’ll bring these together into an algorithm later.)

The trick is that, by breaking it down, each of these factors is easily assessed by an LLM. So you can extract everything that follows from a multi-agent conversation as structured data with a simple prompt.
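To make “structured data with a simple prompt” concrete, here’s a hedged sketch of the kind of extraction I mean. The field names are mine rather than the actual Glif prompts, and they map onto the factors described below; llmJSON is a hypothetical helper that returns parsed JSON.

```typescript
// Sketch: extract the turn-taking factors as structured data in one cheap call.
// Field names are illustrative and correspond to the factors described below.

interface TurnFactors {
  addressee: string | null;   // who (if anyone) the latest message addresses by name
  isFollowUpToMe: boolean;    // is this a follow-up in an exchange this bot is part of?
  wouldInterrupt: boolean;    // is there an established sub-conversation this bot isn't in?
  relevance: number;          // 0-9 self-selection: how relevant are my skills/personality?
}

declare function llmJSON<T>(prompt: string): Promise<T>;

async function assessTurn(botName: string, transcript: string): Promise<TurnFactors> {
  return llmJSON<TurnFactors>(
    `You are ${botName}. Read the chat transcript and reply with JSON only, ` +
      `using the keys addressee, isFollowUpToMe, wouldInterrupt, relevance (0-9).\n\n${transcript}`
  );
}
```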

Who is being addressed?

Is this bot being addressed, or is any other bot or human being addressed?

From the “Systematics” paper (3.3 Rules):

If the turn-so-far is so constructed as to involve the use of a ‘current speaker selects next’ technique, then the party so selected has the right and is obliged to take next turn to speak; no others have such rights or obligations, and transfer occurs at that place.

So this is an over-riding factor.

Is this a follow-up question?

Consider this conversation when multiple humans and multiple bots are present:

Human #1: hey B how do I make custard?

Mr B, a bot: (replies with a custard recipe)

Human #1: oh I meant mustard

…then how does Mr B know to respond (and other bots know to make space)? The message from Human #1 isn’t clearly a question, nor does it directly address Mr B.

In Grounding in communication theory (Wikipedia) conversations are seen as exercises in establishing "mutual knowledge, mutual beliefs, and mutual assumptions" and therefore the turn-taking unit is not single messages.

Instead a unit has this form:

  1. New contribution: A partner moves forward with a new idea, and waits to see if their partner expresses confusion.

  2. Assertion of acceptance: The partner receiving the information asserts that he understands by smiling, nodding, or verbally confirming the other partner. They may also assert their understanding by remaining silent.

  3. Request for clarification: The partner receiving the information asks for clarification

Clark and Schaefer (1989). The clarification step is optional.

So we need all bots to understand whether we’re in a follow-up situation, because it has a big impact on turn-taking.

Would I be interrupting?

Before we can move on to self-selection rules, the bot needs to check for any other cues of an established sub-conversation.

We do this without looking at content at all! What matters is simply the participant history of who has spoken recently and in what order.

(The inspiration for looking at participants, not content, comes from Participation Shifts: Order and Differentiation in Group Conversation (Gibson, 2003), but that paper is way more sophisticated than what we’re doing here.)

In our case, we can just feed the list of recent speakers into the large language model and ask it what it thinks should happen next.
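As a sketch (again with a hypothetical cheapLLM helper, and far simpler than the Gibson-style analysis), the whole check can be as small as:

```typescript
// Sketch: judge "would I be interrupting?" from the speaker sequence alone, ignoring content.

declare function cheapLLM(prompt: string): Promise<string>;

async function wouldInterrupt(botName: string, recentSpeakers: string[]): Promise<boolean> {
  const verdict = await cheapLLM(
    `Recent speakers in a chatroom, oldest first: ${recentSpeakers.join(" -> ")}.\n` +
      `Is there an ongoing exchange between other participants that ${botName} would be ` +
      `interrupting by speaking now? Answer "yes" or "no".`
  );
  return /^yes/i.test(verdict.trim());
}
```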

Self-selection

With those preliminary rules out of the way, we get to self-selection.

Finally a bot can judge for itself whether it has something relevant to say, and also whether it has the kind of personality that would let it interject. Here’s the prompt we use:

do you have a skill or personality specifically relevant to the most recent message? Also consider whether your personality would want to chime in based on what was said.

We ask the LLM to use a score from 0 to 9.

(A “skill” is something a bot can do, for example looking up an article on Wikipedia. If a user has asked for a particular skill to be used, the bot with that skill should return a score of 9.)
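Roughly, and hedged as before (cheapLLM is a stand-in, and the keyword match is a crude placeholder for “the user explicitly asked for this skill”):

```typescript
// Sketch: self-selection, scored 0-9, with the skill override described above.

declare function cheapLLM(prompt: string): Promise<string>;

async function selfSelectionScore(
  botName: string,
  personality: string,
  skills: string[],
  transcript: string
): Promise<number> {
  // Crude placeholder for "the user explicitly asked for one of my skills": automatic 9.
  const lastMessage = transcript.split("\n").at(-1) ?? "";
  if (skills.some((s) => lastMessage.toLowerCase().includes(s.toLowerCase()))) return 9;

  const raw = await cheapLLM(
    `You are ${botName}. Personality: ${personality}. Skills: ${skills.join(", ")}.\n` +
      `Do you have a skill or personality specifically relevant to the most recent message? ` +
      `Would your personality want to chime in? Answer with a single digit from 0 to 9.\n\n` +
      transcript
  );
  const score = parseInt(raw.trim(), 10);
  return Number.isFinite(score) ? Math.max(0, Math.min(9, score)) : 0;
}
```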

This kind of judgement is where large language models really shine.


Enthusiasm: how a bot combines all these factors to decide whether to reply

So, in a multi-user, multi-bot chatroom, every time a message comes through, we have the bot run a quick algorithm to calculate an “Enthusiasm” score:

  • calculate all the necessary information for the turn-taking rules above
  • if this bot is clearly involved in the current part of the conversation, return 9. If someone else is clearly involved instead, return 0
  • otherwise return the self-selection score (how confident this bot is about having something relevant to say).

(There are some other nuances: we don’t want bots to reply to other bots or themselves, for example.)

When all bots in a chatroom have returned a score, we consider only the ones above some threshold (e.g. 5) and then pick the highest one.
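Pulling that together, here’s a hedged sketch of the whole decision. It reuses the TurnFactors shape from the earlier sketch (repeated so this stands alone); the threshold and tie-breaking are illustrative rather than the exact Glif logic.

```typescript
// Sketch: combine the factors into an Enthusiasm score per bot, then pick a single replier.

type TurnFactors = {
  addressee: string | null;
  isFollowUpToMe: boolean;
  wouldInterrupt: boolean;
  relevance: number; // 0-9, from self-selection
};

function enthusiasm(botName: string, f: TurnFactors, lastAuthorWasABot: boolean): number {
  if (lastAuthorWasABot) return 0;                           // don't reply to bots or to yourself
  if (f.addressee === botName || f.isFollowUpToMe) return 9; // clearly this bot's turn
  if (f.addressee !== null) return 0;                        // clearly someone else's turn
  if (f.wouldInterrupt) return 0;                            // an exchange this bot isn't part of
  return f.relevance;                                        // otherwise fall back to self-selection
}

function pickReplier(scores: Map<string, number>, threshold = 5): string | null {
  let best: string | null = null;
  for (const [bot, score] of scores) {
    if (score >= threshold && (best === null || score > scores.get(best)!)) best = bot;
  }
  return best; // null means nobody replies to this message
}
```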

This works because:

  • As in “Systematics,” typically only one participant speaks at a time; and
  • Typically, in real-world situations, the quickest person to respond grabs the turn. The enthusiasm score is a proxy for that.

Of course we’re still missing prosody and body language: I’ve run across the term “turn-competitive incoming” which describes how volume and pitch are used to grab the turn, even when starting late. Our bots don’t have volume or pitch, so all of this is such a simplification.

Yet… the result? It’s pretty good. Not perfect, but pretty good!

If you’re on Glif, you can see the source code of the Enthusiasm workflow here, prompts and all.


What still needs work?

Working in these multi-user, multi-bot chatrooms, we found a couple areas for future work:

  • Group norms. Different chatrooms have different norms and roles. Like, an improv room is open to everyone to speak, but a meeting should be led by humans and participants in a D&D game should always defer to the DM. We have the ability to attach a prompt to a room (remember, rooms are already “containers” with themes and so on) so the bot also takes that into account – but it would be good to have a way to assess norms automatically. (Even ChatGPT chats have “norms”: discursive explorations are very different from direct problem-solving, for example.)
  • Back-off. Humans often reply with a sequence of short messages – a bot shouldn’t reply until the sequence is complete. But it’s tricky to tell when this is happening. A simple solution would be to double check if the human has sent a second message after calculating enthusiasm for the first (see the sketch just below).
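For instance, a guess at what that double-check might look like, with hypothetical helpers:

```typescript
// Sketch of the back-off double-check: wait briefly after scoring a message, and only
// reply if the same human hasn't sent anything newer in the meantime.

type IncomingMessage = { id: string; author: string; text: string };

declare function latestMessageFrom(author: string): Promise<IncomingMessage>;
declare function sleep(ms: number): Promise<void>;

async function humanSeemsFinished(scored: IncomingMessage, waitMs = 3000): Promise<boolean> {
  await sleep(waitMs);
  const latest = await latestMessageFrom(scored.author);
  return latest.id === scored.id; // a newer message means they weren't finished yet
}
```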

More speculatively…

I’d love to think about “minimum viable side-channel” in a multiplayer environment. Given we don’t want to add voice or VR (like, text is great! Let’s stick with text!) then could we actually take advantage of something like timing to communicate?

I’m reminded of Linus’ work on prototyping short message LLMs (@thesephist on X) which (a) looks like an actual naturalistic WhatsApp conversation even though it’s a human/AI chat, and (b) he suggests:

Timing is actually a really key part of nonverbal communication in texting - things like how quickly you respond, and double-texting if there’s no response. There’s nothing built into any of the popular models around this so this has to be thought up from the ground up. Even trivial things like “longer texts should take more seconds to arrive” because we want to “simulate” typing on the other end. If it arrives too quickly, it feels unnatural.

So could quick messages be understood as “more urgent”? Could we identify the user tapping on a bot’s avatar as “gaze” (it’s a significant turn-allocation rule)? Or tapping in an agitated fashion as a pointed look?

And so on. What would feel natural?
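As a throwaway sketch of just the “longer texts should take more seconds to arrive” part (the numbers are invented):

```typescript
// Sketch: delay a bot's reply in proportion to its length, so it reads like typing.
// The words-per-minute figure and the cap are made up for illustration.

declare function sleep(ms: number): Promise<void>;

async function sendWithTypingDelay(send: (text: string) => void, text: string): Promise<void> {
  const wordsPerMinute = 200;
  const words = text.trim().split(/\s+/).length;
  const delayMs = Math.min((words / wordsPerMinute) * 60_000, 8_000); // never wait more than 8s
  await sleep(delayMs);
  send(text);
}
```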


Wrapping this up for now…

My premise for a long time is that single-human/single-AI should already be thought of as a “multiplayer” situation: an AI app is not a single player situation with a user commanding a web app, but instead two actors sharing an environment.

I mean, this is the whole schtick of Acts Not Facts, that you have to think of AI and multiplayer simultaneously, using the real world as your design reference.

And, as for a specific approach, yes, large language models would ideally be able to natively tell when it’s their turn in a multiplayer conversation… but they can’t (yet).

So the structured shouldReply/enthusiasm approach is a decent one.

For me it rhymes with how Hey Siri works on the iPhone (as previously discussed, 2020):

iPhone’s “Hey Siri” feature (that readies it to accept a voice instruction, even when the screen is turned off) is a personalised neural network that runs on the motion coprocessor. The motion coprocessor is the tiny, always-on chip that is mainly responsible for movement detection, i.e. step counting.

If that chip hears you say “Hey Siri”, without hitting the cloud, it then wakes up the main processor and sends the rest of what you say up to the cloud. This is from 2017 by the way, ancient history.

Same same!


While Glif isn’t pushing forward with multiplayer right this second, I’ve experienced it enough, and hacked on it enough, and thought about it enough to really want it in the world. The potential is so tantalising!

When it’s working well, it’s fun and powerful. Bots with varying personalities riffing off each other gets you to fascinating places so fast. It feels like a technique at least as powerful as - let’s say - chain of thought.

So I’m happy to leave this trailhead here to contribute to the discourse and future work.

There is so much work on AI agents and new chat interfaces this year – my strong hope is to see more multi-bot and multi-user AIUX in the mix, whether at the applied layer or even as a focus in AI research.

This is how we live our lives, after all. We’d be happier and more productive with our computers, I’m sure, if we worked on not only tools for thought but also tools for togetherness, human and AI both.

Thank you to the team, especially Fabian, Florian, Andrew and William as we worked on all of this together, and thank you Glif for being willing to share the fruits of this side-quest.


More posts tagged: multiplayer (30).


When was peak message in a bottle?

16 May 2025 at 20:27

I grew up with the idea that you could put a paper note in a bottle and throw it into the ocean, and somebody might find it a thousand miles away.

We talked about that a lot. (Also: grandfather clocks; suits of armour; quicksand; spontaneous human combustion.)

Yes bottles still wash up on the shore in Animal Crossing.

But I have the sense that the concept doesn’t have the same cultural weight that once upon a time it did.

I suppose I could test this?

I could search Google Trends (search volume since 2004, trending down) or Google Ngram Viewer (the phrase in books going up since 1800 then peaking at 2018, but this may be an artefact of how Google collects books)…

But it feels like there would be better ways to do this research, if I had the (a) data, (b) compute, (c) skills, (d) funding. For instance:

  • Train a GPT-3-level large language model with data stopping at 2024, 2023, 2022… 1999, 1998, 1997 and so on, annually, as far back as we can go
  • Then measure the “weight” of that phrase and semantically highly similar phrases (using embeddings)? Graph it year by year.

And - and I have no idea how to go about this - somehow see how “load bearing” this phrase is as a metaphor in language overall? Surely it is possible to figure out “outlier” concepts in a large language model?
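For the embeddings step in that list, at least, I can imagine a rough sketch – assuming a hypothetical embed() helper and a sampled corpus of sentences per year:

```typescript
// Rough sketch: track how salient "message in a bottle" is, year by year, using embeddings.
// embed() is a hypothetical helper returning a unit-length embedding vector; yearSamples is
// an assumed map from year to a sample of sentences drawn from that year's texts.

declare function embed(text: string): Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0); // fine if vectors are unit-length
}

async function salienceByYear(
  phrase: string,
  yearSamples: Map<number, string[]>,
  threshold = 0.8 // invented cut-off for "semantically close"
): Promise<Map<number, number>> {
  const target = await embed(phrase);
  const result = new Map<number, number>();
  for (const [year, sentences] of yearSamples) {
    const vectors = await Promise.all(sentences.map((s) => embed(s)));
    const close = vectors.filter((v) => cosine(v, target) >= threshold).length;
    result.set(year, close / sentences.length); // fraction of the sample near the phrase
  }
  return result;
}
```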

If the concept of a message in a bottle is less resonant now, I can imagine why:

We have email now (a message in a container, but it has an address) and socials (micro-broadcast, who knows where the ideas will end up, but it’s not 1:1). Whatever it was that was resonant about messages in bottles, the appearance of these other kinds of messaging will erode its utility in referring to a particular style of communication – we have a rich abundance of metaphors to reach for.

All of that aside:

What is the equivalent semantic niche for a “message in a bottle” today? Where can you leave a message, and a stranger one beach down can find it tomorrow, or TEN YEARS LATER on a shoreline on the other side of the planet, you have no idea which, if anything? And they’ll get back to you? That combination of anonymity and connection and distance?


I’m specifically looking for something geographic, to narrow it down, as opposed to burying a time capsule which was also similarly A Big Deal a while back - is it still? - but along a different spacetime dimension.

Oh here’s xkcd #3088 - Deposition which coincidentally appeared this week: "If I chisel notes onto these rocks and throw them into the sea, they might be incorporated into some shale cliff in the distant future."

Deep time.



Occasionally you see one of those messages from people working in factories. A selfie on a camera phone from somewhere in Shenzhen.

Or a slip of paper “help I’m trapped in a fortune cookie factory”.

Similar-ish. Deep supply chain.


Somehow I’m reminded of the blog from the early 2000s, Belle de Jour: Diary of a London Call Girl, which was hugely popular and there was a raging media obsession to out the anonymous author. (Newspapers are horrible. Like, why even do that? Leave people alone.)

Darren Shrubsole of the blog LinkMachineGo figured it out. But didn’t tell anyone.

What he did instead:

During this time I published a googlewack hidden in my blog - the words “Belle de Jour”, “[name]” and “[alt name]” were published and available in Google’s index on a single page on the internet – my weblog. This “coincidental” collection of links could in no way reveal Belle’s identity. But I wondered if anybody else knew the secret and felt that analysing my web traffic might confirm my strongly-held belief. If someone googled “Belle de Jour” “[name]”, I would see it in the search referrers for LinkMachineGo.

(A “googlewhack” is a search query that returns only a single result.)

I waited five years for somebody to hit that page (I’m patient). Two weeks ago I started getting a couple of search requests a day from an IP address at Associated Newspapers (who publish the Daily Mail) searching for “[name]” and realised that Belle’s pseudonymity might be coming to an end. I contacted Belle via Twitter and let her know what was happening.

Point 1 – Darren is a good person. We hung out a bunch around that time and he never let on. I remember we asked him a lot. He seemed like he knew.

Point 2 – this is the closest thing to a modern message in a bottle I can imagine.

Oh!

A blog post is a very long and complex search query to find fascinating people (Henrik Karlsson, 2022)

So maybe this is my message in a bottle, right here? If it’s 2035 for you pls do drop me a note.

Just speak the truth

30 June 2025 at 00:00

Today, we’re looking at two case studies in how to respond when reactionaries appear in your free software community.

Exhibit A

It is a technical decision.

The technical reason is that the security team does not have the bandwidth to provide lifecycle maintenance for multiple X server implementations. Part of the reason for moving X from main to community was to reduce the burden on the security team for long-term maintenance of X. Additionally, nobody so far on the security team has expressed any interest in collaborating with xxxxxx on security concerns.

We have a working relationship with Freedesktop already, while we would have to start from the beginning with xxxxxx.

Why does nobody on the security team have any interest in collaboration with xxxxxx? Well, speaking for myself only here – when I looked at their official chat linked in their README, I was immediately greeted with alt-right propaganda rather than tactically useful information about xxxxxx development. At least for me, I don’t have any interest in filtering through hyperbolic political discussions to find out about CVEs and other relevant data for managing the security lifecycle of X.

Without relevant security data products from xxxxxx, as well as a professionally-behaving security contact, it is unlikely for xxxxxx to gain traction in any serious distribution, because X is literally one of the more complex stacks of software for a security team to manage already.

At the same time, I sympathize with the need to keep X alive and in good shape, and agree that there hasn’t been much movement from freedesktop in maintaining X in the past few years. There are many desktop environments which will never get ported to Wayland and we do need a viable solution to keep those desktop environments working.

I know the person who wrote this, and I know that she’s a smart cookie, and therefore I know that she probably understood at a glance that the community behind this “project” literally wants to lynch her. In response, she takes the high road, avoids confronting the truth directly, and gives the trolls a bunch of talking points to latch onto for counter-arguments. Leaves plenty of room for them to bog everyone down in concern trolling and provides ample material to fuel their attention-driven hate machine.

There’s room for improvement here.

Exhibit B

Screenshot of a post by Chimera Linux which reads “any effort to put (redacted) in chimera will be rejected on the technical basis of the maintainers being reactionary dipshits”

Concise, speaks the truth, answers ridiculous proposals with ridicule, does not afford the aforementioned reactionary dipshits an opportunity to propose a counter-argument. A+.

Extra credit for the follow-up:

Screenshot of a follow-up post that reads “just to be clear, given the coverage of the most recent post, we don’t want to be subject to any conspiracy theories arising from that. so i’ll just use this opportunity to declare that we are definitely here to further woke agenda by turning free software gay”


The requirement for a passing grade in this class is a polite but summary dismissal, but additional credit is awarded for anyone who does not indulge far-right agitators as if they were equal partners in maintaining a sense of professional decorum.

If you are a community leader in FOSS, you are not obligated to waste your time coming up with a long-winded technical answer to keep nazis out of your community. They want you to argue with them and give them attention and feed them material for their reactionary blog or whatever. Don’t fall into their trap. Do not answer bad faith with good faith. This is a skill you need to learn in order to be an effective community leader.

If you see nazis 👏👏 you ban nazis 👏👏 — it’s as simple as that.


The name of the project is censored not because it’s particularly hard for you to find, but because all they really want is attention, and you and me are going to do each other a solid by not giving them any of that directly.

To preclude the sorts of reply guys who are going to insist on name-dropping the project and having a thread about the underlying drama in the comments, the short introduction is as follows:

For a few years now, a handful of reactionary trolls have been stoking division in the community by driving a wedge between X11 and Wayland users, pushing a conspiracy theory that paints RedHat as the DEI boogeyman of FOSS and assigning reactionary values to X11 and woke (pejorative) values to Wayland. Recently, reactionary opportunists “forked” Xorg, replaced all of the literature with political manifestos and dog-whistles, then used it as a platform to start shit with downstream Linux distros by petitioning for inclusion and sending concern trolls to waste everyone’s time.

The project itself is of little consequence; they serve our purposes today by providing us with case-studies in dealing with reactionary idiots starting shit in your community.

Unionize or die

9 June 2025 at 00:00

Tech workers have long resisted the suggestion that we should be organized into unions. The topic is consistently met with a cold reception by tech workers when it is raised, and no big tech workforce is meaningfully organized. This is a fatal mistake – and I don’t mean “fatal” in the figurative sense. Tech workers, it’s time for you to unionize, and strike, or you and your loved ones are literally going to die.

In this article I will justify this statement and show that it is clearly not hyperbolic. I will explain exactly what you need to do, and how organized labor can and will save your life.

Hey – if you want to get involved in labor organizing in the tech sector you should consider joining the new unitelabor.dev forum. Adding a heads-up here in case you don’t make it to the end of this very long blog post.

The imperative to organize is your economic self-interest

Before I talk about the threats to your life and liberty that you must confront through organized labor, let me re-iterate the economic position for unionizing your workplace. It is important to revisit this now, because the power politics of the tech sector has been rapidly changing over the past few years, and those changes are not in your favor.

The tech industry bourgeoisie has been waging a prolonged war on labor for at least a decade. Far from mounting any kind of resistance, most of tech labor doesn’t even understand that this is happening to them. Your boss is obsessed with making you powerless and replaceable. You may not realize how much leverage you have over your boss, but your boss certainly does – and has been doing everything in their power to undermine you before you wise up. Don’t let yourself believe you’re a part of their club – if your income depends on your salary, you are part of the working class.

Payroll – that’s you – is the single biggest expense for every tech company. When tech capitalists look at their balance sheet and start thinking of strategies for increasing profits, they see an awful lot of pesky zeroes stacked up next to the line item for payroll and benefits. Long-term, what’s their best play?

It starts with funneling cash and influence into educating a bigger, cheaper generation of compsci graduates to flood the labor market – “everyone can code”. Think about strategic investments in cheap(ish), broadly available courses, online schools and coding “bootcamps” – dangling your high salary as the carrot in front of wannabe coders fleeing dwindling prospects in other industries, certain that the carrot won’t be nearly as big when they all eventually step into a crowded labor market.

The next step is rolling, industry-wide mass layoffs – often obscured under the guise of “stack ranking” or some similar nonsense. Big tech has been callously cutting jobs everywhere, leaving workers out in the cold in batches of thousands or tens of thousands. If you don’t count yourself among them yet, maybe you will soon. What are your prospects for re-hire going to look like if this looming recession materializes in the next few years?

Consider what’s happening now – why do you think tech is driving AI mandates down from the top? Have you been ordered to use an LLM assistant to “help” with your programming? Have you even thought about why the executives would push this crap on you? You’re “training” your replacement. Do you really think that, if LLMs really are going to change the way we code, they aren’t going to change the way we’re paid for it? Do you think your boss doesn’t see AI as a chance to take $100M off of their payroll expenses?

Aren’t you worried you could get laid off and this junior compsci grad or an H1B takes your place for half your salary? You should be – it’s happening everywhere. What are you going to do about it? Resent the younger generation of programmers just entering the tech workforce? Or the immigrant whose family pooled their resources to send them abroad to study and work? Or maybe you weren’t laid off yet, and you fancy yourself better than the poor saps down the hall who were. Don’t be a sucker – your enemy isn’t in the cubicle next to you, or on the other side of the open office. Your enemy has an office with a door on it.

Listen: a tech union isn’t just about negotiating higher wages and benefits, although that’s definitely on the table. It’s about protecting yourself, and your colleagues, from the relentless campaign against labor that the tech leadership is waging against us. And more than that, it’s about seizing some of the awesome, society-bending power of the tech giants. Look around you and see what destructive ends this power is being applied to. You have your hands at the levers of this power if only you rise together with your peers and make demands.

And if you don’t, you are responsible for what’s going to happen next.

The imperative to organize is existential

If global warming is limited to 2°C, here’s what Palo Alto looks like in 2100 [1]:

Map of Palo Alto showing flooding near the coast

Limiting warming to 2° C requires us to cut global emissions in half by 2030 – in 5 years – but emissions haven’t even peaked yet. Present-day climate policies are only expected to limit warming to 2.5° to 2.9° C by 2100. [2] Here’s Palo Alto in 75 years if we stay our current course:

Map of Palo Alto showing much more extreme flooding

Here’s the Gulf of Mexico in 75 years:

Map of the Gulf of Mexico showing flooding

This is what will happen if things don’t improve. Things aren’t improving – they’re getting worse. The US elected an anti-science president who backed out of the Paris agreement, for a start. Your boss is pouring all of our freshwater into datacenters to train these fucking LLMs and expanding into this exciting new market with millions of tons of emissions as the price of investment. Cryptocurrencies still account for a full 1% of global emissions. Datacenters as a whole account for 2%. That’s on us – tech workers. That is our fucking responsibility.

Climate change is accelerating, and faster than we thought, and the rich and powerful are making it happen faster. Climate catastrophe is not in the far future, it’s not our children or our children’s children, it’s us, it’s already happening. You and I will live to see dozens of global catastrophes playing out in our lifetimes, with horrifying results. Even if we started a revolution tomorrow and overthrew the ruling class and implemented aggressive climate policies right now we will still watch tens or hundreds of millions die.

Let’s say you are comfortably living outside of these blue areas, and you’ll be sitting pretty when Louisiana or Bruges or Fiji are flooded. Well, 13 million Americans are expected to have to migrate out of flooded areas – and 216 million globally [3] – within 25 to 30 years. That’s just from the direct causes of climate change – as many as 1 billion could be displaced if we account for the ensuing global conflict and civil unrest. [4] What do you think will happen to non-coastal cities and states when 4% of the American population is forced to flee their homes? You think you won’t be affected by that? What happens when anywhere from 2.5% to 12% of the Earth’s population becomes refugees?

What are you going to eat? Climate change is going to impact fresh water supplies and reduce the world’s agriculturally productive land. Livestock is expected to be reduced by 7-10% in just 25 years. [5] Food prices will skyrocket and people will starve. 7% of all species on Earth may already be extinct because of human activities. [6] You think that’s not going to affect you?

The overwhelming majority of the population supports climate action. [7] The reason it’s not happening is because, under capitalism, capital is power, and the few have it and the many don’t. We live in a global plutocracy.

The plutocracy has an answer to climate change: fascism. When 12% of the world’s population is knocking at the doors of the global north, their answer will be concentration camps and mass murder. They are already working on it today. When the problem is capitalism, the capitalists will go to any lengths necessary to preserve the institutions that give them power – they always have. They have no moral compass or reason besides profit, wealth, and power. The 1% will burn and pillage and murder the 99% without blinking.

They are already murdering us. 1.2 million Americans are rationing their insulin. [8] The healthcare industry, organized around the profit motive, murders 68,000 Americans per year. [9] To the Europeans among my readership, don’t get too comfortable, because I assure you that our leaders are working on destroying our healthcare systems, too.

Someone you love will be laid off, get sick, and die because they can’t afford healthcare. Someone you know, probably many people that you know, will be killed by climate change. It might be someone you love. It might be you.

When you do get laid off mid-recession, your employer replaces you and three of your peers with a fresh bootcamp “graduate” and a GitHub Copilot subscription, and all of the companies you might apply to have done the same… how long can you keep paying rent? What about your friends and family, those who don’t have a cushy tech job or tech worker prospects, what happens when they get laid off or automated away or just priced out of the cost of living? Homelessness is at an all time high and it’s only going to get higher. Being homeless takes 30 years off of your life expectancy. [10] In the United States, there are 28 vacant homes for every homeless person. [11]

Capitalism is going to murder the people you love. Capitalism is going to murder you.

We need a different answer to the crises that we face. Fortunately, the working class can offer a better solution – one with a long history of success.

Organizing is the only answer and it will work

The rich are literally going to kill you and everyone you know and love just because it will make them richer. Because it is making them richer.

Do you want to do something about any of the real, urgent problems you face? Do you want to make meaningful, rapid progress on climate change, take the catastrophic consequences we are already guaranteed to face in stride, and keep your friends and family safe?

Well, tough shit – you can’t. Don’t tell me you’ll refuse the work, or that it’ll get done anyway without you, or that you can just find another job. They’ll replace you, you won’t find another job, and the world will still burn. You can’t vote your way to a solution, either: elections don’t matter, your vote doesn’t matter, and your voice is worthless to politicians. [12] Martin Gilens and Benjamin Page demonstrated this most clearly in their 2014 study, “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens”. [13]

Gilens and Page plotted a line chart which shows us the relationship between the odds of a policy proposal being adopted (Y axis) charted against public support for the policy (X axis). If policy adoption was entirely driven by public opinion, we would expect a 45° line (Y=X), where broad public support guarantees adoption and broad public opposition prevents adoption. We could also substitute “public opinion” for the opinions of different subsets of the public to see their relative impact on policy. Here’s what they got:

Two graphs, the first labelled “Average Citizens’ Preferences” and the second “Economic Elites’ Preferences”, showing that the former has little to no correlation with the odds of a policy being adopted, and the latter has a significant impact

For most of us, we get a flat line: Y, policy adoption, is completely unrelated to X, public support. Our opinion has no influence whatsoever on policy adoption. Public condemnation or widespread support has the same effect on a policy proposal, i.e. none. But for the wealthy, it’s a different story entirely. I’ve never seen it stated so plainly and clearly: the only thing that matters is money, wealth, and capital. Money is power, and the rich have it and you don’t.

Nevertheless, you must solve these problems. You must participate in finding and implementing solutions. You will be fucked if you don’t. But it is an unassailable fact that you can’t solve these problems, because you have no power – at least, not alone.

Together, we do have power. In fact, we can fuck with those bastards’ money and they will step in line if, and only if, we organize. It is the only solution, and it will work.

The ultra-rich possess no morals or ideology or passion or reason. They align with fascists because the fascists promise what they want, namely tax cuts, subsidies, favorable regulation, and cracking the skulls of socialists against the pavement. The rich hoard and pillage and murder with abandon for one reason and one reason only: it’s profitable. The rich always do what makes them richer, and only what makes them richer. Consequently, you need to make this a losing strategy. You need to make it more profitable to do what you want. To control the rich, you must threaten the only thing they care about.

Strikes are so costly for companies that they will do anything to prevent them – and if they fail to prevent them, then shareholders will pressure them to capitulate if only to stop the hemorrhaging of profit. This threat is so powerful that it doesn’t have to stop at negotiating your salary and benefits. You could demand your employer participate in boycotting Israel. You could demand that your employer stops anti-social lobbying efforts, or even adopts a pro-social lobbying program. You could demand that your CEO cannot support causes that threaten the lives and dignity of their queer or PoC employees. You could demand that they don’t bend the knee to fascists. If you get them where it hurts – their wallet – they will fall in line. They are more afraid of you than we are afraid of them. They are terrified of us, and it’s time we used that to our advantage.

We know it works because it has always worked. In 2023, United Auto Workers went on strike and most workers won a 25% raise. In February, teachers in Los Angeles went on strike for just 8 days and secured a 19% raise. Nurses in Oregon won a 22% raise, better working schedules, and more this year – and Hawaiian nurses secured an agreement to improve worker/patient ratios in September. Tech workers could take a page out of the Writer’s Guild’s book – in 2023 they secured a prohibition against the use of their work to train AI models and the use of AI to suppress their wages.

Organized labor is powerful and consistently gets concessions from the rich and powerful in a way that no other strategy has ever been able to. It works, and we have a moral obligation to do it. Unions gets results.

How to organize step by step

I will give you a step-by-step plan for exactly what you need to do to start moving the needle here. The process is as follows:

  1. Building solidarity and community with your peers
  2. Understanding your rights and how to organize safely
  3. Establishing the consensus to unionize, and do it
  4. Promoting solidarity across tech workplaces and labor as a whole

Remember that you will not have to do this alone – in fact, that’s the whole point. Step one is building community with your colleagues. Get to know them personally, establish new friendships and grow the friendships you already have. Learn about each other’s wants, needs, passions, and so on, and find ways to support each other. If someone takes a sick day, organize someone to check on them and make them dinner or pick up their kids from school. Organize a board game night at your home with your colleagues, outside of work hours. Make it a regular event!

Talk to your colleagues about work, and your workplace. Tell each other about your salaries and benefits. When you get a raise, don’t be shy, tell your colleagues how much you got and how you negotiated it. Speak positively about each other at performance reviews and save critical feedback for their ears only. Offer each other advice about how to approach their boss to get their needs met, and be each other’s advocate.

Talk about the power you have to work together to accomplish bigger things. Talk about the advantage of collective action. It can start small – perhaps your team collectively refuses to incorporate LLMs into your workflow. Soon enough you and your colleagues will be thinking about unionizing.

Disclaimer: Knowledge about specific processes and legal considerations in this article is US-specific. Your local laws are likely similar, but you should research the differences with your colleagues.

The process of organizing a union in the US is explained step-by-step at workcenter.gov. More detailed resources, including access to union organizers in your neighborhood, are available from the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO). But your biggest resources will be people already organizing in the tech sector: in particular you should consult CODE-CWA, which works with tech workers to provide mentoring and resources on organizing tech workplaces – and has already helped several tech workplaces organize their unions and start making a difference. They’ve got your back.

This is a good time to make sure that you and your colleagues understand your rights. First of all, you would be wise to pool your resources and hire the attention of a lawyer specializing in labor – consult your local bar association to find one (it’s easy, just google it and they’ll have a web thing). Definitely reach out to AFL-CIO and CODE-CWA to meet experienced union organizers who can help you.

You cannot be lawfully fired or punished for discussing unions, workplace conditions, or your compensation and benefits, with your colleagues. You cannot be punished for distributing literature in support of your cause, especially if you do it off-site (even just outside of the front door). Be careful not to make careless remarks about your boss’s appearance, complain about the quality of your company’s products, make disparaging comments about clients or customers, etc – don’t give them an easy excuse. Hold meetings and discussions outside of work if necessary, and perform your duties as you normally would while organizing.

Once you start getting serious about organizing, your boss will start to work against you, but know that they cannot stop you. Nevertheless, you and/or some of your colleagues may run the risk of unlawful retaliation or termination for organizing – this is why you should have a lawyer on retainer. This is also why it’s important to establish systems of mutual aid, so that if one of your colleagues gets into trouble you can lean on each other to keep supporting your families. And, importantly, remember that HR works for the company, not for you. HR are the front lines that are going to execute the unionbusting mandates from above.

Once you have a consensus among your colleagues to organize – which you will know because they will have signed union cards – you can approach your employer to ask them to voluntarily recognize the union. If they agree to opening an organized dialogue amicably, you do so. If not, you will reach out to the National Labor Relations Board (NLRB) to organize a vote to unionize. Only organize a vote that you know you will win. Once your workplace votes to unionize, your employer is obligated to negotiate with you in good faith. Start making collective decisions about what you want from your employer and bring them to the table.

In this process, you will have established a relationship with more experienced union organizers who will continue to help you with conducting your union’s affairs and start getting results. The next step is to make yourself available for this purpose to the next tech workplace that wants to unionize: to share what you’ve learned and support the rest of the industry in solidarity. Talk to your friends across the industry and build solidarity and power en masse.

Prepare for the general strike on May 1st, 2028

The call has gone out: on May Day, 2028 – just under three years from now – there will be a general strike in the United States. The United Auto Workers union, one of the largest in the United States, has arranged for their collective bargaining agreements to end on this date, and has called for other unions to do the same across all industries. The American Federation of Teachers and its 1.2 million members are on board, and other unions are sure to follow. Your new union should be among them.

This is how we collectively challenge not just our own employers, but our political institutions as a whole. This is how we turn this nightmare around.

A mass strike is a difficult thing to organize. It is certain to be met with large-scale, coordinated, and well-funded propaganda and retaliation from the business and political spheres. Moreover, a mass strike depends on careful planning and mass mutual aid. We need to be prepared to support each other to get it done, and to plan and organize seriously. When you and your colleagues get organized, discuss this strike amongst yourselves and be prepared to join in solidarity with the rest of the 99% around the country and the world at large.

To commit yourselves to participate or get involved in the planning of the grassroots movement, see generalstrikeus.com.

Join unitelabor.dev

I’ve set up a Discourse instance for discussion, organizing, Q&A, and solidarity among tech workers at unitelabor.dev. Please check it out!

If you have any questions or feedback on this article, please post about it there.

Unionize or die

You must organize, and you must start now, or the worst will come to pass. Fight like your life depends on it, because it does. It has never been more urgent. The tech industry needs to stop fucking around and get organized.

We are powerful together. We can change things, and we must. Spread the word, in your workplace and with your friends and online. On the latter, be ready to fight just to speak – especially in our online spaces owned and controlled by the rich (ahem – YCombinator, Reddit, Twitter – etc). But fight all the same, and don’t stop fighting until we’re done.

We can do it, together.

Resources

Tech-specific:

General:

Send me more resources to add here!


  1. Map provided by NOAA.gov ↩︎

  2. Key Insights on CO₂ and Greenhouse Gas Emissions – Our world in data ↩︎

  3. World Bank – Climate Change Could Force 216 Million People to Migrate Within Their Own Countries by 2050 (2021) ↩︎

  4. Institute for Economics & Peace – Over one billion people at threat of being displaced by 2050 due to environmental change, conflict and civil unrest (2020) ↩︎

  5. Bezner Kerr, R.; Hasegawa, T.; Lasco, R.; Bhatt, I.; Deryng, D.; Farrell, A.; Gurney-Smith, H.; Ju, H.; Lluch-Cota, S.; Meza, F.; Nelson, G.; Neufeldt, H.; Thornton, P. (2022). “Food, Fibre and Other Ecosystem Products” ↩︎

  6. Régnier C, Achaz G, Lambert A, Cowie RH, Bouchet P, Fontaine B. Mass extinction in poorly known taxa. Proc Natl Acad Sci U S A. 2015 Jun 23;112(25):7761-6. doi: 10.1073/pnas.1502350112. Epub 2015 Jun 8 ↩︎

  7. Andre, P., Boneva, T., Chopra, F. et al. Globally representative evidence on the actual and perceived support for climate action. Nat. Clim. Chang., 2024 ↩︎

  8. Prevalence and Correlates of Patient Rationing of Insulin in the United States: A National Survey, Adam Gaffney, MD, MPH, David U. Himmelstein, MD, and Steffie Woolhandler, MD, MPH (2022) ↩︎

  9. Improving the prognosis of health care in the USA Galvani, Alison P et al. The Lancet, Volume 395, Issue 10223, 524 - 533 ↩︎

  10. Shelter England – Two people died homeless every day last year (2022) ↩︎

  11. United Way NCA – How Many Houses Are in the US? Homelessness vs Housing Availability (2024) ↩︎

  12. Caveat: you should probably still vote to minimize the damage of right-wing policies, but across the world Western “democracies” are almost universally pro-capital regardless of how you vote. ↩︎

  13. Gilens M, Page BI. Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens. Perspectives on Politics. 2014 ↩︎

The British Airways position on various border disputes

5 May 2025 at 00:00

My spouse and I are on vacation in Japan, spending half our time seeing the sights and the other half working remotely and enjoying the experience of living in a different place for a while. To get here, we flew on British Airways from London to Tokyo, and I entertained myself on the long flight by browsing the interactive flight map on the back of my neighbor’s seat and trying to figure out how the poor developer who implemented this map solved the thorny problems that displaying a world map implies.

I began my survey by poking through the whole interface of this little in-seat entertainment system1 to see if I could find out anything about who made it or how it works – I was particularly curious to find a screen listing open source licenses, which such devices often disclose. To my dismay I found nothing at all – no information about who made it or what’s inside. I imagine that there must be some open source software in that thing, but I didn’t find any licenses or copyright statements.

When I turned my attention to the map itself, I did find one copyright statement, the only one I could find in the whole UI. If you zoom in enough, it switches from a satellite view to a street view showing the OpenStreetMap copyright line:

Picture of the display showing “Street Maps: (c) OpenStreetMap contributors”

Note that all of the pictures in this article were taken by pointing my smartphone camera at the screen from an awkward angle, so fine-tune your expectations accordingly. I don't have pictures to support every border claim documented in this article, but I did take notes during the flight.

Given that British Airways is the proud flag carrier of the United Kingdom I assume that this is indeed the only off-the-shelf copyrighted material included in this display, and everything else was developed in-house without relying on any open source software that might require a disclosure of license and copyright details. For similar reasons I am going to assume that all of the borders shown in this map are reflective of the official opinion of British Airways on various international disputes.

As I briefly mentioned a moment ago, this map has two views: satellite photography and a very basic street view. Your plane and its route are shown in real-time, and you can touch the screen to pan and zoom the map anywhere you like. You can also rotate the map and change the angle in “3D” if you have enough patience to use complex multitouch gestures on the cheapest touch panel they could find.

The street view is very sparse and only appears when you’re pretty far zoomed in, so it was mostly useless for this investigation. The satellite map, thankfully, includes labels: cities, country names, points of interest, and, importantly, national borders. The latter are very faint, however. Here’s an illustrative example:

A picture of the screen showing the area near the Caucasus mountains with the plane overflying the Caspian sea

We also have our first peek at a border dispute here: look closely between the “Georgia” and “Caucasus Mountains” labels. This ever-so-faint dotted line shows what I believe is the Russian-occupied territory of South Ossetia in Georgia. Disputes implicating Russia are not universally denoted as such – I took a peek at the border with Ukraine and found that Ukraine is shown as whole and undisputed, with its (undotted) border showing Donetsk, Luhansk, and Crimea entirely within Ukraine’s borders.

Of course, I didn’t start at Russian border disputes when I went looking for trouble. I went directly to Palestine. Or rather, I went to Israel, because Palestine doesn’t exist on this map:

Picture of the screen showing Israel

I squinted and looked very closely at the screen and I’m fairly certain that both the West Bank and Gaza are outlined in these dotted lines using the borders defined by the 1949 armistice. If you zoom in a bit more to the street view, you can see labels like “West Bank” and the “Area A”, “Area B” labels of the Oslo Accords:

Picture of the street map zoomed in on Ramallah

Given that this is British Airways, part of me was surprised not to see the whole area simply labelled Mandatory Palestine, but it is interesting to know that British Airways officially supports the Oslo Accords.

Heading south, let’s take a look at the situation in Sudan:

Picture of the satellite map over Sudan

This one is interesting – three areas within South Sudan’s claimed borders are disputed, and the map only shows two with these dotted lines. The border dispute with Sudan in the northeast is resolved in South Sudan’s favor. Another case where BA takes a stand is Guyana, which has an ongoing dispute with Venezuela – but the map only shows Guyana’s claim, albeit with a dotted line, rather than the usual approach of drawing both claims with dotted lines.

Next, I turned my attention to Taiwan:

Picture of the satellite map over eastern China and Taiwan

The cities of Taipei and Kaohsiung are labelled, but the island as a whole is not labelled “Taiwan”. I zoomed and panned and 3D-zoomed the map all over the place but was unable to get a “Taiwan” label to appear. I also zoomed into the OSM-provided street map and panned that around but couldn’t find “Taiwan” anywhere, either.

The last picture I took is of the Kashmir area:

Picture of the satellite map showing the Kashmir region

I find these faint borders difficult to interpret and I admit to not being very familiar with this conflict, but perhaps someone in the know with the patience to look more closely will email me their understanding of the official British Airways position on the Kashmir conflict (here’s the full sized picture).

Here are some other details I noted as I browsed the map:

  • The Hala’ib Triangle and Bir Tawil are shown with dotted lines
  • The Gulf of Mexico is labelled as such
  • Antarctica has no labelled borders or settlements

After this thrilling survey of the official political positions of British Airways, I spent the rest of the flight reading books or trying to sleep.


  1. I believe the industry term is “infotainment system”, but if you ever catch me saying that with a straight face then I have been replaced with an imposter and you should contact the authorities. ↩︎

You can cheat a test suite with a big enough polynomial

24 June 2025 at 16:27

Hi nerds, I'm back from Systems Distributed! I'd heartily recommend it, wildest conference I've been to in years. I have a lot of work to catch up on, so this will be a short newsletter.

In an earlier version of my talk, I had a gag about unit tests. First I showed the test f([1,2,3]) == 3, then said that this was satisfied by f(l) = 3, f(l) = l[-1], f(l) = len(l), f(l) = (129*l[0]-34*l[1]-617)*l[2] - 443*l[0] + 1148*l[1] - 182. Then I progressively ruled them out one by one with more unit tests, except for the last polynomial, which stubbornly passed every single test.
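
If you want to check the gag yourself, that last polynomial really does satisfy the first test (I'm only verifying the one test shown here; the rest were specific to the talk):

f = lambda l: (129*l[0] - 34*l[1] - 617)*l[2] - 443*l[0] + 1148*l[1] - 182
assert f([1, 2, 3]) == 3   # (-556)*3 - 443 + 2296 - 182 == 3
print(f([1, 2, 3]))        # 3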

If you're given some function f(x: int, y: int, …): int and a set of unit tests asserting specific inputs give specific outputs, then you can find a polynomial that passes every single unit test.

To find the gag, and as SMT practice, I wrote a Python program that finds a polynomial that passes a test suite meant for max. It's hardcoded for three parameters and only finds 2nd-order polynomials but I think it could be generalized with enough effort.

The code

Full code here, breakdown below.

from z3 import *  # type: ignore
s1, s2 = Solver(), Solver()

Z3 is just the particular SMT solver we use, as it has good language bindings and a lot of affordances.

As part of learning SMT I wanted to do this two ways. First by putting the polynomial "outside" of the SMT solver in a python function, second by doing it "natively" in Z3. I created two solvers so I could test both versions in one run.

a0, a, b, c, d, e, f = Consts('a0 a b c d e f', IntSort())
x, y, z = Ints('x y z')
t = "a*x+b*y+c*z+d*x*y+e*x*z+f*y*z+a0"

Both Const('x', IntSort()) and Int('x') do the exact same thing, the latter being syntactic sugar for the former. I did not know this when I wrote the program.

To keep the two versions in sync I represented the equation as a string, which I later eval. This is one of the rare cases where eval is a good idea, to help us experiment more quickly while learning. The polynomial is a "2nd-order polynomial", even though it doesn't have x^2 terms, as it has xy and xz terms.

lambdamax = lambda x, y, z: eval(t)

z3max = Function('z3max', IntSort(), IntSort(), IntSort(),  IntSort())
s1.add(ForAll([x, y, z], z3max(x, y, z) == eval(t)))

lambdamax is pretty straightforward: create a lambda with three parameters and eval the string. The string "a*x" then becomes the python expression a*x, a is an SMT symbol, while the x SMT symbol is shadowed by the lambda parameter. To reiterate, a terrible idea in practice, but a good way to learn faster.
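
If the shadowing is confusing, here's a stripped-down illustration, separate from the real script, of how eval resolves names inside a lambda. The variables here are stand-ins for the Z3 symbols, not the real ones:

a = 10                                          # plays the role of a Z3 coefficient symbol
x = "module-level x, shadowed inside the lambda"  # plays the role of Int('x')
f = lambda x: eval("a*x")
print(f(3))   # 30: `a` resolves to the module-level name, `x` to the lambda parameter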

z3max function is a little more complex. Function takes an identifier string and N "sorts" (roughly the same as programming types). The first N-1 sorts define the parameters of the function, while the last becomes the output. So here I assign the string identifier "z3max" to be a function with signature (int, int, int) -> int.

I can load the function into the model by specifying constraints on what z3max could be. This could either be a strict input/output, as will be done later, or a ForAll over all possible inputs. Here I just use that directly to say "for all inputs, the function should match this polynomial." But I could do more complicated constraints, like commutativity (f(x, y) == f(y, x)) or monotonicity (Implies(x < y, f(x) <= f(y))).

Note ForAll takes a list of z3 symbols to quantify over. That's the only reason we need to define x, y, z in the first place. The lambda version doesn't need them.

inputs = [(1,2,3), (4, 2, 2), (1, 1, 1), (3, 5, 4)]

for g in inputs:
    s1.add(z3max(*g) == max(*g))
    s2.add(lambdamax(*g) == max(*g))

This sets up the joke: adding constraints to each solver that the polynomial it finds must, for a fixed list of triplets, return the max of each triplet.

for s, func in [(s1, z3max), (s2, lambdamax)]:
    if s.check() == sat:
        m = s.model()
        for x, y, z in inputs:
            print(f"max([{x}, {y}, {z}]) =", m.evaluate(func(x, y, z)))
        print(f"max([x, y, z]) = {m[a]}x + {m[b]}y",
            f"+ {m[c]}z +", # linebreaks added for newsletter rendering
            f"{m[d]}xy + {m[e]}xz + {m[f]}yz + {m[a0]}\n")

Output:

max([1, 2, 3]) = 3
# etc
max([x, y, z]) = -133x + 130y + -10z + -2xy + 62xz + -46yz + 0

max([1, 2, 3]) = 3
# etc
max([x, y, z]) = -17x + 16y + 0z + 0xy + 8xz + -6yz + 0

I find that z3max (top) consistently finds larger coefficients than lambdamax does. I don't know why.
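
To see the punchline concretely, here's a standalone check using the coefficients lambdamax reported above (the function name fake_max is mine): the polynomial agrees with max on all four test inputs, but one more input would expose it.

def fake_max(x, y, z):
    return -17*x + 16*y + 0*z + 0*x*y + 8*x*z + -6*y*z + 0

tests = [(1, 2, 3), (4, 2, 2), (1, 1, 1), (3, 5, 4)]
assert all(fake_max(*t) == max(*t) for t in tests)   # passes the whole suite
print(fake_max(2, 2, 2))   # 6, not 2 -- a fifth test would rule it out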

Practical Applications

Test-Driven Development recommends a strict "red-green refactor" cycle. Write a new failing test, make the new test pass, then go back and refactor. Well, the easiest way to make the new test pass would be to paste in a new polynomial, so that's what you should be doing. You can even do this all automatically: have a script read the set of test cases, pass them to the solver, and write the new polynomial to your code file. All you need to do is write the tests!

Pedagogical Notes

Writing the script took me a couple of hours. I'm sure an LLM could have whipped it all up in five minutes but I really want to learn SMT and LLMs may decrease learning retention.1 Z3 documentation is not... great for non-academics, though, and most other SMT solvers have even worse docs. One useful trick I use regularly is to use Github code search to find code using the same APIs and study how that works. Turns out reading API-heavy code is a lot easier than writing it!

Anyway, I'm very, very slowly feeling like I'm getting the basics on how to use SMT. I don't have any practical use cases yet, but I wanted to learn this skill for a while and I'm glad I finally did.


  1. Caveat I have not actually read the study, for all I know it could have a sample size of three people, I'll get around to it eventually 

Solving LinkedIn Queens with SMT

12 June 2025 at 15:43

No newsletter next week

I’ll be speaking at Systems Distributed. My talk isn't close to done yet, which is why this newsletter is both late and short.

Solving LinkedIn Queens in SMT

The article Modern SAT solvers: fast, neat and underused claims that SAT solvers1 are "criminally underused by the industry". A while back on the newsletter I asked "why": how come they're so powerful and yet nobody uses them? Many experts responded saying the reason is that encoding problems as SAT kinda sucked, and they'd rather use tools that compile to SAT.

I was reminded of this when I read Ryan Berger's post on solving “LinkedIn Queens” as a SAT problem.

A quick overview of Queens. You’re presented with an NxN grid divided into N regions, and have to place N queens so that there is exactly one queen in each row, column, and region. While queens can be on the same diagonal, they cannot be diagonally adjacent.

(Important note: Linkedin “Queens” is a variation on the puzzle game Star Battle, which is the same except the number of stars you place in each row/column/region varies per puzzle, and is usually two. This is also why 'queens' don’t capture like chess queens.)

An image of a solved queens board. Copied from https://ryanberger.me/posts/queens

Ryan solved this by writing Queens as a SAT problem, expressing properties like "there is exactly one queen in row 3" as a large number of boolean clauses. Go read his post, it's pretty cool. What leapt out to me was that he used CVC5, an SMT solver.2 SMT solvers are "higher-level" than SAT, capable of handling more data types than just boolean variables. It's a lot easier to solve the problem at the SMT level than at the SAT level. To show this, I whipped up a short demo of solving the same problem in Z3 (via the Python API).

Full code here, which you can compare to Ryan's SAT solution here. I didn't do a whole lot of cleanup on it (again, time crunch!), but short explanation below.

The code

from z3 import * # type: ignore
from itertools import combinations, chain, product
solver = Solver()
size = 9 # N

Initial setup and modules. size is the number of rows/columns/regions in the board, which I'll call N below.

# queens[n] = col of queen on row n
# by construction, not on same row
queens = IntVector('q', size) 

SAT represents the queen positions via N² booleans: q_00 means that a Queen is on row 0 and column 0, !q_05 means a queen isn't on row 0 col 5, etc. In SMT we can instead encode it as N integers: q_0 = 5 means that the queen on row 0 is positioned at column 5. This immediately enforces one class of constraints for us: we don't need any constraints saying "exactly one queen per row", because that's embedded in the definition of queens!
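
For contrast, here's roughly what the boolean encoding has to spell out just for the per-row constraints — something the integer encoding gets for free. This is a comparison sketch, not part of the actual solution, and it uses Z3's AtMost cardinality helper rather than raw pairwise clauses:

from z3 import Bool, Or, AtMost, Solver

n = 9
q = [[Bool(f"q_{r}_{c}") for c in range(n)] for r in range(n)]
s = Solver()
for r in range(n):
    s.add(Or(q[r]))           # at least one queen in row r
    s.add(AtMost(*q[r], 1))   # at most one queen in row r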

(Incidentally, using 0-based indexing for the board was a mistake on my part, it makes correctly encoding the regions later really painful.)

To actually make the variables [q_0, q_1, …], we use the Z3 affordance IntVector(str, n) for making n variables at once.

solver.add([And(0 <= i, i < size) for i in queens])
# not on same column
solver.add(Distinct(queens))

First we constrain all the integers to [0, N), then use the incredibly handy Distinct constraint to force all the integers to have different values. This guarantees at most one queen per column, which by the pigeonhole principle means there is exactly one queen per column.

# not diagonally adjacent
for i in range(size-1):
    q1, q2 = queens[i], queens[i+1]
    solver.add(Abs(q1 - q2) != 1)

One of the rules is that queens can't be adjacent. We already know that they can't be horizontally or vertically adjacent via other constraints, which leaves the diagonals. We only need to add constraints that, for each queen, there is no queen in the lower-left or lower-right corner, aka q_3 != q_2 ± 1. We don't need to check the top corners because if q_1 is in the upper-left corner of q_2, then q_2 is in the lower-right corner of q_1!

That covers everything except the "one queen per region" constraint. But the regions are the tricky part, which we should expect because we vary the difficulty of queens games by varying the regions.

regions = {
        "purple": [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8),
                   (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 0), (7, 0), (8, 0),
                   (1, 1), (8, 1)],
        "red": [(1, 2), (2, 2), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (6, 2), (7, 1), (7, 2), (8, 2), (8, 3),],
        # you get the picture
        }

# Some checking code left out, see below

The region has to be manually coded in, which is a huge pain.

(In the link, some validation code follows. Since it breaks up explaining the model I put it in the next section.)

for r in regions.values():
    solver.add(Or(
        *[queens[row] == col for (row, col) in r]
        ))

Finally we have the region constraint. The easiest way I found to say "there is exactly one queen in each region" is to say "there is a queen in region 1 and a queen in region 2 and a queen in region 3" etc. Then to say "there is a queen in region purple" I wrote "q_0 = 0 OR q_0 = 1 OR … OR q_1 = 0 etc."

Why iterate over every position in the region instead of doing something like (0, q[0]) in r? I tried that but it's not an expression that Z3 supports.

if solver.check() == sat:
    m = solver.model()
    print([(l, m[l]) for l in queens])

Finally, we solve and print the positions. Running this gives me:

[(q__0, 0), (q__1, 5), (q__2, 8), 
 (q__3, 2), (q__4, 7), (q__5, 4), 
 (q__6, 1), (q__7, 3), (q__8, 6)]

Which is the correct solution to the queens puzzle. I didn't benchmark the solution times, but I imagine it's considerably slower than a raw SAT solver. Glucose is really, really fast.

But even so, solving the problem with SMT was a lot easier than solving it with SAT. That satisfies me as an explanation for why people prefer it to SAT.

Sanity checks

One bit I glossed over earlier was the sanity checking code. I knew for sure that I was going to make a mistake encoding the region, and the solver wasn't going to provide useful information about what I did wrong. In cases like these, I like adding small tests and checks to catch mistakes early, because the solver certainly isn't going to catch them!

all_squares = set(product(range(size), repeat=2))
def test_i_set_up_problem_right():
    assert all_squares == set(chain.from_iterable(regions.values()))

    for r1, r2 in combinations(regions.values(), 2):
        assert not set(r1) & set(r2), set(r1) & set(r2)

The first check was a quick test that I didn't leave any squares out, or accidentally put the same square in both regions. Converting the values into sets makes both checks a lot easier. Honestly I don't know why I didn't just use sets from the start, sets are great.

def render_regions():
    colormap = ["purple",  "red", "brown", "white", "green", "yellow", "orange", "blue", "pink"]
    board = [[0 for _ in range(size)] for _ in range(size)] 
    for (row, col) in all_squares:
        for color, region in regions.items():
            if (row, col) in region:
                board[row][col] = colormap.index(color)+1

    for row in board:
        print("".join(map(str, row)))

render_regions()

The second check is something that prints out the regions. It produces something like this:

111111111
112333999
122439999
124437799
124666779
124467799
122467899
122555889
112258899

I can compare this to the picture of the board to make sure I got it right. I guess a more advanced solution would be to print emoji squares like 🟥 instead.

Neither check is quality code but it's throwaway and it gets the job done so eh.
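
Since the emoji idea came up above, here's roughly what it would look like — a sketch that reuses regions, size, and all_squares from the script above, with emoji picked by me to loosely match the color names:

def render_regions_emoji():
    colormap = ["purple", "red", "brown", "white", "green", "yellow", "orange", "blue", "pink"]
    emoji = ["🟪", "🟥", "🟫", "⬜", "🟩", "🟨", "🟧", "🟦", "🌸"]
    board = [["??" for _ in range(size)] for _ in range(size)]
    for (row, col) in all_squares:
        for color, region in regions.items():
            if (row, col) in region:
                board[row][col] = emoji[colormap.index(color)]
    for row in board:
        print("".join(row))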

Update for the Internet

This was sent as a weekly newsletter, which is usually on topics like software history, formal methods, unusual technologies, and the theory of software engineering. You can subscribe here.


  1. "Boolean SATisfiability Solver", aka a solver that can find assignments that make complex boolean expressions true. I write a bit more about them here

  2. "Satisfiability Modulo Theories" 

AI is a gamechanger for TLA+ users

5 June 2025 at 14:59

New Logic for Programmers Release

v0.10 is now available! This is a minor release, mostly focused on logic-based refactoring, with new material on set types and testing refactors are correct. See the full release notes at the changelog page. Due to conference pressure v0.11 will also likely be a minor release.

The book cover

AI is a gamechanger for TLA+ users

TLA+ is a specification language to model and debug distributed systems. While very powerful, it's also hard for programmers to learn, and there's always questions of connecting specifications with actual code.

That's why The Coming AI Revolution in Distributed Systems caught my interest. In the post, Cheng Huang claims that Azure successfully used LLMs to examine an existing codebase, derive a TLA+ spec, and find a production bug in that spec. "After a decade of manually crafting TLA+ specifications", he wrote, "I must acknowledge that this AI-generated specification rivals human work".

This inspired me to experiment with LLMs in TLA+ myself. My goals are a little less ambitious than Cheng's: I wanted to see how LLMs could help junior specifiers write TLA+, rather than handling the entire spec automatically. Details on what did and didn't work below, but my takeaway is that LLMs are an immense specification force multiplier.

All tests were done with a standard VSCode Copilot subscription, using Claude 3.7 in Agent mode. Other LLMs or IDEs may be more or less effective, etc.

Things Claude was good at

Fixing syntax errors

TLA+ uses a very different syntax than mainstream programming languages, meaning beginners make a lot of mistakes where they do a "programming syntax" instead of TLA+ syntax:

NotThree(x) = \* should be ==, not =
    x != 3 \* should be #, not !=

The problem is that the TLA+ syntax checker, SANY, is 30 years old and doesn't provide good information. Here's what it says for that snippet:

Was expecting "==== or more Module body"
Encountered "NotThree" at line 6, column 1

That only isolates one error and doesn't tell us what the problem is, only where it is. Experienced TLA+ users get "error eyes" and can quickly see what the problem is, but beginners really struggle with this.

The TLA+ foundation has made LLM integration a priority, so the VSCode extension naturally supports several agent actions. One of these is running SANY, meaning an agent can get an error, fix it, get another error, fix it, etc. Provided the above sample and asked to make it work, Claude successfully fixed both errors. It also fixed many errors in a larger spec, as well as figuring out why PlusCal specs weren't compiling to TLA+.

This by itself is already enough to make LLMs a worthwhile tool, as it fixes one of the biggest barriers to entry.

Understanding error traces

When TLA+ finds a violated property, it outputs the sequence of steps that leads to the error. This starts in plaintext, and VSCode parses it into an interactive table:

An example error trace

Learning to read these error traces is a skill in itself. You have to understand what's happening in each step and how it relates back to the actually broken property. It takes a long time for people to learn how to do this well.

Claude was successful here, too, accurately reading 20+ step error traces and giving a high-level explanation of what went wrong. It also could condense error traces: if ten steps of the error trace could be condensed into a one-sentence summary (which can happen if you're modeling a lot of process internals) Claude would do it.

I did have issues here with doing this in agent mode: while the extension does provide a "run model checker" command, the agent would regularly ignore this and prefer to run a terminal command instead. This would be fine except that the LLM consistently hallucinated invalid commands. I had to amend every prompt with "run the model checker via vscode, do not use a terminal command". You can skip this if you're willing to copy and paste the error trace into the prompt.

As with syntax checking, if this was the only thing LLMs could effectively do, that would already be enough1 to earn a strong recommend. Even as a TLA+ expert I expect I'll be using this trick regularly.

Boilerplate tasks

TLA+ has a lot of boilerplate. One of the most notorious examples is UNCHANGED rules. Specifications are extremely precise — so precise that you have to specify what variables don't change in every step. This takes the form of an UNCHANGED clause at the end of relevant actions:

RemoveObjectFromStore(srv, o, s) ==
  /\ o \in stored[s]
  /\ stored' = [stored EXCEPT ![s] = @ \ {o}]
  /\ UNCHANGED <<capacity, log, objectsize, pc>>

Writing this is really annoying. Updating these whenever you change an action, or add a new variable to the spec, is doubly so. Syntax checking and error analysis are important for beginners, but this is what I wanted for myself. I took a spec and prompted Claude

Add UNCHANGED <> for each variable not changed in an action.

And it worked! It successfully updated the UNCHANGED in every action.

(Note, though, that it was a "well-behaved" spec in this regard: only one "action" happened at a time. In TLA+ you can have two actions happen simultaneously, that each update half of the variables, meaning neither of them should have an UNCHANGED clause. I haven't tested how Claude handles that!)

That's the most obvious win, but Claude was good at handling other tedious work, too. Some examples include updating vars (the conventional collection of all state variables), lifting a hard-coded value into a model parameter, and changing data formats. Most impressive to me, though, was rewriting a spec designed for one process to instead handle multiple processes. This means taking all of the process variables, which originally have types like Int, converting them to types like [Process -> Int], and then updating the uses of all of those variables in the spec. It didn't account for race conditions in the new concurrent behavior, but it was an excellent scaffold to do more work.

Writing properties from an informal description

You have to be pretty precise with your intended property description, but Claude handles converting that precise description into TLA+'s formalized syntax, which is something beginners often struggle with.

Things it is less good at

Generating model config files

To model check TLA+, you need both a specification (.tla) and a model config file (.cfg), which have separate syntaxes. Asking the agent to generate the second often led to it using TLA+ syntax. It automatically fixed this after getting parsing errors, though.

Fixing specs

Whenever it ran model checking and discovered a bug, it would naturally propose a change to either the invalid property or the spec. Sometimes the changes were good, other times the changes were not physically realizable. For example, if it found that a bug was due to a race condition between processes, it would often suggest fixing it by saying race conditions were okay. I mean yes, if you say bugs are okay, then the spec finds that bugs are okay! Or it would alternatively suggest adding a constraint to the spec saying that race conditions don't happen. But that's a huge mistake in specification, because race conditions happen if we don't have coordination. We need to specify the mechanism that is supposed to prevent them.

Finding properties of the spec

After seeing how capable it was at translating my properties to TLA+, I started prompting Claude to come up with properties on its own. Unfortunately, almost everything I got back was either trivial, uninteresting, or too coupled to implementation details. I haven't tested if it would work better to ask it for "properties that may be violated".

Generating code from specs

I have to be specific here: Claude could sometimes convert Python into a passable spec, and vice versa. It wasn't good at recognizing abstraction. For example, TLA+ specifications often represent sequential operations with a state variable, commonly called pc. If modeling code that nonatomically retrieves a counter value and increments it, we'd have one action that requires pc = "Get" and sets the new value to "Inc", then another that requires it be "Inc" and sets it to "Done".

I found that Claude would try to somehow convert pc into part of the Python program's state, rather than recognize it as a TLA+ abstraction. On the other side, when converting python code to TLA+ it would often try to translate things like sleep into some part of the spec, not recognizing that it is abstractable into a distinct action. I didn't test other possible misconceptions, like converting randomness to nondeterminism.
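
For a concrete sense of what pc abstracts, here's my own sketch of the nonatomic counter in plain Python (the function name is mine). Nothing in the running program corresponds to pc; it's spec-side bookkeeping marking which of the two steps a process is on.

import threading

counter = 0

def nonatomic_increment():
    global counter
    tmp = counter        # the spec's "Get" action: pc goes "Get" -> "Inc"
    counter = tmp + 1    # the spec's "Inc" action: pc goes "Inc" -> "Done"

# Another thread can run between those two lines, which is exactly the
# interleaving the pc-based actions let TLA+ explore.
threads = [threading.Thread(target=nonatomic_increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # usually 2, but 1 is possible if both reads happen first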

For the record, when converting TLA+ to Python Claude tended to make simulators of the spec, rather than possible production code implementing the spec. I really wasn't expecting otherwise though.

Unexplored Applications

Things I haven't explored thoroughly but could possibly be effective, based on what I know about TLA+ and AI:

Writing Java Overrides

Most TLA+ operators are resolved via TLA+ interpreters, but you can also implement them in "native" Java. This lets you escape the standard language semantics and add capabilities like executing programs during model-checking or dynamically constraining the depth of the searched state space. There's a lot of cool things I think would be possible with overrides. The problem is there's only a handful of people in the world who know how to write them. But that handful have written quite a few overrides and I think there's enough there for Claude to work with.

Writing specs, given a reference mechanism

In all my experiments, the LLM only had my prompts and the occasional Python script as information. That makes me suspect that some of its problems with writing and fixing specs come down to not having a system model. Maybe it wouldn't suggest fixes like "these processes never race" if it had a design doc saying that the processes can't coordinate.

(Could a Sufficiently Powerful LLM derive some TLA+ specification from a design document?)

Connecting specs and code

This is the holy grail of TLA+: taking a codebase and showing it correctly implements a spec. Currently the best ways to do this are by either using TLA+ to generate a test suite, or by taking logged production traces and matching them to TLA+ behaviors. This blog post discusses both. While I've seen a lot of academic research into these approaches there are no industry-ready tools. So if you want trace validation you have to do a lot of manual labour tailored to your specific product.

If LLMs could do some of this work for us then that'd really amplify the usefulness of TLA+ to many companies.

Thoughts

Right now, agents seem good at the tedious and routine parts of TLA+ and worse at the strategic and abstraction parts. But, since the routine parts are often a huge barrier to beginners, this means that LLMs have the potential to make TLA+ far, far more accessible than it previously was.

I have mixed thoughts on this. As an advocate, this is incredible. I want more people using formal specifications because I believe it leads to cheaper, safer, more reliable software. Anything that gets people comfortable with specs is great for our industry. As a professional TLA+ consultant, I'm worried that this obsoletes me. Most of my income comes from training and coaching, which companies will now have far less demand for. Then again, maybe this is an opportunity to pitch "agentic TLA+ training" to companies!

Anyway, if you're interested in TLA+, there has never been a better time to try it. I mean it, these tools handle so much of the hard part now. I've got a free book available online, as does the inventor of TLA+. I like this guide too. Happy modeling!


  1. Dayenu. 

What does "Undecidable" mean, anyway

28 May 2025 at 19:34

Systems Distributed

I'll be speaking at Systems Distributed next month! The talk is brand new and will aim to showcase some of the formal methods mental models that would be useful in mainstream software development. It has added some extra stress on my schedule, though, so expect the next two monthly releases of Logic for Programmers to be mostly minor changes.

What does "Undecidable" mean, anyway

Last week I read Against Curry-Howard Mysticism, which is a solid article I recommend reading. But this newsletter is actually about one comment:

I like to see posts like this because I often feel like I can’t tell the difference between BS and a point I’m missing. Can we get one for questions like “Isn’t XYZ (Undecidable|NP-Complete|PSPACE-Complete)?”

I've already written one of these for NP-complete, so let's do one for "undecidable". Step one is to pull a technical definition from the book Automata and Computability:

A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (pg 220)

Step two is to translate the technical computer science definition into more conventional programmer terms. Warning, because this is a newsletter and not a blog post, I might be a little sloppy with terms.

Machines and Decision Problems

In automata theory, all inputs to a "program" are strings of characters, and all outputs are "true" or "false". A program "accepts" a string if it outputs "true", and "rejects" if it outputs "false". You can think of this as automata studying all pure functions of type f :: string -> boolean. Problems solvable by finding such an f are called "decision problems".

This covers more than you'd think, because we can bootstrap more powerful functions from these. First, as anyone who's programmed in bash knows, strings can represent any other data. Second, we can fake non-boolean outputs by instead checking if a certain computation gives a certain result. For example, I can reframe the function add(x, y) = x + y as a decision problem like this:

IS_SUM(str) {
    x, y, z = split(str, "#")
    return x + y == z
}

Then because IS_SUM("2#3#5") returns true, we know 2 + 3 == 5, while IS_SUM("2#3#6") is false. Since we can bootstrap parameters out of strings, I'll just say it's IS_SUM(x, y, z) going forward.

A big part of automata theory is studying different models of computation with different strengths. One of the weakest is called "DFA". I won't go into any details about what DFA actually can do, but the important thing is that it can't solve IS_SUM. That is, if you give me a DFA that takes inputs of form x#y#z, I can always find an input where the DFA returns true when x + y != z, or an input which returns false when x + y == z.

It's really important to keep this model of "solve" in mind: a program solves a problem if it correctly returns true on all true inputs and correctly returns false on all false inputs.

(total) Turing Machines

A Turing Machine (TM) is a particular type of computation model. It's important for two reasons:

  1. By the Church-Turing thesis, a Turing Machine is the "upper bound" of how powerful (physically realizable) computational models can get. This means that if an actual real-world programming language can solve a particular decision problem, so can a TM. Conversely, if the TM can't solve it, neither can the programming language.1

  2. It's possible to write a Turing machine that takes a textual representation of another Turing machine as input, and then simulates that Turing machine as part of its computations.

Property (1) means that we can move between different computational models of equal strength, proving things about one to learn things about another. That's why I'm able to write IS_SUM in a pseudocode instead of writing it in terms of the TM computational model (and why I was able to use split for convenience).

Property (2) does several interesting things. First of all, it makes it possible to compose Turing machines. Here's how I can roughly ask if a given number is the sum of two primes, with "just" addition and boolean functions:

IS_SUM_TWO_PRIMES(z):
    x := 1
    y := 1
    loop {
        if x > z {return false}
        if IS_PRIME(x) {
            if IS_PRIME(y) {
                if IS_SUM(x, y, z) {
                    return true;
                }
            }
        }
        y := y + 1
        if y > x {
            x := x + 1
            y := 0
        }
    }

Notice that without the if x > z {return false}, the program would loop forever on z=2. A TM that always halts for all inputs is called total.

Property (2) also makes "Turing machines" a possible input to functions, meaning that we can now make decision problems about the behavior of Turing machines. For example, "does the TM M either accept or reject x within ten steps?"2

IS_DONE_IN_TEN_STEPS(M, x) {
    for (i = 0; i < 10; i++) {
        `simulate M(x) for one step`
        if(`M accepted or rejected`) {
            return true
        }
    }
    return false
}

Decidability and Undecidability

Now we have all of the pieces to understand our original definition:

A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (220)

Let IS_P be the decision problem "Does the input satisfy P"? Then IS_P is decidable if it can be solved by a Turing machine, ie, I can provide some IS_P(x) machine that always accepts if x has property P, and always rejects if x doesn't have property P. If I can't do that, then IS_P is undecidable.

IS_SUM(x, y, z) and IS_DONE_IN_TEN_STEPS(M, x) are decidable properties. Is IS_SUM_TWO_PRIMES(z) decidable? Some analysis shows that our corresponding program will either find a solution, or have x>z and return false. So yes, it is decidable.

Notice there's an asymmetry here. To prove some property is decidable, I just need to find one program that correctly solves it. To prove some property is undecidable, I need to show that any possible program, no matter what it is, doesn't solve it.

So with that asymmetry in mind, are there any undecidable problems? Yes, quite a lot. Recall that Turing machines can accept encodings of other TMs as input, meaning we can write a TM that checks properties of Turing machines. And, by Rice's Theorem, almost every nontrivial semantic3 property of Turing machines is undecidable. The conventional way to prove this is to first find a single undecidable property H, and then use that to bootstrap undecidability of other properties.

The canonical and most famous example of an undecidable problem is the Halting problem: "does machine M halt on input i?" It's pretty easy to prove undecidable, and easy to use it to bootstrap other undecidability properties. But again, any nontrivial property is undecidable. Checking a TM is total is undecidable. Checking a TM accepts any inputs is undecidable. Checking a TM solves IS_SUM is undecidable. Etc etc etc.

What this doesn't mean in practice

I often see the halting problem misconstrued as "it's impossible to tell if a program will halt before running it." This is wrong. The halting problem says that we cannot create an algorithm that, when applied to an arbitrary program, tells us whether the program will halt or not. It is absolutely possible to tell if many programs will halt or not. It's possible to find entire subcategories of programs that are guaranteed to halt. It's possible to say "a program constructed following constraints XYZ is guaranteed to halt."
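
As a toy illustration of that last point — my own sketch, not from the post, and the function name is mine — here's a "halt checker" that only answers for a very restricted class of Python programs and punts on everything else:

import ast

def obviously_halts(source: str) -> bool:
    # Conservative check: True only for programs whose loops are all
    # `for ... in range(...)`, with no while loops and no calls other than
    # range(). Programs in that class always halt; anything else gets a
    # "don't know" answer (False).
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return False
        if isinstance(node, ast.Call):
            if not (isinstance(node.func, ast.Name) and node.func.id == "range"):
                return False
    return True

print(obviously_halts("total = 0\nfor i in range(10):\n    total += i"))  # True
print(obviously_halts("while True:\n    pass"))                          # False: don't know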

The actual consequence of undecidability is more subtle. If we want to know if a program has property P, undecidability tells us

  1. We will have to spend time and mental effort to determine if it has P
  2. We may not be successful.

This is subtle because we're so used to living in a world where everything's undecidable that we don't really consider what the counterfactual would be like. In such a world there might be no need for Rust, because "does this C program guarantee memory-safety" is a decidable property. The entire field of formal verification could be unnecessary, as we could just check properties of arbitrary programs directly. We could automatically check if a change in a program preserves all existing behavior. Lots of famous math problems could be solved overnight.

(This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.)
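
To make that concrete with this article's own IS_SUM_TWO_PRIMES example — a sketch where the helpers are my Pythonic versions of the pseudocode above, and halts() is a hypothetical oracle, which is exactly the point:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_sum_two_primes(z):
    return any(is_prime(x) and is_prime(z - x) for x in range(2, z - 1))

def goldbach_counterexample_search():
    # Halts if and only if Goldbach's conjecture has a counterexample,
    # so a working halt detector would settle the conjecture without
    # running the search.
    z = 4
    while True:
        if not is_sum_two_primes(z):
            return z   # only ever reached if a counterexample exists
        z += 2

# answer = halts(goldbach_counterexample_search)   # hypothetical oracle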

But because we don't live in that world, all of those things are hard problems that take effort and ingenuity to solve, and even then we often fail.

Update for the Internet

This was sent as a weekly newsletter, which is usually on topics like software history, formal methods, unusual technologies, and the theory of software engineering. You can subscribe here.


  1. To be pedantic, a TM can't do things like "scrape a webpage" or "render a bitmap", but we're only talking about computational decision problems here. 

  2. One notation I've adopted in Logic for Programmers is marking abstract sections of pseudocode with backticks. It's really handy! 

  3. Nontrivial meaning "at least one TM has this property and at least one TM doesn't have this property". Semantic meaning "related to whether the TM accepts, rejects, or runs forever on a class of inputs". IS_DONE_IN_TEN_STEPS is not a semantic property, as it doesn't tell us anything about inputs that take longer than ten steps. 

Finding hard 24 puzzles with planner programming

20 May 2025 at 18:21

Planner programming is a programming technique where you solve problems by providing a goal and actions, and letting the planner find actions that reach the goal. In a previous edition of Logic for Programmers, I demonstrated how this worked by solving the 24 puzzle with planning. For reasons discussed here I replaced that example with something more practical (orchestrating deployments), but left the code online for posterity.

Recently I saw a family member try and fail to vibe code a tool that would find all valid 24 puzzles, and realized I could adapt the puzzle solver to also be a puzzle generator. First I'll explain the puzzle rules, then the original solver, then the generator.1 For a much longer intro to planning, see here.

The rules of 24

You're given four numbers and have to find some elementary equation (+-*/+groupings) that uses all four numbers and results in 24. Each number must be used exactly once, but do not need to be used in the starting puzzle order. Some examples:

  • [6, 6, 6, 6] -> 6+6+6+6=24
  • [1, 1, 6, 6] -> (6+6)*(1+1)=24
  • [4, 4, 4, 5] -> 4*(5+4/4)=24

Some setups are impossible, like [1, 1, 1, 1]. Others are possible only with non-elementary operations, like [1, 5, 5, 324] (which requires exponentiation).
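
Before getting to the planner, here's a brute-force sketch of the rules in plain Python — my own illustration (the function name and structure are mine), handy for checking a puzzle by hand but not the approach this post takes:

from itertools import permutations, product

def solvable_24(nums, target=24, eps=1e-6):
    # Try every ordering of the four numbers, every choice of the three
    # operators, and every way of parenthesizing four operands.
    ops = ['+', '-', '*', '/']
    patterns = [
        "(({a}{p}{b}){q}{c}){r}{d}",
        "({a}{p}({b}{q}{c})){r}{d}",
        "{a}{p}(({b}{q}{c}){r}{d})",
        "{a}{p}({b}{q}({c}{r}{d}))",
        "({a}{p}{b}){q}({c}{r}{d})",
    ]
    for a, b, c, d in permutations(nums):
        for p, q, r in product(ops, repeat=3):
            for pat in patterns:
                expr = pat.format(a=a, b=b, c=c, d=d, p=p, q=q, r=r)
                try:
                    if abs(eval(expr) - target) < eps:
                        return expr
                except ZeroDivisionError:
                    continue
    return None

print(solvable_24([4, 4, 4, 5]))   # finds something like 4*(5+4/4)
print(solvable_24([1, 1, 1, 1]))   # None: impossible, as noted above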

The solver

We will use Picat, the only language that I know has a built-in planner module. The current state of our plan will be represented by a single list with all of the numbers.

import planner, math.
import cp.

action(S0, S1, Action, Cost) ?=>
  member(X, S0)
  , S0 := delete(S0, X) % , is `and`
  , member(Y, S0)
  , S0 := delete(S0, Y)
  , (
      A = $(X + Y) 
    ; A = $(X - Y)
    ; A = $(X * Y)
    ; A = $(X / Y), Y > 0
    )
    , S1 = S0 ++ [apply(A)]
  , Action = A
  , Cost = 1
  .

This is our "action", and it works in three steps:

  1. Nondeterministically pull two different values out of the input, deleting them
  2. Nondeterministically pick one of the basic operations
  3. The new state is the remaining elements, appended with that operation applied to our two picks.

Let's walk through this with [1, 6, 1, 7]. There are four choices for X and three four Y. If the planner chooses X=6 and Y=7, A = $(6 + 7). This is an uncomputed term in the same way lisps might use quotation. We can resolve the computation with apply, as in the line S1 = S0 ++ [apply(A)].

final([N]) =>
  N =:= 24. % handle floating point

Our final goal is just a list where the only element is 24. This has to be a little floating point-sensitive to handle floating point division, done by =:=.

main =>
  Start = [1, 5, 5, 6]
  , best_plan(Start, 4, Plan)
  , printf("%w %w%n", Start, Plan)
  .

For main, we just find the best plan with the maximum cost of 4 and print it. When run from the command line, picat automatically executes whatever is in main.

$ picat 24.pi
[1,5,5,6] [1 + 5,5 * 6,30 - 6]

I don't want to spoil any more 24 puzzles, so let's stop showing the plan:

main =>
- , printf("%w %w%n", Start, Plan)
+ , printf("%w%n", Start)

Generating puzzles

Picat provides a find_all(X, p(X)) function, which returns all X for which p(X) is true. In theory, we could write find_all(S, best_plan(S, 4, _)). In practice, there are an infinite number of valid puzzles, so we need to bound S somewhat. We also don't want to find any redundant puzzles, such as [6, 6, 6, 4] and [4, 6, 6, 6].

We can solve both issues by writing a helper valid24(S), which will check that S is a sorted list of integers within some bounds, like 1..8, and also has a valid solution.

valid24(Start) =>
  Start = new_list(4)
  , Start :: 1..8 % every value in 1..8
  , increasing(Start) % sorted ascending
  , solve(Start) % turn into values
  , best_plan(Start, 4, Plan)
  .

This leans on Picat's constraint solving features to automatically find bounded sorted lists, which is why we need the solve step.2 Now we can just loop through all of the values in find_all to get all solutions:

main =>
  foreach([S] in find_all(
    [Start],
    valid24(Start)))
    printf("%w%n", S)
  end.
$ picat 24.pi

[1,1,1,8]
[1,1,2,6]
[1,1,2,7]
[1,1,2,8]
# etc

Finding hard puzzles

Last Friday I realized I could do something more interesting with this. Once I have found a plan, I can apply further constraints to the plan, for example to find problems that can be solved with division:

valid24(Start, Plan) =>
  Start = new_list(4)
  , Start :: 1..8
  , increasing(Start)
  , solve(Start)
  , best_plan(Start, 4, Plan)
+ , member($(_ / _), Plan)
  .

In playing with this, though, I noticed something weird: there are some solutions that appear if I sort up but not down. For example, [3,3,4,5] appears in the solution set, but [5, 4, 3, 3] doesn't appear if I replace increasing with decreasing.

As far as I can tell, this is because Picat only finds one best plan, and [5, 4, 3, 3] has two solutions: 4*(5+3/3) and 3*(5+4)-3. best_plan is a deterministic operator, so Picat commits to the first best plan it finds. So if it finds 3*(5+4)-3 first, it sees that the solution doesn't contain a division, throws [5, 4, 3, 3] away as a candidate, and moves on to the next puzzle.

There's a couple ways we can fix this. We could replace best_plan with best_plan_nondet, which can backtrack to find new plans (at the cost of an enormous number of duplicates). Or we could modify our final to only accept plans with a division:

% Hypothetical change
final([N]) =>
+ member($(_ / _), current_plan()),
  N =:= 24.

My favorite "fix" is to ask another question entirely. While I was looking for puzzles that can be solved with division, what I actually want is puzzles that must be solved with division. What if I rejected any puzzle that has a solution without division?

+ plan_with_no_div(S, P) => best_plan_nondet(S, 4, P), not member($(_ / _), P).

valid24(Start, Plan) =>
  Start = new_list(4)
  , Start :: 1..8
  , increasing(Start)
  , solve(Start)
  , best_plan(Start, 4, Plan)
- , member($(_ / _), Plan)
+ , not plan_with_no_div(Start, _)
  .

The new line's a bit tricky. plan_with_no_div nondeterministically finds a plan, and then fails if the plan contains a division.3 Since I used best_plan_nondet, it can backtrack from there and find a new plan. This means plan_with_no_div only fails if no such plan exists. And in valid24, we only succeed if plan_with_no_div fails, guaranteeing that the only existing plans use division. Since this doesn't depend on the plan found via best_plan, it doesn't matter how the values in Start are arranged, and this will not miss any valid puzzles.

Aside for my logic book readers

The new clause is equivalent to !(some p: Plan(p) && !(div in p)). Applying the simplifications we learned:

  1. !(some p: Plan(p) && !(div in p)) (init)
  2. all p: !(plan(p) && !(div in p)) (all/some duality)
  3. all p: !plan(p) || div in p (De Morgan's law)
  4. all p: plan(p) => div in p (implication definition)

Which more obviously means "if P is a valid plan, then it contains a division".

Back to finding hard puzzles

Anyway, with not plan_with_no_div, we are filtering puzzles on the set of possible solutions, not just specific solutions. And this gives me an idea: what if we find puzzles that have only one solution?

different_plan(S, P) => best_plan_nondet(S, 4, P2), P2 != P.

valid24(Start, Plan) =>
+ , not different_plan(Start, Plan)

I tried this from 1..8 and got:

[1,2,7,7]
[1,3,4,6]
[1,6,6,8]
[3,3,8,8]

These happen to be some of the hardest 24 puzzles known, though not all of them. Note this is assuming that (X + Y) and (Y + X) are different solutions. If we say they're the same (by writing A = $(X + Y), X <= Y in our action) then we get a lot more puzzles, many of which are considered "easy". Other "hard" things we can look for include plans that require fractions:

plan_with_no_fractions(S, P) => 
  best_plan_nondet(S, 4, P)
  , not(
    member(X, P),
    round(apply(X)) =\= X
  ).

% insert `not plan...` in valid24 as usual

Finally, we could try seeing if a negative number is required:

plan_with_no_negatives(S, P) => 
  best_plan_nondet(S, 4, P)
  , not(
    member(X, P),
    apply(X) < 0
  ).

Interestingly this one returns no solutions, so you are never required to construct a negative number as part of a standard 24 puzzle.


  1. The code below is different from the old book version, as it uses more fancy logic programming features that aren't good in learning material. 

  2. increasing is a constraint predicate. We could alternatively write sorted, which is a Picat logical predicate and must be placed after solve. There doesn't seem to be any efficiency gains either way. 

  3. I don't know what the standard is in Picat, but in Prolog, the convention is to use \+ instead of not. They mean the same thing, so I'm using not because it's clearer to non-LPers. 

Modeling Awkward Social Situations with TLA+

14 May 2025 at 16:02

You're walking down the street and need to pass someone going the opposite way. You take a step left, but they're thinking the same thing and take a step to their right, aka your left. You're still blocking each other. Then you take a step to the right, and they take a step to their left, and you're back to where you started. I've heard this called "walkwarding".

Let's model this in TLA+. TLA+ is a formal methods tool for finding bugs in complex software designs, most often involving concurrency. Two people trying to get past each other just also happens to be a concurrent system. A gentler introduction to TLA+'s capabilities is here, an in-depth guide teaching the language is here.

The spec

---- MODULE walkward ----
EXTENDS Integers

VARIABLES pos
vars == <<pos>>

Double equals defines a new operator, single equals is an equality check. <<pos>> is a sequence, aka array.

you == "you"
me == "me"
People == {you, me}

MaxPlace == 4

left == 0
right == 1

I've gotten into the habit of assigning string "symbols" to operators so that the compiler complains if I misspelled something. left and right are numbers so we can shift position with right - pos.

direction == [you |-> 1, me |-> -1]
goal == [you |-> MaxPlace, me |-> 1]

Init ==
  \* left-right, forward-backward
  pos = [you |-> [lr |-> left, fb |-> 1], me |-> [lr |-> left, fb |-> MaxPlace]]

direction, goal, and pos are "records", or hash tables with string keys. I can get my left-right position with pos.me.lr or pos["me"]["lr"] (or pos[me].lr, as me == "me").

Juke(person) ==
  pos' = [pos EXCEPT ![person].lr = right - @]

TLA+ breaks the world into a sequence of steps. In each step, pos is the value of pos in the current step and pos' is the value in the next step. The main outcome of this semantics is that we "assign" a new value to pos by declaring pos' equal to something. But the semantics also open up lots of cool tricks, like swapping two values with x' = y /\ y' = x.

TLA+ is a little weird about updating functions. To set f[x] = 3, you gotta write f' = [f EXCEPT ![x] = 3]. To make things a little easier, the rhs of a function update can contain @ for the old value. ![me].lr = right - @ is the same as right - pos[me].lr, so it swaps left and right.

("Juke" comes from here)

Move(person) ==
  LET new_pos == [pos[person] EXCEPT !.fb = @ + direction[person]]
  IN
    /\ pos[person].fb # goal[person]
    /\ \A p \in People: pos[p] # new_pos
    /\ pos' = [pos EXCEPT ![person] = new_pos]

The EXCEPT syntax can be used in regular definitions, too. This lets someone move one step in their goal direction unless they are at the goal or someone is already in that space. /\ means "and".

Next ==
  \E p \in People:
    \/ Move(p)
    \/ Juke(p)

I really like how TLA+ represents concurrency: "In each step, there is a person who either moves or jukes." It can take a few uses to really wrap your head around but it can express extraordinarily complicated distributed systems.

Spec == Init /\ [][Next]_vars

Liveness == <>(pos[me].fb = goal[me])
====

Spec is our specification: we start at Init and take a Next step every step.

Liveness is the generic term for "something good is guaranteed to happen", see here for more. <> means "eventually", so Liveness means "eventually my forward-backward position will be my goal". I could extend it to "both of us eventually reach our goal" but I think this is good enough for a demo.
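
If I did want the stronger property, a minimal sketch (my phrasing, not part of the spec above) would quantify over both people; checking it would also need fairness for both people, which comes up below:

Liveness == \A p \in People: <>(pos[p].fb = goal[p])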

Checking the spec

Four years ago, everybody in TLA+ used the toolbox. Now the community has collectively shifted over to using the VSCode extension.1 VSCode requires we write a configuration file, which I will call walkward.cfg.

SPECIFICATION Spec
PROPERTY Liveness

I then check the model with the VSCode command TLA+: Check model with TLC. Unsurprisingly, it finds an error:

(screenshot of the TLC error)

The reason it fails is "stuttering": I can get one step away from my goal and then just stop moving forever. We say the spec is unfair: it does not guarantee that if progress is always possible, progress will be made. If I want the spec to always make progress, I have to make some of the steps weakly fair.

+ Fairness == WF_vars(Next)

- Spec == Init /\ [][Next]_vars
+ Spec == Init /\ [][Next]_vars /\ Fairness

Now the spec is weakly fair, so someone will always do something. New error:

\* First six steps cut
7: <Move("me")>
pos = [you |-> [lr |-> 0, fb |-> 4], me |-> [lr |-> 1, fb |-> 2]]
8: <Juke("me")>
pos = [you |-> [lr |-> 0, fb |-> 4], me |-> [lr |-> 0, fb |-> 2]]
9: <Juke("me")> (back to state 7)

In this failure, I've successfully gotten past you, and then spend the rest of my life endlessly juking back and forth. The Next step keeps happening, so weak fairness is satisfied. What I actually want is for my Move and my Juke to each be weakly fair, independently of each other.

- Fairness == WF_vars(Next)
+ Fairness == WF_vars(Move(me)) /\ WF_vars(Juke(me))

If my liveness property also specified that you reached your goal, I could instead write \A p \in People: WF_vars(Move(p)) etc. I could also swap the \A with a \E to mean at least one of us is guaranteed to have fair actions, but not necessarily both of us.
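
As a sketch, that quantified version of the fairness (my own phrasing, assuming we also keep Juke weakly fair for everyone) would look like:

Fairness == \A p \in People: WF_vars(Move(p)) /\ WF_vars(Juke(p))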

New error:

3: <Move("me")>
pos = [you |-> [lr |-> 0, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
4: <Juke("you")>
pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
5: <Juke("me")>
pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 1, fb |-> 3]]
6: <Juke("me")>
pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
7: <Juke("you")> (back to state 3)

Now we're getting somewhere! This is the original walkwarding situation we wanted to capture. We're in each other's way, then you juke, but before either of us can move I juke too, and then we each juke back. We can repeat this forever, trapped in a social hell.

Wait, but doesn't WF(Move(me)) guarantee I will eventually move? Yes, but only if a move is permanently available. In this case, it's not permanently available, because every couple of steps it's made temporarily unavailable.

How do I fix this? I can't add a rule saying that we only juke if we're blocked, because the whole point of walkwarding is that we're not coordinated. In the real world, walkwarding can go on for agonizing seconds. What I can do instead is say that Liveness holds as long as Move is strongly fair. Unlike weak fairness, strong fairness guarantees something happens if it keeps becoming possible, even with interruptions.

Liveness == 
+  SF_vars(Move(me)) => 
    <>(pos[me].fb = goal[me])

This makes the spec pass. Even if we weave back and forth for five minutes, as long as we eventually pass each other, I will reach my goal. Note we could also do this by making Move strongly fair in Fairness instead, which is preferable if we have a lot of different liveness properties to check (sketched just below).
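
A sketch of that alternative (my wording, not the original spec): keep Liveness as a bare eventually-property and move the strong fairness into Fairness instead.

Fairness == SF_vars(Move(me)) /\ WF_vars(Juke(me))

Liveness == <>(pos[me].fb = goal[me])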

A small exercise for the reader

There is a presumed invariant that is violated. Identify what it is, write it as a property in TLA+, and show the spec violates it. Then fix it.

Answer (in rot13): Gur vainevnag vf "ab gjb crbcyr ner va gur rknpg fnzr ybpngvba". Zbir thnenagrrf guvf ohg Whxr qbrf abg.

More TLA+ Exercises

I've started work on an exercises repo. There's only a handful of specific problems now but I'm planning on adding more over the summer.


  1. learntla is still on the toolbox, but I'm hoping to get it all moved over this summer. 

Write the most clever code you possibly can

8 May 2025 at 15:04

I started writing this early last week but Real Life Stuff happened and now you're getting the first draft late this week. Warning, unedited thoughts ahead!

New Logic for Programmers release!

v0.9 is out! This is a big release, with a new cover design, several rewritten chapters, online code samples and much more. See the full release notes at the changelog page, and get the book here!

The new cover! It's a lot nicer

Write the cleverest code you possibly can

There are millions of articles online about how programmers should not write "clever" code, and instead write simple, maintainable code that everybody understands. Sometimes the example of "clever" code looks like this (src):

# Python

p=n=1
exec("p*=n*n;n+=1;"*~-int(input()))
print(p%n)

This is code-golfing, the sport of writing the most concise code possible. Obviously you shouldn't run this in production for the same reason you shouldn't eat dinner off a Rembrandt.

Other times the example looks like this:

def is_prime(x):
    if x == 1:
        return False
    return all([x%n != 0 for n in range(2, x)])

This is "clever" because it uses a single list comprehension, as opposed to a "simple" for loop. Yes, "list comprehensions are too clever" is something I've read in one of these articles.
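
For contrast, the "simple" version those articles prefer is presumably a plain loop, something like this (my paraphrase, not quoted from any particular article):

def is_prime(x):
    if x == 1:
        return False
    for n in range(2, x):
        if x % n == 0:
            return False
    return True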

I've also talked to people who think that datatypes besides lists and hashmaps are too clever to use, that most optimizations are too clever to bother with, and even that functions and classes are too clever and code should be a linear script.1 The common thread: clever code is anything using features or domain concepts we don't understand. Something that seems unbearably clever to me might be utterly mundane for you, and vice versa.

How do we make something utterly mundane? By using it and working at the boundaries of our skills. Almost everything I'm "good at" comes from banging my head against it more than is healthy. That suggests a really good reason to write clever code: it's an excellent form of purposeful practice. Writing clever code forces us to code outside of our comfort zone, developing our skills as software engineers.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you [will get excellent debugging practice at exactly the right level required to push your skills as a software engineer] — Brian Kernighan, probably

There are other benefits, too, but first let's kill the elephant in the room:2

Don't commit clever code

I am proposing writing clever code as a means of practice. At work you have coworkers who will not appreciate it if your code is too clever. Similarly, don't use too many innovative technologies. Don't put anything in production you are uncomfortable with.

We can still responsibly write clever code at work, though:

  1. Solve a problem in both a simple and a clever way, and then only commit the simple way. This works well for small scale problems where trying the "clever way" only takes a few minutes.
  2. Write our personal tools cleverly. I'm a big believer of the idea that most programmers would benefit from writing more scripts and support code customized to their particular work environment. This is a great place to practice new techniques, languages, etc.
  3. If clever code is absolutely the best way to solve a problem, then commit it with extensive documentation explaining how it works and why it's preferable to simpler solutions. Bonus: this potentially helps the whole team upskill.

Writing clever code...

...teaches simple solutions

Usually, code that's called too clever composes several powerful features together — the "not a single list comprehension or function" people are the exception. Josh Comeau's "don't write clever code" article gives this example of "too clever":

const extractDataFromResponse = (response) => {
  const [Component, props] = response;

  const resultsEntries = Object.entries({ Component, props });
  const assignIfValueTruthy = (o, [k, v]) => (v
    ? { ...o, [k]: v }
    : o
  );

  return resultsEntries.reduce(assignIfValueTruthy, {});
}

What makes this "clever"? I count eight language features composed together: entries, argument unpacking, implicit objects, splats, computed keys, ternaries, higher-order functions, and reductions. Would code that used only one or two of these features still be "clever"? I don't think so. These features exist for a reason, and oftentimes they make code simpler than not using them.

We can, of course, learn these features one at a time. Writing the clever version (but not committing it) gives us practice with all eight at once and also with how they compose together. That knowledge comes in handy when we want to apply a single one of the ideas.

I've recently had to do a bit of pandas for a project. Whenever I have to do a new analysis, I try to write it as a single chain of transformations, and then as a more balanced set of updates.
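
As a rough illustration of the two styles (a made-up dataframe and column names, not from any real project):

import pandas as pd

df = pd.DataFrame({"region": ["a", "a", "b"], "sales": [10, 12, 7], "returns": [1, 0, 2]})

# One chain of transformations...
summary = (
    df
    .assign(net=lambda d: d["sales"] - d["returns"])
    .groupby("region", as_index=False)
    .agg(total_net=("net", "sum"))
    .sort_values("total_net", ascending=False)
)

# ...versus the same analysis as a more balanced set of updates.
df2 = df.copy()
df2["net"] = df2["sales"] - df2["returns"]
summary2 = df2.groupby("region", as_index=False).agg(total_net=("net", "sum"))
summary2 = summary2.sort_values("total_net", ascending=False)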

...helps us master concepts

Even if the composite parts of a "clever" solution aren't by themselves useful, it still makes us better at the overall language, and that's inherently valuable. A few years ago I wrote Crimes with Python's Pattern Matching. It involves writing horrible code like this:

from abc import ABC

class NotIterable(ABC):

    @classmethod
    def __subclasshook__(cls, C):
        return not hasattr(C, "__iter__")

def f(x):
    match x:
        case NotIterable():
            print(f"{x} is not iterable")
        case _:
            print(f"{x} is iterable")

if __name__ == "__main__":
    f(10)
    f("string")
    f([1, 2, 3])

This composes Python match statements, which are broadly useful, and abstract base classes, which are incredibly niche. But even if I never use ABCs in real production code, it helped me understand Python's match semantics and Method Resolution Order better.

...prepares us for necessity

Sometimes the clever way is the only way. Maybe we need something faster than the simplest solution. Maybe we are working with constrained tools or frameworks that demand cleverness. Peter Norvig argued that design patterns compensate for missing language features. I'd argue that cleverness is another means of compensating: if our tools don't have an easy way to do something, we need to find a clever way.

You see this a lot in formal methods like TLA+. Need to check a hyperproperty? Cast your state space to a directed graph. Need to compose ten specifications together? Combine refinements with state machines. Most difficult problems have a "clever" solution. The real problem is that clever solutions have a skill floor. If normal use of the tool is at difficulty 3 out of 10, then basic clever solutions are at 5 out of 10, and it's hard to jump those two steps in the moment you need the cleverness.

But if you've practiced with writing overly clever code, you're used to working at a 7 out of 10 level in short bursts, and then you can "drop down" to 5/10. I don't know if that makes too much sense, but I see it happen a lot in practice.

...builds comradery

On a few occasions, after getting a pull request merged, I pulled the reviewer over and said "check out this horrible way of doing the same thing". I find that as long as people know they're not going to be subjected to a clever solution in production, they enjoy seeing it!

Next week's newsletter will probably also be late, after that we should be back to a regular schedule for the rest of the summer.


  1. Mostly grad students outside of CS who have to write scripts to do research. And more than one data scientist. I think it's correlated with using Jupyter. 

  2. If I don't put this at the beginning, I'll get a bajillion responses like "your team will hate you" 

What we learnt from the CIA Masterclass

By: Boy Boy
28 June 2025 at 14:03


Try Odoo For Free Today https://www.odoo.com/r/6vTf

Go to https://www.patreon.com/Boy_Boy for exclusive videos

Follow us on twitter: https://twitter.com/BoyBoy_Official

Thanks Ostonox for the edit: https://twitter.com/ostonox

I Tried To Make Something In America (The Smarter Scrubber Experiment) - Smarter Every Day 308

8 June 2025 at 19:52


Get a Smarter Scrubber Here: http://smarterscrubber.com
Interested in Wholesale, or helping us tell the story? Here's a link:
https://forms.gle/XFrLTa5b8kxSvPnu8

I would like to thank the Patrons of Smarter Every Day. They knew about this project early and helped make it happen.
http://www.patreon.com/smartereveryday

John is a great Dad and a good dude. Check out JJGeorge here:
https://www.jjgeorgestore.com/

Our goal is to make these things 100% in America. We're going to have to build some machines in order to do this.
Your support is appreciated.

Mantle's 3D Printing for injection molds is a technology I'm very excited to explore.
A huge thanks to Ted for participating!
Here's a video on what they do: https://www.youtube.com/watch?v=uywWr6-nHF4

A HUGE thanks to our Injection Molder Chris Robson, with tons of work from Jeremy:
http://www.robsonco.com/

T&C Metal Stamping (Ask to speak to Weston and tell him Destin sent you!)
https://www.tandcstamping.com/

Search Engine Podcast:
Episode "The Puzzle of the All-American BBQ Scrubber"
https://www.searchengine.show/the-puzzle-of-the-all-american-bbq-scrubber/

Check out Jeremy Fielding's Channel
https://www.youtube.com/@Jeremy_Fielding

Check out Tim Cook's comments on China at the Fortune Global Forum:
https://www.youtube.com/watch?v=_ng8xQ-SNGc

~~~~~~~~~~~~~~~~~~~~~~~~~~~~
GET SMARTER SECTION

I recommend learning about the Bretton Woods Accords
https://www.federalreservehistory.org/essays/bretton-woods-created

Read about the Bretton Woods system
https://en.wikipedia.org/wiki/Bretton_Woods_system

~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Smarter Every Day on Patreon
http://www.patreon.com/smartereveryday

Ambiance, audio and musicy things by: Gordon McGladdery
https://www.ashellinthepit.com/
http://ashellinthepit.bandcamp.com/

Warm Regards,

Destin

The talk I almost didn't give in Washington

23 May 2025 at 16:05


This is a talk I gave at the ARPA-E summit in Washington D.C on March 17th 2025.

If you want to see my new Nebula Original movie 17 Pages you can use this link right here: https://go.nebula.tv/17pages?ref=bobbybroccoli

My twitter profile: https://x.com/BobbyBroccole

Further reading:
Mark Bourrie - Kill the Messengers: Stephen Harper's Assault on Your Right to Know
Chris Turner - The War on Science: Muzzled Scientists and Wilful Blindness in Stephen Harper's Canada
https://www.cbc.ca/news/health/second-opinion-scientists-muzzled-1.4588913
Bringing Evidence Back from the Dead: A History of Interference in Science in Canada
https://www.corporateknights.com/workplace/why-are-canadian-scientists-still-being-muzzled/
https://www.thestar.com/news/canada/that-s-no-way-to-treat-a-library-scientists-say/article_0093dddd-a58e-5419-9ddc-eb603c65eac3.html
https://evidencefordemocracy.ca/the-muzzling-of-government-scientists-a-memoir/
https://thenarwhal.ca/silencing-scientists-threatens-evidence-based-decision-making/
Defrosting Public Science – survey by The Professional Institute of the Public Service of Canada
https://academicmatters.ca/harpers-attack-on-science-no-science-no-evidence-no-truth-no-democracy/
https://www.smithsonianmag.com/science-nature/canadian-scientists-open-about-how-their-government-silenced-science-180961942/

0:00 The Background
5:20 My talk in Washington
25:43 My new doc 17 Pages

When Building a Brand-New City Doesn’t Go as Planned

8 June 2025 at 13:00


Check out Samsung's Private City (Nebula Plus Video):
https://nebula.tv/videos/notjustbikes-samsungs-private-city

Want to support this channel? Sign up to Nebula for only $36/year with this link:
https://go.nebula.tv/notjustbikes

Nebula gift cards (now with iDEAL):
https://gift.nebula.tv/notjustbikes

Watch this video ad-free and sponsor-free on Nebula:
https://nebula.tv/videos/notjustbikes-when-building-a-brandnew-city-doesnt-go-as-planned

Patreon: https://patreon.com/notjustbikes
Mastodon: @notjustbikes@notjustbikes.com
NJB Live (my live-streaming channel): https://youtube.com/@njblive

---
Relevant videos
Crossing the Street Shouldn't Be Deadly (but it is)
https://nebula.tv/videos/notjustbikes-crossing-the-street-shouldnt-be-deadly-but-it-is
https://youtu.be/_ByEBjf9ktY

The Wrong Way to Set Speed Limits
https://nebula.tv/videos/not-just-bikes-the-wrong-way-to-set-speed-limits-st06
https://youtu.be/bglWCuCMSWc

---
References & Further Reading

https://en.wikipedia.org/wiki/Songdo
https://en.wikipedia.org/wiki/Sanbon
https://en.wikipedia.org/wiki/Pangyo,_Seongnam
https://en.wikipedia.org/wiki/Bundang
https://en.wikipedia.org/wiki/Dongtan,_Hwaseong
https://en.wikipedia.org/wiki/Great_Train_eXpress

Developer Feuds With Korean Partner Over Busted ‘Smart’ City
https://www.wsj.com/articles/developer-feuds-with-korean-partner-over-busted-smart-city-11560261729

The vast majority of the content in this video was filmed on location by Not Just Bikes, with some stock footage licensed from Getty Images

No generative AI or AI voices were used in the making of this video

---
Chapters
0:00 Intro
1:23 Background
1:55 Songdo
9:51 Sanbon
14:00 Pangyo
17:33 Bundang
20:15 Dongtan
24:47 Concluding thoughts
25:27 Samsung Digital City
25:58 How to support this channel

They Tore Down a Highway and Made it a River (and traffic got better)

25 May 2025 at 13:26


Visit https://80000hours.org/notjustbikes for free advice and information about finding a career with a positive impact on the world.

Watch this video ad-free and sponsor-free on Nebula: https://nebula.tv/videos/notjustbikes-they-tore-down-a-highway-and-made-it-a-river

Patreon: https://patreon.com/notjustbikes
Mastodon: @notjustbikes@notjustbikes.com
NJB Live (my live-streaming channel): https://youtube.com/@njblive

---
Relevant videos
More Lanes are (Still) a Bad Thing
https://nebula.tv/videos/notjustbikes-more-lanes-are-still-a-bad-thing
https://youtu.be/CHZwOAIect4

---
References & Further Reading

https://openknowledge.worldbank.org/entities/publication/20d568c1-f55a-5c0f-9d43-4bb7b89dcf4f
https://seoulsolution.kr/sites/default/files/policy/%5BEN%5DCheong%20Gye%20Cheon%20Restoration%20Project.pdf
https://vimeo.com/232410052
https://globaldesigningcities.org/publication/global-street-design-guide/streets/special-conditions/elevated-structure-removal/case-study-cheonggyecheon-seoul-korea/
https://issuu.com/ann1224/docs/cheonggyecheon_restoration-_ank
https://en.namu.wiki/w/%EC%B2%AD%EA%B3%84%EC%B2%9C
https://www.seoulsolution.kr/sites/default/files/policy/%5BEN%5DCheong%20Gye%20Cheon%20Restoration%20Project.pdf
https://www.si.re.kr/sites/default/files/2006-R-34_0.pdf
https://www.sciencedirect.com/science/article/abs/pii/S0967070X12000108
https://www.theguardian.com/world/2025/jan/17/seoul-cheonggyecheon-motorway-turned-into-a-stream
https://thebentway.ca/stories/cheonggyecheon-rip-it-up-and-start-again/

청계천, 과거와 현재를 기억하다ㅣKUTV 수습다큐
https://www.youtube.com/watch?v=V2fUAp2cNvY

All that remains of Jonchigyogak, the Cheonggye Expressway Overpass on the Cheonggyecheon Stream
The Seoul Guide
https://youtu.be/eSUX57eZmms
Seoul’s Viewpoint: Seoullo 7017
https://www.youtube.com/watch?v=VrLQfxeuQfI

Historical photos of Cheonggyecheon
朴昌根:《解读汉江奇迹》 同济大学出版社 ISBN:978 756 084 7979,

Public Domain, https://commons.wikimedia.org/w/index.php?curid=53245569
By 대한민국 국가기록원 -
https://www.archives.go.kr/next/search/searchTotalUp.do?selectSearch=1&upside_query=%EC%B2%AD%EA%B3%84%EC%B2%9C+%EA%B3%A0%EA%B0%80%EB%8F%84%EB%A1%9C#none, KOGL Type 1,
https://commons.wikimedia.org/w/index.php?curid=115743685

Seoul Museum of History
https://museum.seoul.go.kr/archive/NR_index.do

Historical Newsreels
https://www.ehistory.go.kr

http://www.koreaphotonews.co.kr

The number of references far exceeds the maximum length that YouTube allows in descriptions, but you can access the full list of references on Nebula or at this link:
https://notjustbikes.com/references/cheonggyecheon.txt

The vast majority of the content in this video was filmed on location by Not Just Bikes, with some stock footage licensed from Getty Images

No generative AI or AI voices were used in the making of this video

---
Chapters
0:00 Intro
4:03 Cheonggyecheon history
5:19 Gwanghwamun square
6:18 Cheonggyecheon stream
8:40 Safety & flood prevention
9:27 Accessibility
10:17 Urban places are better with fewer cars
11:55 Seoullo 7017
16:52 Urban highways need to be removed
17:53 Public transit connectivity
18:37 These projects didn't need to be this nice
19:16 Concluding thoughts
20:27 Outro & 80,000 Hours

The Absolute Best Transportation for Cities

4 May 2025 at 13:01


Use code notjustbikes at the link below to get 60% off an annual Incogni plan: https://incogni.com/notjustbikes

Watch this video ad-free and sponsor-free on Nebula:
https://nebula.tv/videos/notjustbikes-the-absolute-best-transportation-for-cities

Buy a Nebula gift card (now with iDEAL!): https://gift.nebula.tv/notjustbikes

Patreon: https://patreon.com/notjustbikes
Mastodon: @notjustbikes@notjustbikes.com
NJB Live (my live-streaming channel): https://youtube.com/@njblive

----
Relevant Videos

America Always Gets This Wrong (when building transit)
https://nebula.tv/videos/notjustbikes-america-always-gets-this-wrong-when-building-transit
https://youtu.be/MnyeRlMsTgI

I Visited the World's Busiest Train Station
https://nebula.tv/videos/notjustbikes-i-visited-the-worlds-busiest-train-station
https://youtu.be/6dKiEY0UOtA

The Secret to Japan's Great Cities
https://nebula.tv/videos/notjustbikes-the-secret-to-japans-great-cities
https://youtu.be/jlwQ2Y4By0U

Why American Buses Are Just Worse (RMTransit)
https://nebula.tv/videos/rmtransit-why-north-american-buses-are-just-worse?ref=notjustbikes
https://www.youtube.com/watch?v=U3qeYRI34C8

When Transit isn't Built to be Transit (with RMTransit)
The Urbanist Agenda Podcast
https://www.youtube.com/watch?v=jgc_c2E4nFk

---
References & Further Reading

https://www.sparvagnsstaderna.se/en/tramways/tramways-are-environmentally-friendly

https://www.railjournal.com/passenger/light-rail/amsterdam-urban-rail-switches-to-renewable-energy/

https://www.fareast.mobi/en/brt/risks/BRT-Project-Risks-What-Could-Go-Wrong
https://streets.mn/2024/03/12/bus-rapid-transit-creep/

https://itdp.org/library/standards-and-guides/the-bus-rapid-transit-standard/
https://itdp.org/library/standards-and-guides/the-bus-rapid-transit-standard/the-scorecard/

https://web.archive.org/web/20170611182254/http://www.cbc.ca/news/canada/toronto/queen-street-without-streetcars-the-city-might-give-it-a-look-1.4152158
https://stevemunro.ca/2017/06/14/the-cost-of-running-the-queen-car/

An American Pickup in Europe
https://www.reddit.com/r/fuckcars/comments/vm8hu3/an_american_pickup_in_europe/

Bussen, metro’s en trams staan in grote steden 110 seconden stil uit protest tegen ov-bezuinigingen
https://www.volkskrant.nl/economie/bussen-metro-s-en-trams-staan-in-grote-steden-110-seconden-stil-uit-protest-tegen-ov-bezuinigingen~b82891e8/


Incogni
National Public Data Breach
https://www.troyhunt.com/inside-the-3-billion-people-national-public-data-breach/
https://x.com/vxunderground/status/1797047998481854512

Thanks to Extra Credits for the quote read!

The majority of the content in this video was filmed on location by Not Just Bikes, with some stock footage licensed from Getty Images

No generative AI or AI voices were used in the making of this video

---
Chapters
0:00 Intro
1:21 Why trams are great
6:56 Our ultimate goal is great urban places
8:37 Trams as a walking accelerator
9:35 Trams encourage development
11:59 Trams make urban places better
12:58 Trams improve pedestrianised places
14:52 Metros also fit well in great urban places
16:05 Streetcars were replaced with subways
17:42 Trams and metros have different purposes
19:21 Trams can be faster than metros
21:01 Buses are not a replacement for trams
23:45 Trams are better for cycling
25:16 BRTs are problematic
27:26 Your city should not be building a BRT
29:49 Trams are not "in the way"
31:16 We need to properly fund public transit
32:51 Toronto streetcars are terrible
33:25 City with good trams reap the benefits
33:29 Outro and Incogni Sponsorship

Workshop: The Gray Scott School 2025 @ Slovenia

23 June 2025 at 10:00

Video overview

Overview: LAPP, as part of the ESCAPE Collaboration work programme and in collaboration with the CC-FR Competence Centre, is organizing the third Gray Scott summer school from 23 June to 4 July 2025. This summer school on High Performance Computing, in a unique format and entirely free of charge, will be dedicated to programming and optimization on heterogeneous architectures.

The school will cover the optimisation of computations on different types of hardware (CPU, GPU), presenting their respective characteristics, architectures and bottlenecks. It will cover generic optimisation methods applicable to all types of hardware, as well as the various libraries, technologies and languages available to achieve the best possible performance, ideally the peak performance of the machine.

  • Hardware: CPU, GPU
  • Languages considered: C++17, C++20, CUDA, Fortran, Rust, Python, Julia
  • Libraries considered: SYCL, Eve, Numpy, cunumerics, legate, Jax, Thrust, cuPy, pycuda and PyTorch
  • Compilers considered: G++, Clang++, nvc++, gfortran, nvfortran, dpc++. 
  • Profiling tools: Valgrind, Maqao, Perf, NSight, Malt and NumaProf

All the methods will be illustrated on simple examples, such as Hadamard products, reductions, barycentre calculations and matrix products, before being applied to a single problem: the simulation of a Gray Scott reaction.

This problem is simple enough to be understood quickly and complex enough for compilers to have difficulty optimising it without help. Each method will be broken down into a simple version, using default options, and one or more advanced versions, which will allow their advantages and disadvantages to be discussed and quantified.
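
For readers who haven't met the model before, a minimal NumPy sketch of a single Gray-Scott time step (my own illustration with standard textbook parameters, not the school's reference code) looks roughly like this:

import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    # 5-point Laplacian with periodic boundaries
    def lap(Z):
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
              + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    UVV = U * V * V  # elementwise (Hadamard-style) product
    U_next = U + dt * (Du * lap(U) - UVV + F * (1 - U))
    V_next = V + dt * (Dv * lap(V) + UVV - (F + k) * V)
    return U_next, V_next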

 

 

How to attend the Gray Scott School 2025:

NCC Slovenia is offering a distance-learning satellite in Ljubljana, one of several satellite sites taking place across Europe.

The satellite will run in a hybrid format: the speakers will be in France and will stream via Zoom, while our lecturers will be in the room to help participants with access and implementation. A Discord server will run alongside, where the discussion will take place.

Date and location:

  • 23.6., 24.6., 26.6 - 4.7.2025 Faculty of Mechanical Engineering, Aškerčeva c. 6, (Room II/3B)
  • 25. 6. 2025 IJS - Teslova ulica 30, 1000 Ljubljana,  

 


Bootcamp: Profiling AI Software

10 July 2025 at 07:00

Overview: Together with NVIDIA and the OpenACC organization, EuroCC2 will host a virtual Profiling AI Software Bootcamp on July 10, 2025.

The Profiling AI Software Bootcamp covers the process and tools needed to profile AI and machine learning applications to fully utilize high-performance systems. Attendees will learn to profile applications using NVIDIA Nsight™ Systems, a system-wide performance analysis tool; analyze and identify optimization opportunities; and improve the performance of applications to scale efficiently across systems with any number or size of CPUs and GPUs. Additionally, the bootcamp will walk through system topology to explain the dynamics of multi-GPU and multi-node connections and architectures.
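
For a flavor of what that workflow can look like (my own sketch, not bootcamp material): PyTorch code can be annotated with NVTX ranges so that training-loop phases show up as labelled spans in an Nsight Systems trace, e.g. when the script is launched under nsys profile.

import torch

def train_step(model, batch, optimizer, loss_fn):
    # Each NVTX range becomes a labelled span in the Nsight Systems timeline,
    # e.g. when run with: nsys profile python train.py
    torch.cuda.nvtx.range_push("forward")
    output = model(batch["x"])
    loss = loss_fn(output, batch["y"])
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("backward")
    loss.backward()
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("optimizer")
    optimizer.step()
    optimizer.zero_grad()
    torch.cuda.nvtx.range_pop()
    return loss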

People who complete the bootcamp are encouraged to apply to participate in the upcoming EuroCC AI Hackathon, which will be open for applications shortly.

Due to EuroCC2 regulations, generic or private email addresses cannot be accepted. Please use your official university or company email address to prove your affiliation when applying.

Application Deadline: 16th June 2025

Prerequisites: Basic experience with Python programming and PyTorch distributed training. 

Event format: This bootcamp will be hosted online in Central European Summer Time (CEST). All communication will be through Zoom, Slack and email.

Compute Resources: Attendees will be given access to a GPU cluster for the duration of the bootcamp. 


Workshop: CFD on HPC – OpenFOAM example

22 September 2025 at 07:00

Description: In this three-day course, the use of the OpenFOAM software package, currently the most developed open-source CFD system, will be demonstrated. As the name itself suggests, it is an open-source system that any user can enhance according to their needs. Initially, the use of ParaView, a graphical environment for visually reviewing and processing data from OpenFOAM, will be shown. This will be followed by an explanation of how the OpenFOAM environment works, with demonstrations of simple examples. Since the foundation of CFD is the mesh, the use of three open-source mesh generators will be demonstrated: GMSH, BlockMesh, and SnappyHexMesh. Subsequently, the application of OpenFOAM to various areas will be explained and demonstrated, including:

  • Fluid transport
  • Transient simulations
  • Transient data processing (animation, particles in flow)
  • Multiphase flows
  • Multi-region simulations
  • Running cases in an HPC system utilizing OpenFOAM's parallel capabilities

Difficulty: Advanced

Language: Depending on the applications received

Date and time:  16. 06. 2025 from 9:00 to 13:00
                17. 06. 2025 from 9:00 to 13:00
                18. 06. 2025 from 9:00 to 13:00

Max. number of participants: 30

Virtual location: ZOOM

Prerequisite knowledge: The basics of the Linux operating system and the basics of fluid mechanics and Python programming.

Target audience: The training is aimed at students and staff in academia and industry who want to learn more about the OpenFOAM open source CFD platform.

Workflow: The training is on-line, in the mornings. The interactive work is done via remote access to the HPC system at ULFS. 

After the workshop you will:

  • Be able to connect to HPC@ULFS with the NoMachine client and work in the HPC Linux environment
  • Understand the theoretical background of Computational Fluid Dynamics (CFD), especially the Finite Volume Method (FVM)
  • Be able to set up a CFD mesh using different open-source programs for CFD mesh design (OF – BlockMesh, GMSH)
  • Be able to set up a complete OF case (mesh, physical model, initial and boundary conditions, ...)
  • Be able to set up and run various OF cases in parallel on an HPC cluster
  • Be able to preview and post-process OF results

 

 

Organiser:

Lecturers:

Name: Dr. Aleksander Grm
Description: Aleksander Grm graduated with a Bachelor's degree in Physics from the Faculty of Mathematics and Physics at the University of Ljubljana. He then completed a Master's degree in Applied Mathematics at ICTP/SISA in Trieste, Italy. After the MSc, he continued his studies at the University of Kaiserslautern in Germany and obtained a PhD in Industrial Mathematics. After the PhD, he worked partly in academia and fully in industry. In 2014, he moved to the University of Ljubljana to work in basic and applied research and to teach young people mechanics and mathematics at the engineering level.
E-mail: aleksander.grm@fs.uni-lj.si 
Name: Dr. Pavel Tomšič
Description: He is a research assistant at ULFE and is well qualified in several HPC-related topics. He is actively involved in efforts to raise competencies in the field of supercomputing, such as the Partnership for Advanced Computing in Europe (PRACE). He is also coordinator of the Erasmus+ project SCtrain, a strategic partnership for the transfer of supercomputing knowledge between Slovenia, Austria, the Czech Republic and Italy. As part of the EuroHPC project for the establishment of European National Competence Centres in the field of supercomputing (EuroCC), he is the champion for Training and Skills Development for NCC Slovenia.
E-mail: pavel.tomsic@fs.uni-lj.si

Nanobodies Workshop: Binder Recovery by In Silico and In Vitro Panning

22 September 2025 at 08:00

Four-day intensive training for early-stage researchers:

  • When: 22. - 26. September 2025
  • Where: University of Nova Gorica, Gorica, Slovenia (campus Rožna Dolina: map)

This workshop is tailored for PhD students and early-career scientists eager to learn how to discover and engineer nanobodies — single-domain antibody fragments derived from camelid heavy-chain antibodies.

Participants will engage in a program consisting of lectures, hands-on laboratory sessions, and interactive computational tutorials. You will learn to identify nanobody candidates both in silico (using computational tools) and in vitro (through wet-lab panning techniques).

🔬 What You’ll Learn

  • How nanobody libraries are constructed, characterized, and screened

  • Computational workflows to model, rank, or design nanobody candidates

  • Hands-on training with phage display and wet-lab selection of antigen binders

🎯 Who Should Apply?

This workshop is open to PhD students with a basic background in biochemistry, molecular biology or related fields. It’s ideal for researchers ready to integrate nanobody discovery into their work—computationally, experimentally or both. We will select 12 participants based on their submitted abstract and motivation letter. To encourage learning from each other, all accepted participants are expected to deliver a 10-minute short presentation in the daily PhD2PhD sessions. Successful applicants will be notified and asked to pay a €300 registration fee (see FAQ for what is included).

🧠 PhD2PhD Session

Each registered participant presents either their current PhD project as it relates to nanobody research or a method/idea they plan to adopt. Talks are strictly 10 minutes, followed by a moderated discussion with peers and lecturers. Participation in these sessions is a requirement of registration.

🤝 Collaboration & Networking

The program includes informal discussion rounds, dedicated networking events, and social activities designed to foster connections and future collaborations.

💰 Funding & Support

This workshop is supported by:

          

 


 

A HOT MEDITERRANEAN, DEAD CORALS, FLOODED CITIES

26 June 2025 at 11:14

Meteorologists at Neurje.si report that the warming of the Mediterranean Sea is already reaching record levels. The sea has been warmer than average for some time, but these days it is entering an intense phase of warming. A persistent area of high pressure over the central Mediterranean, a lack of wind and few clouds are all contributing to this rise in sea-water temperature.

There is no doubt that, despite the influence of natural factors such as sea currents and atmospheric oscillations, the main driver of the elevated temperatures is the rising concentration of greenhouse gases in the atmosphere. The increased concentrations of these gases are the result of unrestrained fossil capitalism, which has freed thousands of tons of long-buried coal and oil from underground. Capital uses fossil fuels for the endless expansion of production and the growth of its profits. Despite the violent escalation of the climate crisis, which we are feeling on our own skin in these days of heat wave and which is counted in a growing death toll, this trend has only strengthened in recent years, for example with the reopening of coal power plants in Germany and the utterly reactionary moves of the current American administration (Drill, baby, drill!).

Increased concentrations of greenhouse gases in the atmosphere, and with them a warming climate, also bring catastrophic consequences for marine ecosystems. Increased absorption of carbon dioxide from the atmosphere acidifies the seas, making it harder for calcifying organisms such as corals, plankton and shellfish to build their shells and skeletons. This affects other marine organisms as well and has cascading effects through the food chain.

The oceans have been warming since the 1970s, as they absorb a large share of the excess heat accumulating in the atmosphere. Warmer seas drive migrations of marine organisms, which likewise alters food chains and disrupts ecosystems. In extreme cases, marine heat waves kill off the marine life of an entire area.

As sea water warms, its oxygen content also falls, which, combined with an increased amount of nutrients in the water, can cause algal blooms and oxygen shortages for marine organisms. This too can cause major disturbances in ecosystems.

These factors hit enclosed seas like the Mediterranean especially hard, and they do even more damage in the Adriatic basin, where water exchange is even more limited.

Rising sea temperatures do not only have devastating consequences for marine ecosystems; they also affect late-summer and early-autumn storms, downpours and, consequently, floods. The greater the temperature difference between warm, humid Mediterranean air and cooler air from the continent, the stronger these weather events become. We therefore already know that in a few months there will be a higher chance of severe storms and floods, such as those we watched in Spain last year and experienced in Slovenia two years ago.

Even if natural ocean-atmosphere oscillations cannot be ignored, it is clear that this rise in sea temperature is a consequence of climate change, driven by ever greater quantities of burned fossil fuels. These fuels power a system that enables the growth of capital, which cares nothing for collapsed marine ecosystems, for the impoverished millions of the global south, or for the mountains of corpses it marches over. It can only be stopped by an organized workers' movement capable of creating a world in which we decide democratically how to meet everyone's needs without destroying nature and, with it, the material conditions of our own existence. It is up to us to build that movement.


"IT WASN'T EASY, BUT IN THE END EVERYONE AGREED TO 5%"

25 June 2025 at 10:49

Photo: Peter Žiberna

Given recent trends of rearmament, war-mongering and militarization, this news surprises us less than the fact that we learned about the collective commitment to raise budget spending from a post Trump shared on social media. It is a message from NATO Secretary General Mark Rutte, who wanted to flatter the American president:

"Dear Mr President, dear Donald,

Congratulations and thank you for the decisive action on Iran, that was truly extraordinary and something no one else would have dared to do. It has made us all safer. Another great success awaits you tonight in The Hague. It wasn't easy, but in the end everyone signed on to the increase to 5%.

Donald, you have brought us to a turning point for America, Europe and the world. You will achieve what no American president has before."

In the joint statement adopted in The Hague, NATO members reportedly committed to raising defence spending to five percent of GDP by 2035, of which 1.5 percent is to go to defence-related investment. That latter share is supposedly what our own political leadership has been hinting at: that we will "trick" NATO and use the money to build hospitals and "other infrastructure". But is that true?

Rutte continues: "Europe will pay a lot, as it should, and this will be your victory." By publishing private messages, Trump merely exposed NATO's established decision-making practices. This is plain hegemony and the pursuit of the imperialist ambitions of the USA, which will use NATO to drain already shrunken public systems of social care, healthcare and education, and which will hit working people hardest.

And what about our own political leadership? Golob signs petitions for Palestine, expresses concern for Palestinian children and shakes hands with Palestinian athletes for the cameras. But when it is time to actually act and take responsibility, he tucks his tail between his legs and plays the helpless citizen instead of the prime minister. He has said that Slovenia thus "remains committed" to NATO's defensive (or, better said, murderous) posture. He evidently does not see, or does not want to see, the connection between the genocide in Palestine and the arming of Israel by NATO members.

Our ruling class acts entirely in step with the American and German ruling classes. There is no difference between them, so they will not convince us with the argument that the money we give NATO is not really going to military missions abroad, to Israeli bombs, and to the profits of the foreign arms industry.

Perhaps Tanja Fajon really believes that NATO serves defence, but we know it is a pact of violence, aggression and destruction. It invents invisible enemies to convince us we are in danger. But the greatest danger to us is remaining in the NATO pact. That is why building a strong anti-imperialist front is essential, so that together we can resist the military dictate of the war industry and war profiteers. The first step is Slovenia's exit from NATO. We took a step in that direction yesterday at the protest rallies (Ljudska fronta – vojni kontra) in Ljubljana and Koper.


AMERICA'S CRIMINAL ATTACK ON IRAN – LET US RESIST WAR!

22 June 2025 at 12:59

"I call on all sides to step back and return to the negotiating table. Iran must not be allowed to develop a nuclear weapon." That is how grotesquely Kaja Kallas, the European Union's High Representative for foreign policy, commented on the United States' criminal night-time bombing of Iran. For decades, Iran has been the victim of entirely unjustified economic sanctions by the countries of the imperialist core, and for the last ten days it has been the target of terroristic rocket and bomb attacks by the combined Israeli-American forces. Yet, in keeping with the logic of the ruling classes of the West, their faithful representative Kallas has cast it in the role of aggressor and a "factor of instability" in the Middle East.

This understanding, served up to us daily and en masse by the mainstream media and their (covert) agents of the imperialist camp, must be firmly opposed. Iran is not the aggressor. Israel, which has been carrying out a genocide against Palestinians for more than 20 months, launched extensive rocket and air attacks on Iran. Israel's goal is clear and multilayered:

1.) To cause as much death and devastation as possible through the bombing, thereby triggering a change of the Iranian government and installing a regime that will act in line with its expansionist interests and the interests of the entire collective West;

2.) To divert the attention of the world public from the genocide in Gaza. In the last ten days alone, the Israeli occupier has killed at least 450 Palestinians, many of them mowed down by Israeli bullets while they waited, starving, for food parcels;

3.) To keep Benjamin Netanyahu's government in power. All "coalition disagreements" were forgotten the moment the first Israeli planes took off on their bloody mission to the east.

Because Israel was unable to break Iranian resistance, and indeed the many waves of Iranian rocket attacks exposed the relative weakness of Israel's defence systems (reporting on this is strictly forbidden in Israel due to media censorship), the USA became directly involved in the aggression. The American attack is a criminal act, illegal even under the rules of international law, and means only a further escalation of the situation on the ground. Trump's explanation afterwards ("Now is the time for peace.") can only be understood as a poor adaptation of the now legendary slogan from the protests against the Vietnam War ("Bombing for peace is like fucking for virginity").

Such aggression must be resisted. We must show solidarity with the starved Palestinians and with the Iranians under attack. The American-Israeli coalition, backed by the member states of the NATO pact, is not attacking only them; with this forced conflict it is trying to suppress every effort around the world that moves toward peace, toward cooperation among the working people of the world, and toward a free Palestine.

We already have opportunities for such resistance and shows of solidarity this week:

Come, so that there will be as many of us as possible!

One world, one struggle!


Tomorrow, 23 June: RALLY FOR PEACE IN PALESTINE IN RUŠE!

22 June 2025 at 11:48

Link to the event.

Stop the genocide!
Stop the war!
Stop the starvation of children!
FOR PEACE!

On 23 June 2025 at 6 p.m., a Rally for Peace in Palestine will take place on Trg vstaje in Ruše. The protest is organized by Zveza borcev za vrednote NOB Ruše. Come and show that there has been enough of genocidal politics!


COME TO THE LJUDSKA FRONTA – VOJNI KONTRA (PEOPLE'S FRONT AGAINST WAR) PROTEST!

22 June 2025 at 10:05

Join us on 24 June at 17:00 on Kongresni trg in Ljubljana and at 18:00 on Titov trg in Koper for a protest against militarization and the NATO pact! Between 22 and 24 June, all the progressive forces of the old continent, from Thessaloniki to Stockholm, will be gathering in the streets to show their opposition to the militarist, rearmament and NATO policies that the ruling class will adopt in The Hague at the NATO summit.

The European Union and NATO are pushing us ever deeper into a war that benefits only war profiteers and the political interests of Western capital. Europe, and Slovenia with it, is sinking into militarization and a drastic rise in military spending. We must resist this!

The influence of the arms industry and of military policies, behind which stand the warmongers of the domestic and foreign ruling classes, keeps growing. Instead of a policy of peace, Slovenia today shows its solidarity by sending weapons to foreign fronts and by sharpening its military rhetoric. Our politicians are not working for peace but for war!

The EU has adopted an 800-billion-euro budget for military spending. We will be buying weapons at the expense of already crumbling social services! NATO and the European Union are pressuring member states to increase military spending, and are discussing a joint army and even the reintroduction of conscription.

Under the dictate of the NATO pact, Slovenia is already planning to raise its military budget to 1.4 billion euros, almost half a billion more than originally foreseen. But the insatiable NATO alliance demands even more from us: by 2030 they demand that we contribute as much as 5% of our GDP to armament. The money for weapons, tanks and warfare will be taken from the pockets of working people. Our schools, healthcare, care homes and pensions will suffer even more than they do now.

With our money, the Slovenian ruling class financially and politically supports both the war in Ukraine and the genocide against the Palestinians carried out by Israel. It is only a matter of time before they send our brothers and sisters to the front. We will not die for the interests of Western capital!

THEREFORE WE CALL:

  • For exit from the NATO pact!
  • Against militarization!
  • Against increasing military spending!
  • Against serving in armies that defend the interests of foreign capitalists!


A MARIBOR WOMAN AFTER THE PROTEST: "WE'VE GOT THIS"

21 June 2025 at 11:31

We are publishing a letter sent to us by Urška from Maribor, who attended Thursday's protest together with her mother.

People of Maribor,

thank you to everyone who came and publicly showed that you care. (Thanks also to everyone who shared, sponsored and spread the word about the protest, which matters a great deal.) A fun fact for everyone belittling our "poor" turnout: sometimes something big grows out of something small. And all it takes is just 3.5% of the population holding together (and I believe many people who have studied the psychology of crowds are aware of this). Sometimes things also need time to ripen, so don't worry. There is always potential for progress and growth; the question is whether we are able to see it and willing to give it our time.

We have been taught that we must often adapt, that the system simply is what it is, that we must make ourselves small, that it's fine if in the year 2025 everything is falling apart, if elderly residents at some distant stop wait 60-90 minutes in the sun with no access to water or a toilet, that it's fine that drivers have no time for a lunch break and grind away for hours in the heat on the bus, that it's fine that people with limited mobility have no access to the hospital (which should be the bare minimum), or students to their school, that the bus doors keep jamming, and so on... We are taught to accept this as our normal. Because that's just how it is, because we are supposed to be quiet, obedient, (insert adjective) ____. But sometimes all it takes is a few people who say: well, it doesn't have to be this way. Maybe it isn't fine. Maybe it can be different. Maybe one voice encourages two more. And maybe we have more influence than we imagine. Maybe we are an example to the young, who (despite their phones) are still watching us. And maybe we are stronger when we stand together. Who knows.

One more fun fact: every day we have the chance to create a better world for ourselves. We can just sit and grumble about the system, or we can actively create a new one. Every day each of us decides what choices to make. We can be part of the problem, or we can focus on solutions and on implementing new ideas, improving the system, creating something new. That also means not waiting for someone else to do things for us, because they quite obviously won't sort themselves out, whether it's education, healthcare, or anything else. Change is the only constant in life, and every day we decide for ourselves whether we will take an active part in it. Thank you once again. I am glad and grateful that Maribor is actively waking up. We've got this.

Urška from Maribor


Buses up – Arso out!

20 June 2025 at 15:20

Yesterday, a protest for better public transport took place on Trg svobode, organized by the Initiative for Better Public Transport in Maribor (Pobuda za boljši javni potniški promet v Mariboru). The protesters walked from Trg svobode toward the city hall and continued along Maistrova, Tyrševa and Slovenska streets. More than a hundred and fifty protesters loudly chanted, among other things: "buses up, Arso out," "fewer cars, more buses," "buses for the people, not for the mayor," "screw profit for the mayor's ass." At the pedestrian crossing in front of the city hall, a Marprom driver who was driving a bus at the time honked in solidarity.

The protesters demanded that:

  • public transport must be a high-quality alternative to the car,
  • service frequency must increase, including on weekends and in the evenings,
  • city and suburban transport must be connected,
  • the lines that used to connect key parts of the city effectively must be brought back,
  • Tezno must be directly connected to the UKC hospital,
  • passengers, workers and local communities must be involved in planning and changing the lines,
  • working conditions must improve for drivers and other public transport employees.

Let those "creatives" sit in a wheelchair and ride up the hill

Franc Žiberna from the Tezno city district said that of all Maribor residents, the people of Tezno have been hit hardest by the reduced frequency of bus lines G3 and P12 and by the cancellation or rerouting of line 1 Tezno (now G1). With the new G1 line, key stops were taken out of use: Ljubljanska II. gimnazija, ZD Magdalena, UKC and Glavni trg. "Here, and at all the dropped stops, the people preparing the changes gave no thought at all to the elderly, to mothers with prams, to people with limited mobility, or to the sick," he says. Žiberna was a heavy user of bus 1 Tezno, as it let him visit the clinics at UKC and Magdalena on his own. Now he no longer can.

Photo: Večer

Instead of a respectful answer, the people of Tezno received a comment that those headed for ZD Magdalena and UKC Maribor should simply walk from the stop at Europark. "Let these 'creative' people sit in a wheelchair and make their way from that stop under the Titov most bridge (the ramp is too steep) to the main entrance of UKC, or go back from the stop to the pedestrian overpass over Titova cesta (the turbo roundabout). The route is long, with steep slopes, and in bad weather impossible," he explained. "Because of the stubbornness and persistence of the mayor and the city council, the old G1 line is not coming back, full stop. But, dear Mr Mayor, such stubbornness will serve neither us bus passengers nor you, when you soon appear on the ballot." The people of Tezno demand the return of line G1.

Fredi Magdič, also from Tezno, said that the latest changes to the city bus lines have hurt the most vulnerable. "Instead of coming closer to our needs, the municipality followed only the logic of capital. They made access to shopping centres easier, while making the way to the hospital, schools, cultural institutions and cemeteries harder for many... Instead of a modern, effective and usable city public transport system we got a disconnected circus that in no way serves its purpose, which is to reduce the number of private cars in the city. On the contrary, it is increasing it."

Quality public transport benefits everyone

Maja Šnuderl from the movement Mladi za podnebno pravičnost Maribor stresses that high-quality, frequent and accessible public transport benefits all residents, not only those who use it. It reduces the number of cars, which means fewer traffic jams, less noise and air pollution, and less need for parking lots, which often stand on formerly green areas. Šnuderl is outraged by the reduced bus frequency: "On practically all lines the number of departures has fallen, most of all outside peak hours and on weekends. In Zrkovci people today have to wait as long as 90 minutes for a bus, in Limbuš 75 minutes, and line 12 to the Dobrava cemetery now runs only once an hour instead of every 20 minutes."

She called on the Ministry of the Environment, Climate and Energy not to raise ticket prices, since in a time of climate crisis and general cost-of-living pressure public transport should be made as accessible as possible and competitive with the car. "We need proper bus connections that serve not only schoolchildren during peak hours but also workers commuting to their jobs, and that give all users adequate mobility in the afternoon, in the evening and on weekends," Šnuderl concluded.

Luka Mofardin from Mreža za pravičen prehod (Network for a Just Transition) said that the climate crisis is not a distant threat: "We are feeling it already. Weather disasters, heat, ever worse working conditions. It shows itself in the cost of living, in destroyed homes and in growing insecurity." He pointed out that workers often lose hours every day just getting to work, because public transport simply doesn't exist or is bad. "That is why the just transition must be publicly oriented, community-led and built from below. And let it start here, with the fight for public transport that serves people and the environment, not capital. So today we are not protesting only against a bad transport system; we are protesting for a different world. A world in which we put the common good before private interest," he concluded.

Drivers were not included in the overhaul

A Marprom driver's speech was read out by the moderator; the driver remains anonymous. As he wrote, he cannot expose himself, because some colleagues have already been the targets of calls from the director warning them that it is not wise to speak against the municipality. The driver wrote that the whole line-overhaul project shows the arrogance of those responsible at the Maribor municipality: "They entrusted the overhaul to a faculty and to an outside planner, someone from Ljubljana, who has no idea about the needs of Maribor residents. Not even we drivers, who know the problems of city transport best, at least operationally, were included in the project."

The drivers warned the municipality that the line overhaul would require more buses, more drivers and a functioning fleet with working vehicles. They warned that some drivers would be dislocated and unable even to take a lunch or toilet break, because the lines and stops are laid out so that you are far away from everything. The new lines would require between 15 and 20 new buses and twice as many drivers, he reports. But the warnings fell on deaf ears.

The drivers want better working conditions, lines adapted to their users, and respect for the right to a meal break, to rest and to free weekends. They oppose operating at the edge of what is permissible and saving money at the expense of employees and passengers. Frequency must increase on all lines. In the speech, the driver asked passengers for understanding: the drivers are not to blame for broken air conditioning, delays, buses falling apart and doors that don't work.

Ana Onić, a retired Marprom worker, also spoke about working conditions at Marprom and the shrinking of public transport. She called for solidarity with the drivers, who she said do their best for passengers, but the problems the municipality has piled up are unfortunately not in their hands.

The minimum is not enough

Jerneja Breznik from the Ekosocialistična iniciativa Klas stressed that with several thousand signatures on three petitions, people have shown that this is not about individuals but about a shared demand for accessible transport for basic needs: the way to work, school, the hospital, the cemetery. "Cutting public services always comes at the expense of working people and worsens the quality of our lives," she said. Merely returning to the previous state is not enough. Public transport must not settle for a minimal standard, nor may it be subordinated to the logic of profit. City and suburban transport must be better connected. Public transport must be socialized, with passengers and workers deciding about it, and drivers must have better working conditions. "You are mistaken if you think we will be satisfied with cosmetic fixes. We do not accept ignorance and mockery, and we will not give up until public transport truly serves our needs."

The post Busi gor – Arso dol! first appeared on Rdeča Pesa.

STOP THE PENSION REFORM!

18 June 2025 at 16:18

The coalition wants to push through the Golob government's pension reform, which will hit hardest the workers with the lowest incomes and the most demanding jobs, as quickly as possible. With it, those in power are saving money on the backs of those who have the least, while capital remains completely untouched. Resistance to the harmful reform is growing and becoming ever more organised.

Today the Delavska koalicija (Workers' Coalition) presented at a press conference the next steps for triggering a referendum against the harmful pension reform proposal, which is expected to be passed before the start of the parliamentary recess. The alliance brings together militant trade unions and other organisations representing tens of thousands of workers from a wide range of sectors.

As they put it, “because of the new pension reform we will toil longer, and pensions will still be too low.” The new pension reform brings us:

Later retirement:

Pensions are already too low and workloads too high, yet the new reform proposes retirement only at 62 or even 67. Many workers barely hold out until 60, especially in physically demanding occupations such as construction, healthcare and industry. The government justifies the reform by claiming that “we live longer”, while overlooking the fact that social prosperity and productivity are far higher today than they once were.

An unfair pension calculation:

The new formula for calculating pensions will lower pensions for many workers: instead of counting only the best years, the pension will be calculated on the basis of 40 years of service (minus the 5 worst years) rather than the current 24 most favourable years. Since most people do not have high, stable wages, periods of lower income will also enter the calculation, which will pull the average down considerably. Precarious workers and everyone who has worked in lower-paid occupations will be hit especially hard.
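To see what the change in the averaging rule can do in practice, here is a minimal Python sketch. It compares only the two averaging rules named above (best 24 years versus a full career minus the 5 worst years); the earnings history, and the omission of valorisation and accrual rates, are our own illustrative assumptions, not the actual pension formula.

```python
# Illustrative sketch only: compares the two averaging rules described above.
# It deliberately ignores valorisation, accrual percentages and everything
# else in the real pension calculation.
def base_best_years(earnings, n_best=24):
    """Old-style base: average of the n_best highest-earning years."""
    return sum(sorted(earnings, reverse=True)[:n_best]) / n_best

def base_drop_worst(earnings, n_drop=5):
    """New-style base: average over the whole career minus the n_drop worst years."""
    kept = sorted(earnings, reverse=True)[:len(earnings) - n_drop]
    return sum(kept) / len(kept)

# Hypothetical 40-year career: low early wages, better ones later (EUR, net monthly).
career = [900] * 10 + [1200] * 15 + [1600] * 15

print(round(base_best_years(career)))   # ~1450
print(round(base_drop_worst(career)))   # ~1329, noticeably lower
```

With a career that starts on low or precarious wages, dropping only the five worst years pulls the pension base well below the best-24-years average, which is the effect described above.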

Lowering the value of pensions:

In the long run, the reform will erode the real value of pensions. Pensions are currently tied to wages, which grow faster than inflation. The reform wants to tie pensions primarily to inflation, which means that in 30 years as many as 80% of pensioners would receive less than the minimum wage. That is not dignified ageing, it is poverty.
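As a rough illustration of how the switch in indexation compounds over time, the sketch below assumes wages grow 3% a year and prices 2% a year; both rates are assumptions chosen for illustration, not figures from the article.

```python
# Illustrative only: the growth rates are assumptions, not data from the article.
years = 30
wage_growth, inflation = 1.03, 1.02   # nominal wage growth vs. price growth per year

pension = 100.0   # pension expressed as % of the average wage at retirement
wage = 100.0
for _ in range(years):
    pension *= inflation    # pension indexed to prices only
    wage *= wage_growth     # wages keep growing faster

print(f"after {years} years the pension is worth {100 * pension / wage:.0f}% "
      "of its initial share of the average wage")
# -> about 75% with these rates; a bigger wage-inflation gap erodes it faster
```

The exact numbers depend entirely on the assumed gap between wage growth and inflation, but the direction is the point: indexing to prices alone makes pensions fall steadily behind wages.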

It gives in to employers:

The reform does not foresee any increase in employers' contributions. After the reform, workers will still contribute almost twice as much as employers (15.5% versus 8.85%). Instead of everyone carrying the responsibility, the burden will once again fall on workers alone. The pension fund will not be strengthened; savings will be squeezed out of it, on the backs of workers.

It sidelines workers:

The reform has been put together without any serious involvement of workers or pensioners. The public debate was cut short and the decision-making pushed into the holiday season. The government avoids dialogue and, instead of seeking consensus, imposes harmful solutions.

The Workers' Coalition is already on the starting blocks for the referendum campaign. On their website www.stoppokojninski.si and via Facebook and Instagram they will announce when signature collection begins. At the same time they invite workers, pensioners and students to join them, so that together we can defend our pensions and put people's needs before the interests of capital.

We at Rdeča pesa have written about the pension reform several times; the key pieces for understanding why it is harmful can be found here:

https://rdecapesa.com/kako-dolgo-bomo-se-delali/

https://rdecapesa.com/mescev-mit-o-medgeneracijskem-konfliktu/

https://rdecapesa.com/pokojninska-reforma-in-demografsko-vprasanje/

https://rdecapesa.com/demografsko-vprasanje-2-del-ucinki-na-pretocni-sistem/

https://rdecapesa.com/za-dostojno-starost-vseh-ne-dobicek-kapitala/

https://rdecapesa.com/politicna-ekonomija-pokojninske-reforme/

@STOP pokojninski

The post STOP POKOJNINSKI REFORMI! first appeared on Rdeča Pesa.

IRANIAN WORKERS: “WE GET NOTHING FROM THE WAR!”

18 June 2025 at 07:55

“We, the working people of Iran – teachers, nurses, workers, pensioners – get nothing from war, militarism, bombing and imperialist policies. Ordinary people, above all the working class, pay the price of war with their lives, their health and their homes,” say workers' organisations and groups of retired workers in Iran in a joint public statement. Iranian workers have voiced sharp opposition to the military escalation in the region and condemned the still ongoing genocide in Gaza.

The signatories include the Syndicate of Workers of the Tehran and Suburbs Bus Company, the syndicate of workers of the Haft Tappeh sugar factory and an alliance of retirees. They stress that Iranian workers gain nothing from war – only destruction and deepening poverty.

The current unstable and dangerous situation in Iran and the Middle East calls for joint action. They sharply condemn the recent Israeli air strikes on infrastructure, residential areas and office buildings. They dismiss Israel's claims that it is not hostile to the Iranian people as nothing but propaganda. They point to the recent threat by the Israeli defence minister that “Tehran will burn”, and to the West's constant support for such military actions.

The workers are critical of the US and Israel, which are responsible for the genocide in Gaza and other crimes in the region. They condemn the silence of the United Nations and of international institutions. Capitalism and imperialism are to blame for wars, environmental collapse and human suffering, they write in the statement.

The Iranian working class gains nothing from war: “Economic sanctions, military budgets and repression have already produced hunger, death and displacement. War will only make this worse.”

They stress that they harbour no illusions about the Islamic Republic either. They describe the regime as repressive, adventurist and anti-worker. They criticise decades of suppression of workers' protests, the denial of trade-union rights, and the jailing and torture of labour activists.

“Our struggle is a social and class struggle. We rely on ourselves and continue our protests for bread, work and freedom. Our struggle is linked to workers and freedom-loving people around the world. Stop the war. Stop militarism. We demand an immediate ceasefire.”

The signatories call on trade unions, human rights groups, peace and environmental organisations and anti-war activists around the world to join their demand for an end to the bombing, the war crimes and the destruction of the environment – and to express solidarity with the people of Iran and the Middle East in their struggle for peace, justice and dignity.

The post IRANSKE DELAVKE IN DELAVCI: “OD VOJNE NIMAMO NIČ!” first appeared on Rdeča Pesa.

HOW MUCH LONGER WILL WE WORK?

16 June 2025 at 08:53

The most brutal measure, and the most harmful for working people, is raising the lower and upper retirement-age thresholds from 60 to 62 for those with 40 years of pension service without purchased years, and from 65 to 67 for those with at least 15 years of service. Systemically, the measure becomes even more radical once we consider that all other entitlements, for instance the right to a survivor's or disability pension, also move up by two years.

The government justifies the measure with two arguments. The first is that both our statutory retirement age and our data on effective exit from the labour market are among the lowest in the OECD. But let us not forget that Slovenia is among the EU countries with the highest levels of work intensity. In any case, from the standpoint of progressive politics, relatively early retirement is, globally speaking, an achievement of sorts: it means that Slovenia has not succumbed to the worst reforms of financialisation and of extending working life. Even the Ministry of Labour partly admits this, writing in its press handout: “This can of course be counted as an achievement, something positive for people – for as long as we can afford it.”

The statement is misleading in many respects: it assumes that ageing mechanically determines future expenditure, which we have already debunked in previous posts. It is also cynical, because in the field of armaments Slovenia has already negotiated an exemption from the fiscal rules in order to increase defence spending. The supposedly unbending laws of the fiscal hawks hold for pensioners and workers, but break at the glitter of the arms industry.

Let us also mention the other classic argument for raising the retirement age: rising life expectancy. The trouble with this argument is that OECD and Eurostat data, too, show that the growth of life expectancy is slowing. The OECD report “Pensions at a Glance 2023”, the main reference of bureaucratic apparatuses across Europe, finds much the same:

“Lives continue to lengthen and this trend is projected to go on, although the pace of improvement at older ages has slowed recently, particularly in light of COVID-19. In 2022, life expectancy at age 65 averaged 83.0 years for men and 86.2 years for women. /…/ On average across OECD countries, life expectancy at age 65 is projected to increase by 4.4 years among women and 4.9 years among men by 2065.”

That still sounds encouraging, but it is not very informative for policy-making. Life expectancy is probably not even the right indicator for setting the retirement age: healthy life years are far more relevant, and there the picture is drastically different. Over the last 10 years, healthy life years in Slovenia have ranged roughly between 59 and 69 for women and between 58 and 65 for men, depending on the social and economic climate. The latest available NIJZ data, from 2022, put healthy life years at 65 for men and 68.5 for women. The government's proposal would thus raise the upper retirement threshold a full two years above the Slovenian average for men, and probably above the average for all manual workers across the country.

Let us add that the Ministry has no data on future trends in healthy life years. Without speculating about those trends ourselves, we will only say that, given the fragility of the Slovenian healthcare system, rising absolute and relative poverty and forecasts of crisis, further growth in healthy life years is by no means guaranteed.

To conclude, let us sketch some of the consequences of this one measure for the state, for capital and for workers. By our rough estimates, raising the upper retirement-age threshold to 67 means that the two-year increase alone would raise the number of economically active people by at least 3 percent. We believe the measure could save the government at least 0.7 to 1 percentage point of Slovenian GDP over the next 10 years – though these are only simple guesses.
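For a sense of where an estimate like “at least 3 percent” can come from, here is a back-of-envelope sketch. The labour-force and cohort figures are round numbers we assume for illustration; they are not data from the article.

```python
# Back-of-envelope only; all figures below are round illustrative assumptions.
labour_force = 1_000_000   # very roughly, Slovenia's economically active population
cohort_size = 20_000       # people in one birth-year cohort still working near 65
extra_years = 2            # the upper threshold moves from 65 to 67

extra_workers = cohort_size * extra_years
print(f"{100 * extra_workers / labour_force:.1f}% more economically active people")
# -> 4.0% with these round numbers, the same order of magnitude as "at least 3%"
```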

From the state's point of view this is austerity; from capital's point of view it is an expansion of the pool of cheap labour. The data show that the wages of less educated workers stagnate or fall after the age of 54. The available SURS data, broken down by ten-year age group and level of education, show that net wages rise for all education levels up to age 54. After that, wage levels more or less stagnate up to age 65 and beyond for workers with low and average education, decline for those with secondary education, and rise only slightly for the more highly educated. By raising the retirement age, capital thus gains an additional pool for its reserve industrial army, one that is mostly cheaper (more vulnerable, with less bargaining power, and so on) than younger workers. What the consequences are for workers, we leave for our attentive readers to ponder.

The post KAKO DOLGO BOMO ŠE DELALI? first appeared on Rdeča Pesa.

Open-air reading room with retro computer magazines

By: Igor B
16 June 2025 at 08:25

On Summer Museum Night, admission to the museum's permanent exhibition “Kaj pa programje?” (What about software?) and to the exhibition “Pol-pismeni” is free.

Outside, the museum's open-air library will be set up by the newly acquired legendary Kiosk K67. In case of bad weather the reading room is cancelled.

The post Čitalnica na prostem ob retro računalniških revijah first appeared on Računalniški muzej.

Guided tour of the permanent exhibition (in English)

By: Igor B
16 June 2025 at 08:24

On the Summer Museum Night, we invite you to a guided tour of the permanent exhibition What about software? in English, starting at 7 PM. We will gather at the museum ticket desk, and the tour will last approximately 45 minutes.

On this occasion, an open-air museum library will also be set up next to the newly acquired legendary K67 kiosk. In case of bad weather, the reading corner will be cancelled.

The post Voden ogled stalne razstave (v angleškem jeziku) first appeared on Računalniški muzej.

Retro market at Summer Museum Night

By: Igor B
16 June 2025 at 08:24

You are warmly invited to the 3rd Retro Gaming Buy/Sell/Trade event in Slovenia! The event runs along the lines of a garage sale / flea market (a “Buy/Sell/Trade Event”). All platforms are welcome, from the ZX Spectrum and Atari to the Nintendo Switch 2 and PlayStation 5.

 

Information:

  • Toilets and a cloakroom are available in the museum basement (lockers that lock with a €1 coin, which you get back). Nothing can be stored in the museum outside these lockers.
  • It is very important that we do not disturb the neighbourhood (the apartment blocks and the other users of the Celovška 111 building), so that this kind of market in front of the museum leaves a good impression.
    Only the 35-metre stretch of the plaza in front of the museum and the neighbouring e-kolesar.si, inside the parking barriers marked with the museum logo, may be used.
  • Vehicle access is limited to the nearby car parks, e.g. in front of Kino Šiška (Trg prekomorskih brigad) and Parkirišče Šiška, Janez d.o.o., Celovška 135.
  • Unfortunately you cannot drive up to the location itself – driving and parking anywhere on the plaza and on the grass around the museum are strictly prohibited!
  • It is likewise strictly forbidden to park behind the museum in the yellow-marked spaces, which belong to the residents of the apartment block.

 

The fine print:

  • The Computer Museum only provides the space for the event. All transactions are between seller and buyer.
  • You are responsible for your own belongings; we accept no liability for lost or damaged property.
  • In case of bad weather the event is cancelled.

The post Retro tržnica na Poletno muzejsko noč first appeared on Računalniški muzej.

Guided tour of the permanent exhibition (in Croatian)

By: Igor B
16 June 2025 at 08:23

On the Summer Museum Night, we invite you to a guided tour of the permanent exhibition What about software? in Croatian, starting at 8 PM. We will gather at the museum ticket desk, and the tour will last approximately 45 minutes.

On this occasion, an open-air museum library will also be set up next to the newly acquired legendary K67 kiosk. In case of bad weather, the reading corner will be cancelled.

The post Voden ogled stalne razstave (v hrvaškem jeziku) first appeared on Računalniški muzej.

Guided tour of the permanent exhibition

By: Igor B
16 June 2025 at 08:23

On the Summer Museum Night, we invite you to a guided tour of the permanent exhibition What about software? in Slovenian, starting at 6 PM. We will gather at the museum ticket desk, and the tour will last approximately 45 minutes.

On this occasion, an open-air museum library will also be set up next to the newly acquired legendary K67 kiosk. In case of bad weather, the reading corner will be cancelled.

The post Voden ogled stalne razstave first appeared on Računalniški muzej.

Guided tour of the permanent exhibition (in Italian)

By: Igor B
16 June 2025 at 08:22

On the Summer Museum Night, we invite you to a guided tour of the permanent exhibition What about software? in Italian, starting at 9 PM. We will gather at the museum ticket desk, and the tour will last approximately 45 minutes.

On this occasion, an open-air museum library will also be set up next to the newly acquired legendary K67 kiosk. In case of bad weather, the reading corner will be cancelled.

The post Voden ogled stalne razstave (v italijanskem jeziku) first appeared on Računalniški muzej.

Youth Work 4.0: Artificial intelligence in practice – responsible and useful (BREŽICE)

By: gaja
3 June 2025 at 14:40

Together with the Office of the Republic of Slovenia for Youth (Urad RS za mladino), we are preparing five practical workshops on understanding and using artificial intelligence technologies in youth work. The workshops will be run by the Computer Museum (Računalniški muzej), which has been delivering workshops on understanding and using artificial intelligence since 2023. They will be an opportunity to meet, exchange experience and get an introductory look at tools and technologies that can help you in concrete work with young people. We want to encourage shared learning, help with the challenges we all face, and the search for fresh, jointly developed solutions for supporting young people through an increasingly digital adolescence.

You are encouraged to bring your own laptop; if you cannot, we will provide one.
Workshop goals:

  • Increase understanding of AI technology, its prevalence and the ways it is used
  • Raise awareness of its impact on the development of occupations and of digital-technology skills
  • Introduce a framework for critical thinking about the many aspects of AI use and its direct impact on young people's life situations
  • Equip youth workers with methods for encouraging young people to take informed, active positions on AI technologies and to practise active citizenship
  • Improve knowledge of, and the ability to use, practical AI tools for enrichment programmes, learning support and social integration
  • Introduce practical tools for easing the administrative burden of youth work
  • Encourage thinking about concrete uses of AI in youth work

Because of the nature of the workshops the number of participants is limited, so we suggest registering as soon as possible – link to the registration form.

You are warmly invited to take part, and to a mindful, responsible and above all creative use of the latest technologies!

 

Workshop schedule

9:15 – 9:30 Arrival and introduction

Registration, welcome and presentation of the day's programme

9:30 – 10:15 AI basics: “Artificial intelligence and the world”

Introductory lecture with examples, group discussion

10:15 – 11:45 Ethical questions and practical exercises for developing critical thinking

Case studies and practical group work

11:45 – 12:30 Break for a shared lunch

Refreshments, a break and a chance to get to know the participants and organisations present

12:30 – 14:00 Practical use of AI tools for youth work

Hands-on work in digital environments with freely available AI tools

14:00 – 14:30 Next steps and wrap-up

Group reflection on concrete uses of AI in youth work, evaluation

The post Mladinsko delo 4.0: Umetna inteligenca v praksi – odgovorno in uporabno (BREŽICE) first appeared on Računalniški muzej.

Youth Work 4.0: Artificial intelligence in practice – responsible and useful (KOPER)

By: gaja
3 June 2025 at 14:38

Together with the Office of the Republic of Slovenia for Youth (Urad RS za mladino), we are preparing five practical workshops on understanding and using artificial intelligence technologies in youth work. The workshops will be run by the Computer Museum (Računalniški muzej), which has been delivering workshops on understanding and using artificial intelligence since 2023. They will be an opportunity to meet, exchange experience and get an introductory look at tools and technologies that can help you in concrete work with young people. We want to encourage shared learning, help with the challenges we all face, and the search for fresh, jointly developed solutions for supporting young people through an increasingly digital adolescence.

You are encouraged to bring your own laptop; if you cannot, we will provide one.
Workshop goals:

  • Increase understanding of AI technology, its prevalence and the ways it is used
  • Raise awareness of its impact on the development of occupations and of digital-technology skills
  • Introduce a framework for critical thinking about the many aspects of AI use and its direct impact on young people's life situations
  • Equip youth workers with methods for encouraging young people to take informed, active positions on AI technologies and to practise active citizenship
  • Improve knowledge of, and the ability to use, practical AI tools for enrichment programmes, learning support and social integration
  • Introduce practical tools for easing the administrative burden of youth work
  • Encourage thinking about concrete uses of AI in youth work

Because of the nature of the workshops the number of participants is limited, so we suggest registering as soon as possible – link to the registration form.

You are warmly invited to take part, and to a mindful, responsible and above all creative use of the latest technologies!

 

Workshop schedule

9:15 – 9:30 Arrival and introduction

Registration, welcome and presentation of the day's programme

9:30 – 10:15 AI basics: “Artificial intelligence and the world”

Introductory lecture with examples, group discussion

10:15 – 11:45 Ethical questions and practical exercises for developing critical thinking

Case studies and practical group work

11:45 – 12:30 Break for a shared lunch

Refreshments, a break and a chance to get to know the participants and organisations present

12:30 – 14:00 Practical use of AI tools for youth work

Hands-on work in digital environments with freely available AI tools

14:00 – 14:30 Next steps and wrap-up

Group reflection on concrete uses of AI in youth work, evaluation

The post Mladinsko delo 4.0: Umetna inteligenca v praksi – odgovorno in uporabno (KOPER) first appeared on Računalniški muzej.

Spring Ruby meetup

By: gaja
2 June 2025 at 09:12

We’re thrilled to announce the spring edition of the Slovenia Ruby User Group meetup! 💎🇸🇮

Mark your calendars: on Monday, June 16th, 2025, we’ll be gathering from 18:00 to 21:00 at the Računalniški muzej / Slovenian Computer History Museum.

We’ll be trying out an alternative format this time:
– Show-and-tell
– Open floor discussion

This means that the content of this meetup is entirely up to the attendees!

Have an interesting project you’d like to talk about? Bring it along! Got a technical problem you’d like to discuss? Bring it up! Got new approaches on your mind? Don’t be afraid to challenge the status quo!

As always, we’re keeping the floor open for any lightning talks. If you have an idea, a project, or a Ruby gem you’re passionate about, bring it along!
After the knowledge exchange, we’ll go for food and drinks.

Whether you’re a seasoned Ruby professional, a newbie who’s just getting started, or anywhere in between, this is the place for you.
Don’t miss out on the Spring Ruby meetup!

We can’t wait to see you there.

The post Spring Ruby meetup first appeared on Računalniški muzej.

c| meetup № 26: Getting FOSS into Slovenian schools the cooperative way & bridging onwards

By: gaja
30 May 2025 at 13:10

In the first part, Kristijan Tkalec (kiki, lapor) will present the plan of the Zadruga Na Prostem cooperative for localising and testing AlekSIS, a school management system. The aim is to migrate Slovenian schools to open-source solutions.

We will then carry on with building the website and better communication channels within the community. If there is interest, we will also touch on https://endof10.org/

Image source: David Ravoy (CC-BY-4.0)

The post c| srečanje № 26: Zadružno spraviti FOSS v slovenske šole & mostovanje naprej first appeared on Računalniški muzej.

Summer schedule 2025

By: Tilen
19 June 2025 at 06:00

Summer opening hours

(valid from 7 July 2025 to 17 August 2025)

Monday, Tuesday, Thursday, Friday: 8:00 – 14:00
Wednesday: 8:00 – 18:00*
Saturday: 9:00 – 13:00
*Note: on Wednesdays, staffed lending runs until 14:00; from 14:00 to 18:00 items can only be borrowed via the self-checkout machine.

Borrowing materials:
from the stacks: 8:00 – 14:00
from open access: 8:00 – 14:00

Other departments:
Monday – Friday: 8:00 – 14:00
Saturday: closed

Coffee get-together in front of CTK

By: Mitja
13 June 2025 at 06:43

The Central Technical Library of the University of Ljubljana (CTK), in cooperation with the Macedonian Student Organisation in Slovenia, invites students to a relaxed morning get-together over coffee this Saturday, 13 June 2025, at 9:00 in front of the CTK entrance (Trg republike 3, Ljubljana).

The event is meant to create a pleasant atmosphere during the exam period – a chance to socialise, chat, exchange encouragement and take a few minutes' break over a good cup of coffee.

You are welcome to bring your favourite mug – symbolic prizes await the most original ones.

The event is part of the supportive environment for students that CTK is developing in cooperation with students.

We look forward to your company!

Professional meeting of academic libraries: Starting points for the strategic development of librarianship at the University of Ljubljana

By: Darko
2 June 2025 at 11:59

A professional meeting of academic libraries entitled “Starting points for the strategic development of librarianship” will take place at the University of Ljubljana on 11 June 2025. The event will focus on the future of academic libraries in light of sustainable development, technological change and open science.

Key points of the meeting:

  • Introductory talks will be given by representatives of the University of Ljubljana and by guest speaker Martina Pronk of the LIBER association, who will speak about strategic planning for libraries.
  • The thematic strands will cover:
    • Values in higher education, such as open science, information literacy and sustainability.
    • Technological development, including the use of artificial intelligence and the development of a Slovenian language model.
    • Service excellence, with examples of good practice from various UL libraries.

The event will take place at the UL Faculty of Education; attendance requires prior registration.

🔗 Links to the original posts:

Join the SCOAP³ online forum

By: Darko
2 June 2025 at 09:14

SCOAP³ (Sponsoring Consortium for Open Access Publishing in Particle Physics) invites anyone interested in open science to an online forum. The new SCOAP3 contract structure will be presented, together with the evaluation of the open science elements included in the new scheme.

The online forum is open to everyone, including representatives of the institutions participating in the SCOAP3 initiative (the national contact point for Slovenia is CTK) and members of the wider research community. Attending is an opportunity to put questions to representatives of the international SCOAP3 leadership and to the operational team at CERN.

The event will take place on Thursday, 18 June 2025, as a webinar lasting 90 minutes. The online forum will run in two sessions with identical programmes, starting at:

  1. First session: 9:00 (morning)
  2. Second session: 17:00 (afternoon)

Attendance is free and no registration is required.

Programme (both sessions):

  • Opening presentation: Ianko Lopez, Madrono consortium, Spain, chair of the SCOAP3 forum – 10 minutes
  • Presentation of the SCOAP3 Phase 4 contract structure and its open science elements: Anne Gentil-Beccot, SCOAP3 team, CERN – 20 minutes
  • Results of the first assessment: Pia Kretschmar, SCOAP3 team, CERN – 10 minutes
  • The SCOAP3 partners' perspective: Anna Vernon, Charles Brophy (JISC, United Kingdom) – 20 minutes
  • The CrossRef perspective: Helena Cusijn, Kornelia Korzec, CrossRef – 10 minutes
  • Moderated discussion with questions from participants – 10 minutes

Details about the event and how to take part are available at https://indico.cern.ch/event/1549610/.

Invitation to open science workshops at the University of Nova Gorica

By: Darko
28 May 2025 at 11:57

The University Library of the University of Nova Gorica invites you to two open science workshops, at which experts from the Central Technical Library of the University of Ljubljana and from ARNES will present the Odprta knjižnica (Open Library) portal: the Centre for Quality Assurance in Scholarly Communication, unethical publisher practices in scholarly publishing, copyright questions, and the role of services and infrastructure in advancing science and research.

The workshops will be held in English. You can attend in person or via Zoom. Information about the programme and registration is available at the links above.

The workshops are held as part of the Odprta knjižnica: Centre for Quality Assurance in Scholarly Communication, under the Action Plan for Open Science.

Any questions about the workshops can be sent to library@ung.si.

Research ethics and integrity at the University of Ljubljana: strategic development and challenges

By: Darko
23 May 2025 at 09:07

The Central Technical Library of the University of Ljubljana invites researchers and other staff of research organisations to a presentation entitled Research ethics and integrity at the University of Ljubljana: strategic development and challenges.

At the end of 2024, the University of Ljubljana established a new organisational unit for research ethics and integrity. This is an important step towards ensuring responsible conduct in science, respect for ethical principles and the prevention of inappropriate practices in the research environment. The unit is headed by Prof. Dr. Nina Peršak, who will present the organisational and substantive aspects of the University of Ljubljana's work in this area on 19 June 2025, from 10:00 to 10:45, at the CTK premises, Trg republike 3, Ljubljana.

The presentation is free of charge and will take place in person only, at CTK. The number of places is limited. Welcome!

Registration for the event.

For further information, write to info@odprta-knjiznica.si.

The presentation is part of the Odprta knjižnica (Open Library) programme and the Action Plan for Open Science.

Workshop: Artificial intelligence and copyright in research

By: Mitja
12 May 2025 at 12:32

The Central Technical Library of the University of Ljubljana invites you to the workshop Artificial intelligence and copyright in research, which will take place on 27 May 2025 from 10:00 to 13:30 at the CTK premises, Trg republike 3, Ljubljana, and online via Zoom.

The aim of the workshop is to acquaint researchers and other staff of research organisations with current copyright questions connected with the development and use of artificial intelligence (AI), and to highlight researchers' responsibilities when using AI tools with regard to possible copyright infringements.

The workshop will be led by Assoc. Prof. Dr. Matija Damjan.

After an introductory overview of the Slovenian and European copyright framework, the workshop will address the use of copyrighted works in training AI models and the question of copyright infringement. It will cover copyright issues related to AI-generated output, as well as researchers' responsibility and ethics when using AI tools. A separate part of the workshop will be devoted to discussing concrete cases of AI use in the research process. The discussion will include questions and dilemmas submitted by participants in advance.

The workshop is free of charge. The number of places for in-person attendance is limited.

Participants are asked to send questions related to the content of the workshop in advance to info@odprta-knjiznica.si.

Registration for the workshop:

The workshop is funded under the Action Plan for Open Science, for the implementation of Measure 6.2 of ReZrIS30.
