Mushroom (MDMA)

Active substances

MDMA (173 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, adverse effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, increased depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness may occur. The symptoms subside after a few days.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

14.11.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

m&m (MDMA)

Active substances

MDMA (190 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, adverse effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, increased depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness may occur. The symptoms subside after a few days.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

7.11.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

Punisher (MDMA)

Active substances

MDMA (228 mg)

Additional description

The tablet poses a high risk to users' health and of adverse effects such as nausea, vomiting, headache, restlessness, panic attacks, high blood pressure, increased sweating, epileptic seizures, impaired motor control, elevated body temperature, loss of consciousness, cerebral oedema, and stroke or heart attack.

Doses of 200 mg or more can amount to at least double the usual dose, at which the risks of adverse effects and complications from taking MDMA increase sharply.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

30.10.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

NASA rocket (2C-B)

Active substances

2C-B (10 mg)

Additional description

The tablet contains the psychedelic 2C-B. Because it is usually sold in tablet form, it can be unintentionally mistaken for ecstasy tablets containing MDMA. Because tablets with a lower 2C-B content may produce a weaker effect, there is a risk of redosing and thereby taking a large amount of 2C-B.

Follow the harm reduction guidelines below!

Disclaimer: The amount of 2C-B in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

24.10.2025

Harm reduction

  • Be careful with dosing. A light dose of 2C-B is 2–10 mg, a medium dose 10–20 mg and a strong dose 20–30 mg.
  • 2C-B is always taken on an empty stomach; otherwise it can cause unpleasant nausea and cramps as it starts to take effect. Have only a light meal at least 3–4 hours before taking 2C-B. If you do feel nauseous, it helps to stand up and walk around a little.
  • Given its psychedelic effects, 2C-B carries all the risks that apply to psychedelic drugs: paranoid reactions, bad trips, loss of contact with reality. Make sure you use it in a setting where you feel comfortable and that you are in good mental and physical shape (set and setting).
  • If you decide to use it at a party, it is better to take a smaller dose (without hallucinations) and to tell your friends what you have taken.
  • If you are dancing on 2C-B, take care (just as with MDMA) to drink a few sips of a non-alcoholic drink every so often.

4-BMC sold as mephedrone (4-MMC)

Active substances

4-BMC (brephedrone)

Additional description

An analysis by the National Laboratory of Health, Environment and Food showed that a sample in the form of white crystals, bought as 4-MMC in Ljubljana, actually contains the synthetic cathinone 4-BMC (4-bromomethcathinone, brephedrone).

Only limited information is available about 4-BMC. It causes side effects similar to those of other synthetic cathinones, such as elevated heart rate, elevated blood pressure, chest pain, agitation, psychosis, aggression, hallucinations and insomnia. Synthetic cathinones containing a halogen (e.g. 4-BMC, 4-CMC) show greater cytotoxicity and neurotoxicity.

Synthetic cathinones can negatively affect users' socio-economic situation, family relationships, work or schooling, and increase their vulnerability. To date there have been no confirmed cases of acute poisoning or deaths caused by 4-BMC. The sample was collected by DrogArt in Ljubljana as part of its anonymous collection of samples of psychoactive substances.

Test date

24.10.2025

Harm reduction

  • This year and last year we have detected a large number of fake products sold as "ice cream" (3-MMC) and mephedrone. If you use it, it is therefore especially recommended to use the anonymous drug checking service.
  • Little information is currently available about the risks associated with 4-BMC and other cathinones. 4-chloroamphetamine (4-CA, an amphetamine derivative) is known to be a highly neurotoxic substance. Since 4-CA and 4-BMC are structurally very similar, it is likely that 4-BMC is also a toxic substance. There is currently no research confirming this, but it is recommended to avoid taking the compound.

Barbie (MDMA)

Active substances

MDMA (183 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, adverse effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, increased depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness may occur. The symptoms subside after a few days.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

17.10.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

Dragon (MDMA)

Active substances

MDMA (221 mg)

Additional description

The tablet poses a high risk to users' health and of adverse effects such as nausea, vomiting, headache, restlessness, panic attacks, high blood pressure, increased sweating, epileptic seizures, impaired motor control, elevated body temperature, loss of consciousness, cerebral oedema, and stroke or heart attack.

Doses of 200 mg or more can amount to at least double the usual dose, at which the risks of adverse effects and complications from taking MDMA increase sharply.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

24.10.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

NASA rocket (2C-B)

Active substances

2C-B (1.2 mg)

Additional description

The tablet contains the psychedelic 2C-B. Because it is usually sold in tablet form, it can be unintentionally mistaken for ecstasy tablets containing MDMA. Because tablets with a lower 2C-B content may produce a weaker effect, there is a risk of redosing and thereby taking a large amount of 2C-B.

Follow the harm reduction guidelines below!

Disclaimer: The amount of 2C-B in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

24.10.2025

Harm reduction

  • Be careful with dosing. A light dose of 2C-B is 2–10 mg, a medium dose 10–20 mg and a strong dose 20–30 mg.
  • 2C-B is always taken on an empty stomach; otherwise it can cause unpleasant nausea and cramps as it starts to take effect. Have only a light meal at least 3–4 hours before taking 2C-B. If you do feel nauseous, it helps to stand up and walk around a little.
  • Given its psychedelic effects, 2C-B carries all the risks that apply to psychedelic drugs: paranoid reactions, bad trips, loss of contact with reality. Make sure you use it in a setting where you feel comfortable and that you are in good mental and physical shape (set and setting).
  • If you decide to use it at a party, it is better to take a smaller dose (without hallucinations) and to tell your friends what you have taken.
  • If you are dancing on 2C-B, take care (just as with MDMA) to drink a few sips of a non-alcoholic drink every so often.

McDonalds (MDMA)

Active substances

MDMA (175 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, adverse effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, increased depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness may occur. The symptoms subside after a few days.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

17.10.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

Panama Owl (MDMA)

Active substances

MDMA (137 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, adverse effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, increased depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness may occur. The symptoms subside after a few days.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

17.10.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

Soundcloud (MDMA)

Active substances

MDMA (129 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, adverse effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, increased depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness may occur. The symptoms subside after a few days.

Follow the harm reduction guidelines below!

Disclaimer: The amount of MDMA in the tablet is provided for information only and can vary considerably between tablets with the same logo and colour.

Test date

17.10.2025

Harm reduction

  • Adjust the dose to your body weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be especially careful if you are using MDMA for the first time or do not know how pure your MDMA is. The effects vary widely, and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink every hour if you are dancing, less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medications.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.

3-CMC sold as mephedrone (4-MMC) in Maribor

Active substances

3-CMC

Additional description

The results of an analysis carried out at the National Laboratory of Health, Environment and Food showed that a sample in powder form, sold in Ljubljana as "ice cream" (3-MMC), actually contains 3-CMC.

Not much information is available about 3-CMC, but it is assumed to cause side effects similar to those of other synthetic cathinones, namely elevated heart rate, elevated blood pressure, agitation, psychosis, epileptic seizures, overheating of the body, chest pain, etc. Five EU countries have reported numerous poisonings and deaths in connection with the use of 3-CMC.

Test date

31.1.2025

Harm reduction

  • This year and last year we have detected a large number of fake products sold as "ice cream" (3-MMC) and mephedrone. If you use it, it is therefore especially recommended to use the anonymous drug checking service.
  • Little information is currently available about the risks associated with 3-CMC and other cathinones. 4-chloroamphetamine (4-CA, an amphetamine derivative) is known to be a highly neurotoxic substance. Since 4-CA and 3-CMC are structurally very similar, it is likely that 3-CMC is also a toxic substance. There is currently no research confirming this, but it is recommended to avoid taking the compound.

The sample was collected by the DrogArt Association's info point as part of the anonymous collection of samples of psychoactive substances. The sample was analysed by the National Laboratory of Health, Environment and Food. The notice was prepared by the National Institute of Public Health.

Apple finds that lower App Store commissions had no effect on consumers

Apple commissioned a study whose findings show that the reduction in commissions for purchasing apps or in-app features on the App Store did not translate into savings for customers. Even after commissions were cut by 10 percentage points at the European Commission's demand, more than nine in ten apps kept their prices or even raised them slightly. The apps that did become cheaper are on average 2.5 percent less expensive. Apple therefore concluded that the legislation has not achieved its purpose. The study was carried out for Apple by Analysis Group, which analysed 41 million transactions across 21,000 apps that are paid or contain paid add-ons, comparing the three months before the commission cut with the three months after it. In that period, app developers paid Apple 20.1 million euros less than they otherwise would have. Most of those savings went abroad: 86 percent left the EU, because the developers were based elsewhere. Apple therefore argues that the DMA is ineffective, and further claims that the user experience is worse as a result, that security is lower, that there is less privacy, and that innovation is weaker.

Pavel Durov is free, but the case against him continues

France has completely lifted the movement restrictions on Telegram founder Pavel Durov, who was detained last year and charged with several criminal offences connected to the company's business and operations. The restrictions were already loosened considerably in July this year, when he was allowed to spend up to two weeks at a time at his second home in the United Arab Emirates, though he still had to report to a police station in Nice every other week. That is now over; he can move about completely freely. Durov, who holds Russian and French citizenship, may leave France permanently, although the proceedings against him there are still ongoing. To be released he had to post five million euros in bail, as the indictment accuses him of running an online platform through which criminal offences were committed. Durov denies any wrongdoing and accuses the French police and prosecutors of unlawful conduct and violations of his rights. He made no official statement when the travel ban was lifted.

GPT-5.1 released

OpenAI has released two new large language models, GPT-5.1 Instant and GPT-5.1 Thinking. At launch the company said the main improvement is the user experience, that is, the way the model communicates with the user, which will now be warmer and friendlier but not intrusive. Early tests are said to show a playful character while keeping answers clear and useful. Instruction following has also been improved; until now it was often lost in follow-up questions and later in a conversation. The new GPT-5.1 Thinking model decides on its own when to answer quickly and when to reason for longer, making it faster on simple questions while answering more complex ones more accurately and at greater length. OpenAI also claims its answers are clearer and contain less unnecessary jargon. GPT-5.1 offers several different personalities that users can switch between. Users on paid plans (Pro, Plus, Go, Business) can try the new models immediately, while free users will get them later. "Immediate" rollout still means it begins right away but will take a few days to complete. API access will follow. Microsoft has already integrated the model into its tools, for example Copilot.

An aurora is expected again tonight

Last night an aurora lit up a good part of Slovenia, and it will very likely repeat this evening, most intensely between 18:00 and 24:00. It was caused by a solar flare from sunspot AR4274, which is pointed straight at Earth. The X5.1-class flare is the strongest since last October's, which likewise produced an aurora visible far enough south to be observed from Slovenia. To watch it, you of course need a location with as little light pollution as possible and no fog. Sunspot AR4274 had already erupted on Sunday and Monday, but those flares were weaker. In each case a large amount of material was ejected from the Sun (a CME, coronal mass ejection), which will reach Earth this evening Slovenian time. Flare strength is rated on a logarithmic scale of A, B, C, M and X. An X5.1 flare is therefore among the stronger ones, somewhere in the middle of the X range (which in theory has no upper bound). We are currently in the 25th observed solar cycle, which began in 2019 and is peaking this year. When solar activity is at its maximum there are also the most flares, and last year and this year they have been even stronger than expected.
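The letter classes are shorthand for peak soft X-ray flux; a small illustrative sketch using the standard GOES 0.1–0.8 nm classification (those flux thresholds are general background about the scale, not something stated in the article):

```python
# Rough illustration of the logarithmic flare scale mentioned above.
# Flux thresholds follow the standard GOES soft X-ray (0.1-0.8 nm) classes.
FLARE_CLASS_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}  # W/m^2

def peak_flux(flare_class: str) -> float:
    """Convert a class string like 'X5.1' to approximate peak flux in W/m^2."""
    letter, number = flare_class[0].upper(), float(flare_class[1:])
    return FLARE_CLASS_FLUX[letter] * number

print(peak_flux("X5.1"))  # 0.00051 W/m^2, i.e. 51x an M1 flare and 510x a C1 flare
```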

Despite the end of support, Microsoft releases a Windows 10 update

A month after the end of official support for Windows 10, which closed out the era of free updates, Microsoft has unexpectedly released a new patch for all Windows 10 users. The reason for issuing patch KB5071959 is rather ironic: it fixes a problem that prevented enrolment in the Extended Security Updates (ESU) program. Anyone who wants to keep receiving updates after 14 October must enrol in ESU. This can be done for a fee or via certain free options, such as enabling backups, while in the EU a year of free support is available to all home users. Business users will have to pay for ESU. Microsoft noticed that some computers had a problem: the ESU enrolment wizard reported an error and refused to let them join. The new patch fixes that error, so those computers will be able to enter ESU and receive patches for another one to three years. Issuing out-of-band patches after the end of support is not unheard of; it has already happened with Windows XP and Windows 7. Those were emergency security situations in which the patches stopped the spread of aggressive malware, and Microsoft thereby exposed itself to a moral hazard. Both options were bad: either risk a major outage of systems, or show users that the end of support is not really the end and that, in the worst outages, someone will come rushing to help after all. They chose the second.

Visual Studio 2026 is out, .NET 10 also available

After several months of testing in the Insiders program, the new version of Visual Studio 2026 has officially been released. At the same time, the new version of the open-source software platform for Windows, Linux and macOS, .NET 10, has been finalised. The previous Visual Studio release is getting rather long in the tooth, having come out in 2022. The newest version brings more than 5,000 bug fixes and 300 new features, Microsoft said. It also includes artificial intelligence, among other things integration with GitHub Copilot in the IDE. The new Visual Studio is also faster and more responsive than before. There are no major upheavals, so more than 4,000 extensions for Visual Studio 2022 will work in Visual Studio 2026 without issues. Microsoft has also granted a long-standing wish and decoupled the IDE from the engine in Visual Studio 2026, so regular monthly updates will no longer affect the operation of extensions and external tools. The new .NET 10 will be supported until November 2028, as it is an LTS (Long-Term Support) release.

memories of .us

How much do you remember from elementary school? I remember vinyl tile floors, the playground, the teacher sentencing me to standing in the hallway. I had a teacher who was a chess fanatic; he painted a huge chess board in the paved schoolyard and got someone to fabricate big wooden chess pieces. It was enough of an event to get us on the evening news. I remember Run for the Arts, where I tried to talk people into donating money on the theory that I could run, which I could not. I'm about six months into trying to change that and I'm good for a mediocre 5k now, but I don't think that's going to shift the balance on K-12 art funding.

I also remember a domain name: bridger.pps.k12.or.us

I have quipped before that computer science is a field mostly concerned with assigning numbers to things, which is true, but it only takes us so far. Computer scientists also like to organize those numbers into structures, and one of their favorites has always been the tree. The development of wide-area computer networking surfaced a whole set of problems around naming or addressing computer systems that belong to organizations. A wide-area network consists of a set of institutions that manage their own affairs. Each of those institutions may be made up of departments that manage their own affairs. A tree seemed a natural fit. Even the "low level" IP addresses, in the days of "classful" addressing, were a straightforward hierarchy: each dot separated a different level of the tree, a different step in an organizational hierarchy.

The first large computer networks, including those that would become the Internet, initially relied on manually building lists of machines by name. By the time the Domain Name System was developed, this had already become cumbersome. The rapid growth of the internet was hard to keep up with, and besides, why did any one central entity---Jon Postel or whoever---even care about the names of all of the computers at Georgia Tech? Like IP addressing, DNS was designed as a hierarchy with delegated control. A registrant obtains a name in the hierarchy, say gatech.edu, and everything "under" that name is within the control, and responsibility, of the registrant. This arrangement is convenient for both the DNS administrator, which was a single organization even after the days of Postel, and for registrants.
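If you want to see where that hand-off happens for a given name, the parent zone's only real job is to publish NS records pointing at the registrant's own name servers. A minimal sketch, assuming the third-party dnspython package is installed (gatech.edu is just the example name from above):

```python
# Minimal sketch: list the name servers a domain has been delegated to.
# Assumes the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def show_delegation(name: str) -> None:
    """Print the NS records for a name, i.e. who the hierarchy hands it off to."""
    for record in dns.resolver.resolve(name, "NS"):
        print(f"{name} is served by {record.target}")

show_delegation("gatech.edu")  # everything under this name is the registrant's responsibility
```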

We still use the same approach today... mostly. The meanings of levels of the hierarchy have ossified. Technically speaking, the top of the DNS tree, the DNS root, is a null label referenced by a trailing dot. It's analogous to the '/' at the beginning of POSIX file paths. "gatech.edu" really should be written as "gatech.edu." to make it absolute rather than relative, but since resolution of relative URLs almost always recurses to the top of the tree, the trailing dot is "optional" enough that it is now almost always omitted. The analogy to POSIX file paths raises an interesting point: domain names are backwards. The 'root' is at the end, rather than at the beginning, or in other words, they run from least significant to most significant, rather than most significant to least significant. That's just... one of those things, you know? In the early days one wasn't obviously better than the other, people wrote hierarchies out both ways, and as the dust settled the left-to-right convention mostly prevailed but right-to-left hung around in some protocols. If you've ever dealt with endianness, this is just one of those things about computers that you have to accept: we cannot agree on which way around to write things.
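To make the "backwards" observation concrete, here is a toy sketch (plain Python, nothing DNS-specific) that reverses a name's labels so it reads root-first, the way a file path does:

```python
# Toy sketch: domain names run least-significant-first, unlike POSIX paths.
def as_path(domain: str) -> str:
    """Rewrite a (possibly absolute) domain name as a root-first, path-like string."""
    labels = domain.rstrip(".").split(".")   # drop the optional trailing root dot
    return "/" + "/".join(reversed(labels))  # the leading "/" stands in for the DNS root

print(as_path("gatech.edu."))            # /edu/gatech
print(as_path("bridger.pps.k12.or.us"))  # /us/or/k12/pps/bridger
```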

Anyway, the analogy to file paths also illustrates the way that DNS has ossified. The highest "real" or non-root component of a domain name is called the top-level domain or TLD, while the component below it is called a second-level domain. In the US, it was long the case that top-level domains were fixed while second-level domains were available for registration. There have always been exceptions in other countries and our modern proliferation of TLDs has changed this somewhat, but it's still pretty much true. When you look at "gatech.edu" you know that "edu" is just a fixed name in the hierarchy, used to organize domain names by organization type, while "gatech" is a name that belongs to a registrant.

Under the second-level name, things get a little vague. We are all familiar with the third-level name "www," which emerged as a convention for web servers and became a practical requirement. Web servers having the name "www" under an organization's domain was such a norm for so many years that hosting a webpage directly at a second-level name came to be called a "naked domain" and had some caveats and complications.

Other than www, though, there are few to no standards for the use of third-level and below names. Larger organizations are more likely to use third-level names for departments, infrastructure operators often have complex hierarchies of names for their equipment, and enterprises the world 'round name their load-balanced webservers "www2," "www3" and up. If you think about it, this situation seems like kind of a failure of the original concept of DNS... we do use the hierarchy, but for the most part it is not intended for human consumption. Users are only expected to remember two names, one of which is a TLD that comes from a relatively constrained set.

The issue is more interesting when we consider geography. For a very long time, TLDs have been split into two categories: global TLDs, or gTLDs, and country-code TLDs, or ccTLDs. ccTLDs reflect the ISO country codes of each country, and are intended for use by those countries, while gTLDs are arbitrary and reflect the fact that DNS was designed in the US. The ".gov" gTLD, for example, is for use by the US government, while the UK is stuck with ".gov.uk". This does seem unfair but it's now very much cemented into the system: for the large part, US entities use gTLDs, while entities in other countries use names under their respective ccTLDs. The ".us" ccTLD exists just as much as all the others, but is obscure enough that my choice to put my personal website under .us (not an ideological decision but simply a result of where a nice form of my name was available) sometimes gets my email address rejected.

Also, a common typo for ".us" is ".su" and that's geopolitically amusing. .su is of course the ccTLD for the Soviet Union, which no longer exists, but the ccTLD lives on in a limited way because it became Structurally Important and difficult to remove, as names and addresses tend to do.

We can easily imagine a world where this historical injustice had been fixed: as the internet became more global, all of our US institutions could have moved under the .us ccTLD. In fact, why not go further? Geographers have long organized political boundaries into a hierarchy. The US is made up of states, each of which has been assigned a two-letter code by the federal government. We have ".us", why not "nm.us"?

The answer, of course, is that we do.

In the modern DNS, all TLDs have been delegated to an organization who administers them. The .us TLD is rightfully administered by the National Telecommunications and Information Administration, on the same basis by which all ccTLDs are delegated to their respective national governments. Being the US government, NTIA has naturally privatized the function through a contract to telecom-industrial-complex giant Neustar. Being a US company, Neustar restructured and sold its DNS-related business to GoDaddy. Being a US company, GoDaddy rose to prominence on the back of infamously tasteless television commercials, and its subsidiary Registry Services LLC now operates our nation's corner of the DNS.

But that's the present---around here, we avoid discussing the present so as to hold crushing depression at bay. Let's turn our minds to June 1993, and the publication of RFC 1480 "The US Domain." To wit:

Even though the original intention was that any educational institution anywhere in the world could be registered under the EDU domain, in practice, it has turned out with few exceptions, only those in the United States have registered under EDU, similarly with COM (for commercial). In other countries, everything is registered under the 2-letter country code, often with some subdivision. For example, in Korea (KR) the second level names are AC for academic community, CO for commercial, GO for government, and RE for research. However, each country may go its own way about organizing its domain, and many have.

Oh, so let's sort it out!

There are no current plans of putting all of the organizational domains EDU, GOV, COM, etc., under US. These name tokens are not used in the US Domain to avoid confusion.

Oh. Oh well.

Currently, only four year colleges and universities are being registered in the EDU domain. All other schools are being registered in the US Domain.

Huh?

RFC 1480 is a very interesting read. It makes passing references to so many facets of DNS history that could easily be their own articles. It also defines a strict, geography-based hierarchy for the .us domain that is a completely different universe from the one in which we now live. For example, we learned above that, in 1993, only four-year institutions were being placed under .edu. What about the community colleges? Well, RFC 1480 has an answer. Central New Mexico Community College would, of course, fall under cnm.cc.nm.us. Well, actually, in 1993 it was called the Technical-Vocational Institute, so it would have been tvi.tec.nm.us. That's right, the RFC describes both "cc" for community colleges and "tec" for technical institutes.

Even more surprising, it describes placing entities under a "locality" such as a city. The examples of localities given are "berkeley.ca.us" and "portland.wa.us", the latter of which betrays an ironic geographical confusion. It then specifies "ci" for city and "co" for county, meaning that the city government of our notional Portland, Washington would be ci.portland.wa.us. Agencies could go under the city government component (the RFC gives the example "Fire-Dept.CI.Los-Angeles.CA.US") while private businesses could be placed directly under the city (e.g. "IBM.Armonk.NY.US"). The examples here reinforce that the idea itself is different from how we use DNS today: The DNS of RFC 1480 is far more hierarchical and far more focused on full names, without abbreviations.
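To show how mechanical the scheme is, here is a rough sketch of the naming pattern the RFC describes; the helper and its parameter names are mine, invented for illustration, not anything defined in RFC 1480:

```python
# Sketch of the RFC 1480 locality-based naming pattern described above.
# The function and its parameter names are illustrative, not from the RFC.
def locality_name(entity, state, locality=None, qualifier=None):
    """Build an entity.[qualifier.][locality.]state.us style name."""
    parts = [entity]
    if qualifier:                # "ci" for a city government, "co" for a county, "tec", ...
        parts.append(qualifier)
    if locality:                 # a city or county name, when the entity is local
        parts.append(locality)
    parts += [state, "us"]
    return ".".join(parts).lower()

print(locality_name("fire-dept", "ca", "los-angeles", "ci"))  # fire-dept.ci.los-angeles.ca.us
print(locality_name("ibm", "ny", "armonk"))                   # ibm.armonk.ny.us
print(locality_name("tvi", "nm", qualifier="tec"))            # tvi.tec.nm.us
```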

Of course, the concept is not limited to local government. RFC 1480 describes "fed.us" as a suffix for the federal government (the example "dod.fed.us" illustrates that this has not at all happened), and even "General Independent Entities" and "Distributed National Institutes" for those trickier cases.

We can draw a few lessons from how this proposal compares to our modern day. Back in the 1990s, .gov was limited to the federal government. The thinking was that all government agencies would move into .us, where the hierarchical structure made it easier to delegate management of state and locality subtrees. What actually happened was the opposite: the .us thing never really caught on, and a more straightforward and automated management process made .gov available to state and local governments. The tree has effectively been flattened.

That's not to say that none of these hierarchical names saw use. GoDaddy continues to maintain what they call the "usTLD Locality-Based Structure". At the decision of the relevant level of the hierarchy (e.g. a state), locality-based subdomains of .us can either be delegated to the state or municipality to operate, or operated by GoDaddy itself as the "Delegated Manager." The latter arrangement is far more common, and it's going to stay that way: RFC 1480 names are not dead, but they are on life support. GoDaddy's contract allows them to stop onboarding any additional delegated managers, and they have.

Few of these locality-based names found wide use, and there are even fewer left today. Multnomah County Library once used "multnomah.lib.or.us," which I believe was actually the very first "library" domain name registered. It now silently redirects to "multcolib.org", which we could consider a graceful name only in that the spelling of "Multnomah" is probably not intuitive to those not from the region. As far as I can tell, the University of Oregon and OGI (part of OHSU) were keeping very close tabs on the goings-on of academic DNS, as Oregon entities are conspicuously over-represented in the very early days of RFC 1480 names---behind only California, although Georgia Tech and Trent Heim of former Colorado company XOR both registered enough names to give their states a run for the money.

"co.bergen.nj.us" works, but just gets you a redirect notice page to bergencountynj.gov. It's interesting that this name is actually longer than the RFC 1480 name, but I think most people would agree that bergencountynj.gov is easier to remember. Some of that just comes down to habit, we all know ".gov", but some of it is more fundamental. I don't think that people often understand the hierarchical structure of DNS, at least not intuitively, and that makes "deeply hierarchical" (as GoDaddy calls them) names confusing.

Certainly the RFC 1480 names for school districts produced complaints. They were also by far the most widely adopted. You can pick and choose examples of libraries (.lib.[state].us) and municipal governments that have used RFC 1480 names, but school districts are another world: most school districts that existed at the time have a legacy of using RFC 1480 naming. As one of its many interesting asides, RFC 1480 explains why: the practice of putting school districts under [district].k12.[state].us actually predates RFC 1480. Indeed, the RFC seems to have been written in part to formalize the existing practice. The idea of the k12.[state].us hierarchy originated within IANA in consultation with InterNIC (newly created at the time) and the Federal Networking Council, a now-defunct advisory committee of federal agencies that made a number of important early decisions about internet architecture.

RFC 1480 is actually a revision on the slightly older RFC 1386, which instead of saying that schools were already using the k12 domains, says that "there ought to be a consistent scheme for naming them." It then says that the k12 branch has been "introduced" for that purpose. RFC 1386 is mostly silent on topics other than schools, so I think it was written to document the decision made about schools with other details about the use of locality-based domains left sketchy until the more thorough RFC 1480.

The decision to place "k12" under the state rather than under a municipality or county might seem odd, but the RFC gives a reason. It's not unusual for school districts, even those named after a municipality, to cover a larger area than the municipality itself. Albuquerque Public Schools operates schools in the East Mountains; Portland Public Schools operates schools across multiple counties and beyond city limits. Actually the RFC gives exactly that second one as an example:

For example, the Portland school district in Oregon, is in three or four counties. Each of those counties also has non-Portland districts.

I include that quote mostly because I think it's funny that the authors now know what state Portland is in. When you hear "DNS" you think Jon Postel, at least if you're me, but RFC 1480 was written by Postel along with a less familiar name, Ann Westine Cooper. Cooper was a coworker of Postel at USC, and RFC 1480 very matter-of-factly names the duo of Postel and Cooper as the administrator of the .US TLD. That's interesting considering that almost five years later Postel would become involved in a notable conflict with the federal government over control of DNS---one of the events that precipitated today's eccentric model of public-private DNS governance.

There are other corners of the RFC 1480 scheme that were not contemplated in 1993, and have managed to outlive many of the names that were. Consider, for example, our indigenous nations: these are an exception to the normal political hierarchy of the US. The Navajo Nation, for example, exists in a state that is often described as parallel to a state, but isn't really. Native nations are sovereign, but are also subject to federal law by statute, and subject to state law by various combinations of statute, jurisprudence, and bilateral agreement. I didn't really give any detail there and I probably still got something wrong, such is the complicated legal history and present of Native America. So where would a native sovereign government put their website? They don't fall under the traditional realm of .gov, federal government, nor do they fall under a state-based hierarchy. Well, naturally, the Navajo Nation is found at navajo-nsn.gov.

We can follow the "navajo" part but the "nsn" is odd, unless they spelled "nation" wrong and then abbreviated it, which I've always thought is what it looks like on first glance. No, this domain name is very much an artifact of history. When the problem of sovereign nations came to Postel and Cooper, the solution they adopted was a new affinity group, like "fed" and "k12" and "lib": "nsn", standing for Native Sovereign Nation. Despite being a late comer, nsn.us probably has the most enduring use of any part of the RFC 1480 concept. Dozens of pueblos, tribes, bands, and confederations still use it. squamishtribe.nsn.us, muckleshoot.nsn.us, ctsi.nsn.us, sandiapueblo.nsn.us.

Yet others have moved away... in a curiously "partial" fashion. navajo-nsn.gov as we have seen, but an even more interesting puzzler is tataviam-nsn.us. It's only one character away from a "standardized" NSN affinity group locality domain, but it's so far away. As best I can tell, most of these governments initially adopted "nsn.us" names, which cemented the use of "nsn" in a similar way to "state" or "city" as they appear in many .gov domains to this day. Policies on .gov registration may be a factor as well, the policies around acceptable .gov names seem to have gone through a long period of informality and then changed a number of times. Without having researched it too deeply, I have seen bits and pieces that make me think that at various points NTIA has preferred that .gov domains for non-federal agencies have some kind of qualifier to indicate their "level" in the political hierarchy. In any case, it's a very interesting situation because "native sovereign nation" is not otherwise a common term in US government. It's not like lawyers or lawmakers broadly refer to tribal governments as NSNs, the term is pretty much unique to the domain names.

So what ever happened to locality-based names? RFC 1480 names have fallen out of favor to such an extent as to be considered legacy by many of their users. Most Americans are probably not aware of this name hierarchy at all, despite it ostensibly being the unified approach for this country. In short, it failed to take off, and those sectors that had widely adopted it (such as schools) have since moved away. But why?

As usual, there seem to be a few reasons. The first is user-friendliness. This is, of course, a matter of opinion---but anecdotally, many people seem to find deeply hierarchical domain names confusing. This may be a self-fulfilling prophecy, since the perception that multi-part DNS names are user-hostile means that no one uses them, which means that no users are familiar with them. Maybe, in a different world, we could have broken out of that loop. I'm not convinced, though. In RFC 1480, Postel and Cooper argue that a deeper hierarchy is valuable because it allows for more entities to have their "obviously correct" names. That does make sense to me: splitting the tree up into more branches means that there is less name contention within each branch. But, well, I think it might be the kind of logic that is intuitive only to those who work in computing. For the general public, I think long multi-part names quickly become difficult to remember and difficult to type. When you consider the dollar amounts that private companies have put into dictionary word domain names, it's no surprise that government agencies tend to prefer one-level names with full words and simple abbreviations.

I also think that the technology outpaced the need that RFC 1480 was intended to address. The RFC makes it very clear that Postel and Cooper were concerned about the growing size of the internet, and expected the sheer number of organizations going online to make maintenance of the DNS impractical. They correctly predicted the explosion of hosts, but not the corresponding expansion of the DNS bureaucracy. Between the two versions of the .us RFC, DNS operations were contracted to Network Solutions. This began a winding path that led to delegation of DNS zones to various private organizations, most of which fully automated registration and delegation and then federated it via a common provisioning protocol. The size of, say, the .com zone really did expand beyond what DNS's designers had originally anticipated... but it pretty much worked out okay. The mechanics of DNS's maturation probably had a specifically negative effect on adoption of .us, since it was often under a different operator from the "major" domain names and not all "registrars" initially had access.

Besides, the federal government never seems to have been all that on board with the concept. RFC 1480 could be viewed as a casualty of the DNS wars, a largely unexplored path on the branch of DNS futures that involved IANA becoming completely independent of the federal government. That didn't happen. Instead, in 2003 .gov registration was formally opened to municipal, state, and tribal governments. It became federal policy to encourage use of .gov for trust reasons (DNSSEC has only furthered this), and .us began to fall by the wayside.

That's not to say that RFC 1480 names have ever gone away. You can still find many of them in use. state.nm.us doesn't have an A record, but governor.state.nm.us and a bunch of other examples under it do. The internet is littered with these locality-based names, many of them hiding out in smaller agencies and legacy systems. Names are hard to get right, and one of the reasons is that they're very hard to get rid of.
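If you want to poke at these yourself, a quick check with nothing but the Python standard library will do; which names still resolve obviously depends on the state of the live DNS at the moment you run it:

```python
# Quick check of which legacy RFC 1480 names still resolve to an address.
# Results depend entirely on the live DNS at the time this is run.
import socket

for name in ["state.nm.us", "governor.state.nm.us", "multnomah.lib.or.us"]:
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as exc:
        print(name, "-> no address:", exc)
```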

When things are bigger, names have to be longer. There is an argument that with only 8-character names, and in each position allow a-z, 0-9, and -, you get 37**8 = 3,512,479,453,921 or 3.5 trillion possible names. It is a great argument, but how many of us want names like "xs4gp-7q". It is like license plate numbers, sure some people get the name they want on a vanity plate, but a lot more people who want something specific on a vanity plate can't get it because someone else got it first. Structure and longer names also let more people get their "obviously right" name.
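For what it's worth, the arithmetic above checks out; a throwaway sketch:

```python
# Sanity check on the name-space arithmetic quoted above: 8 positions,
# each drawn from a-z, 0-9, and "-" (37 symbols in total).
print(37 ** 8)  # 3512479453921, i.e. about 3.5 trillion possible 8-character names
```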

You look at Reddit these days and see all these usernames that are two random words and four random numbers, and you see that Postel and Cooper were right. Flat namespaces create a problem, names must either be complex or long, and people don't like it either. What I think they got wrong, at a usability level, is that deep hierarchies still create names that are complex and long. It's a kind of complexity that computer scientists are more comfortable with, but that's little reassurance when you're staring down the barrel of "bridger.pps.k12.or.us".

the steorn orbo

We think that we're converting time into energy... that's the engineering principle.

In the 1820s, stacks, ovens, and gasometers rose over the docklands of Dublin. The Hibernian Gas Company, one of several gasworks that would eventually occupy the land around the Grand Canal Docks, heated coal to produce town gas. That gas would soon supply thousands of lights on the streets of Dublin, a quiet revolution in municipal development that paved the way for electrification---both conceptually, as it proved the case for public lighting, and literally, as town gas fired the city's first small power plants.

Ireland's supply of coal became scarce during the Second World War; as part of rationing of the town gas supply most street lights were shut off. Town gas would never make a full recovery. By that time, electricity had proven its case for lighting. Although coal became plentiful after the war, imported from England and transported from the docks to the gasworks by horse teams, even into the 1960s---this form of energy had become obsolete. In the 1980s, the gasworks stoked their brick retorts for the last time. Natural gas had arrived. It was cheaper, cleaner, safer.

The Docklands still echo with the legacy of the town gas industry. The former site of the Hibernian gasworks is now the Bord Gáis Energy Theatre, a performing arts center named for the British-owned energy conglomerate that had once run Ireland's entire gas industry as a state monopoly. Metal poles, the Red Sticks, jut into the sky from the Grand Canal Square. They are an homage to the stacks that once loomed over the industrial docks. Today, those docks have turned over to gastropubs, offices, the European headquarters of Google.

Out on the water, a new form of energy once spun to life. In December of 2009, a man named Sean McCarthy booked the Waterways Ireland Visitors Centre for a press event. In the dockside event space, and around the world through a live stream, he invited the public to witness a demonstration of his life's work. The culmination of three years of hype, controversy, and no small amount of ridicule, this was his opportunity to prove what famed physicist Michio Kaku and his own hand-picked jury of scientists had called a sham.

He had invented a perpetual motion machine.


Sean McCarthy had his start in the energy industry. Graduating with an engineering degree in the 1980s, he took a software position with a company that quickly became part of European energy equipment giant ABB. By some route now lost to history, he made his way into sales. 1994 found McCarthy in Bahrain, managing sales for the Middle East.

McCarthy is clearly a strong salesman. In Bahrain, he takes credit for growing ABB's business by orders of magnitude. Next, ABB sent him to Houston, where he worked sales to Central and South America. In 1999, the company called him to Azerbaijan---but he chose otherwise, leaving ABB and returning to Dublin.

Contacts from the energy industry, and no doubt his start in software, brought McCarthy to the world of the internet. 1999 in Dublin was, like so many places, crowded with dotcom boom startups. McCarthy saw opportunity: not necessarily by joining the fray, but by selling expertise and experience to those who already had. With his brother-in-law, a lawyer, he formed Steorn.

The nature of Steorn's business during these early years is a bit hazy. As McCarthy recounts it, to Barry Whyte for the book The Impossible Dream, Steorn gathered some early support from the Dublin internet industry that connected him with both capital and clients. One of their most notable clients, attested by several sources, was dotcom wonder WorldOfFruit.com. The World of Fruit, a subsidiary of fruit importer Fyffes, was a web-based auction platform for wholesale import and export of fruit. I have not been able to clarify how exactly Steorn was involved in World Of Fruit, although it cannot be ignored that the aforementioned brother-in-law and cofounder, Francis Hackett, had been the general counsel of World Of Fruit before becoming involved in Steorn. As for the future of online fruit trading, it went about as well as most dotcom-era market platforms did, stumbling on for a few years before folding with millions of Euro lost.

Later, by McCarthy's telling, Steorn shifted its business more towards R&D. They worked on a project for a client fighting credit card fraud, which led to later work for Irish and British law enforcement organizations, mostly related to video surveillance and machine vision. For Ireland's DART commuter train, for example, Steorn attempted a system that would automatically detect aggressive behavior in platform surveillance video---though it was canceled before completion. These projects floated Steorn through the worst of the dotcom bust and supported an in-house R&D operation that trialed a variety of ideas, including something like an early version of California's modern electronic license plates.

I am not sure how well this account squares with Steorn's public presence. Steorn was incorporated in July of 2000, and when their website appeared in 2001 it advertised "programme management and technical assessment advice for European companies engaging in e-commerce projects." The main emphasis of the website was on the project management experience of McCarthy and the company's CTO, Sean Menzies, formerly of recruiting and staffing firm Zartis. The primary claim was that Steorn could oversee projects to develop and launch e-commerce websites, reducing the risk and expense of jumping into the world wide web.

Steorn has been formed by individuals with significant experience in this area who have recognised that there is an absence in the market of genuinely impartial and objective advice as to the best way of implementing e-commerce ideas.

A few months later, Steorn's business became a little more general: "Steorn is an expert in the field of technology risk management." In August of 2001, Steorn launched its first concrete product offering, one that is curiously absent from McCarthy's later stories of the company. Not only would Steorn supervise technology projects, it had partnered with an insurance company to underwrite surety bonds on the completion of those projects.

Steorn had a proprietary program management methodology which they called GUIDE. Frustratingly, although GUIDE is always styled in all caps, I have not found any indication that it stood for anything. What I do know is that GUIDE "offers predictability, visibility, control and results," and that it has some relation to a project planning methodology called DIRECT.

In all truth, while Steorn's 2001 websites are laden with bespoke buzzwords, the project and risk management services they offer seem unremarkable in a good way. Probabilistic project scheduling models, checkpoints and interim deliverables, "an approach which is systematic yet sufficiently flexible." The only way that Steorn's services seem foolish is the extent to which these methods have been adopted by every technology project---or have become so dated or unfashionable as to appear obsolete.

This was 2001, though, and many established companies found themselves plunging into the deep end of the online economy for the first time. No doubt the guidance of an outside expert with experience in online projects could be valuable. But, then, I am a sympathetic audience: they were hiring technical architects. That's my day job! The benefits package was, they said, best in class. Alas, the conditions that may necessitate my own move to Dublin come more than twenty years too late.

Despite McCarthy's description of their multiple projects in security, surveillance, and machine vision, Steorn never seems to have advertised or publicized any work in those areas. In 2003, their website underwent a redesign that added "services for technology inventors" and advertised a subsidiary web hosting provider called HostsMe. Perhaps my view is too colored by the late-2000s context in which I came to understand the internet industry, but pivoting to the low barrier of entry, low chance of success shared hosting market feels like a move of desperation. In 2004, the website changed again, becoming a single page of broken links with a promise that it was "being redesigned."

For this entire period, from late 2001 after the announcement of their risk underwriting service to the company's complete swerve into energy from nothing, I can find no press mentions of Steorn or any of their projects. McCarthy seems to suggest that Steorn had a staff of engineers and technicians, but I struggle to find any names or details. Well, that was a long time ago, and of course the staff are not all listed on LinkedIn. On the other hand, one of McCarthy's investors, shown a laboratory full of white-coated scientists, later came to suspect they were temps hired only to create an appearance.

This is not to accuse McCarthy of lying. Some details definitely check out. For example, their project with Fraudhalt to detect tampering with ATMs via machine vision is the subject of a patent listing Sean McCarthy's name. But I do think that McCarthy is prone to exaggerating his successes and, in the process, muddling the details. When it comes to Steorn's most famous invention, McCarthy and early employee and project manager Michael Daly agree that it was an accidental discovery. Both tell detailed stories of a project for which Steorn purchased and modified small wind turbines. The trouble is that they are not describing the same project. Two different customers, two different contexts, two different objectives; but both lead to the Orbo.

Other attempts at checking into the particulars of Steorn's early business lead in more whimsical directions. To Whyte, McCarthy described a sort of joke project by Steorn to create an "online excuse exchange" called WhatsMyExcuse.com. I can't find any evidence of this project, although there's no particular reason I would. What I can say is that some ideas are evergreen: four months ago, a Reddit user posted that they "built a tiny excuse generator for fun." It's live right now, at whatsmyexcuse.com.

In any case, the nature of Steorn around 2003 remains mysterious to me. McCarthy's claims to have been a regional expert on credit card fraud are hard to square with the complete lack of any mention of this line of business on Steorn's website. On the other hand, Steorn had a confusing but clearly close relationship with Fraudhalt to such an extent that I think Steorn may have gone largely dormant as McCarthy spent his time with the other company.


After five years of near silence, a period that McCarthy characterized as modest success, Steorn captured the world's attention with what must be one of the boldest pivots of all time. An August 2006 full-page ad in the Economist began with a George Bernard Shaw quote, "all great truths begin as blasphemies." "Imagine," it reads below, "a world with an infinite supply of pure energy. Never having to recharge your phone. Never having to refuel your car."

Steorn's website underwent a complete redesign, now promoting the free energy device they dubbed the Orbo. A too-credulous press went wild with the story, especially in Ireland where Steorn's sudden prominence fit a narrative of a new Irish technology renaissance. The Irish Independent ran a photo of McCarthy kneeling on the floor beside a contraption of aluminum extrusion and wheels and wiring. "Irish firm claims to have solved energy crisis" is the headline. The photo caption reminds us that it "has been treated with some skepticism in the scientific community because it appears to contravene the laws of physics."

I will not devote much time to a detailed history of the development of the Orbo; that topic is much better addressed by Barry Whyte's book, which benefits greatly from interviews with McCarthy. The CliffsNotes 1 go something like this: Steorn was working on some surveillance project and wanted to power the cameras with small wind turbines. The exact motivations here are confusing since, whichever version of the story you go with, the cameras were being installed in locations that already had readily available electrical service. It may have been simple greenwashing. In any case, the project led Steorn to experiment with modifications to off-the-shelf compact wind turbines.

At some point, presumably one or two years before 2006, it was discovered that a modified turbine seemed to produce more energy in output than it received as input. I have not been able to find enough detail on these experiments to understand what exactly was being done, so I am limited to this vague description. Steorn at first put it out of mind, but McCarthy later returned to this mystery and built a series of prototypes that further demonstrated the effect. A series of physicists and research teams were brought in to evaluate the device. This part of the story is complicated and details conflict, but an opinionated summary is that entrepreneurially inclined technology development executives tended to find the Orbo promising while academic physicists found the whole thing so laughable that they declined to look at it. Trinity College contained both, and in 2005 McCarthy's discussions with a technology commercialization team there led to a confrontation with physicist Michael Coey. From that point on, McCarthy viewed the scientific establishment as resistance to be overcome.

McCarthy determined that the only way to bring Orbo to the world was to definitively prove that it worked, and to do so largely without outside experience. For that, he would need equipment and a team. After long discussions with a venture capital firm, McCarthy seems to have balked at the amount of due diligence involved in such formal investment channels. Instead, he brought on a haphazard series of individuals, mostly farmers, who collectively put up about 20 million euro.


One of the curious things about the Orbo was Steorn's emphasis on cell phone batteries. I suppose it is understandable, in that ubiquitous mobile phones were new at the time and the hassle of keeping them charged was suddenly a familiar problem to all. It also feels like a remarkable failure of ambition. Having invented a source of unlimited free energy, most of what McCarthy had to say was that you wouldn't have to charge your phone any more. Sure, investment documents around Steorn laid out a timeline to portable generators, vehicles, standby generators for telecom applications... a practical set of concerns, but one that is rather underwhelming from the creator of the first over-unity energy device.

It is also, when you think about it, a curious choice. McCarthy later talked about pitching the device to Apple's battery suppliers. The timeline here is a bit confusing, given that the iPhone launched after the period I think he was discussing, but still, it makes a good example. The first-generation iPhone was about a half inch thick, hefty by today's standards but a very far cry from Steorn's few publicized prototypes, which occupied perhaps two feet square. Miniaturization is a challenge for most technologies, with the result that pocket-portable devices are usually a late-stage application, not an early one. That's especially true for an electromechanical generator, one that involves bearings and electromagnets and a spinning rotor. Still, it was all about phones.

Steorn's trouble getting the attention of battery manufacturers led them to seek independent validation. Turned away, as McCarthy tells it, by university physics departments and industrial laboratories, Steorn came up with an unconventional plan. In the 2006 Economist advertisement, they announced "The Challenge." Steorn was recruiting scientists and engineers to form a panel of independent jurors who would evaluate the technology. You could apply at steorn.com. Nearly 500 people did.

The steorn.com of the Orbo era is a fascinating artifact. It has a daily poll in a sidebar, just for some interactive fun. You could apply for the scientific jury, or just sign up to express your interest in the technology. Most intriguingly, though, there was a forum.

This is some intense mid-2000s internet. There was a time when it just made sense for every website to have a forum; I guess the collective internet industry hadn't yet learned that user-generated content is, on average, about as good as toxic waste, and that giving the general public the ability to publish their opinions under your brand is rarely a good business decision. Still, Steorn did, and what few threads of the steorn.com discussion forum survive in the Internet Archive are a journey through a bygone era. Users have the obvious arguments: is Orbo real, or a scam? They have the less obvious arguments, over each other's writing styles. There is a thread for testing BBCode. A few overactive users appear seemingly everywhere, always hostile and dismissive. An occasional thread will become suddenly, shockingly misogynistic. This is, of course, the soup we all grew up in.

The forum exemplifies one of the things that I find most strange about Steorn. The company was often obsessively secretive, apparently working on the Orbo technology in quiet for years, but it was also incredibly bold, choosing a full-page ad for a publicity stunt as their first public announcement. McCarthy was well aware that the scientific establishment viewed him as a fraud, by this time individual investors seemed to be growing skeptical, and yet they had a discussion forum.


The late years of Steorn are best summarized as a period of humiliation. In 2007, the company promised a public demonstration of the Orbo at a science museum in London. I remember following this news in real time: the announcement of the demonstration triggered a new round of press coverage, and then it was canceled before it even began. Steorn claimed that the lighting in the museum had overheated the device and damaged it. The public mostly laughed.

In 2009, Steorn arranged a second public demonstration, the one at the Waterways Visitors Centre in Dublin. This one actually happened, but the device clearly incorporated a battery and Steorn didn't allow any meaningful level of inspection. The demonstration proved nothing but that Steorn could use a battery to make a wheel spin, hardly a world-changing innovation. Not even a way to charge a phone.

Later that same year, The Challenge came to its close. The independent jury of experts, after meetings, demonstrations, experiments, and investigations, returned its verdict. The news was not good. Jurors unanimously approved a statement that Steorn had not shown any evidence of energy production.

When something is as overhyped and underdeveloped as the Orbo, when a company claims the impossible and still attracts millions, you sort of expect a dramatic failure. Surely, you cannot just claim to have created a source of free energy and then wander on through life as usual. Or, perhaps you can. In 2010, steorn.com gained the Developer Community, where you could pay a licensing fee for access to information on the Steorn technology.

Even the mobile phones were not abandoned. In 2015, Steorn mounted another demonstration, of a sort. A small Orbo device, now apparently solid-state, was displayed supposedly charging a phone in a Dublin pub. A lot of big claims are made and tested in pubs, but few with so much financial backing. Steorn had been reduced to a story told over beer and the internet: at the end of 2015, McCarthy announced via Facebook that an Orbo phone and phone charger would soon go on sale. The charger, the Orbo Cube, ran €1,200 and was due to ship within the month.

As far as I can tell, these devices were actually completely unrelated to the original Orbo. The "solid-state Orbo" was a completely novel design, to the extent that you would call it a design. Some units did ship, and reverse engineering efforts revealed an eccentric set of components including two alkaline 9v batteries and shrinkwrapped mystery cells that have the appearance of capacitors but are, supposedly, proprietary components relying on some kind of magnetic effect. It's hard to find much solid information about these devices; I think very few were made, and they shipped mostly to "friendly" customers. It is said that many of them failed to work at all, the result of an acknowledged quality control problem with the clearly handmade interior. There is a lot of electrical tape involved. Like a lot. And some potting compound, but mostly electrical tape. The two I've found detailed accounts of failed to charge a phone even when new out of the box.

Through this whole period, Steorn was still McCarthy's day job, at least on paper. But he had turned towards a new way to make money: poker. Starting around 2015, McCarthy became a professional poker player, apparently his main form of income to this day. He announced that Steorn would be liquidated in 2017; it was finally dissolved in 2024.


History is replete with perpetual motion machines. There are rumors of some sort of machine in the 8th century, although doubts about the historicity of these accounts lead to the more difficult question of what makes a perpetual motion machine "real." Many of the putative free energy sources of history have involved some configuration of magnets. I can understand why: magnets offer just the right combination of the mysterious and the relatable. They seem to work as if by magic, and yet, they are physical objects that we can hold and manipulate. You can see how someone might develop an intuitive understanding of magnetic behavior, extend it to an impossible idea, and remain quite convinced of their discovery.

If perpetual motion machines are a dime a dozen, why does the story of Steorn so fascinate me? I think that it has many parallels to our situation today. The Orbo is not merely the product of a deluded inventor, it's the product of a period of Irish economic history. Whyte's book is focused primarily on this aspect of the story. He articulates how the overstuffed, overhyped Irish economy of the 2000s, a phenomenon known as the Celtic Tiger, created an environment in which just about any claim could raise millions---no matter how outlandish.

And here we are today, with some of the largest components of our economy run by people who fashion themselves as technical experts but have backgrounds mostly in business. They, too, are collecting money on the premise that they may build a world-changing technology---AGI.

Sure, this analogy is imperfect in many ways. But I think it is instructive. Sean McCarthy wrote of Steorn's early days:

It was just a gold rush: if you weren't doing web, you didn't exist. So, a banana import-export business needs to invest millions in e-commerce. They probably did need to invest, but it was driven, in my opinion, by all of these companies driving their share price up by doing something on the web rather than by any real business cases.

The dotcom boom and bust was an embarrassment to the computer industry, and it was also a pivotal moment in its growth. E-commerce, the graveyard of many '90s businesses, is one of the pillars of modern life. The good days of easy money can be an incubator for important ideas. They can also be an incubator for idiocy.

Whenever I read about some historic scam, I always wonder about the people at the top. Was Sean McCarthy a fraud, or was he deluded? Did he play up the Orbo to keep the money coming in, or did he really believe that he just needed one more demonstration?

Whyte's book takes a strong position for the latter, that McCarthy had the best intentions and got in over his head. He still believes in Orbo, to this day. Of course, Whyte is far more sympathetic than I find myself. The book is almost half written by McCarthy himself in the form of its lengthy excerpts from his statements. I am more cynical: I think that McCarthy must have known things were going sideways well before he shipped out broken power banks with nine volt batteries covered in electrical tape. I don't think he started out that way, though.

I think he started out with the best intentions: the leader of a moderately successful tech company, one that wasn't getting much business but had ready access to capital. He went fishing for ideas, for opportunities for growth, and he hooked the biggest of them all. A world-changing idea, one so profound that he himself struggled to understand its implications. An impossible dream of not having to charge his phone. To sustain the company, he had to bring in money. To bring in money, he had to double down. Perpetual motion was always just around the corner.

I believe that McCarthy is a con man, in that he has too much confidence. It's a trait that makes for a good poker player. McCarthy saw something remarkable, perhaps an error in measurement or an error in understanding. He placed a bet, and spent the next decade pushing more money into the pot, trying to make it work out. I would hope that our industry has learned to be more cautious, but then, we're only human. We watch the wheels spin, faster and faster, and we convince ourselves that they can spin forever.

  1. CliffsNotes was a primitive technology for summarizing large texts, which relied mostly on humans to do the summarization. Since summarizing texts has since become the primary focus of the US economy, the process has become more automated but does not necessarily produce better results.

The Ascent to Sandia Crest II

Where we left off, Albuquerque's boosters, together with the Forest Service, had completed construction of the Ellis Ranch Loop and a spur to the Sandia Crest. It was possible, even easy, to drive from Albuquerque east through Tijeras Pass, north to the present-day location of Sandia Park, and through the mountains to Placitas before reaching Bernalillo to return by the highway. The road provided access to the Ellis Ranch summer resort, now operated by the Cooper family and the First Presbyterian Church, and to the crest itself.

The road situation would remain much the same for decades to come, although not due to a lack of investment. One of the road-building trends of the 1920s and 1930s was the general maturation of the United States' formidable highway construction program. The Federal Aid Highway Act of 1921 established the pattern that much of western road building would follow: the federal government would split costs 50:50 to help western states build highways. This funding would bring about many of the US highways that we use today.

A share of the money, called the forest highway fund, was specifically set aside for highways that were in national forests or connected national forests to existing state highways. By 1926, the Federal Lands Transportation Program had taken its first form, a set of partnerships between the Bureau of Public Roads (later the Federal Highway Administration) and federal land management agencies to develop roads for economic and recreational use of federal land. For the Forest Service of the era, a forest without road access was of limited use. The following years saw a systematic survey of the national forests of New Mexico with an eye towards construction.

The Federal Aid Highway Act presaged the later interstate freeway program in several ways. First, the state-federal cost sharing model would become the norm for new highways and drive the politics of road construction to this day. Second, despite the nominal responsibility of the states for highways, the Act established the pattern of the federal government determining a map of "desirable" or "meritorious" road routes that states would be expected to follow. And finally, the Act enshrined the relationship between the military and road building. The first notional map of an integrated US highway system, developed mostly by the Army for the Bureau of Public Roads, was presented to Congress by esteemed General of the Armies John Pershing. This plan, the Pershing Map, is now over 100 years old but still resembles our contemporary freeway system.

The Great Depression did not generally drive construction, but roads would prove an exception. Roosevelt's New Deal, and the drier language of the Emergency Relief Acts of the early 1930s, provided substantial federal funding for construction and improvement of highways. The impact on New Mexico was considerable. "Surveys, plans, and estimates of Navajo Canyon Section of State Highway No. 2 within the Carson National Forest, south of Canjilon, and survey, plans, and estimates on State Highway No. 12 between Reserve and Apache Creek on the Datil-Reserve Forest Highway route." The lists of highway projects in newspapers become difficult to track. The forest highway program's budget doubled, and doubled again. "The building of new Forest Highways in the National Forests of New Mexico will be rushed as rapidly as possible... to open up the National Forests to greater use and protection from fires."

The papers were optimistic. The Carlsbad Current-Argus ran a list of Forest Highway efforts in 1932; the headline read simply "Forest Service To Solve Unemployment."


The depression-era building campaign left the Ellis Ranch loop on the same route, but saw much of the route improved to a far higher standard than before. Graveling, oiling, and even paving stabilized the road surface while CCC crews built out embankments on the road's mountainside segments. By 1940, much of the loop had been incorporated into New Mexico Highway 44---a continuous route from Cedar Crest to Aztec.

Former NM 44 exemplifies changes to the loop's northern segment. Much of NM 44 is now known as NM 550, a busy highway from Bernalillo and alongside the far reaches of Los Lunas that eventually becomes one of the major routes in the state's northwest corner. The connection to Cedar Crest, though, is now difficult to fathom. The former route of NM 44 continued east of the freeway through Placitas and along NM 165, over the mountains to become NM 536 at Balsam Glade and then NM 14 at San Antonito. NM 14 is a major route as well, and NM 536 is at least paved---but as discussed in part I, NM 165 is now a rough unpaved road appealing mostly to OHV users and impassable in winter 1.

Sections of NM 165 do seem to have been paved, mostly closer to Placitas, but maintenance as a main highway had ended at least by 1988 when the NM 44 designation was retired.

This is the major reason for my fascination with the Ellis Ranch Loop: today, it is hardly a loop---the whole loop is still there, and you can drive it at least in summer, but the segment of the highway in the north part of the Sandias, what is now 165, doesn't feel at all like a continuation of the same loop as NM 536. On the ground, when driving up 536, it's clear that the "main" route at Balsam Glade is to follow the curve west onto Sandia Crest Scenic Highway. And yet, the Scenic Highway was only a spur of the original loop.

This oddity of history is still reflected in the road designations and, as a result, modern maps. NM 536 and NM 165 are state highways. Sandia Crest Scenic Highway, despite the name, is not. As you zoom out, Google Maps hides the Scenic Highway entirely, depicting the former Ellis Ranch Loop as the main route. This is very different from what you will find if you actually drive it.

This halfway state of change, with NM 165 still designated as a highway but effectively abandoned and the more major Scenic Highway not designated, partly reflects the bureaucratic details of the 1930s. NM 165 was considered part of a proper inter-city route; the Scenic Highway dead-ends at the crest and thus never really "went anywhere." But there is more to the story: it also reflects the ambitions, errors, and shifting priorities of the mid-century.


The first formal ski area in the Sandia Mountains opened in 1936, developed by Robert Nordhaus's Albuquerque Ski Club. The ski area, then called La Madera for the canyon that tops out at the ski slopes, became a center of southwestern wintersports. In the following decades, the ski area would displace Ellis Ranch as the main destination in the mountains.

World War II disrupted skiing and recreation more broadly; progress on Sandia Mountain development stalled into the 1940s. The decline wouldn't last: WWII raised the possibility of ground combat in Northern Europe or even a need to defend the inland United States against invasion. Military planners considered Denver the best choice for an emergency relocation of the national capital in response to an Atlantic assault. While never a formal policy, the concept of Denver as "national redoubt" survived into the Cold War and directed military attention to the Rocky Mountains.

From 1939 into the 1940s, the National Ski Patrol lobbied for development of ski-mounted military units to the extent that it became a de facto auxiliary of the US Army. Ultimately only one Army "Mountain Division" would be established, but its members---recruited and trained by the Ski Patrol itself---went on to an instrumental role in securing the surrender of German forces in Austria.

While the Alpine Light Infantry were never a large part of the Army, they brought public attention to skiing and mountain recreation. Military support enabled expansion of the Ski Patrol's training programs, and returning soldiers had stories of skiing in the Alps. Much like aviation, skiing surged after the war, elevated from a fairly obscure extreme sport to one of the country's best known forms of recreation.

In 1946, the La Madera Ski Area installed what was then the longest T-bar lift in the country. It was built by Ernst Constam, a Swiss engineer who invented the modern ski lift and became trapped in the United States when the war broke out during his business trip. Constam found himself an American by accident, but developed a fierce allegiance to his new home and soon worked for the Army. He participated in the training of the Mountain Division, contributed his understanding of European alpine combat to military intelligence, and even built the first military ski lift: a T-bar at Camp Hale in Colorado, now called Ski Cooper.

During the 1950s, the ski area underwent ownership changes and some financial difficulty. Nordhaus operated the area profitably, but needed cash to build the infrastructure for continued growth. That cash came in the form of Albuquerque businessman and real estate developer Ben Abruzzo, who is remembered first as a famous balloonist (responsible for much of Albuquerque's prominence in hot air ballooning) but was also an avid skier. Nordhaus and Abruzzo formed the Sandia Peak Ski Company and construction was soon underway.

You could access the base of the ski area from the Ellis Ranch Loop, but it was slow going, especially in winter. The trip from Albuquerque reportedly took hours. Ski area expansion required easier access. After negotiations related to the ski area permit expansion, the Forest Service paved the entirety of Ellis Loop from the base of the mountains to the ski area. This was the first time that a large section of the road in the mountains was fully paved, and it appears to be the Forest Service's decision to pave from the ski area southeast to San Antonito, rather than north to Placitas, that sealed the fate of NM 165. From that point on, the "main" way from Albuquerque to the Mountains was via Tijeras, not via Placitas.


The late 1950s were a busy time in the Sandias. The ski area was not the only growing tourism attraction; a post-war resurgence in travel and automobilism renewed the pro-tourism boosterism of the 1920s. Albuquerque business interests were back to lobbying the Forest Service for tourist facilities, and Abruzzo and Nordhaus's efforts at the ski area inspired confidence in a bright future for mountain recreation. All that was needed were better roads.

Sometime during the 1950s, the names "Ellis Ranch Loop" or "Ellis Loop" gave way to "Sandia Loop." Despite the new name, the road kept to its old history: the state and the Forest Service announced their intent to complete a project that, as far as they were concerned, had been in intermittent progress since the 1930s: paving and widening the entire route as a two-lane highway.

The project had never been completed in part because of the difficult conditions, but there were politics at play as well. Federal funding, critical to completing highway projects, was allocated based on the recommendations of committees at each level of government. The development of "secondary highways," a category that included NM 44, fell to county committees for prioritization. Despite the emphasis that Bernalillo County put on the Sandia Loop, Sandoval County was less motivated. Another nail in NM 165's coffin came in 1958 when the Sandoval County Road Committee definitively stated that their budget was quite limited and the Sandia Loop was not a priority. As far as I can tell, serious efforts to pave the existing road between Balsam Glade and Placitas ended at that meeting. A few years later, the state highway department announced that the project had been canceled.

Despite the fate of the loop's northern half, the rest of the Sandia mountain roads enjoyed constant attention. The southern portion of the loop was widened (by four feet, three inches) with new road base laid. The road to the crest itself, despite the involvement of the state highway department in the original construction, was decisively the problem of the Forest Service---at least, this was the point repeatedly made by the state highway department as Albuquerque interests campaigned for its improvement. Still, boosters did not have to wait for long. In 1958 the Forest Service let contracts for improvement of the Sandia Crest road as an "anti-recession project."

While many reports describe the late-1950s Sandia Crest project as widening and resurfacing, some changes to the alignment were made as well. The most severe of its many switchbacks, at the Nine Mile Picnic Area, was reworked for fewer curves. The original alignment is still in use for access to the picnic area.

In the meantime, crews contracted by the Forest Service completed yet another round of improvements to the north section of the loop road. This project spanned from a "preliminary engineering survey" in 1958 to the point where the gravel base was largely complete in 1959, but while paving of the entire route was planned it does not seem to have happened. The Albuquerque business community felt that there was good reason for the paving project: the Albuquerque Tribune reported in 1958 that "upwards of 100,000 cars annually use the Loop Road in making the punishing trip up to La Madera ski run and the Crest." At the same time, though, Albuquerque's once unified push for mountain access started to face political headwinds.


The 1960s were a different world, but some things stayed the same: Bob Cooper advertised cabins for lease at Ellis Ranch, and the business community wanted to see an improved, paved loop highway around the Sandias. The original completion of the Ellis Ranch Loop appeared in the Albuquerque Tribune's "30 Years Ago" feature in 1960, alongside coverage of the growing debate over placement of the city's two upcoming interstate freeways.

Despite efforts by the Chamber of Commerce, Highland Business Men's Association, and Downtown Business Men's Association to restart the Sandia Loop project, the state Highway Department remained silent on the subject. This was the peak era of freeway construction, and the department had many construction problems on the I-25 corridor especially. The Sandia Loop remained a popular facility, though, recommended by the Mobil travel guide starting in 1962. That same year, the La Madera ski area, now called Sandia Peak, opened the "Summit House" restaurant at the top. Summit House would later be known as High Finance before closing for replacement by today's Ten 3.

This might be confusing to today's Burqueños, as the main attraction at Sandia Peak would not begin construction until 1964. The "Spectacular Sandia Peak Chairlift Ride" described in 1962 newspaper advertisements is the ski area's Lift #1, a chairlift that is no longer in service and slated for replacement with a mixed chair/gondola system in coming years. And yet, the restaurant and lift were operated by the "Sandia Peak Ski and Aerial Tram Co." This somewhat contradicts the "official" history of the Sandia Peak Tramway, but one would think that Nordhaus and Abruzzo must have already had a bigger vision for access to the mountaintop.

The full story of the Sandia Peak Tramway could occupy its own article, and perhaps it will, but the short version is this: Bob Nordhaus visited Switzerland, where he learned that ski areas were being made more accessible by means of gondola lifts. He thought that the same scheme would work in Albuquerque, and together with Abruzzo, mounted a long campaign for permitting and funding. The plan was ambitious and more than just a bit quixotic, but Nordhaus and Abruzzo were both big personalities with extensive business connections. In 1966, they opened what was then, and for many decades after, the longest aerial tramway in the world. It runs from the northeast fringe of Albuquerque directly up the western slope of the Sandias, delivering passengers to the very top of the ski area. The 3,819 foot climb, over 2.7 miles and just two towers, takes about fifteen minutes. It is far faster, of course, than the drive around and up the east side. And yet, in the fashion of Albuquerque, one of its biggest impacts on our modern city is a road: Tramway Boulevard, or NM 556, roughly one quarter of what was originally planned as a beltway highway around the city.

While the tramway opened the steep western slope to tourism, attention to the east side was reinvigorated. The Albuquerque business community had continued to lobby the Forest Service for new and better roads, and as the tramway went in the road boosters found success. The Forest Service circulated a draft plan that delighted some and terrified others: a brand new loop, the Crest Loop, that would replace the road from Balsam Glade to the crest and extend it straight up the very spine of the mountain, allowing views down both sides, until coming down the northern ridge to meet the existing Sandia Loop Road near Placitas. It would be a first-class skyline road, 24' wide pavement over almost exactly the route of today's crest trail.


Previous efforts at radical Sandia mountain road construction were held off by weather, funding, and war. For the Crest Loop, there was a new enemy: bighorn sheep.

Rocky Mountain Bighorn Sheep don't appear to have ever been common in New Mexico, but a small population around the Rio Grande Valley was completely wiped out by human impacts in the late 19th and early 20th centuries. Efforts to reintroduce bighorns started in the 1930s, with more translocated sheep in the 1940s and 1960s. These populations struggled severely, and despite the attempts there are no bighorn sheep in the Sandia Mountains today. Still, the population in the mid-'60s was likely the largest it would ever be, a hard-won victory that the state was inclined to protect.

Besides, Rachel Carson's Silent Spring was published in 1962, a landmark in the history of American environmentalism. The environmentalist movement wasn't as powerful as it would become, but it was certainly gaining momentum. It was not only the state Game and Fish Department that objected; community groups like the New Mexico Mountain Club also stepped up to oppose development along the ridge.

On Monday, August 8th, 1966, the Forest Service called a public hearing to be held at the East Central Branch of Albuquerque National Bank. To an audience of businessmen and community groups, regional forester William D. Hurst presented the official plan for the Sandias. The existing road from Balsam Glade up to the crest, the Crest Scenic Highway, would be significantly reworked. The old alignment had over a dozen switchbacks, the new alignment only four. The wider, straighter road would offer access to a completely new ski area at about the location of today's Capulin snow play area.

The Crest Loop would not make it quite all the way to the crest, instead turning north a bit to the east of (and below) the crest, where a parking lot and turnout area formed the main attraction. The highway itself no longer followed the ridge, a concession to the bighorn sheep interests, but instead paralleled it about 1/4 mile to the east. About 2.5 miles north of the crest, the highway would switch back and start its descent down the eastern face, through Las Huertas canyon, to meet the Sandia Loop before Placitas.

It was noted that, in the future, an additional road could be planned up the canyon above Sandia Cave for yet a third road all the way up the eastern side. At least you can't say they weren't ambitious.

The 1966 Crest Loop plan was presented by the Forest Service in a rather matter-of-fact way. Hurst told the press that he did not expect much objection, and apparently the presentation itself was well received. Still, Hurst's quip that the plan presented "the greatest good for the largest numbers of people" hinted that it was not uncontroversial, as did the fact that every newspaper article about the presentation made some reference to the "hubbub."

While the 1/4 mile shift away from the ridge had won the support of the state, it was not enough to please the New Mexico Wildlife and Conservation Association, and it won only tacit support from the New Mexico Mountain Club, which described its position to the papers as "wait and see." They had time: the Forest Service expected the project to take as much as ten years, even with surveying work starting immediately.

The Crest Loop was only part of a broader plan for Sandia recreation, one that responded to Albuquerque's rapid mid-century population growth. There were, for example, more than twice as many people in Albuquerque then as there had been before the war, and the Forest Service considered the nine picnic areas on the existing road to be overused. The new Crest Loop would enable 200 acres of new picnic grounds.

For all of the promise of the new Crest Loop, there was a curious omission: the view from the crest. The road's alignment 1/4 mile off the crest meant that the highest point on the road, the turnout with bathrooms and facilities, was on the other side of the ridge from Albuquerque. It would have views to the north and northeast, but not at all to the west or southwest. In other words, you would no longer be able to drive up to the crest and then look down on the city---getting the same view offered by the old road would require a hike.

This is a fascinating decision and one that, if you will allow me to editorialize, seems to reflect the Forest Service's process-focused approach to planning. The original Crest Loop plan would have placed the road directly on the ridge, probably providing one of the finest roadside views in the nation. Well, in actuality, there was always disagreement over the aesthetic merits of the plan. The Forest Service had always denied any intent to clearcut trees to improve sightlines, so it was likely that even the original ridgetop alignment would have had trees blocking the view along much of the route. The Forest Service even conceded to environmental groups by promising to keep the road off of the large clear bluff just north of the crest, the spot with the very best potential for automobile sightseeing.

The final proposed alignment, downslope from the ridge, had the view completely blocked on the west... and it was away from the windswept, rocky western face, decidedly in the forest. Even to the east, where the terrain dropped away precipitously, you would be unlikely to see anything other than the forest right in front of you.

Proponents said the Crest Loop would be a natural attraction like no other, with a 360 degree perspective over the Rio Grande Valley and the High Plains. Opponents said it would be a "green tunnel," so densely forested on both sides that it might as well have been down in the valley for all the sightseeing it would afford. They seem to have been right: as ultimately proposed, the "skyline" Crest Loop actually offered less of a view than the existing route.

What it would offer is capacity. The road was designed for traffic, intended to support visitors to the new picnic areas, the expanding Sandia Peak Ski Area, and the new ski area yet to be named. The new ski area penciled out as 1,000 acres, larger than Sandia Peak. A new snow play area was expected to attract thousands of non-skiing visitors. And parking---parking was a problem, the Forest Service reported, with just 750 spaces at the bottom of the ski area. The Forest Service intended to provide 800 spaces at the new snow play area, and the bypassed switchbacks of the old crest road would themselves be repurposed as new parking areas.


As far as the Forest Service was concerned, the Crest Loop was a done deal. They had already committed $600,000 a year for the next several years to fund surveys and planning, and expected more federal funding to match state support. And yet, as always, progress was slow. 1967 passed with little mention of the road. In 1968 the Albuquerque Chamber of Commerce must have been getting nervous, as they passed a resolution in support of its completion, for no reason in particular.

In 1969, an interesting new proposal for the Crest Scenic Highway put it back into the papers: a race. Albuquerque advertising firm Eicher and Associates proposed to one-up the Pikes Peak by hosting a Labor Day hill climb. The state Highway Commission approved the plan, but the race stalled when a State Police captain pointed out that state law forbid racing on public roads---and the Attorney General issued an opinion that the Highway Commission had no authority to grant an exception. Nothing ultimately came of the Sandia Peak Climb, or at least, nothing other than a series of headlines and another round of objections to planned improvements.

Central to the opponents' arguments was a disagreement over the nature of forest recreation. The New Mexico Wildlife and Conservation Association advocated for the establishment of wilderness areas covering much of the Sandias, precluding further development. Their proposal actually accommodated the Crest Loop by putting it just off of the edge of the wilderness area covering the western face, but Cibola National Forest Supervisor George Proctor was still hesitant: "Proctor made clear he did not rule out wilderness use for the land, but only wanted time to weigh the proposals against possible other needs---as for future additional tramways or recreation areas." Perhaps one tramway and two highways was not enough; we can't know what the future will hold.

The conservationists said exactly what you would expect:

It would be nice to have complete access for large volumes of people---since so many have seen nothing like the top of the Sandias---said [NMCWA director] McDonald, but when large volumes come, then the beauty of the area often ceases to exist. 2

While the Forest Service had been quiet, it had not been entirely idle. Surveying for the route was nearly complete, and the Forest Service contracted the first phase of the work: clearing trees for the new route, starting in the immediate area of the crest. A bit over three miles of right of way were clearcut, mostly along the "top" of the road a quarter mile from the crest. The new upper alignment of the existing road was cleared as well, with wider switchbacks well to the north and south of the current road.

Almost immediately after the clearing, Forest Service funding for the project unexpectedly dried up. Work was discontinued until new funding for the project could be secured, a delay that was not expected to last long but that fell at the worst possible time.

The Crest Loop had been abstract, seen only in the form of Forest Service presentations and hand-drawn newspaper illustrations. In that form, it had attracted considerable support from the business community and only muted objections from conservationists. But then the trees fell. Acres of clearcut forest, so close to the ridge and so visible from the road, turned the highway plan into an eyesore. As they sat, unchanged, month after month, an eyesore turned into a scandal.


By the summer of 1970, the Wildlife and Conservation Association, the New Mexico Conservation Coordination, and the National Wildlife Federation sent letters opposing the project. State legislators became involved. There were accusations of treachery, with Forest Service employees allegedly lobbying business leaders to support the project while on the clock, and the district forester holding secret meetings with state officials to coordinate.

Over the following year, opposition stacked up. US Rep. Manuel Lujan, a prominent New Mexico politician, wrote to the director of the Forest Service with his objections. NM Attorney General David Norvell joined with a press release asking the Forest Service to reconsider the plan. These objections were actually rather weak: Lujan, in particular, mostly just thought the proposed road was too expensive, a waste of money considering that there was already a highway connecting the same places. Even environmental groups tempered their criticism, with at least two clarifying that they objected only to the highway along the ridge, and not to improvements to the road between Balsam Glade and the crest.

The Forest Service's motivations can be difficult to understand. The whole Crest Loop plan had reportedly begun with a presidential directive to identify opportunities for new recreational facilities, and it had certainly had the strong support of the business community in the 1960s. At the turn of the 1970s, the situation had noticeably changed. Most press coverage of the project focused on the opposition, and Albuquerque's business clubs had fallen silent, perhaps averse to the controversy. There's not much to say for the Forest Service's lobbying effort beyond inertia: as far as they were concerned, they had committed to the plan in 1966 and were now simply executing on an old decision.

The federal government, once it gets into motion, does not easily change directions. For Forest Service staff, careers, or at least plans to climb the corporate ladder, must have been staked on the success of the Crest Loop. Even as the budget escalated from $3 million to $3.5, then as high as $5 million, the Forest Service remained committed to finishing what it had started. "[Opponents] claim the Forest Service won't heed their cries of distress," the Albuquerque Journal summarized. "The Forest Service says the decision to build the road was made a long time ago."

In May of 1971, the controversy broke out into demonstrations. This was no Vietnam war; anti-Crest Loop protests had a more whimsical quality. On Sunday the 16th, around two hundred demonstrators went up the Crest Scenic Highway and gathered in the cleared path of the future road. The plan, originally, had been to replant the clearing with new trees, a slow way of walking back the road's likewise slow advance. This threat of minor vandalism was met by a threat of minor reprisal when the Forest Service informed the press that unauthorized planting of trees in a National Forest could be charged as a misdemeanor.

The result, a mountaintop clash between Albuquerque's most motivated environmentalists and the force majeure of the Forest Service, must be one of the stranger moments in Albuquerque's environmental history.

One group---led by Dr. Gerald Bordin---advocated "peaceful protest." Another, led partly by Jesse Cole, planted three symbolic trees. Finally, two individuals---Florencio Baca and Marvin Price---began a private discussion that turned into a simulated panel discussion on the negative aspects of the proposed road....

Few persons started the planned three-mile walk down the graded Ellis Loop in the 10,000-foot altitude. Several protesters, disillusioned for the lack of trees to plant, began to leave by noon.

Bordin produced a cardboard replica of a pine tree, dissected by a highway, and placed it on the proposed road. "This type of planting is legal," he said; and another protester, watching two others dig a hole for a tree a few yards away, asked Bordin "Where's your tree, man?" 3


On the first day of 1970, president Richard Nixon signed the National Environmental Policy Act. Nixon was famously a mixed bag on environmental issues; his creation of the Environmental Protection Agency is one of the largest single steps forward in the history of US conservation, but he also pinched pennies to a degree that smothered many environmental programs. As it would turn out, the fate of the Crest Loop hinged on the interplay of these two sides of Nixon.

To satisfy its obligations under NEPA, the Forest Service completed an environmental impact assessment of the proposed Crest Road in 1973. It was now almost five years since the clearing of part of the right of way, but the only progress made had been on paper. Unsurprisingly, the impact assessment found the road to be feasible. "The highway is completely on the east slope and won't even be able to be seen from Albuquerque," the forest supervisor explained.

The environmental analysis spurred another series of exchanges between the Forest Service and environmental interests. The state Environmental Planning Commission filed comment, suggesting that the Crest Loop would better be replaced by a system of two one-way roads at different altitudes. One thing the EPC did agree on was the necessity that something be done. Like the groups who had assured their support for improving the road to the crest in the years before, the EPC recommended against taking no action. The existing crest road was unsustainable, they found, and needed to be improved in some way.

During the plant-in, one organizer suggested letters to the editor. Perhaps he was heard, as letters in opposition stacked up during 1973. Frank Marquart wrote that the highway would be good for General Motors and no one else. A letter from L. Graham got the simple headline "Crest Road Not Necessary." Among the growing public protest, the gears of government were turning: the Bernalillo County Commission made a resolution asking the Forest Service to stop.

Joe Whiton attended a hearing on the matter in Tijeras, leaving a Journal reporter with a memorable quote:

The Crest Road is a juggernaut already aimed, the switch is on and it is going to be fired no matter what we say. I have a feeling it is going to happen anyway and I don't think it should. 4


Despite the Forest Service's momentum, two significant roadblocks had arisen in the Crest Loop plan: first, the Forest Service was having a hard time getting to a final environmental impact statement on the project. Several drafts had been circulated, and from what I can tell one was even published as final, although it only covered a portion of the work rather than the entirety. The National Environmental Policy Act has many problems and often fails to achieve its aims, but it does have its virtues: while NEPA does not necessarily require agencies to listen to public feedback, it does require them to solicit it, and the hearings and comment periods that make up federal environmental policy gave the Crest Loop's opponents a platform on which to organize.

Second, there was President Nixon. Nixon had generally supported the Federal Lands Highway Program, under which the Crest Loop now fell, but his efforts to reduce federal spending through impoundment quickly became controversial. A fight with Congress over changes to the federal highway funding model in 1972 led to a lapse in highway funding, a situation that became even more chaotic as Nixon specifically refused funding for a long list of highway grants. The funding situation probably didn't kill the Crest Loop, but it delayed further progress during the 1970s, contributing to both the sense of scandal over the lack of progress and to the opposition's developing power.

In 1973, the Forest Service produced new draft environmental documents covering four variant plans, which other than the no action alternative all involved some form of new highway. This was, as it turns out, the last gasp of the Crest Loop: in 1975, over a decade after planning first started, the Forest Service released a new environmental impact statement covering the entire Sandia mountain recreational plan. Its alternatives included new picnic grounds, bathrooms, and campgrounds. It included various improvements to the Crest Scenic Highway. But more important is what it did not include: any mention of the Crest Loop.

In the following years, the Forest Service would release further environmental and planning documents on the remaining scope of work, which was only the improvement of the road between Balsam Glade and the Crest. A 1976 impact statement covered three alternatives: the first would use the area already cleared for realignment, the second would keep the old alignment, and the third would finish realignment of the entire route, as included in the original Crest Loop plan. Based on the Albuquerque Tribune's count of public comments, Burqueños overwhelmingly favored the second option---keeping the existing route. While the road has been widened and resurfaced over the years, it remains on its original, winding route.


In 1927, roads reached Sandia Crest. Today, just shy of one hundred years later, the drive to the crest is essentially the same as this original route. It's all paved now, it's wider, and the curves have softened. It is still, by our modern standards, a rough road. The biggest change from the long-ago Ellis Ranch Loop to today's forest highways is actually a loss: the reduction of NM 165 to a minor, unpaved road. One might say, then, that progress since 1927 has been backwards. We are less ambitious than we once were.

On the flip side, the ambitions of the mid-century are so difficult to square with our modern sense of environmental preservation. The Forest Service dreamed, at times, of thousands of parking spaces, of cabins, a resort. We now value conservation over improvement, wilderness over access. It almost seems like it has always been that way---but it hasn't. This is a modern idea in land management, an innovation of the post-war era and the American environmental movement.

The old tensions between tourist promotion and wilderness preservation have never gone away. In 2023, Mountain Capital Partners signed a joint venture with the Sandia Peak Ski Company. The Sandia Peak Ski Area has barely opened for the last five years; climate change has shortened the season so severely that the Sandias can go all winter without enough snow to ski. Mountain Capital Partners now hopes, through artificial snowmaking, to bring the ski area back to life. Still, it's clear that snow will never be enough: ski areas are, in general, facing tough times.

Sandia Peak Ski Company has developed an ambitious plan for a "four season" recreational destination. It called for a mountain roller coaster, mountain bike trails, and a complete replacement of Lift #1. The mountain coaster has already been abandoned, having attracted more environmental controversy than the Forest Service was prepared to handle.

In aerial images, you can still clearly see the path of the Crest Loop, at least to the first switchback where 1968 clearing work ended. Some of the modern trails follow the unused highway rights of way. They are incongruously described in some Forest Service documents as firebreaks, a function that they do serve but were never intended for. I get the impression that even some of the Forest Service staff have forgotten the entire Crest Loop story. Well, it's one of many things about the Sandias that have been forgotten.

Ellis Ranch is now hardly remembered, up a canyon from a picnic area. It is once again difficult to access, past the part of the former Ellis Loop that is still paved. The Crest Loop was canceled in its early stages, and the notional second highway was never even planned.

But Ellis Ranch Loop Road is still there. Millions of people drive up it every year. They look down on the city, and marvel at the view. I wonder how many realize---that the climb started so long ago, and that it has still never been finished. I think that it's better this way. A summit should always be an achievement, so easy to see, but so hard to reach. We get used to living in the shadows of mountains.

  1. Impassable is of course a subjective term. Last time I had it in mind to attempt NM 165 with snow on the ground, about two years ago, I started from the top at Balsam Glade. At least, that was my plan... on arrival at Balsam Glade I found a small crowd of people formed around an F-350 that had only gotten perhaps 50' down 165, a steep hill at that spot, before becoming unable to get back up. I think that a smaller vehicle would have fared better but this was an obstacle that did not seem likely to clear up quickly. I abandoned the attempt.

  2. Albuquerque Journal, 1969-07-20 p. 37

  3. Albuquerque Journal, 1971-05-16 p. 1

  4. Albuquerque Journal, 1974-02-15 p. 45

Error'd: Will You Still Need Me?

... when I'm eight thousand and three? Doesn't quite scan.

Old soul jeffphi hummed "It's comforting to know that I'll have health insurance coverage through my 8,030th birthday!"

Scififan Steve muttered "I asked Copilot if Tom Baker and Lalla Ward were the same age. Apparently, they have been traveling on different timelines, so Copilot claims they are the same age despite having birthdays 17 years apart. Who knew?" It's a timey-wimey thing.

An anonymous Aussie announced "I was trying to look up a weather forecast for several weeks in advance as I'm attending an event on Saturday 22nd November. Apparently the Weather Channel has invented its own calendar (or decided there are too many days in the year), as while the 1st of November was a Saturday and the 22nd of November should also fall on a Saturday, the Weather Channel has decided those dates fall on a Friday.
Rosanna is in Melbourne, Australia. Temperatures are displayed in Celsius." Rosanna is rated the 99th most liveable suburb of Melbourne, probably due to the aberrant calendar.

Beatrix W. wants to relocate to a more pedestrian-friendly nabe. "I was bored and looked for houses on a German real estate website. When I tried to have the distance to a house calculated, I got the lovely result from the screenshot. The 99 minutes are okay when using the car but the rest is -1 minute."

Taxpayer Marlin D. is making sure he gets everything lined up for year-end. Checking his withholdings, he found that the IRS tries very hard to be precise with their language. "This is from the IRS online Tax Withholding Estimator tool. I guess they really do mean *between*."

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

CodeSOD: Lucky Thirteen

Wolferitza sends us a large chunk of a C# class. We'll take it in chunks because there's a lot here, but let's start with the obvious problem:

    private int iID0;
    private int iID1;
    private int iID2;
    private int iID3;
    private int iID4;
    private int iID5;
    private int iID6;
    private int iID7;
    private int iID8;
    private int iID9;
    private int iID10;
    private int iID11;
    private int iID12;
    private int iID13;

If you say, "Maybe they should use an array," you're missing the real problem here: Hungarian notation. But sure, yes, they should probably use arrays. And you might think that "hey, they should use arrays" would be an easy fix. Not for this developer, who used an ArrayList.

private void Basculer(DataTable dtFrom, DataTable dtTo)
{
    ArrayList arrRows = new ArrayList();

    int index;

    DataRow drNew1 = dtTo.NewRow();
    DataRow drNew2 = dtTo.NewRow();
    DataRow drNew3 = dtTo.NewRow();
    DataRow drNew4 = dtTo.NewRow();
    DataRow drNew5 = dtTo.NewRow();
    DataRow drNew6 = dtTo.NewRow();
    DataRow drNew7 = dtTo.NewRow();
    DataRow drNew8 = dtTo.NewRow();
    DataRow drNew9 = dtTo.NewRow();
    DataRow drNew10 = dtTo.NewRow();
    DataRow drNew11 = dtTo.NewRow();
    DataRow drNew12 = dtTo.NewRow();
    DataRow drNew13 = dtTo.NewRow();
    DataRow drNew14 = dtTo.NewRow();
    DataRow drNew15 = dtTo.NewRow();

    arrRows.Add(drNew1);
    arrRows.Add(drNew2);
    arrRows.Add(drNew3);
    arrRows.Add(drNew4);
    arrRows.Add(drNew5);
    arrRows.Add(drNew6);
    arrRows.Add(drNew7);
    arrRows.Add(drNew8);
    arrRows.Add(drNew9);
    arrRows.Add(drNew10);
    arrRows.Add(drNew11);
    arrRows.Add(drNew12);
    arrRows.Add(drNew13);
    arrRows.Add(drNew14);
    arrRows.Add(drNew15);
    // more to come…

Someone clearly told them, "Hey, you should use an array or an array list", and they said, "Sure." There's just one problem: arrRows is never used again. So they used an ArrayList, but also, they didn't use an ArrayList.

But don't worry, they do use some arrays in just a moment. Don't say I didn't warn you.

    if (m_MTTC)
    {
        if (m_dtAAfficher.Columns.Contains("MTTCRUB" + dr[0].ToString()))
        {
            arrMappingNames.Add("MTTCRUB" + dr[0].ToString());
            arrHeadersTexte.Add(dr[4]);
            arrColumnsFormat.Add("");
            arrColumnsAlign.Add("1");

Ah, they're splitting up the values in their data table across multiple arrays; the "we have object oriented programming at home" style of building objects.
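For contrast, here's a minimal sketch of what an actual object could look like here, written in C# with hypothetical names, assuming the four parallel lists really do march in lockstep with one entry per column:

using System.Collections.Generic;

// One element per column, instead of four parallel lists kept in sync by hand.
// Requires C# 9+ for records; a plain class with four properties works the same way.
internal sealed record ColumnSpec(string MappingName, string HeaderText, string Format, string Align);

internal static class ColumnSpecExample
{
    public static void AddColumn(List<ColumnSpec> columns, string mappingName, string headerText)
    {
        // The original always used "" for the format and "1" for the alignment.
        columns.Add(new ColumnSpec(mappingName, headerText, "", "1"));
    }
}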

And that's all the setup. Now we can get into the real WTF here.

            if (iCompt == Convert.ToInt16(0))
            {
                iID0 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(1))
            {
                iID1 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(2))
            {
                iID2 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(3))
            {
                iID3 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(4))
            {
                iID4 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(5))
            {
                iID5 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(6))
            {
                iID6 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(7))
            {
                iID7 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(8))
            {
                iID8 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(9))
            {
                iID9 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(10))
            {
                iID10 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(11))
            {
                iID11 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(12))
            {
                iID12 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(13))
            {
                iID13 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
        }
    }

Remember those private iID* values? Here's how they get populated. We check a member variable called iCompt and pull the first value out of a dr variable (a data reader, also a member variable). You may have looked at the method signature and assumed dtFrom and dtTo would be used, but no- they have no purpose in this method at all.
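Assuming iCompt really is just the index of the next free slot, which is all the ladder ever uses it for, the same bookkeeping collapses to a few lines. A minimal sketch, with hypothetical names:

using System;

internal sealed class IdBuffer
{
    // Replaces iID0 through iID13 and the 14-branch if/else ladder.
    private readonly int[] _ids = new int[14];
    private int _count; // plays the role of iCompt

    public void Record(object rawId)
    {
        // rawId stands in for dr[0] from the original.
        if (_count < _ids.Length)
        {
            _ids[_count] = Convert.ToInt32(rawId);
            _count++;
        }
    }

    public int Get(int index) => _ids[index];
}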

And if you liked what happened in this branch of the if, you'll love the else:

    else
    {
        if (m_dtAAfficher.Columns.Contains("MTTHRUB" + dr[0].ToString()))
        {
            arrMappingNames.Add("MTTHRUB" + dr[0].ToString());
            arrHeadersTexte.Add(dr[4]);
            arrColumnsFormat.Add("");
            arrColumnsAlign.Add("1");

            if (iCompt == Convert.ToInt16(0))
            {
                iID0 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(1))
            {
                iID1 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(2))
            {
                iID2 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(3))
            {
                iID3 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(4))
            {
                iID4 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(5))
            {
                iID5 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(6))
            {
                iID6 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(7))
            {
                iID7 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(8))
            {
                iID8 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(9))
            {
                iID9 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(10))
            {
                iID10 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(11))
            {
                iID11 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(12))
            {
                iID12 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
            else if (iCompt == Convert.ToInt16(13))
            {
                iID13 = Convert.ToInt32(dr[0]);
                iCompt = iCompt + 1;
            }
        }
    }
}

I can only assume that this function is called inside of a loop, though I have to wonder how that loop exits. Maybe I'm being too generous; this might not be called inside of a loop at all, and the whole class gets to read its 13 IDs out before they're ever populated. Does iCompt maybe get reset somewhere? No idea.

Honestly, does this even work? Wolferitza didn't tell us what it's actually supposed to do, but found this code because there's a bug in there somewhere that needed to be fixed. To my mind, "basically working" is the worst case scenario- if the code were fundamentally broken, it could just be thrown away. If it mostly works except for some bugs (and terrible maintainability) no boss is going to be willing to throw it away. It'll just fester in there forever.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

CodeSOD: Historical Dates

Handling non-existent values always presents special challenges. We've (mostly) agreed that NULL is, in some fashion, the right way to do it, though it's still common to see some sort of sentinel value that exists outside of the expected range- like a function returning a negative value when an error occurred, and a zero (or positive) value when the operation completes.

Javier found this function, which has a… very French(?) way of handling invalid dates.

 Private Function CheckOraDate(ByVal sDate As String) As String
        Dim OraOValDate As New DAL.PostGre.DataQuery()
        Dim tdate As Date
        If IsDate(sDate) Then
            Return IIf(OraOValDate.IsOracle, CustomOracleDate(Convert.ToDateTime(sDate).ToString("MM-dd-yyyy")), "'" & sDate & "'")
        Else
            '~~~ No Date Flag of Bastille Day
            Return CustomOracleDate(Convert.ToDateTime("07/14/1789").ToString("MM-dd-yyyy"))
        End If

    End Function

Given a date string, we check if it is a valid date string using IsDate. If it is, we check if our data access layer thinks the IsOracle flag is set, and if it is, we do some sort of conversion to a `CustomOracleDate`, otherwise we just return the input wrapped in quotes.

All that is sketchy- any function that takes dates as a string input and then returns the date in a new format as a string always gets my hackles up. It implies loads of stringly typed operations.
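For what it's worth, a less stringly typed shape is easy to sketch, written here in C# rather than the original VB.NET and with hypothetical names: parse once, and let the caller decide what a missing date means instead of smuggling in a sentinel.

using System;
using System.Globalization;

internal static class DateParsing
{
    // Returns null for anything that isn't a date, instead of a magic value like Bastille Day.
    public static DateTime? ParseOrNull(string input)
    {
        return DateTime.TryParse(input, CultureInfo.InvariantCulture, DateTimeStyles.None, out var parsed)
            ? parsed
            : (DateTime?)null;
    }
}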

But the WTF is how we handle a bad input date: we return Bastille Day.

In practice, this meant that their database system was reporting customers' birthdays as Bastille Day. And let me tell you, those customers don't look a day over 200, let alone 236.

For an extra bonus WTF, while the "happy path" checks if we should use the custom oracle formatting, the Bastille Day path does not, and just does whatever the Oracle step is every time.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

CodeSOD: Losing a Digit

Alicia recently moved to a new country and took a job with a small company willing to pay well and help with relocation costs. Overall, the code base was pretty solid; the one recurring complaint was that running the test suite took painfully long.

Alicia doesn't specify what the core business is, but says: "in this company's core business, random numbers were the base of everything."

As such, they did take generating random numbers fairly seriously, and mostly used strong tools for doing that. However, whoever wrote their test suite was maybe a bit less concerned, and wrote this function:

public static Long generateRandomNumberOf(int length) {
    while (true) {
            long numb = (long)(Math.random() * 100000000 * 1000000); // had to use this as int's are to small for a 13 digit number.
            if (String.valueOf(numb).length() == length)
                return numb;		
        }		
}

They want many digits of random number. So they generate a random floating point, and then multiply it a few times to get a large number. If the length of the resulting number, in characters, is the desired length, we return it. Otherwise, we try again.

The joy here, of course, is that this function is never guaranteed to exit. In fact, the product is always less than 10^14, so if you request more than 14 digits, it definitely won't exit. In practice, most of the time, the function is able to hit the target length in a relative handful of iterations, but there's no guarantee of that.
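A bounded alternative is straightforward. A sketch in C# (the original is Java; names are hypothetical, and Random.Shared.NextInt64 assumes .NET 6 or later): draw uniformly from [10^(n-1), 10^n), so every single draw already has exactly n digits.

using System;

internal static class RandomDigits
{
    // Returns a uniformly random value with exactly `length` digits, in one draw.
    public static long OfLength(int length)
    {
        if (length < 1 || length > 18)
            throw new ArgumentOutOfRangeException(nameof(length)); // keep 10^length inside a long

        long min = 1;
        for (int i = 1; i < length; i++)
        {
            min *= 10; // min = 10^(length - 1), the smallest length-digit number
        }

        // Upper bound is exclusive; for length == 1 this yields 1 through 9.
        return Random.Shared.NextInt64(min, min * 10);
    }
}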

Alicia was tracking down a bug in a test which called this function. So she went ahead and fixed the function to use a sane way of generating the appropriate amount of entropy, one that actually guaranteed a result. She included that change in her pull request, nobody had any comments, and it got merged in.

The unit tests aren't vastly faster than they were, but they are faster. Who knows what other surprises the test suite has in store?

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

CodeSOD: High Temperature

Brian (previously) found himself contracting for an IoT company, shipping thermostats and other home automation tools, along with mobile apps to control them.

Brian was hired because the previous contractor had hung around long enough for the product to launch, cashed the check, and vanished, never to be heard from again.

And let's just say that Brian's predecessor had a unique communication style.

    private class NoOpHandler extends AsyncCharisticHandler {
        public NoOpHandler(Consumer<BluetoothGattCharacteristic> consumer) {
            super(null, consumer);
        }

        @Override
        public void Execute() {
            // Don't do any actual BLE Communication here, and just go straight to the callback, This
            // handler is just to allow people to get a callback after after a bunch of Async IO Operations
            // have happened, without throwing all the completion logic into the "last" async callback of your batch
            // since the "last" one changes.
            InvokeCallback();
            // After this callback has been handled, recheck the queue to run any subsequent Async IO Operations
            OnBLEAsyncOperationCompleted();
            // I'm aware this is recursive. If you get a stack overflow here, you're using it wrong.
            // You're not supposed to queue thousands of NoOp handlers one after the other, Stop doing it!
            // If you need to have code executed sequentially just, er... write a fu*king function there is
            // nothing special about this callback, or the thread it is called on, and you don't need to use
            // it for anything except getting a callback after doing a batch of async IO, and then, it runs 
            // in the context of the last IO Completion callback, which shouldn't take ages. If you use 
            // AsyncRunWhenCompleted() to create more of these within the callback of AsyncRunWhenCompleted
            // it just keeps the IO completion thread busy, which also breaks shit. 
            // Basically, you shouldn't be using AsyncRunWhenCompleted() at all if you're not me.
        }
    }

Who said bad programmers don't write comments? This bad programmer wrote a ton of comments. What's funny about this is that, despite the wealth of comments, I'm not 100% certain I actually know what I'm supposed to do, aside from not use AsyncRunWhenCompleted.

The block where we initialize the Bluetooth system offers more insight into this programmer's style.

    @SuppressLint("MissingPermission")
    private void initializeBluetooth() {
        _bluetoothManager = (BluetoothManager) getSystemService(BLUETOOTH_SERVICE);
        _bluetoothAdapter = _bluetoothManager != null ? _bluetoothManager.getAdapter() : null;
        if (_bluetoothAdapter != null && _bluetoothAdapter.isEnabled()) {
            /* #TODO: I don't know if cancelDiscovery does anything... either good, or bad. It seems to make BLE discovery faster after 
            *         the service is restarted by android, but I don't know if it screws anything else up in the process. Someone should check into that */
            _bluetoothAdapter.cancelDiscovery();
            _bluetoothScanner = _bluetoothAdapter.getBluetoothLeScanner();
            _scanFilters = Collections.singletonList(new ScanFilter.Builder().setServiceUuid(new ParcelUuid(BLE_LIVELINK_UUID)).build());
            CreateScanCallback();
        } else {
            // #TODO: Handle Bluetooth not available or not enabled
            stopSelf();
        }
    }

This is a clear example of "hacked together till it works". What does cancelDiscovery do? No idea, but we call it anyway because it seems to be faster. Should we look it up? I did, and yes, it sounds like calling it is correct, based on the docs. Which took me 15 seconds to find. "Someone should check into that," and apparently I am that someone.

Similarly, the second TODO seems like an important missing feature. At least a notification which says, "Hey, you need bluetooth on to talk to bluetooth devices," would go a long way.

All this is in service of an IoT control app, which seems to double as a network scanner. It grabs the name of every Bluetooth and WiFi device it finds, and sends them and a location back to a web service. That web service logs them in a database, which nobody at the company ever looks at. No one wants to delete the database, because it's "valuable", though no one can ever specify exactly how they'd get value from it. At best, they claim, "Well, every other app does it." Mind you, I think we all know how they'd get value: sell all this juicy data to someone else. It's just no one at the company is willing to say that out loud.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

Error'd: What Goes Up

As I was traveling this week (just home today), conveyances of all sorts were on my mind.

Llarry A. warned "This intersection is right near my house. Looks like it's going to be inconvenient for a while..." Keeping this in mind, I chose to take the train rather than drive.

Unfortunately US trains are restricted to plodding sublight speeds, but Mate has it better. "I love how the Swiss Federal Railways keep investing in new rolling stock... like this one that can teleport from one side of the country to the other in 0 minutes?! "

And Michael R.'s TfL seems to operate between parallel universes. "I was happy to see that the "not fitted" Northern Line train actually rolled in 2 mins later."

Daniel D.'s elevator went up but the ubiquitous screen went down. Daniel even slipped a bit of selfie into his submission. Sneaky. "This display usually features some looping video. This time it featured only a desktop with a bunch of scripts / *.bat files. I guess it's better when the elevator's display crashes than when an actual elevator crashes?"

Joel C.'s elevator had a different fault. "The screen in my hotel's elevator is not sending quite the message they probably want."

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

Secure to Great Lengths

Our submitter, Gearhead, was embarking on STEM-related research. This required him to pursue funding from a governmental agency that we’ll call the Ministry of Silly Walks. In order to start a grant application and track its status, Gearhead had to create an account on the Ministry website.

The registration page asked for a lot of personal information first. Then Gearhead had to create his own username and password. He used his password generator to create a random string: D\h.|wAi=&:;^t9ZyoO

Silly Walk Gait

Upon clicking Save, he received an error.

Your password must be a minimum eight characters long, with no spaces. It must include at least three of the following character types: uppercase letter, lowercase letter, number, special character (e.g., !, $, % , ?).

Perplexed, Gearhead emailed the Ministry’s web support, asking why his registration failed. The reply:

Hello,
The site rejects password generators as hacking attempts. You will need to manually select a password.
Ex. GHott*01

Thank you,

Support

So a long sequence of random characters was an active threat, but a 1990s-era AOL username was just fine. What developer had this insane idea and convinced other people of it? How on earth did they determine what was a "manually selected" string versus a randomly-generated one?

It seems the deciding factor is nothing more than length. If you go to the Ministry’s registration page now, their password guidelines have changed (emphasis theirs):

Must be 8-10 characters long, must contain at least one special character ( ! @ # $ % ^ & * ( ) + = { } | < > \ _ - [ ] / ? ) and no spaces, may contain numbers (0-9), lower and upper case letters (a-z, A-Z). Please note that your password is case sensitive.

Only good can come of forcing tiny passwords.
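To put rough numbers on that, here's a back-of-the-envelope sketch. The assumptions are mine: the policy's alphabet is about 86 symbols (26 lowercase, 26 uppercase, 10 digits, and the two dozen listed specials), the rejected 19-character generator string drew from roughly the 94 printable ASCII characters, and Math.Log2 is available (.NET Core 3.0 or later).

using System;

internal static class PasswordEntropy
{
    // Bits of entropy for a uniformly random password: length * log2(alphabet size).
    public static double Bits(int length, int alphabetSize) => length * Math.Log2(alphabetSize);

    public static void Main()
    {
        Console.WriteLine($"Ministry maximum, 10 chars over ~86 symbols: {Bits(10, 86):F0} bits");
        Console.WriteLine($"Rejected generator output, 19 chars over ~94 symbols: {Bits(19, 94):F0} bits");
    }
}

That works out to roughly 64 bits versus roughly 125: the cap keeps users well below what the "hacking attempt" they rejected would have provided.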

The more a company or government needs secure practices, the less good they are at secure practices. Is that a law yet? It should be.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

Future Documentation

Dotan was digging through vendor supplied documentation to understand how to use an API. To his delight, he found a specific function which solved exactly the problem he had, complete with examples of how it was to be used. Fantastic!

He copied one of the examples, and hit compile, and reviewed the list of errors. Mostly, the errors were around "the function you're calling doesn't exist". He went back to the documentation, checked it, went back to the code, didn't find any mistakes, and scratched his head.

Now, it's worth noting the route Dotan took to find the function. He navigated there from a different documentation page, which sent him to an anchor in the middle of a larger documentation page- vendorsite.com/docs/product/specific-api#specific-function.

This meant that as the page loaded, his browser scrolled directly down to the specific-function section of the page. Thus, Dotan missed the gigantic banner at the top of the page for that API, which said this:

/!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!
This doc was written to help flesh out a user API. The features described here are all hypothetical and do not actually exist yet, don't assume anything you see on this page works in any version /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\

On one hand, I think providing this kind of documentation is invaluable, both to your end users and for your own development team. It's a great roadmap, a "documentation driven development" process. And I can see that they made an attempt to be extremely clear about it being incomplete and unimplemented- but they didn't think about how people actually used their documentation site. A banner at the top of the page only works if you read the page from top to bottom, but with documentation pages, readers frequently jump straight to a specific section.

But there was a deeper issue with the way this particular approach was executed: while the page announced that one shouldn't assume anything works, many of the functions on the page did work. Many did not. There was no rhyme or reason to it, no version information or other indicators to help a developer understand what was and was not actually implemented.

So while the idea of a documentation-oriented roadmap specifying features that are coming is good, the execution here verged into WTF territory. It was a roadmap, but with all the landmarks erased, so you had no idea where you actually were along the length of that road. And the one warning sign that would help you was hidden behind a bush.

Dotan asks: "WTF is that page doing on the official documentation wiki?"

And I'd say, I understand why it's there, but boy it should have been more clear about what it actually was.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Undefined Tasks

Years ago, Brian had a problem: their C# application would crash sometimes. What was difficult to understand was why it was crashing, because it wouldn't crash in response to a user action, or really, any easily observable action.

The basic flow was that the users used a desktop application. Many operations that the users wanted to perform were time consuming, so the application spun up background tasks to do them, thus allowing the user to do other things within the application. And sometimes, the application would just crash, both when the user hadn't done anything, and when all background jobs should have been completed.

The way the background task was launched was this:

seeder.RunSeeder();

It didn't take too much head scratching to realize what was running every time the application crashed: the garbage collector.

RunSeeder returned a Task object, but since Brian's application treated the task as "fire and forget", they didn't worry about the value itself. But C# did- the garbage collector had to clean up that memory.

And this was running under .Net 4.0. This particular version of the .Net framework was a special, quantum realm, at least when it came to tasks. You see, if a Task raises an exception, nothing happens. At least, not right away. No one is notified of the exception unless they inspect the Task object directly. There's a cat in the box, and no one knows the state of the cat unless they open the box.

The application wasn't checking the Task result. The cat remained in a superposition of "exception" and "no exception". But the garbage collector looked at the task. And, in .Net 4.0, Microsoft made a choice about what to do there: when they opened the box and saw an exception (instead of a cat), they chose to crash.

Microsoft's logic here wasn't entirely bad; an uncaught exception means something has gone wrong and hasn't been handled. There's no way to be certain the application is in a safe state to continue. Treating it akin to undefined behavior and letting the application crash was a pretty sane choice.

The fix for Brian's team was simple: observe the exception, and choose not to do anything with it. They truly didn't care- these tasks were fire-and-forget, and failure was acceptable.

seeder.RunSeeder().ContinueWith(t => { var e = t.IsFaulted ? t.Exception : null; }); // Observe exceptions to prevent quantum crashes

This code merely opens the box and sees if there's an exception in there. It does nothing with it.

Now, I'd say as a matter of programming practice, Microsoft was right here. Ignoring exceptions blindly is a definite code smell, even for a fire-and-forget task. Writing the tasks in such a way that they catch and handle any exceptions that bubble up is better, as is checking the results.
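A minimal sketch of that slightly more deliberate version, assuming the task really is fire-and-forget: attach a continuation that only runs when the task faults and at least logs what went wrong, rather than silently discarding it.

using System;
using System.Threading.Tasks;

internal static class FireAndForget
{
    // Observes and logs faults from a task we otherwise don't care about.
    public static void Run(Func<Task> start)
    {
        start().ContinueWith(
            t => Console.Error.WriteLine($"Background task failed: {t.Exception}"),
            TaskContinuationOptions.OnlyOnFaulted);
    }
}

The call site then becomes something like FireAndForget.Run(() => seeder.RunSeeder()).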

But I, and Microsoft, were clearly on the outside in this argument. Starting with .Net 4.5 and moving forward, uncaught exceptions in background tasks were no longer considered show-stoppers. Whether there was a cat or an exception in the box, when the garbage collector observed it, it got thrown away either way.

In the end, this reminds me of my own failing using background tasks in .Net.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

CodeSOD: Solve a Captcha to Continue

The first time Z hit the captcha on his company's site, he didn't think much of it. And to be honest, the second time he wasn't paying that much attention. So it wasn't until the third time that he realized that the captcha had showed him the same image every single time- a "5" with lines scribbled all over it.

That led Z to dig out the source and see how the captcha was implemented.

<Center>Click a number below to proceed to the next page. <br>Some browsers do not like this feature and will try to get around it. If you are having trouble<br> seeing the image, empty your internet cache and temporary internet files may help. <br>Please ensure you have no refresher add-ons installed on your browser.<br />
<table border=1><Tr><td colspan='3' align='center'>
<font style='font-size:36px;'><img width='150' title='5' alt='5' src='valimages/5.gif'> </font></td></tr>
<Tr>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=1'>1</a></font></td>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=2'>2</a></font></td>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=3'>3</a></font></td>
</tr>
<Tr>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=4'>4</a></font></td>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=5'>5</a></font></td>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=6'>6</a></font></td>
</tr>
<tr>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=7'>7</a></font></td>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=8'>8</a></font></td>
<Td align=center><font size='6'><A href='valid.php?got=crimied&linknum=9'>9</a></font></td>
</tr></table>
</Center>

Look, I know there's a joke about how hard it is to center things in CSS, but I think we've gone a little overboard with our attempt here.

Now, the PHP driving this page could have easily been implemented to randomly select an image from the valimages directory, and there was some commented out code to that effect. But it appears that whoever wrote it couldn't quite understand how to link the selected image to the behavior in valid.php, so they just opted to hard code in five as the correct answer.

The bonus, of course, is that the image for five is named 5.gif, which means if anyone really wanted to bypass the captcha, it'd be trivial to do so by scraping the code. I mean, not more trivial than just realizing "it's the same answer every time", but still, trivial.

Of course, out here in the real world, captchas have never been about keeping bots out of sites, and instead are just a way to trick the world into training AI. Pretty soon we'll roll out the Voight-Kampff test, but again, the secret purpose won't be to find the replicants, but instead gather data so that the next generation of replicants can pass the test.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

Error'd: Once Is Never Enough

"Getting ready to!" anticipated richard h. but then this happened. "All I want are the CLI options to mark the stupid TOS box so I can install this using our Chef automation. "What are the options" is too much to ask, apparently. But this is Microsoft. Are stupid errors like this really that unexpected?"

Followed immediately by richard's report: "Following up my error'd submission a few minutes ago, I clicked the "Accept TOS" box, and the "Something unexpected happened" box lit up, so I clicked the button to let the unexpected do what the something wanted to do. Now I have successfully Something unexpected happened. smh Microsoft. "

An anonymous griper snickered "It's a made up word, but I just wanted to check the spelling before writing it in a slack comment as a joke referencing the show("that's a nice tnetennba"), but the first thing I saw was the AI preview with the first sentence incorrectly claiming it's "basketball" spelled backwards(which it's clearly not, backwards it would be "abnnetent" which is also not a word). " I have to differ, though. Spelled backwards it would be llabteksab.

Silly monkey, backwards it would be ti

 

And a different anonymous griper (I assume they're different, but they're anonymous so who can really know?) needed some help doing a quite trivial computation. "On which planet?" we all wonder together.

Finally, a recurring theme from a recurring reader, B.J.H. keeps buying stuff. "This screen shot was captured the morning of 26 October. I'm not sure what bothers me more, that the package was picked up twice (once in the future), or that "Standard Transit" (when the package should be expected) is a day before the pick-up. Or maybe they just lie about the pickup to cover for not meeting the standard delivery date. "

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

The Ghost Cursor

Everyone's got workplace woes. The clueless manager; the disruptive coworker; the cube walls that loom ever higher as the years pass, trapping whatever's left of your soul.

But sometimes, Satan really leaves his mark on a joint. I worked Tech Support there. You may remember The C-Level Ticket. I'm Anonymous. This is my story.


Between 2 Buildings (Montreal) - Flickr - MassiveKontent

Night after night, my dreams are full of me trying and failing at absolutely everything. Catch a bus? I'm already running late and won't make it. Dial a phone number to get help? I can't recall the memorized sequence, and the keypad's busted anyway. Drive outta danger? The car won't start. Run from a threat? My legs are frozen.

Then I wake up in my bed in total darkness, scared out of my skull, and I can't move for real. Not one muscle works. Even if I could move, I'd stay still because I'm convinced the smallest twitch will give me away to the monster lurking nearby, looking to do me in.

The alarm nags me before the sun's even seen fit to show itself. What day is it? Tuesday? An invisible, overwhelming dread pins me in place under the covers. I can't do it. Not again.

The thing is, hunger, thirst, and cold are even more nagging than the alarm. Dead tired, I force myself up anyway to do the whole thing over.


The office joe that morning was so over-brewed as to be sour. I tossed down the last swig in my mug, checking my computer one more time to make sure no Tech Support fires were raging by instant message or email. Then I threw on my coat and hat and quit my cube, taking the stairs to ground level.

I pushed open a heavy fire-escape door and stepped out into the narrow alley between two massive office buildings. Brisk autumn air and the din of urban motor traffic rushed to greet me. The dull gray sky above threatened rain. Leaning against the far brick wall were Toby and Reynaldo, a couple of network admins, hugging themselves as they nursed smoldering cigarettes. They nodded hello.

I tipped my hat in greeting, slipping toward the usual spot, a patch of asphalt I'd all but worn grooves in by that point. I lit my own cigarette and took in a deep, warming draw.

"Make it last another year," Toby spoke in a mocking tone, tapping ash onto the pavement. "I swear, that jerk can squeeze a nickel until Jefferson poops!"

An ambulance siren blared through the alley for a minute. The rig was no doubt racing toward the hospital down the street.

Reynaldo smirked. "You think Morty finally did it?"

Toby smirked as well.

I raised an eyebrow. "Did what?"

"Morty always says he's gonna run out into traffic one of these days so they can take him to the hospital and he won't have to be here," Reynaldo explained.

I frowned at the morbid suggestion. "Hell of a way to catch a break."

"Well, it's not like we can ask for time off," Toby replied bitterly. "They always find some way to rope us back in."

I nodded in sympathy. "You have it worse than we do. But my sleep's still been jacked plenty of times by 3AM escalated nonsense that shoulda been handled by a different part of the globe."

Reynaldo's eyes lit up fiercely. "They have all the same access and training, but it never falls on them! Yeah, been there."

The door swung open again, admitting a young woman with the weight of the world on her shoulders. This was Megan, a junior developer and recent hire. I tipped my hat while helping myself to another drag.

She hastened my way, pulling a pack of cigarettes from her handbag. With shaking hands, she fumbled to select a single coffin nail. "I quit these things!" she lamented. After returning the pack to her bag, she rummaged through it fruitlessly. "Dammit, where are those matches?!" She glanced up at me with a pleading expression.

I pulled the lighter from my coat pocket. "You sure?"

She nodded like she hadn't been more sure about anything in her entire life.

I lit it for her. She took a lung-filling pull, then exhaled a huge cloud of smoke.

"Goin' that well, huh?" I asked.

Megan also hugged herself, her expression pained. "Every major player in the industry uses our platform, and I have no idea how it hasn't all come crashing down. There are thousands of bugs in the code base. Thousands! It breaks all the time. Most of the senior devs have no clue what they're doing. And now we're about to lose the only guy who understands the scheduling algorithm, the most important thing!"

"That's tough." I had no idea what else to say. Maybe it was enough that I listened.

Megan glanced up nervously at the brewing storm overhead. "I just know that algorithm's gonna get dumped in my lap."

"The curse of competence." I'd seen it plenty of times.

"Ain't that the truth!" She focused on me again with a look of apology. "How've you been?"

I shrugged. "Same old, same old." I figured a fresh war story might help. "Had to image and set up the tech for this new manager's onboarding. Her face is stuck in this permanent glare. Every time she opens her mouth, it's to bawl someone out."

"Ugh."

"The crazy thing is, the walls of her office are completely covered with crucifixes, and all these posters plastered with flowers and hearts and sap like Choose Kindness." I leaned in and lowered my voice. "You know what I think? I think she’s an ancient Roman whose spite has kept her alive for over two thousand years. Those crosses are a threat!"

That teased a small laugh out of Megan. For a moment, the amusement reached her eyes. Then it was gone, overwhelmed by worry. She took to pacing through the narrow alley.


Back at my cube, I found a new urgent ticket at the top of my case load. Patricia Dracora, a senior project manager, had put in a call claiming her computer had been hacked. Her mouse cursor was moving around and clicking things all on its own.

It was too early in the morning for a case like this. That old dread began sneaking up on me again. The name put me on edge as well. Over the years, our paths had never crossed, but her nickname throughout Tech Support, Dracula, betrayed what everyone else made of her.

"Make like a leaf and blow!"

The boss barked his stern command over my shoulder. I stood and turned from my computer to find him at my cubicle threshold with arms folded, blocking my egress.

I couldn't blow, so I shrugged. "Can't be as bad as The Crucifier."

"Dracula's worse than The Crucifier," the boss replied under his breath in a warning tone. "For your own good, don't keep her waiting!" He tossed a thumb over his shoulder for good measure.

When he finally backed out of the way, I made tracks outta there. A few of my peers made eye contact as I passed, looking wary on my behalf.

The ticket pegged Dracora's office in a subfloor I'd never set foot in before. Descending the stairs, I had too much time to think. Of course I didn't expect a real hacking attempt. Peripheral hardware on the fritz, some software glitch: there'd be a simple explanation. What fresh hell would I have to endure to reach that point? That was what my tired brain couldn't let go of. The stimulants hadn't kicked in yet. With the strength of a kitten, I was stepping into a lion's den. A lion who might make me wish for crucifixion by the time it was all over.

From the stairwell, I entered a dank, deserted corridor. Old fluorescent lighting fixtures hummed and flickered overhead. That, combined with the overwhelming stench of paint fumes, set the stage for a ripping headache. There were no numbers on the walls to lead me to the right place. They must've taken them down to paint and never replaced them. I inched down worn, stained carpeting, peeking into each open gap I found to either side of me. Nothing but darkness, dust, and cobwebs at first. Eventually, I spotted light blaring from one of the open doors ahead of me. I jogged the rest of the way, eager to see any living being by that point.

The room I'd stumbled onto was almost closet-sized. It contained a desk and chair, a laptop docking station, and a stack of cardboard boxes on the floor. Behind the desk was a woman of short stature, a large purse slung over one shoulder. Her arms were folded as she paced back and forth in the space behind her chair. When I appeared, she stopped and looked to me wide-eyed, maybe just as relieved as I was. "Are you Tech Support?"

"Yes, ma'am." I entered the room. "What's—?"

"I don't know how it happened!" Dracora returned to pacing, both hands making tight fists around the straps of the purse she was apparently too wired and distracted to set down. "They made me move here from the fourth floor. I just brought everything down and set up my computer, and now someone has control of the mouse. Look, look!" She stopped and pointed at the monitor.

I rounded the desk. By the time I got there, whatever she'd seen had vanished. Onscreen, the mouse cursor sat still against a backdrop of open browsers and folders. Nothing unusual.

"It was moving, I swear!" Anguished, Dracora pleaded with me to believe her.

It seemed like she wasn't hostile at all, just stressed out and scared. I could handle that. "I'm sure we can figure this out, ma'am. Lemme have a look here."

I sat down at the desk and tried the wireless mouse first. It didn't work at all to move the cursor.

"The hacker's locked us out!" Dracora returned to pacing behind me.

As I sat there, not touching a thing, the mouse cursor shuttled across the screen like it was possessed.

"There! You see?"

Suddenly, somehow, my brain smashed everything together. "Ma'am, I have an idea. Could you please stand still?"

Dracora stopped.

I swiveled around in the chair to face her. "Ma'am, you said you were moving in down here. What's in your purse right now?"

Her visible confusion deepened. "What?"

"The mouse cursor only moves around when you do," I explained.

Her eyes widened. She dug deeply into her purse. A moment later, she pulled out a second wireless mouse. Then she looked to me like she couldn't believe it. "That's it?!"

"That's it!" I replied.

"Oh, lord!" Dracora replaced the dud sitting on her mousepad with the mouse that was actually connected to her machine, wilting over the desk as she did so. "I don't know whether to laugh or cry."

I knew the feeling. But the moment of triumph, I gotta admit, felt pretty swell. "Anything else I can help with, ma'am?"

"No, no! I've wasted enough of your time. Thank you so much!"

I had even more questions on the way back upstairs. With this huge, spacious office building, who was forcing Dracora to be in that pit? How had she garnered such a threatening reputation? Why had my experience been so different from everyone else's? I didn't mention it to the boss or my peers. I broke it all down to Megan in the alley a few days later.

"She even put in a good word for me when she closed the ticket," I told her. "The boss says I'm on the fast track for another promotion." I took a drag from my cigarette, full of bemusement. "I'm already as senior as it gets. The only way up from here is management." I shook my head. "That ain't my thing. Look how well it's gone for Dracora."

Megan lowered her gaze, eyes narrowed. "You said it yourself: the only reward for good work is more work."

And then they buried you ... in a basement, or a box.

I remembered being at the start of my career, like Megan. I remembered feeling horrified by all the decades standing between me and the day when I wouldn't or couldn't ever work again. A couple decades in, some part of me that I'd repressed had resurfaced. What the hell is this? What have I been doing?

Stop caring, a different part replied. Just stop caring. Take things day by day, case by case.

I'd obeyed for so long. Where had it gotten me?

Under my breath, I risked airing my wildest wish for the future. "Someday, I wanna break outta this joint."

Megan blinked up at me. I had her attention. "How?"

"I dunno," I admitted. "I gotta figure it out ... before I go nuts."

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

CodeSOD: A Basic Mistake

Way back in 1964, people were starting to recognize that computers were going to have a large impact on the world. There was not, at the time, very much prepackaged software, which meant if you were going to use a computer to do work, you were likely going to have to write your own programs. The tools to do that weren't friendly to non-mathematicians.

Thus, in 1964, was BASIC created, a language derived from experiments with languages like DOPE (The Dartmouth Oversimplified Programming Experiment). The goal was to be something easy, something that anyone could use.

In 1977, the TRS-80, the Commodore PET, and the Apple II all launched, putting BASIC into the hands of end users. But it's important to note that BASIC had already been seeing wide use for a decade on "big iron" systems, or more hobbyist systems, like the Altair 8800.

Today's submitter, Coyne, was but a humble student in 1977, and despite studying at a decent university, brand spanking new computers were a bit out of reach. Coyne was working with professors to write code to support papers, and using some dialect of BASIC on some minicomputer.

One of Coyne's peers had written a pile of code, and one simple segment didn't work. As it was just a loop to print out a series of numbers, it seemed like it should work, and work quite easily. But the programmer writing it couldn't get it to work. They passed it around to other folks in the department, and those folks also couldn't get it to work. What could possibly be wrong with this code?

3010 O = 45
3020 FOR K = 1 TO O
3030   PRINT K
3040 NEXT K

Now, it's worth noting, this particular dialect of BASIC didn't support long variable names- you could use a single letter, or you could use a letter and a number, and that was it. So the short variable names are not explicitly a problem here- that's just the stone tools which were available to programmers at the time.

For days, people kept staring at this block, trying to figure out what was wrong. Finally, Coyne took a glance, and in a moment was able to spot the problem.

I've done something nasty here, because I posted the correct block first. What the programmer had actually written was this:

3010 O = 45
3020 FOR K = 1 TO 0
3030   PRINT K
3040 NEXT K

The difference is subtle, especially when you're staring at a blurry CRT late at night in the computer lab, with too little coffee and too much overhead lighting. I don't know what device they were using for display; most terminals made sure to make O look different from 0, but I wouldn't be bold enough to say all of them did. And, in this era, you'd frequently review code on printed paper, so who knows how it was getting printed out?

But that, in the end, was the problem- the programmer accidentally typed a zero where they meant the letter "O". And that one typo was enough to send an entire computer science department spinning for days when no one could figure it out.

In any case, it's interesting to see how an "easy to use" language once restricted variable names to such deep inscrutability.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

CodeSOD: A Truly Bad Comparison

For C programmers of a certain age (antique), booleans represent a frustrating challenge. But with the addition of stdbool.h, we exited the world of needing to work hard to interact with boolean values. While some gotchas are still in there, your boolean code has the opportunity to be simple.

Mark's predecessor saw how simple it made things, and decided that wouldn't do. So that person went and wrote their own special way of comparing boolean values. It starts with an enum:

typedef enum exop_t {
    EXOP_NONE,
    EXOP_AND,
    EXOP_OR,
    EXOP_EQUAL,
    EXOP_NOTEQUAL,
    EXOP_LT,
    EXOP_GT,
    EXOP_LEQUAL,
    EXOP_GEQUAL,
    EXOP_ADD,
    EXOP_SUBTRACT,
    EXOP_MULTIPLY,
    EXOP_DIV,
    EXOP_MOD,
    EXOP_NEGATE,
    EXOP_UNION,
    EXOP_FILTER1,
    EXOP_FILTER2
};

Yes, they did write an enum to compare booleans. They also wrote not one, but two functions. Let's start with the almost sane one.

static bool compare_booleans (bool bool1,
                              bool bool2,
                              exop_t  exop)
{

    int32_t  cmpresult;

    if ((bool1 && bool2) || (!bool1 && !bool2)) {
        cmpresult = 0;
    } else if (bool1) {
        cmpresult = 1;
    } else {
        cmpresult = -1;
    }

    return convert_compare_result(cmpresult, exop);

}

This function takes two boolean values, and a comparison we wish to perform. Then, we test if they're equal, though the way we do that is by and-ing them together, then or-ing that with the and of their negations. If they're equal, cmpresult is set to zero. If they're not equal and the first boolean is true, we set cmpresult to one; otherwise, to negative one.

Thus, we've just invented strcmp for booleans.

But then we call another function, which is super helpful, because it turns that integer into a more normal boolean value.

static boolean
    convert_compare_result (int32_t cmpresult,
                            exop_t exop)
{
    switch (exop) {
    case EXOP_EQUAL:
        return (cmpresult) ? FALSE : TRUE;
    case EXOP_NOTEQUAL:
        return (cmpresult) ? TRUE : FALSE;
    case EXOP_LT:
        return (cmpresult < 0) ? TRUE : FALSE;
    case EXOP_GT:
        return (cmpresult > 0) ? TRUE : FALSE;
    case EXOP_LEQUAL:
        return (cmpresult <= 0) ? TRUE : FALSE;
    case EXOP_GEQUAL:
        return (cmpresult >= 0) ? TRUE : FALSE;
    default:
        printf( "ERR_INTERNAL_VAL\n" );
        return TRUE;
    }
} 

We switch based on the requested operation, and each case is its own little ternary. For equality comparisons, it requires a little bit of backwards logic- if cmpresult is non-zero (thus true), we need to return FALSE. Also note how our expression enum has many more options than convert_compare_result supports, making it very easy to call it wrong- and worse, it returns TRUE if you call it wrong.

At least they made booleans hard again. Who doesn't want to be confused about how to correctly check if two boolean values are the same?

It's worth noting that, for all this code, the rest of the code base never used anything but EXOP_EQUAL and EXOP_NOTEQUAL, because why would you do anything else on booleans? Every instance of compare_booleans could have been replaced with a much clearer == or != operator. Though what should really have been replaced was whoever wrote this code, preferably before they wrote it.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

A Government Data Center

Back in the antediluvian times, when I was in college, people still used floppy disks to work on their papers. This was a pretty untenable arrangement, because floppy disks lost data all the time, and few students had the wherewithal to make multiple copies. Half my time spent working helldesk was breaking out Norton Diskutils to try and rescue people's term papers. To avoid this, the IT department offered network shares where students could store documents. The network share was backed up, tracked versions, and could be accessed from any computer on campus, including the VAX system (in fact, it was stored on the VAX).

I bring this up because we have known for quite some time that companies and governments need to store documents in centrally accessible locations so that you're not reliant on end users correctly managing their files. And if you are a national government, you have to make a choice: either you contract out to a private sector company, or you do it yourself.

South Korea made the choice to do it themselves, with their G-Drive system (short for Government Drive, no relation to Google Drive), a government file store hosted primarily out of a datacenter in Daejeon. Unfortunately, "primarily" is a bit too apropos- last month, a fire in that datacenter destroyed data.

The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups. This vulnerability ultimately left it unprotected.

Someone, somehow, designed a data storage system that was structurally incapable of doing backups? And then told 750,000 government employees that they should put all their files there?

Even outside of that backup failure, while other services had backups, they did not have a failover site, so when the datacenter went down, the government went down with it.

In total, it looks like about 858TB of data got torched. 647 distinct services were knocked out. At least 90 of them were reported to be unrecoverable (that last link is from a company selling Lithium Ion safety products, but is a good recap). A full recovery was, shortly after the accident, predicted to take a month, but as of October 22, only 60% of services had been restored.

Now, any kind of failure of this scale means heads must roll, and police investigations have gone down the path of illegal subcontracting. The claim is that the contractor the government hired broke the law by subcontracting the work, and that those subcontractors were unqualified for the work they were doing- that while they were qualified to install or remove a li-ion battery, they were not qualified to move one, which is what they were doing and what resulted in the fire.

I know too little about Korean laws about government contracting and too little about li-ion battery management to weigh in on this. Certainly, high-storage batteries are basically bombs, and need to be handled with great care and protected well. Though, if one knows how to install and uninstall a battery, moving a battery seems covered in those steps.

But if I were doing a root cause analysis here, while that could be the root cause of the fire, it is not the root cause of the outage. If you build a giant datacenter but can't replicate services to another location, you haven't built a reliable cloud storage system- you've just built an expensive floppy disk that is one trip too close to a fridge magnet away from losing all of your work. In this case, the fridge magnet was made of fire, but the result is the same.

I'm not going to say this problem would have been easy to avoid; actually building resilient infrastructure that fails gracefully under extreme stress is hard. But while it's a hard problem, it's also a well-understood problem. There are best practices, and clearly not one of them was followed.


Conservapedia Still Exists

By: Nick Heer

I am not sure it is worth writing at length about Grokipedia, the Elon Musk-funded effort to quite literally rewrite history from the perspective of a robot taught to avoid facts upsetting to the U.S. far right. Perhaps it will be an unfortunate success — the Fox News of encyclopedias, giving ideologues comfortable information as they further isolate themselves.

It is less a Wikipedia competitor than it is a machine-generated alternative to Conservapedia. Founded by Andy Schlafly, an attorney and son of Phyllis Schlafly, the Wikipedia alternative was an attempt to make an online encyclopedia from a decidedly U.S. conservative and American exceptionalism perspective. Seventeen years ago, Schlafly’s effort was briefly profiled by Canadian television and, somehow, the site is still running. Perhaps that is the fate of Grokipedia: a brief curiosity, followed by traffic coming only from a self-selecting mix of weirdos and YouTubers needing material.


A Profile of Setlist.fm

By: Nick Heer

Marc Hogan, New York Times (gift link):

Enter Setlist.fm. The wikilike site, where users document what songs artists play each night on tour, has grown into a vast archive, updated in real time but also reaching back into the historical annals. From the era of Mozart (seriously!) to last night’s Chappell Roan show, Setlist.fm offers reams of statistics — which songs artists play most often, when they last broke out a particular tune. In recent years, the site has begun posting data about average concert start times and set lengths.

Good profile. I had no idea it was owned by Live Nation.

I try to avoid Setlist.fm ahead of a show, but I check it immediately when I get home and for the days following. I might be less familiar with an artist’s catalogue, and this is particularly true of an opener, so it lets me track down particular songs that were played. It is one of the internet’s great resources.


Zoom CEO Eric Yuan Lies About A.I. Leading to Shorter Work Weeks

By: Nick Heer

Sarah Perez, TechCrunch:

Zoom CEO Eric Yuan says AI will shorten our workweek

[…]

“Today, I need to manually focus on all those products to get work done. Eventually, AI will help,” Yuan said.

“By doing that, we do not need to work five days a week anymore, right? … Five years out, three days or four days [a week]. That’s a goal,” he said.

So far, technological advancements have not — in general — produced a shorter work week; that was a product of collective labour action. We have been promised a shorter week before. We do not need to carry water for people who peddle obvious lies. We will always end up being squeezed for greater output.


Colorado Police Officer Caught on Doorbell Camera Talking About Surveillance Powers

By: Nick Heer

Andrew Kenney, Denverite:

It was Sgt. Jamie Milliman [at the door], a police officer with the Columbine Valley Police Department who covers the town of Bow Mar, which begins just south of [Chrisanna] Elser’s home.

[…]

“You know we have cameras in that jurisdiction and you can’t get a breath of fresh air, in or out of that place, without us knowing, correct?” he said.

“OK?” Elser, a financial planner in her 40s, responded in a video captured by her smart doorbell and viewed by Denverite.

“Just as an example,” the sergeant told her, she had “driven through 20 times the last month.”

This story is a civil liberties rollercoaster. Milliman was relying on a nearby town’s use of Flock license plate cameras and Ring doorbells — which may also be connected to the Flock network — to accuse Elser of theft and issue a summons. Elser was able to get the summons dropped by compiling evidence from, in part, the cameras and GPS system on her truck. Milliman’s threats were recorded by a doorbell camera, too. The whole thing is creepy, and all over a $25 package stolen off a doorstep.

I have also had things stolen from me, and I wish the police officers I spoke to had a better answer for me than shrugging their shoulders and saying, in effect, this is not worth our time. But this situation is like a parallel universe ad for Amazon and its Ring subsidiary. Is this the path toward “very close to zero[ing] out crime”? It is not worth it.


Apple’s Tedious and Expensive Procedure for Replacing the Battery in the New MacBook Pro

By: Nick Heer

Carsten Frauenheim and Elizabeth Chamberlain, iFixit:

Apple’s official replacement process requires swapping the entire top case, keyboard and all, just to replace this single consumable component. And it has for a long time. That’s a massive and unreasonable job, requiring complete disassembly and reassembly of the entire device. We’re talking screws, shields, logic board, display, Touch ID, trackpad, everything. In fact, the only thing that doesn’t get transferred are the keyboard and speakers. The keyboard is more or less permanently affixed to this top aluminum, and the speakers are glued in — which, I guess, according to Apple means that the repair is out of the scope of DIY (we disagree).

At least one does not need to send in their laptop for a mere battery replacement. Still, I do not understand why this — the most predictable repair — is so difficult and expensive.

I hate to be that guy, but the battery for a mid-2007 15-inch MacBook Pro used to cost around $150 (about $220 inflation-adjusted) and could be swapped with two fingers. The official DIY solution for replacing the one in my M1 MacBook Pro is over $700, though there is a $124 credit for returning the replaced part. The old battery was, of course, a little bit worse: 60 watt-hours compared to 70 watt-hours in the one I am writing this with. I do not even mind the built-in-ness of this battery. But it should not cost an extra $500 and require swapping the rest of the top case parts.

[…] But for now, this tedious and insanely expensive process is the only offering they make for changing out a dead battery. Is it just a byproduct of this nearly half-a-decade-old chassis design, something that won’t change until the next rethink? We don’t know.

“Nearly half-a-decade-old” is a strange way of writing “four years”, almost like it is attempting to emphasize the age of this design. Four years old does not seem particularly ancient to me. I thought iFixit’s whole vibe was motivating people to avoid the consumerist churn encouraged by rapid redesigns.


Reddit Sues Perplexity and Three Data Scraping Companies Because They Crawled Google

By: Nick Heer

Matt O’Brien, Associated Press:

Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

[…]

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

Mike Masnick, Techdirt:

Most reporting on this is not actually explaining the nuances, which require a deeper understanding of the law, but fundamentally, Reddit is NOT arguing that these companies are illegally scraping Reddit, but rather that they are illegally scraping… Google (which is not a party to the lawsuit) and in doing so violating the DMCA’s anti-circumvention clause, over content Reddit holds no copyright over. And, then, Perplexity is effectively being sued for linking to Reddit.

This is… bonkers on so many levels. And, incredibly, within their lawsuit, Reddit defends its arguments by claiming it’s filing this lawsuit to protect the open internet. It is not. It is doing the exact opposite.

I am glad Masnick wrote about this despite my disagreement with his views on how much control a website owner ought to have over scraping. This is a necessary dissection of the suit, though I would appreciate views on it from actual intellectual property lawyers. They might be able to explain whether a positive outcome of this case for Reddit would yield clear rules delineating this conduct from the ways in which artificial intelligence companies have so far benefitted from a generous reading of fair use and terms of service documents.


Apple Threatens to Withdraw App Tracking Transparency in Europe

By: Nick Heer

Andrej Sokolow, Deutsche Presse Agentur:

Apple could switch off a function that prevents users’ apps from tracking their behaviour across various services and websites for advertising purposes in Germany and other European countries.

The iPhone manufacturer on Wednesday complained that it has experienced constant headwinds from the tracking industry.

“Intense lobbying efforts in Germany, Italy and other countries in Europe may force us to withdraw this feature to the detriment of European consumers,” Apple said in a statement.

It is a little rich for Apple to be claiming victimhood in the face of “intense lobbying efforts” by advertising companies when it is the seventh highest spender on lobbying in the European Union. Admittedly, it spends about one-third as much as Meta in Germany, but that is not because Apple cannot afford to spend more. Apple’s argument is weak.

In any case, this is another case where Apple believes it should have a quasi-regulatory role. As I wrote last month:

[…] Apple seems to believe it is its responsibility to implement technical controls to fulfill its definition of privacy and, if that impacts competition and compatibility, too bad. E.U. regulators seem to believe it has policy protections for user privacy, and that users should get to decide how their private data is shared.

I believe there are people within Apple who care deeply about privacy. However, when Apple also gets to define privacy and tracking, it is no coincidence it found an explanation allowing it to use platform activity and in-app purchases for ad targeting. This is hardly as sensitive as the tracking performed by Google and Meta, and Apple does not use third-party data for targeting.

But why would it? Apple owns the platform and, if it wanted, could exploit far more user information without it being considered “tracking” since it is all first-party data. That it does not is a positive reflection of self-policing and, ideally, something it will not change. But it could.

What E.U. authorities are concerned about is this self-serving definition of privacy and the self-policing that results, conflicting with the role of European regulators and privacy laws, and its effects on competition. I think those are reasonable grounds for questioning the validity of App Tracking Transparency. Furthermore, the consequences emanating from violations of privacy law are documented; Meta was penalized €1.2 billion as a result of GDPR violations. Potential violations of App Store policy, on the other hand, are handled differently. If Meta has, as a former employee alleges, circumvented App Tracking Transparency, would the penalties be handled by similar regulatory bodies, or would it — like Uber before — be dealt with privately and rather quietly?

The consequences of previous decisions have been frustrating. They result in poorer on-device privacy controls for users in part because Apple is a self-interested party. It would be able to make its case more convincingly if it walked away from the advertising business altogether.

Sokolow:

Apple argues that it has proposed various solutions to the competition authorities, but has not yet been able to dispel their concerns.

The company wants to continue to offer ATT to European users. However, it argued that the competition authorities have proposed complex solutions that would effectively undermine the function from Apple’s point of view.

Specificity would be nice. It would be better if these kinds of conversations could be had in public instead of in vague statements provided on background to select publications.


The Verge Delivers a Bad Article About Amazon’s Ring

By: Nick Heer

Jennifer Pattison Tuohy, of the Verge, interviewed Ring founder Jamie Siminoff about a new book — which Tuohy has not read — written with Andrew Postman about the success of the company. During this conversation, Tuohy stumbled into Siminoff making a pretty outrageous claim:

While research suggests that today’s video doorbells do little to prevent crime, Siminoff believes that with enough cameras and with AI, Ring could eliminate most of it. Not all crime — “you’ll never stop crime a hundred percent … there’s crimes that are impossible to stop,” he concedes — but close.

“I think that in most normal, average neighborhoods, with the right amount of technology — not too crazy — and with AI, that we can get very close to zero out crime. Get much closer to the mission than I ever thought,” he says. “By the way, I don’t think it’s 10 years away. That’s in 12 to 24 months … maybe even within a year.”

If this sounds ridiculous to you, congratulations, you are thinking harder than whoever wrote the headline on this article:

Ring’s CEO says his cameras can almost ‘zero out crime’ within the next 12 months

The word “almost” and the phrase “very close” are working very hard to keep the core of Siminoff’s claim intact. What he says is that, by this time next year, “normal” communities with enough Ring cameras and a magic dusting of A.I. will have virtually no crime. The caveats are there to imply more nuance, but they are merely an escape hatch for when someone revisits this next year.

The near-complete elimination of crime in “normal” areas — whatever that means — will very obviously not happen. Tuohy cites a 2023 Scientific American story which, in turn, points to articles in MIT Technology Review and CNet. The first debunks a study Ring likes to promote claiming its devices drove a 55% decline in burglaries in Wilshire Park, Los Angeles in 2015, with cameras on about forty homes. Not only does the public data not support this dramatic reduction, but:

Even if the doorbells had a positive effect, it seemed not to last. In 2017, Wilshire Park suffered more burglaries than in any of the previous seven years.

The CNet article collects a series of reports from other police departments indicating Ring cameras have questionable efficacy at deterring crime on a city-wide level.

This is also something we can know instinctually, since we already have plenty of surveillance cameras. A 2019 meta analysis (PDF) by Eric Piza, et al., found CCTV adoption decreased crime by about 13%. That is not nothing, but it is also a long way from nearly 100%. One could counter that these tests did not factor in Ring’s A.I. features, like summaries of what the camera saw — we have spent so much energy creating summary-making machines — and finding lost dogs.

The counterargument to all of this, however, is that Ring’s vision is a police state enforced by private enterprise. A 2022 paper (PDF) by Dan Calacci, et al., found race was, unsurprisingly, a motivating factor in reports of suspicious behaviour, and that reports within Ring’s Neighbors app were not correlated with the actual frequency of those crimes. Ring recently partnered with Flock, adding a further layer of creepiness.

I will allow that perhaps an article about Siminoff’s book is not the correct place to litigate these claims. By the very same logic, however, the Verge should be more cautious in publishing them, and should not have promoted them in a headline.


App Store Restrictions Face Scrutiny in China, U.K.

By: Nick Heer

Liam Mo and Brenda Goh, Reuters:

A group of 55 Chinese iPhone and iPad users filed a complaint with China’s market regulator on Monday, a lawyer representing the group said, alleging that Apple abuses its market dominance by restricting app distribution and payments to its own platforms while charging high commissions.

[…]

This marks the second complaint against Apple led by Wang. A similar case filed in 2021 was dismissed by a Shanghai court last year.

Imran Rahman-Jones, BBC News:

But the Competition and Markets Authority (CMA) has designated both Apple and Google as having “strategic market status” – effectively saying they have a lot of power over mobile platforms.

The ruling has drawn fury from the tech giants, with Apple saying it risked harming consumers through “weaker privacy” and “delayed access to new features”, while Google called the decision “disappointing, disproportionate and unwarranted”.

The CMA said the two companies “may be limiting innovation and competition”.

Pretty soon it may be easier to list the significant markets in which Apple is still able to exercise complete control over iOS app distribution.


OpenAI Launches ChatGPT Atlas

By: Nick Heer

Maxwell Zeff, TechCrunch:

OpenAI announced Tuesday the launch of its AI-powered browser, ChatGPT Atlas, a major step in the company’s quest to unseat Google as the main way people find information online.

The company says Atlas will first roll out on macOS, with support for Windows, iOS, and Android coming soon. OpenAI says the product will be available to all free users at launch.

Atlas, like Perplexity’s Comet, is a Chromium-based browser. You cannot use it without signing in to ChatGPT. As I was completing the first launch experience, shimmering colours radiated from the setup window and — no joke — it looked like my computer’s screen was failing.

OpenAI:

As you use Atlas, ChatGPT can get smarter and more helpful, too. Browser memories let ChatGPT remember context from the sites you visit and bring that context back when you need it. This means you can ask ChatGPT questions like: “Find all the job postings I was looking at last week and create a summary of industry trends so I can prepare for interviews.” Browser memories in Atlas are completely optional, and you’re always in control: you can view or archive them at any time in settings, and deleting browsing history deletes any associated browser memories.

I love the idea of this. So often, I need to track down something I remember reading, but have only the haziest recollection of what, exactly, it is. I want this in my life. Yet I have zero indication I can trust OpenAI with retaining and synthesizing useful information from my browsing history.

The company says it only retains pages until they have been summarized, and I am sure it thinks it is taking privacy as seriously as it can. But what about down the road? What could it do with all of this data it does retain — information that is tied to your ChatGPT account? OpenAI wants to be everywhere, and it wants to know everything about you to an even greater extent than Google or Meta have been able to accomplish. Why should I trust it? What makes the future of OpenAI look different than the trajectories of the information-hungry businesses before it?


Federico Viticci’s M5 iPad Pro Review

By: Nick Heer

Even if you are not interested in the iPad or Apple product news generally, I recommend making time for Federico Viticci’s review, at MacStories, of the new iPad Pro. Apple claims 3.5× performance gains with A.I. models, so Viticci attempted to verify that number. Unfortunately, he ran into some problems.

Viticci (emphasis his):

This is the paradox of the M5. Theoretically speaking, the new Neural Accelerator architecture should lead to notable gains in token generation and prefill time that may be appreciated on macOS by developers and AI enthusiasts thanks to MLX (more on this below). However, all these improvements amount to very little on iPadOS today because there is no serious app ecosystem for local AI development and tinkering on iPad. That ecosystem absolutely exists on the Mac. On the iPad, we’re left with a handful of non-MLX apps from the App Store, no Terminal, and the untapped potential of the M5.

In case it’s not clear, I’m coming at this from a perspective of disappointment, not anger. […]

Viticci’s frustration with the state of A.I. models on the iPad Pro is palpable. Ideally and hopefully, it is a future-friendly system, but that is not usually the promise of Apple’s products. It usually likes to tell a complete story with the potential for sequels. To get even a glimpse of what that story looks like, Viticci had to go to great lengths, as documented in his review.

In the case of this iPad Pro, Apple is marketing leaps-and-bounds boosts in A.I. performance — though those claims appear to be optimistic — while still playing catch-up on last year’s Apple Intelligence announcements, and offering little news for a user who wants to explore A.I. models directly on their iPad. It feels like a classic iPad story: incredible hardware, restricted by Apple’s software decisions.

Update: I missed a followup post from Viticci in which he points to a review from Max Weinbach of Creative Strategies. Weinbach found the M5 MacBook Pro does, indeed, post A.I. performance gains closer to Apple’s claims.

As an aside, I think it is curious for Apple to be supplying review units to Creative Strategies. It is nominally a research and analysis firm, not a media outlet. While there are concerns about the impartiality of reviewers granted access to prerelease devices, it feels to me like an entirely different thing for a broad-ranging research organization to receive one, for reasons I cannot quite identify.


Long Lines for Election Day in Alberta

By: Nick Heer

Ken MacGillivray and Karen Bartko, Global News:

“All electors are legislatively required to complete a Statement of Eligibility form (Form 13) at the voting station. This form is a declaration by an elector that they meet the required legislated criteria to receive and cast ballots,” Elections Edmonton said.

[…]

Those casting ballots say confirming voters are on the register or completing the necessary paperwork takes three to five minutes per voter.

I was lucky to be in and out of my polling place in about fifteen minutes, but the longest part was waiting for the person to diligently copy my name, address, and date-of-birth from my driver’s license to a triplicate form, immediately after confirming the same information on the printed voter roll. It is a silly requirement coming down as part of a larger unwanted package from our provincial government for no clear reason. The same legislation also prohibits electronic tabulation, so all the ballots are slowly being counted by hand. These are the kinds of measures that only begin to make sense if you assume someone with influence in our provincial government watches too much Fox News.

I wonder if our Minister of Red Tape Reduction has heard about all the new rules and restrictions implemented by his colleagues.


The Blurry Future of Sora

By: Nick Heer

Jason Parham, Wired:

The uptick in artificial social networks, [Rudy] Fraser tells me, is being driven by the same tech egoists who have eroded public trust and inflamed social isolation through “divisive” algorithms. “[They] are now profiting on that isolation by creating spaces where folks can surround themselves with sycophantic bots.”

I saw this quote circulating on Bluesky over the weekend and it has been rattling around my head since. It cuts to the heart of one reason why A.I.-based “social” networks like Sora and Meta’s Vibes feel so uncomfortable.

Unfortunately, I found the very next paragraph from Parham uncompelling:

In the many conversations I had with experts, similar patterns of thought emerged. The current era of content production prioritizes aesthetics over substance. We are a culture hooked on optimization and exposure; we crave to be seen. We live on our phones and through our screens. We’re endlessly watching and being watched, submerged in a state of looking. With a sort of all-consuming greed, we are transforming into a visual-first society — an infinite form of entertainment for one another to consume, share, fight over, and find meaning through.

Of course our media reflects aesthetic trends and tastes; it always has. I do not know that there was a halcyon era of substance-over-style media, nor do I believe there was a time since celebrity was a feasible achievement in which at least some people did not desire it. In a 1948 British survey of children 10–15 years old, one-sixth to one-third of respondents aspired to “‘romantic’ [career] choices like film acting, sport, and the arts”. An article published in Scouting Magazine in 2000 noted children leaned toward high-profile careers — not necessarily celebrity, but jobs “every child is exposed to”. We love this stuff because we have always loved this stuff.

Among the bits I quibble with in the above, however, this stood out as a new and different thing: “[w]e’re endlessly watching and being watched”. That, I think, is the kind of big change Fraser is quoted as speaking about, and something I think is concerning. We already worried about echo chambers, and platforms like YouTube responded by adjusting recommendations to less frequently send users to dark places. Let us learn something, please.

Cal Newport:

A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn’t be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn’t be entertaining the idea, ​as [Sam] Altman did last week​, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy AI-generated “erotica.”

To me, these are the acts of a company that poured tens of billions of investment dollars into creating what they hoped would be the most consequential invention in modern history, only to finally realize that what they wrought, although very cool and powerful, isn’t powerful enough on its own to deliver a new world all at once.

I do not think Sora smells of desperation, but I do think it is the product of a company that views unprecedented scale as its primary driver. I think OpenAI wants to be everywhere — and not in the same way that a consumer electronics company wants its smartphones to be the category’s most popular, or anything like that. I wonder if Ben Thompson’s view of OpenAI as “the Windows of A.I.” is sufficient. I think OpenAI is hoping to be a ubiquitous layer in our digital world; or, at least, it is behaving that way.


I Bet Normal Users Will Figure Out Which Power Adapter to Buy

By: Nick Heer

John Gruber, responding to my exploration of the MacBook Pro A.C. adapter non-issue:

The problem I see with the MacBook power adapter situation in Europe is that while power users — like the sort of people who read Daring Fireball and Pixel Envy — will have no problem buying exactly the sort of power adapter they want, or simply re-using a good one they already own, normal users have no idea what makes a “good” power adapter. I suspect there are going to be a lot of Europeans who buy a new M5 MacBook Pro and wind up charging it with inexpensive low-watt power adapters meant for things like phones, and wind up with a shitty, slow charging experience.

Maybe. I think it is fair to be concerned about this being another thing people have to think about when buying a laptop. But, in my experience, less technically adept people still believe they need specific cables and chargers, even when they do not.

When I was in college, a friend forgot to bring the extension cable for their MacBook charger. There was an unused printer in the studio, though, so I was able to use the power cable from that because it is an interchangeable standard plug. I see this kind of thing all the time among friends, family members, and colleagues. It makes sense in a world frequently populated by proprietary adapters.

Maybe some people will end up with underpowered USB-C chargers. I bet a lot of people will just go to the Apple Store and buy the one recommended by staff, though.


Latest Beta of Apple’s Operating Systems Adds Another Translucency Control

By: Nick Heer

Chance Miller, 9to5Mac:

You can find the new option [in 26.1 beta 4] on iPhone and iPad by going to the Settings app and navigating to the Display & Brightness menu. On the Mac, it’s available in the “Appearance” menu in System Settings. Here, you’ll see a new Liquid Glass menu with “Clear” and “Tinted” options.

“Choose your preferred look for Liquid Glass. Clear is more transparent, revealing the content beneath. Tinted increases opacity and adds more contrast,” Apple explains.

After Apple made the menu bar translucent in Mac OS X Leopard, it added a preference to make the bar solid after much pushback. When it refreshed the design of Mac OS X in Yosemite with more frosted glass effects, it added controls to Reduce Transparency and Increase Contrast, which replaced the menu bar-specific setting.

Here we are with yet another theme built around translucency, and more complaints about legibility and contrast — Miller writes “Apple says it heard from users throughout the iOS 26 beta testing period that they’d like a setting to manage the opaqueness of the Liquid Glass design”. Now, as has become traditional, there is another way to moderate the excesses of Apple’s new visual language. I am sure there are some who will claim this undermines the entire premise of Liquid Glass, and I do not know that they are entirely wrong. Some might call it greater personalization and customization, too. I think it feels unfocused. Apple keeps revisiting translucency and finding it needs to add more controls to compensate.


NSO Group Banned From Using or Supplying WhatsApp Exploits

By: Nick Heer

Carly Nairn, Courthouse News Service:

U.S. District Judge Phyllis Hamilton said in a 25-page ruling that there was evidence NSO Group’s flagship spyware could still infiltrate WhatsApp users’ devices and granted Meta’s request for a permanent injunction.

However, Hamilton, a Bill Clinton appointee, also determined that any damages would need to follow a ratioed amount of compensation based on a legal framework designed to proportion damages. She ordered that the jury-based award of $167 million should be reduced to a little over $4 million.

Once again, I am mystified by Apple’s decision to drop its suit against NSO Group. What Meta won is protection from WhatsApp being used as an installation vector for NSO’s spyware; importantly, high-value WhatsApp users won a modicum of protection from NSO’s customers. And, as John Scott-Railton of Citizen Lab points out, NSO has “an absolute TON of their business splashed all over the court records”. There are several depositions from which an enterprising journalist could develop a better understanding of this creepy spyware company.

Last week, NSO Group confirmed it had been acquired by U.S. investors. However, according to its spokesperson, its “headquarters and core operations remain in Israel [and] continues to be fully supervised and regulated by the relevant Israeli authorities”.

Lorenzo Franceschi-Bicchierai, TechCrunch:

NSO has long claimed that its spyware is designed to not target U.S. phone numbers, likely to avoid hurting its chances to enter the U.S. market. But the company was caught in 2021 targeting about a dozen U.S. government officials abroad.

Soon after, the U.S. Commerce Department banned American companies from trading with NSO by putting the spyware maker on the U.S. Entities List. Since then, NSO has tried to get off the U.S. government’s blocklist, as recently as May 2025, with the help of a lobbying firm tied to the Trump administration.

I have as many questions about what this change in ownership could mean for its U.S. relationship as I do about how it affects possible targets.


Sponsor: Magic Lasso Adblock: Incredibly Private and Secure Safari Web Browsing

By: Nick Heer

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

With over 5,000 five star reviews, Magic Lasso Adblock is simply the best ad blocker for your iPhone, iPad, and Mac.

Designed from the ground up to protect your privacy, Magic Lasso blocks all intrusive ads, trackers, and annoyances. It stops you from being followed by ads around the web and, with App Ad Blocking, it stops your app usage being harvested by ad networks.

So, join over 350,000 users and download Magic Lasso Adblock today.


The New MacBook Pro Is €35 Less Expensive in E.U. Countries, Ships Without a Charger

By: Nick Heer

Are you outraged? Have you not heard? Apple updated its entry-level MacBook Pro with a new M5 chip, and across Europe, it does not ship with an A.C. adapter in the box as standard any more. It still comes with a USB-C to MagSafe cable, and you can add an adapter at checkout, but those meddling E.U. regulators have forced Apple to do something stupid and customer-unfriendly again. Right?

William Gallagher, of AppleInsider, gets it wrong:

Don’t blame Apple this time — if you’re in the European Union or the UK, your new M5 14-inch MacBook Pro or iPad Pro may cost you $70 extra because Apple isn’t allowed to bundle a charger.

First of all, the dollar is not the currency in any of these countries. Second, the charger in European countries is €65, which is more like $76 right now. Third, Apple is allowed to bundle an A.C. adapter; it just needs to offer an option to not include it. Fourth, and most important, is that the new MacBook Pro is less expensive in nearly every region in which the A.C. adapter is now a configure-to-order option — even after adding the adapter.

In Ireland, the MacBook Pro used to start at €1,949; it now starts at €1,849; in France, it was €1,899, and it is now €1,799. As mentioned, the adapter is €65, making these new Macs €35 less with a comparable configuration. The same is true in each Euro-currency country I checked: Germany, Italy, and Spain all received a €100 price cut if you do not want an A.C. adapter, and a €35 price cut if you do.

It is not just countries that use the Euro receiving cuts. In Norway, the new MacBook Pro starts at 2,000 krone less than the one it replaces, and a charger is 849 krone. In Hungary, it is 50,000 forint less, with a charger costing about 30,000 forint. There are some exceptions, too. In Switzerland, the new models are 50 francs less, but a charger is 59 francs. And in the U.K., there is no price adjustment, even though the charger is a configure-to-order option there, too.

Countries with a charger in the box, on the other hand, see no such price adjustment, at least for the ones I have checked. The new M5 model starts at the same price as the M4 it replaces in Canada, Japan, Singapore, and the United States. (For the sake of brevity and because not all of these pages have been recently crawled by the Internet Archive, I have not included links to each comparison. I welcome checking my work, however, and would appreciate an email if I missed an interesting price change.)

Maybe Apple was already planning a €100 price cut for these new models. The M4 was €100 less expensive than the M3 it replaced, for example, so it is plausible. That is something we simply cannot know. What we do know for certain is that these new MacBook Pros might not come with an A.C. adapter, but even if someone adds one at checkout, it still costs less in most places with this option.

Gallagher:

It doesn’t appear that Apple has cut prices of the MacBook Pro or iPad Pro to match, either. That can’t be proven, though, because at least with the UK, Apple generally does currency conversion just by swapping symbols.

It can be proven if you bother to put in thirty minutes’ work.

Joe Rossignol, of MacRumors, also gets it a little wrong:

According to the European Union law database, Apple could have let customers in Europe decide whether they wanted to have a charger included in the box or not, but the company has ultimately decided to not include one whatsoever: […]

A customer can, in fact, choose to add an A.C. adapter when they order their Mac.


OpenAI and Nvidia Are at the Centre of a Trillion-Dollar Circular Investment Economy

By: Nick Heer

Tabby Kinder in New York and George Hammond, Financial Times:

OpenAI has signed about $1tn in deals this year for computing power to run its artificial intelligence models, commitments that dwarf its revenue and raise questions about how it can fund them.

Emily Forgash and Agnee Ghosh, Bloomberg:

For much of the AI boom, there have been whispers about Nvidia’s frenzied dealmaking. The chipmaker bolstered the market by pumping money into dozens of AI startups, many of which rely on Nvidia’s graphics processing units to develop and run their models. OpenAI, to a lesser degree, also invested in startups, some of which built services on top of its AI models. But as tech firms have entered a more costly phase of AI development, the scale of the deals involving these two companies has grown substantially, making it harder to ignore.

The day after Nvidia and OpenAI announced their $100 billion investment agreement, OpenAI confirmed it had struck a separate $300 billion deal with Oracle to build out data centers in the US. Oracle, in turn, is spending billions on Nvidia chips for those facilities, sending money back to Nvidia, a company that is emerging as one of OpenAI’s most prominent backers.

I possess none of the skills most useful to understand what all of this means. I am not an economist; I did not have a secret life as an investment banker. As a layperson, however, it is not comforting to read from some People With Specialized Knowledge that this is similar to historically good circular investments, just at an unprecedented scale, while other People With Specialized Knowledge say this has been the force preventing the U.S. from entering a recession. These articles might be like one of those prescient papers from before the Great Recession. Not a great feeling.


The New ‘Foreign Influence’ Scare

By: Nick Heer

Emmanuel Maiberg, 404 Media:

Democratic U.S. Senators Richard Blumenthal and Elizabeth Warren sent letters to the Department of Treasury Secretary Scott Bessent and Electronic Arts CEO Andrew Wilson, raising concerns about the $55 billion acquisition of the giant American video game company in part by Saudi Arabia’s Public Investment Fund (PIF).

Specifically, the Senators worry that EA, which just released Battlefield 6 last week and also publishes The Sims, Madden, and EA Sports FC, “would cease exercising editorial and operational independence under the control of Saudi Arabia’s private majority ownership.”

“The proposed transaction poses a number of significant foreign influence and national security risks, beginning with the PIF’s reputation as a strategic arm of the Saudi government,” the Senators wrote in their letter. […]

In the late 1990s and early 2000s, the assumption was that it would be democratic nations successfully using the web for global influence. But I think the 2016 U.S. presidential election, during which Russian operatives worked to sway voters’ intentions, was a reality check. Fears of foreign influence were then used by U.S. lawmakers to justify banning TikTok, and to strongarm TikTok into allowing Oracle to oversee its U.S. operations. Now, it is Saudi Arabian investment in Electronic Arts raising concerns. Like TikTok, it is not the next election that is, per se, at risk, but the general thoughts and opinions of people in the United States.

U.S. politicians even passed a law intended to address “foreign influence” concerns. However, Saudi Arabia is not one of the four “covered nations” restricted by PAFACA.

Aside from xenophobia, I worry “foreign influence” is becoming a new standard excuse for digital barriers. We usually associate restrictive internet policies with oppressive and authoritarian regimes that do not trust their citizens to be able to think for themselves. This is not to say foreign influence is not a reasonable concern, nor that Saudi Arabia has no red flags, nor still that these worries are a purely U.S. phenomenon. Canadian officials are similarly worried about adversarial government actors covertly manipulating our policies and public opinion. But I think we need to do better if we want to support a vibrant World Wide Web. U.S. adversaries are allowed to have big, successful digital products, too.


My flailing around with Firefox's Multi-Account Containers

By: cks

I have two separate Firefox environments. One of them is quite locked down so that it blocks JavaScript by default, doesn't accept cookies, and so on. Naturally this breaks a lot of things, so I have a second "just make it work" environment that runs all the JavaScript, accepts all the cookies, and so on (although of course I use uBlock Origin, I'm not crazy). This second environment is pretty risky in the sense that it's going to be heavily contaminated with tracking cookies and so on, so to mitigate the risk (and make it a better environment to test things in), I have this Firefox set to discard cookies, caches, local storage, history, and so on when it shuts down.

In theory how I use this Firefox is that I start it when I need to use some annoying site I want to just work, use the site briefly, and then close it down, flushing away all of the cookies and so on. In practice I've drifted into having a number of websites more or less constantly active in this "accept everything" Firefox, which means that I often keep it running all day (or longer at home) and all of those cookies stick around. This is less than ideal, and is a big reason why I wish Firefox had an 'open this site in a specific profile' feature. Yesterday, spurred on by Ben Zanin's Fediverse comment, I decided to make my "accept everything" Firefox environment more complicated in the pursuit of doing better (ie, throwing away at least some cookies more often).

First, I set up a combination of Multi-Account Containers for the basic multi-container support and FoxyTab to assign wildcarded domains to specific containers. My reason to use Multi-Account Containers and to confine specific domains to specific containers is that both M-A C itself and my standard Cookie Quick Manager add-on can purge all of the cookies and so on for a specific container. In theory this lets me manually purge undesired cookies, or all cookies except desired ones (for example, my active Fediverse login). Of course I'm not likely to routinely manually delete cookies, so I also installed Cookie AutoDelete with a relatively long timeout and with its container awareness turned on, and exemptions configured for the (container-confined) sites that I'm going to want to retain cookies from even when I've closed their tab.

(It would be great if Cookie AutoDelete supported different cookie timeouts for different containers. I suspect it's technically possible, along with other container-aware cookie deletion, since Cookie AutoDelete applies different retention policies in different containers.)

In FoxyTab, I've set a number of my containers to 'Limit to Designated Sites'; for example, my 'Fediverse' container is set this way. The intention is that when I click on an external link in a post while reading my Fediverse feed, any cookies that external site sets don't wind up in the Fediverse container; instead they go either in the default 'no container' environment or in any specific container I've set up for them. As part of this I've created a 'Cookie Dump' container that I've assigned as the container for various news sites and so on where I actively want a convenient way to discard all their cookies and data (which is available through Multi-Account Containers).

Of course if you look carefully, much of this doesn't really require Multi-Account Containers and FoxyTab (or containers at all). Instead I could get almost all of this just by using Cookie AutoDelete to clean out cookies from closed sites after a suitable delay. Containers do give me a bit more isolation between the different things I'm using my "just make it work" Firefox for, and maybe that's important enough to justify the complexity.

(I still have this Firefox set to discard everything when it exits. This means that I have to re-log-in every so often even for the sites where I have Cookie AutoDelete keep cookies, but that's fine.)

I wish Firefox Profiles supported assigning websites to profiles

By: cks

One of the things that Firefox is working on these days is improving Firefox's profiles feature so that it's easier to use them. Firefox also has an existing feature that is similar to profiles, in containers and the Multi-Account Containers extension. The reason Firefox is tuning up profiles is that containers only separate some things, while profiles separate pretty much everything. A profile has a separate set of about:config settings, add-ons, add-on settings, memorized logins, and so on. I deliberately use profiles to create two separate and rather different Firefox environments. I'd like to have at least two or three more profiles, but one reason I've been lazy is that the more profiles I have, the more complex getting URLs into the right profile is (even with tooling to help).

This leads me to my wish for profiles, which is for profiles to support the kind of 'assign website to profile' and 'open website in profile' features that you currently have with containers, especially with the Multi-Account Containers extension. Actually I would like a somewhat better version than Multi-Account Containers currently offers, because as far as I can see you can't currently say 'all subdomains under this domain should open in container X' and that's a feature I very much want for one of my use cases.

(Multi-Account Containers may be able to do wildcarded subdomains with an additional add-on, but on the other hand apparently it may have been neglected or abandoned by Mozilla.)

Another way to get much of what I want would be for some of my normal add-ons to be (more) container aware. I could get a lot of the benefits of profiles (although not all of them) by using Multi-Account Containers with container aware cookie management in, say, Cookie AutoDelete (which I believe does support that, although I haven't experimented). Using containers also has the advantage that I wouldn't have to maintain N identical copies of my configuration for core extensions and bookmarklets and so on.

(I'm not sure what you can copy from one profile to a new one, and you currently don't seem to get any assistance from Firefox for it, at least in the old profile interface. This is another reason I haven't gone wild on making new Firefox profiles.)

Modern Linux filesystem mounts are rather complex things

By: cks

Once upon a time, Unix filesystem mounts worked by putting one inode on top of another, and this was also how they worked in very early Linux. It wasn't wrong to say that mounts were really about inodes, with the names only being used to find the inodes. This is no longer how things work in Linux (and perhaps other Unixes, but Linux is what I'm most familiar with for this). Today, I believe that filesystem mounts in Linux are best understood as namespace operations.

Each separate (unmounted) filesystem is a tree of names (a namespace). At a broad level, filesystem mounts in Linux take some name from that filesystem tree and project it on top of something in an existing namespace, generally with some properties attached to the projection. A regular conventional mount takes the root name of the new filesystem and puts the whole tree somewhere, but for a long time Linux's bind mounts took some other name in the filesystem as their starting point (what we could call the root inode of the mount). In modern Linux, there can also be multiple mount namespaces in existence at one time, with different contents and properties. A filesystem mount does not necessarily appear in all of them, and different things can be mounted at the same spot in the tree of names in different mount namespaces.

(Some mount properties are still global to the filesystem as a whole, while other mount properties are specific to a particular mount. See mount(2) for a discussion of general mount properties. I don't know if there's a mechanism to handle filesystem specific mount properties on a per mount basis.)

This can't really be implemented with an inode-based view of mounts. You can somewhat implement traditional Linux bind mounts with an inode based approach, but mount namespaces have to be separate from the underlying inodes. At a minimum a mount point must be a pair of 'this inode in this namespace has something on top of it', instead of just 'this inode has something on top of it'.

(A pure inode based approach has problems going up the directory tree even in old bind mounts, because the parent directory of a particular directory depends on how you got to the directory. If /usr/share is part of /usr and you bind mounted /usr/share to /a/b, the value of '..' depends on if you're looking at '/usr/share/..' or '/a/b/..', even though /usr/share and /a/b are the same inode in the /usr filesystem.)
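
As a concrete illustration of that parenthetical, here is a minimal sketch of doing the /usr/share to /a/b bind mount through the mount(2) system call. It has to run as root, and it assumes /a/b already exists as a directory:

#include <stdio.h>
#include <sys/mount.h>

int main (void)
{
    /* A bind mount projects an existing name (/usr/share) onto another
     * point in the namespace (/a/b); no new filesystem is involved. */
    if (mount("/usr/share", "/a/b", NULL, MS_BIND, NULL) != 0) {
        perror("mount");
        return 1;
    }

    /* From here on, /usr/share and /a/b name the same directory tree,
     * but '..' still depends on the path you used to get there:
     * /usr/share/.. is /usr, while /a/b/.. is /a. */
    return 0;
}

(A recursive bind mount adds MS_REC to the flags, and making the result read-only takes a second mount() call with MS_REMOUNT | MS_BIND | MS_RDONLY.)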

If I'm reading manual pages correctly, Linux still normally requires the initial mount of any particular filesystem be of its root name (its true root inode). Only after that initial mount is made can you make bind mounts to pull out some subset of its tree of names and then unmount the original full filesystem mount. I believe that a particular filesystem can provide ways to sidestep this with a filesystem specific mount option, such as btrfs's subvol= mount option that's covered in the btrfs(5) manual page (or 'btrfs subvolume set-default').

You can add arbitrary zones to NSD (without any glue records)

By: cks

Suppose, not hypothetically, that you have a very small DNS server for a captive network situation, where the DNS server exists only to give clients answers for a small set of hosts. One of the ways you can implement this is with an authoritative DNS server, such as NSD, that simply has an extremely minimal set of DNS data. If you're using NSD for this, you might be curious how minimal you can be and how much you need to mimic ordinary DNS structure.

Here, by 'mimic ordinary DNS structure', I mean inserting various levels of NS records so there is a more or less conventional path of NS delegations from the DNS root ('.') down to your name. If you're providing DNS clients with 'dog.example.org', you might conventionally have a NS record for '.', a NS record for 'org.', and a NS record for 'example.org.', mimicking what you'd see in global DNS. Of course all of your NS records are going to point to your little DNS server, but they're present if anything looks.

Perhaps unsurprisingly, NSD doesn't require this and DNS clients normally don't either. If you say:

zone:
  name: example.org
  zonefile: example-stub

and don't have any other DNS data, NSD won't object and it will answer queries for 'dog.example.org' with your minimal stub data. This works for any zone, including completely made up ones:

zone:
  name: beyond.internal
  zonefile: beyond-stub

The actual NSD stub zone files can be quite minimal. An older OpenBSD NSD appears to be happy with zone files that have only a $ORIGIN, a $TTL, a '@ IN SOA' record, and whatever records you care about in the zone.
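
For concreteness, here is a sketch of what such an 'example-stub' zone file might look like. Only dog.example.org comes from the text above; the SOA names, the timer values, and the 192.0.2.10 address are made-up placeholders:

$ORIGIN example.org.
$TTL 3600
@       IN SOA  ns.example.org. hostmaster.example.org. (
                1 3600 900 604800 300 )
dog     IN A    192.0.2.10

(A more conventional zone would also carry NS records and a matching address record for the name server, but as described above, a minimal stub setup can apparently get away without them.)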

Once I thought about it, I realized I should have expected this. An authoritative DNS server normally only holds data for a small subset of zones and it has to be willing to answer queries about the data it holds. Some authoritative DNS servers (such as Bind) can also be used as resolving name servers so they'd sort of like to have information about at least the root nameservers, but NSD is a pure authoritative server so there's no reason for it to care.

As for clients, they don't normally do DNS resolution starting from the root downward. Instead, they expect to operate by sending the entire query to whatever their configured DNS resolver is, which is going to be your little NSD setup. In a number of configurations, clients either can't talk directly to outside DNS or shouldn't try to do DNS resolution that way because it won't work; they need to send everything to their configured DNS resolver so it can do, for example, "split horizon" DNS.

(Yes, the modern vogue for DNS over HTTPS puts a monkey wrench into split horizon DNS setups. That's DoH's problem, not ours.)

Since this works for a .net zone, you can use it to try to disable DNS over HTTPS resolvers in your stub DNS environment by providing a .net zone with 'use-application-dns CNAME .' or the like, to trigger at least Firefox's canary domain detection.
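
A sketch of how that canary zone could be wired up, echoing the record suggested above; the 'net-stub' file name is just a placeholder, and the zone file needs the same minimal $ORIGIN/$TTL/SOA boilerplate as before:

zone:
  name: net
  zonefile: net-stub

with net-stub containing, in addition to that boilerplate, something along the lines of:

use-application-dns    IN CNAME    .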

(I'm not going to address whether you should have such a minimal stub DNS environment or instead count on your firewall to block traffic and have a normal DNS environment, possibly with split horizon or response policy zones to introduce your special names.)

Some of the things that ZFS scrubs will detect

By: cks

Recently I saw a discussion of my entry on how ZFS scrubs don't really check the filesystem structure where someone thought that ZFS scrubs only protected you from the disk corrupting data at rest, for example due to sectors starting to fail (here). While ZFS scrubs have their limits, they do manage to check somewhat more than this.

To start with, ZFS scrubs check the end to end hardware path for reading all your data (and implicitly for writing it). There are a variety of ways that things in the hardware path can be unreliable; for example, you might have slowly failing drive cables that are marginal and sometimes give you errors on data reads (or worse, data writes). A ZFS scrub has some chance to detect this; if a ZFS scrub passes, you know that as of that point in time you can reliably read all your data from all your disks and that all the data was reliably written.

If a scrub passes, you also know that the disks haven't done anything obviously bad with your data. This can be important if you're doing operations that you consider somewhat exotic, such as telling SSDs to discard unused sectors. If you have ZFS send TRIM commands to a SSD and then your scrub passes, you know that the SSD didn't incorrectly discard some sectors that were actually used.

Related to this, if you do a ZFS level TRIM and then the scrub passes, you know that ZFS itself didn't send TRIM commands that told the SSD to discard sectors that were actually used. In general, if ZFS has a serious problem where it writes the wrong thing to the wrong place, a scrub will detect it (although the scrub can't fix it). Similarly, a scrub will detect if a disk itself corrupted the destination of a write (or a read), or if things were corrupted somewhere in the lower level software and hardware path of the write.

There are a variety of ZFS level bugs that could theoretically write the wrong thing to the wrong place, or do something that works out to the same effect. ZFS could have a bug in free space handling (so that it incorrectly thinks some in use sectors are free and overwrites them), or it could write too much or too little, or it could correctly allocate and write data but record the location of the data incorrectly in higher level data structures, or it could accidentally not do a write (for example, if it's supposed to write a duplicate copy of some data but forgets to actually issue the IO). ZFS scrubs can detect all of these issues under the right circumstances.

(To a limited extent a ZFS scrub also checks the high level metadata of filesystems and snapshots, since it has to traverse that metadata to find the object set for each dataset and similar things. Since a scrub just verifies checksums, this won't cross check dataset level metadata like information on how much data was written in each snapshot, or the space usage.)

What little I want out of web "passkeys" in my environment

By: cks

WebAuthn is yet another attempt to do an API for web authentication that doesn't involve passwords but that instead allows browsers, hardware tokens, and so on to do things more securely. "Passkeys" (also) is the marketing term for a "WebAuthn credential", and an increasing number of websites really, really want you to use a passkey for authentication instead of any other form of multi-factor authentication (they may or may not still require your password).

Most everyone that wants you to use passkeys also wants you to specifically use highly secure ones. The theoretically most secure are physical hardware security keys, followed by passkeys that are stored and protected in secure enclaves in various ways by the operating system (provided that the necessary special purpose hardware is available). Of course the flipside of 'secure' is 'locked in', whether locked in to your specific hardware key (or keys, generally you'd better have backups) or locked in to a particular vendor's ecosystem because their devices are the only ones that can possibly use your encrypted passkey vault.

(WebAuthn neither requires nor standardizes passkey export and import operations, and obviously security keys are built to not let anyone export the cryptographic material from them, that's the point.)

I'm extremely not interested in the security versus availability tradeoff that passkeys make in favour of security. I care far more about preserving availability of access to my variety of online accounts than about nominal high security. So if I'm going to use passkeys at all, I have some requirements:

Linux people: is there a passkeys implementation that does not use physical hardware tokens (software only), is open source, works with Firefox, and allows credentials to be backed up and copied to other devices by hand, without going through some cloud service?

I don't think I'm asking for much, but this is what I consider the minimum for me actually using passkeys. I want to be 100% sure of never losing them because I have multiple backups and can use them on multiple machines.

Apparently KeePassXC more or less does what I want (when combined with its Firefox extension), and it can even export passkeys in a plain text format (well, JSON). However, I don't know if anything else can ingest those plain text passkeys, and I don't know if KeePassXC can be told to only do passkeys with the browser and not try to take over passwords.

(But at least a plain text JSON backup of your passkeys can be imported into another KeePassXC instance without having to try to move, copy, or synchronize a KeePassXC database.)

Normally I would ignore passkeys entirely, but an increasing number of websites are clearly going to require me to use some form of multi-factor authentication, no matter how stupid this is (cf), and some of them will probably require passkeys or at least make any non-passkey option very painful. And it's possible that reasonably integrated passkeys will be a better experience than TOTP MFA with my janky minimal setup.

(Of course KeePassXC also supports TOTP, and TOTP has an extremely obvious import process that everyone supports, and I believe KeePassXC will export TOTP secrets if you ask nicely.)

While KeePassXC is okay, what I would really like is for Firefox to support 'memorized passkeys' right along with its memorized passwords (and support some kind of export and import along with it). Should people use them? Perhaps not. But it would put that choice firmly in the hands of the people using Firefox, who could decide on how much security they did or didn't want, not in the hands of websites who want to force everyone to face a real risk of losing their account so that the website can conduct security theater.

(Firefox will never support passkeys this way for an assortment of reasons. At most it may someday directly use passkeys through whatever operating system services expose them, and maybe Linux will get a generic service that works the way I want it to. Nor is Firefox ever going to support 'memorized TOTP codes'.)

Two reasons why Unix traditionally requires mount points to exist

By: cks

Recently on the Fediverse, argv minus one asked a good question:

Why does #Linux require #mount points to exist?

And are there any circumstances where a mount can be done without a pre-existing mount point (i.e. a mount point appears out of thin air)?

I think there is one answer for why this is a good idea in general (and why doing otherwise is complex), although you can argue about it, and then a second historical answer based on how mount points were initially implemented.

The general problem is directory listings. We obviously want and need mount points to appear in readdir() results, but in the kernel, directory listings are historically the responsibility of filesystems and are generated and returned in pieces on the fly (which is clearly necessary if you have a giant directory; the kernel doesn't read the entire thing into memory and then start giving your program slices out of it as you ask). If mount points never appear in the underlying directory, then they must be inserted at some point in this process. If mount points can sometimes exist and sometimes not, it's worse; you need to somehow keep track of which ones actually exist and then add the ones that don't at the end of the directory listing. The simplest way to make sure that mount points always appear in directory listings is to require them to actually exist in the underlying filesystem.

(This was my initial answer.)

The historical answer is that in early versions of Unix, filesystems were actually mounted on top of inodes, not directories (or filesystem objects). When you passed a (directory) path to the mount(2) system call, all it was used for was getting the corresponding inode, which was then flagged as '(this) inode is mounted on' and linked (sort of) to the new mounted filesystem on top of it. All of the things that dealt with mount points and mounted filesystems did so by inode and inode number, with no further use of the paths; the root inode of the mounted filesystem was quietly substituted for the mounted-on inode. All of the mechanics of this needed the inode and directory entry for the name to actually exist (and V7 required the name to be a directory).

I don't think modern kernels (Linux or otherwise) still use this approach to handling mounts, but I believe it lingered on for quite a while. And it's a sufficiently obvious and attractive implementation choice that early versions of Linux also used it (see the Linux 0.96c version of iget() in fs/inode.c).

Sidebar: The details of how mounts worked in V7

When you passed a path to the mount(2) system call (called 'smount()' in sys/sys3.c), it used the name to get the inode and then set the IMOUNT flag from sys/h/inode.h on it (and put the mount details in a fixed size array of mounts, which wasn't very big). When iget() in sys/iget.c was fetching inodes for you and you'd asked for an IMOUNT inode, it gave you the root inode of the filesystem instead, which worked in cooperation with name lookup in a directory (the name lookup in the directory would find the underlying inode number, and then iget() would turn it into the mounted filesystem's root inode). This gave Research Unix a simple, low code approach to finding and checking for mount points, at the cost of pinning a few more inodes into memory (not necessarily a small thing when even a big V7 system only had at most 200 inodes in memory at once, but then a big V7 system was limited to 8 mounts, see h/param.h).

We can't really do progressive rollouts of disruptive things

By: cks

In a comment on my entry on how we reboot our machines right after updating their kernels, Jukka asked a good question:

While I do not know how many machines there are in your fleet, I wonder whether you do incremental rolling, using a small snapshot for verification before rolling out to the whole fleet?

We do this to some extent but we can't really do it very much. The core problem is that the state of almost all of our machines is directly visible and exposed to people. This is because we mostly operate an old fashioned Unix login server environment, where people specifically use particular servers (either directly by logging in to them or implicitly because their home directory is on a particular NFS fileserver). About the only genuinely generic machines we have are the nodes in our SLURM cluster, where we can take specific unused nodes out of service temporarily without anyone noticing.

(Some of these login servers are in use all of the time; others we might find idle if we're extremely lucky. But it's hard to predict when someone will show up to try to use a currently empty server.)

This means that progressively rolling out a kernel update (and rebooting things) to our important, visible core servers requires multiple people-visible reboots of machines, instead of one big downtime when everything is rebooted. Generally we feel that repeated disruptions are much more annoying and disruptive overall to people; it's better to get the pain of reboot disruptions over all at once. It's also much easier to explain to people, and we don't have to annoy them with repeated notifications that yet another subset of our servers and services will be down for a bit.

(To make an incremental deployment more painful for us, these will normally have to be after-hours downtimes, which means that we'll be repeatedly staying late, perhaps once a week for three or four weeks as we progressively work through a rollout.)

In addition to the nodes of our SLURM cluster, there are a number of servers that can be rebooted in the background to some degree without people noticing much. We will often try the kernel update out on a few of them in advance, and then update others of them earlier in the day (or the day before) both as a final check and to reduce the number of systems we have to cover at the actual out of hours downtime. But a lot of our servers cannot really be tested much in advance, such as our fileservers or our web server (which is under constant load for reasons outside the scope of this entry). We can (and do) update a test fileserver or a test web server, but neither will see a production load and it's under production loads that problems are most likely to surface.

This is a specific example of how the 'cattle' model doesn't fit all situations. To have a transparent rolling update that involves reboots (or anything else that's disruptive on a single machine), you need to be able to transparently move people off of machines and then back on to them. This is hard to get in any environment where people have long term usage of specific machines, where they have login sessions and running compute jobs and so on, and where you have non-redundant resources on a single machine (such as NFS fileservers without transparent failover from server to server).

We don't update kernels without immediately rebooting the machine

By: cks

I've mentioned this before in passing (cf, also) but today I feel like saying it explicitly: our habit with all of our machines is to never apply a kernel update without immediately rebooting the machine into the new kernel. On our Ubuntu machines this is done by holding the relevant kernel packages; on my Fedora desktops I normally run 'dnf update --exclude "kernel*"' unless I'm willing to reboot on the spot.
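As a sketch of the Ubuntu side of this, the mechanics look roughly like the following; the metapackage names here are the common ones and the exact set of packages you hold is up to you:

# hold the kernel packages so routine updates don't install a new kernel
apt-mark hold linux-image-generic linux-headers-generic
# later, when we're present and ready to reboot right afterward:
apt-mark unhold linux-image-generic linux-headers-generic
apt-get update && apt-get dist-upgrade
reboot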

The obvious reason for this is that we want to switch to the new kernel under controlled, attended conditions when we'll be able to take immediate action if something is wrong, rather than possibly have the new kernel activate at some random time without us present and paying attention if there's a power failure, a kernel panic, or whatever. This is especially acute on my desktops, where I use ZFS by building my own OpenZFS packages and kernel modules. If something goes wrong and the kernel modules don't load or don't work right, an unattended reboot can leave my desktops completely unusable and off the network until I can get to them. I'd rather avoid that if possible (sometimes it isn't).

(In general I prefer to reboot my Fedora machines with me present because weird things happen from time to time and sometimes I make mistakes, also.)

The less obvious reason is that when you reboot a machine right after applying a kernel update, it's clear in your mind that the machine has switched to a new kernel. If there are system problems in the days immediately after the update, you're relatively likely to remember this and at least consider the possibility that the new kernel is involved. If you apply a kernel update, walk away without rebooting, and the machine reboots a week and a half later for some unrelated reason, you may not remember that one of the things the reboot did was switch to a new kernel.

(Kernels aren't the only thing that this can happen with, since not all system updates and changes take effect immediately when made or applied. Perhaps one should reboot after making them, too.)

I'm assuming here that your Linux distribution's package management system is sensible, so there's no risk of losing old kernels (especially the one you're currently running) merely because you installed some new ones but didn't reboot into them. This is how Debian and Ubuntu behave (if you don't 'apt autoremove' kernels), but not quite how Fedora's dnf does it (as far as I know). Fedora dnf keeps the N most recent kernels around and probably doesn't let you remove the currently running kernel even if it's more than N kernels old, but I don't believe it tracks whether or not you've rebooted into those N kernels and stretches the N out if you haven't (or removes more recent installed kernels that you've never rebooted into, instead of older kernels that you did use at one point).

PS: Of course if kernel updates were perfect this wouldn't matter. However this isn't something you can assume for the Linux kernel (especially as patched by your distribution), as we've sometimes seen. Although big issues like that are relatively uncommon.

We (I) need a long range calendar reminder system

By: cks

About four years ago I wrote an entry about how your SMART drive database of attribute meanings needs regular updates. That entry was written on the occasion of updating the database we use locally on our Ubuntu servers, and at the time we were using a mix of Ubuntu 18.04 and Ubuntu 20.04 servers, both of which had older drive databases that probably dated from early 2018 and early 2020 respectively. It is now late 2025 and we use a mix of Ubuntu 24.04 and 22.04 servers, both of which have drive databases that are from after October of 2021.

Experienced system administrators know where this one is going: today I updated our SMART drive database again, to a version of the SMART database that was more recent than the one shipped with 24.04 instead of older than it.

It's a fact of life that people forget things. People especially forget things that are a long way away, even if they make little notes in their worklog message when recording something that they did (as I did four years ago). It's definitely useful to plan ahead in your documentation and write these notes, but without an external thing to push you or something to explicitly remind you, there's no guarantee that you'll remember.

All of which leads me to the view that it would be useful for us to have a long range calendar reminder system, something that could be used to set reminders for more than a year into the future and ideally allow us to write significant email messages to our future selves to cover all of the details (although there are hacks around that, such as putting the details on a web page and having the calendar mail us a link). Right now the best calendar reminder system we have is the venerable calendar, which we can arrange to email one-line notes to our general address that reaches all sysadmins, but calendar doesn't let you include the year in the reminder date.

(For SMART drive database updates, we could get away with mailing ourselves once a year in, say, mid-June. It doesn't hurt to update the drive database more than every Ubuntu LTS release. But there are situations where a reminder several years in the future is what we want.)

PS: Of course it's not particularly difficult to build an ad-hoc script system to do this, with various levels of features. But every local ad-hoc script that we write is another little bit of overhead, and I'd like to avoid that kind of thing if at all possible in favour of a standard solution (that isn't a shared cloud provider calendar).
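For illustration, the sort of ad-hoc version I have in mind is just a dated reminders file plus a daily cron job; everything here (paths and the mail alias) is made up:

#!/bin/sh
# remind-today: mail any reminders whose date is today.
# Lines in the reminders file look like: 2027-06-15 update the SMART drive db
REMINDERS=/local/adm/reminders.txt
TODAY=$(date +%Y-%m-%d)
matches=$(grep "^$TODAY " "$REMINDERS")
[ -n "$matches" ] && echo "$matches" | mail -s "Reminders for $TODAY" sysadmins@example.org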

We need to start doing web blocking for non-technical reasons

By: cks

My sense is that for a long time, technical people (system administrators, programmers, and so on) have seen the web as something that should be open by default and by extension, a place where we should only block things for 'technical' reasons. Common technical reasons are a harmful volume of requests or clear evidence of malign intentions, such as probing for known vulnerabilities. Otherwise, if it wasn't harming your website and wasn't showing any intention to do so, you should let it pass. I've come to think that in the modern web this is a mistake, and we need to be willing to use blocking and other measures for 'non-technical' reasons.

The core problem is that the modern web seems to be fragile and is kept going in large part by a social consensus, not technical things such as capable software and powerful servers. However, if we only react to technical problems, there's very little that preserves and reinforces this social consensus, as we're busy seeing. With little to no consequences for violating the social consensus, bad actors are incentivized to skate right up to and even over the line of causing technical problems. When we react by taking only narrow technical measures, we tacitly reward the bad actors for their actions; they can always find another technical way. They have no incentive to be nice or to even vaguely respect the social consensus, because we don't punish them for it.

So I've come to feel that if something like the current web is to be preserved, we need to take action not merely when technical problems arise but also when the social consensus is violated. We need to start blocking things for what I called editorial reasons. When software or people do things that merely shows bad manners and doesn't yet cause us technical problems, we should still block it, either soft (temporarily, perhaps with HTTP 429 Too Many Requests) or hard (permanently). We need to take action to create the web that we want to see, or we aren't going to get it or keep it.
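As one concrete (and hypothetical) illustration of the soft versus hard distinction, in nginx you might do something like this, with a made up user agent name; a hard block would return 403 instead:

# inside the http context: flag merely-bad user agents
map $http_user_agent $soft_block {
    default      0;
    "~*RudeBot"  1;
}

server {
    ....
    # soft block: tell them to go away for now
    if ($soft_block) {
        return 429;
    }
}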

To put it another way, if we want to see good, well behaved browsers, feed readers, URL fetchers, crawlers, and so on, we have to create disincentives for ones that are merely bad (as opposed to actively damaging). In its own way, this is another example of the refutation of Postel's Law. If we accept random crap to be friendly, we get random crap (and the quality level will probably trend down over time).

To answer one potential criticism, it's true that in some sense, blocking and so on for social reasons is not good and is in some theoretical sense arguably harmful for the overall web ecology. On the other hand, the current unchecked situation itself is also deeply harmful for the overall web ecology and it's only going to get worse if we do nothing, with more and more things effectively driven off the open web. We only get to pick the poison here.

I wish SSDs gave you CPU performance style metrics about their activity

By: cks

Modern CPUs have an impressive collection of performance counters for detailed, low level information on things like cache misses, branch mispredictions, various sorts of stalls, and so on; on Linux you can use 'perf list' to see them all. Modern SSDs (NVMe, SATA, and SAS) are all internally quite complex, and their behavior under load depends on a lot of internal state. It would be nice to have CPU performance counter style metrics to expose some of those details. For a relevant example that's on my mind (cf), it certainly would be interesting to know how often flash writes had to stall while blocks were hastily erased, or the current erase rate.

Having written this, I checked some of our SSDs (the ones I'm most interested in at the moment) and I see that our SATA SSDs do expose some of this information as (vendor specific) SMART attributes, with things like 'block erase count' and 'NAND GB written' to TLC or SLC (as well as the host write volume and so on stuff you'd expect). NVMe does this in a different way that doesn't have the sort of easy flexibility that SMART attributes do, so a random one of ours that I checked doesn't seem to provide this sort of lower level information.
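For reference, the way I look at this information is with smartmontools and nvme-cli; the device names here are stand-ins:

# SATA: the full (vendor specific) SMART attribute table
smartctl -A /dev/sda
# NVMe: mostly just the standard SMART / health information log
smartctl -a /dev/nvme0
nvme smart-log /dev/nvme0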

It's understandable that SSD vendors don't necessarily want to expose this sort of information, but it's quite relevant if you're trying to understand unusual drive performance. For example, for your workload do you need to TRIM your drives more often, or do they have enough pre-erased space available when you need it? Since TRIM has an overhead, you may not want to blindly do it on a frequent basis (and its full effects aren't entirely predictable since they depend on how much the drive decides to actually erase in advance).

(Having looked at SMART 'block erase count' information on one of our servers, it's definitely doing something when the server is under heavy fsync() load, but I need to cross-compare the numbers from it to other systems in order to get a better sense of what's exceptional and what's not.)

I'm currently more focused on write related metrics, but there's probably important information that could be exposed for reads and for other operations. I'd also like it if SSDs provided counters for how many of various sorts of operations they saw, because while your operating system can in theory provide this, it often doesn't (or doesn't provide them at the granularity of, say, how many writes with 'Force Unit Access' or how many 'Flush' operations were done).

(In Linux, I think I'd have to extract this low level operation information in an ad-hoc way with eBPF tracing.)
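As a sketch of that ad-hoc approach, assuming bpftrace and the standard block layer tracepoints, counting operations by their flag string captures FUA writes and flushes among other things:

# count block operations by rwbs flags: 'W' is a write, 'R' a read, a leading
# 'F' marks a (pre)flush, a trailing 'F' after the op letter marks FUA,
# 'S' marks synchronous IO; run until Ctrl-C.
bpftrace -e 'tracepoint:block:block_rq_issue { @ops[str(args->rwbs)] = count(); }'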

A (filesystem) journal can be a serialization point for durable writes

By: cks

Suppose that you have a filesystem that uses some form of a journal to provide durability (as many do these days) and you have a bunch of people (or processes) writing and updating things all over the filesystem that they want to be durable, so these processes are all fsync()'ing their work on a regular basis (or the equivalent system call or synchronous write operation). In a number of filesystem designs, this creates a serialization point on the filesystem's journal.

This is related to the traditional journal fsync() problem, but that one is a bit different. In the traditional problem you have a bunch of changes from a bunch of processes, some of which one process wants to fsync() and most of which it doesn't; this can be handled by only flushing necessary things. Here we have a bunch of processes making a bunch of relatively independent changes but approximately all of the processes want to fsync() their changes.

The simple way to get durability (and possibly integrity) for fsync() is to put everything that gets fsync()'d into the journal (either directly or indirectly) and then force the journal to be durably committed to disk. If the filesystem's journal is a linear log, as is usually the case, this means that multiple processes mostly can't be separately writing and flushing journal entries at the same time. Each durable commit of the journal is a bottleneck for anyone who shows up 'too late' to get their change included in the current commit; they have to wait for the current commit to be flushed to disk before they can start adding more entries to the journal (but then everyone can be bundled into the next commit).

In some filesystems, processes can readily make durable writes outside of the journal (for example, overwriting something in place); such processes can avoid serializing on a linear journal. Even if they have to put something in the journal, you can perhaps minimize the direct linear journal contents by having them (durably) write things to various blocks independently, then put only compact pointers to those out of line blocks into the linear journal with its serializing, linear commits. The goal is to avoid having someone show up wanting to write megabytes 'to the journal' and forcing everyone to wait for their fsync(); instead people serialize only on writing a small bit of data at the end, and writing the actual data happens in parallel (assuming the disk allows that).

(I may have made this sound simple but the details are likely fiendishly complex.)

If you have a filesystem in this situation, and I believe one of them is ZFS, you may find you care a bunch about the latency of disks flushing writes to media. Of course you need the workload too, but there are certain sorts of workloads that are prone to this (for example, traditional Unix mail spools).

I believe that you can also see this sort of thing with databases, although they may be more heavily optimized for concurrent durable updates.

Sidebar: Disk handling of durable writes can also be a serialization point

Modern disks (such as NVMe SSDs) broadly have two mechanisms to force things to durable storage. You can issue specific writes of specific blocks with 'Force Unit Access' (FUA) set, which causes the disk to write those blocks (and not necessarily any others) to media, or you can issue a general 'Flush' command to the disk and it will write anything it currently has in its write cache to media.

If you issue FUA writes, you don't have to wait for anything else other than your blocks to be written to media. If you issue 'Flush', you get to wait for everyone's blocks to be written out. This means that for speed you want to issue FUA writes when you want things on media, but on the other hand you may have already issued non-FUA writes for some of the blocks before you found out that you wanted them on media (for example, if someone writes a lot of data, so much that you start writeback, and then they issue a fsync()). And in general, the block IO programming model inside your operating system may favour issuing a bunch of regular writes and then inserting a 'force everything before this point to media' fencing operation into the IO stream.

NVMe SSDs and the question of how fast they can flush writes to flash

By: cks

Over on the Fediverse, I had a question I've been wondering about:

Disk drive people, sysadmins, etc: would you expect NVMe SSDs to be appreciably faster than SATA SSDs for a relatively low bandwidth fsync() workload (eg 40 Mbytes/sec + lots of fsyncs)?

My naive thinking is that AFAIK the slow bit is writing to the flash chips to make things actually durable when you ask, and it's basically the same underlying flash chips, so I'd expect NVMe to not be much faster than SATA SSDs on this narrow workload.

This is probably at least somewhat wrong. This 2025 SSD hierarchy article doesn't explicitly cover forced writes to flash (the fsync() case), but it does cover writing 50 GBytes of data in 30,000 files, which is probably enough to run any reasonable consumer NVMe SSD out of fast write buffer storage (either RAM or fast flash). The write speeds they get on this test from good NVMe drives are well over the maximum SATA data rates, so there's clearly a sustained write advantage to NVMe SSDs over SATA SSDs.

In replies on the Fediverse, several people pointed out that NVMe SSDs are likely using newer controllers than SATA SSDs and these newer controllers may well be better at handling writes. This isn't surprising when I thought about it, especially in light of NVMe perhaps overtaking SATA for SSDs, although apparently 'enterprise' SATA/SAS SSDs are still out there and probably seeing improvements (unlike consumer SATA SSDs where price is the name of the game).

Also, apparently the real bottleneck in writing to the actual flash is finding erased blocks or, if you're unlucky, having to wait for blocks to be erased. Actual writes to the flash chips may be able to go at something close to the PCIe 3.0 (or better) bandwidth, which would help explain the Tom's Hardware large write figures (cf).

(If this is the case, then explicitly telling SSDs about discarded blocks is especially important for any write workload that will be limited by flash write speeds, including fsync() heavy workloads.)

PS: The reason I'm interested in this is that we have a SATA SSD based system that seems to have periodic performance issues when there's enough write IO combined with fsync()s (possibly due to write buffering interactions), and I've been wondering how much moving it to be NVMe based might help. Since this machine uses ZFS, perhaps one thing we should consider is manually doing some ZFS 'TRIM' operations.
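If we do go that way, the mechanics are simple ('tank' is a stand-in pool name):

# trim a pool by hand and watch the trim's progress
zpool trim tank
zpool status -t tank
# or have ZFS trim continuously as space is freed
zpool set autotrim=on tank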

The strange case of 'mouse action traps' in GNU Emacs with (slower) remote X

By: cks

Some time back over on the Fediverse, I groused about GNU Emacs tooltips. That grouse was a little imprecise; the situation I usually see problems with is specifically running GNU Emacs in SSH-forwarded X from home, which has a somewhat high latency. This high latency caused me to change how I opened URLs from GNU Emacs, and it seems to be the root of the issues I'm seeing.

The direct experience I was having with tooltips was that being in a situation where Emacs might want to show a GUI tooltip would cause Emacs to stop responding to my keystrokes for a while. If the tooltip was posted and visible it would stay visible, but the stall could happen without that. However, it doesn't seem to be tooltips as such that cause this problem, because even with tooltips disabled as far as I can tell (and certainly not appearing), the cursor and my interaction with Emacs can get 'stuck' in places where there's mouse actions available.

(I tried both setting the tooltip delay times to very large numbers and setting tooltip-functions to do nothing.)

This is especially visible to me because my use of MH-E is prone to this in two cases. First, when composing email, flyspell mode will attach a 'correct word' button-2 popup menu to misspelled words, which can then stall things if I move the cursor to them (especially if I use a mouse click to do so, perhaps because I want to make the word into an X selection). Second, when displaying email that has links in it, these links can be clicked on (and have hover tooltips to display what the destination URL is); what I frequently experience is that after I click on a link, when I come back to the GNU Emacs (X) window I can't immediately switch to the next message, scroll the text of the current message, or otherwise do things.

This 'trapping' and stall doesn't usually happen when I'm in the office, which is still using remote X but over a much faster and lower latency 1G network connection. Disabling tooltips themselves isn't ideal because it means I no longer get to see where links go, and anyway it's relatively pointless if it doesn't fix the real problem.

When I thought this was an issue specific to tooltips, it made sense to me because I could imagine that GNU Emacs needed to do a bunch of relatively synchronous X operations to show or clear a tooltip, and those operations could take a while over my home link. Certainly displaying regular GNU Emacs (X) menus isn't particularly fast. Without tooltips displaying it's more mysterious, but it's still possible that Emacs is doing a bunch of X operations when it thinks a mouse or tooltip target is 'active', or perhaps there's something else going on.

(I'm generally happy with GNU Emacs but that doesn't mean it's perfect or that I don't have periodic learning experiences.)

PS: In theory there are tools that can monitor and report on the flow of X events (by interposing themselves into it). In practice it's been a long time since I used any of them, and anyway there's probably nothing I can do about it if GNU Emacs is doing a lot of X operations. Plus it's probably partly the GTK toolkit at work, not GNU Emacs itself.

PPS: Having taken a brief look at the MH-E code, I'm pretty sure that it doesn't even begin to work with GNU Emacs' TRAMP (also) system for working with remote files. TRAMP has some support for running commands remotely, but MH-E has its own low-level command execution and assumes that it can run commands rapidly, whenever it feels like, and then read various results out of the filesystem. Probably the most viable approach would be to use sshfs to mount your entire ~/Mail locally, have a local install of (N)MH, and then put shims in for the very few MH commands that have to run remotely (such as inc and the low level post command that actually sends out messages you've written). I don't know if this would work very well, but it would almost certainly be better than trying to run all those MH commands remotely.
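(The sshfs part of that sketch is at least simple, with 'mailhost' as a stand-in name:

# mount the remote ~/Mail locally over ssh
sshfs -o reconnect mailhost:Mail ~/Mail

The shims for inc and post would be the fiddly bit.)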

Staring at code can change what I see (a story from long ago)

By: cks

I recently read Hillel Wayne's Sapir-Whorf does not apply to Programming Languages (via), which I will characterize as being about how programming can change how you see things even though the Sapir-Whorf hypothesis doesn't apply (Hillel Wayne points to the Tetris Effect). As it happens, long ago I experienced a particular form of this that still sticks in my memory.

Many years ago, I was recruited to be a TA for the university's upper year Operating Systems course, despite being an undergraduate at the time. One of the jobs of TAs was to mark assignments, which we did entirely by hand back in those days; any sort of automated testing was far in the future, and for these assignments I don't think we even ran the programs by hand. Instead, marking was mostly done by having students hand in printouts of their modifications to the course's toy operating system and we three TAs collectively scoured the result to see if they'd made the necessary changes and spot errors.

Since this was an OS course, some assignments required dealing with concurrency, which meant that students had to properly guard and insulate their changes (in, for example, memory handling) from various concurrency problems. Failure to completely do so would cost marks, so the TAs were on the lookout for such problems. Over the course of the course, I got very good at spotting these concurrency problems entirely by eye in the printed out code. I didn't really have to think about it, I'd be reading the code (or scanning it) and the problem would jump out at me. In the process I formed a firm view that concurrency is very hard for people to deal with, because so many students made so many mistakes (whether obvious or subtle).

(Since students were modifying the toy OS to add or change features, there was no set form that their changes had to follow; people implemented the new features in various different ways. This meant that their concurrency bugs had common patterns but not specific common forms.)

I could have thought that I was spotting these problems because I was a better programmer than these other undergraduate students (some of whom were literally my peers, it was just that I'd taken the OS course a year earlier than they had because it was one of my interests). However, one of the most interesting parts of the whole experience was getting pretty definitive proof that I wasn't, and it was my focused experience that made the difference. One of the people taking this course was a fellow undergraduate who I knew and I knew was a better programmer than I was, but when I was marking his version of one assignment I spotted what I viewed at the time as a reasonably obvious concurrency issue. So I wasn't seeing these issues when the undergraduates doing the assignment missed them because I was a better programmer, since here I wasn't: I was seeing the bugs because I was more immersed in this than they were.

(This also strongly influenced my view of how hard and tricky concurrency is. Here was a very smart programmer, one with at least some familiarity with the whole area, and they'd still made a mistake.)

Uses for DNS server delegation

By: cks

A commentator on my entry on systemd-resolved's new DNS server delegation feature asked:

My memory might fail me here, but: wasn't something like this a feature introduced in ISC's BIND 8, and then considered to be a bad mistake and dropped again in BIND 9 ?

I don't know about Bind, but what I do know is that this feature is present in other DNS resolvers (such as Unbound) and that it has a variety of uses. Some of those uses can be substituted with other features and some can't be, at least not as-is.

The quick version of 'DNS server delegation' is that you can send all queries under some DNS zone name off to some DNS server (or servers) of your choice, rather than have DNS resolution follow any standard NS delegation chain that may or may not exist in global DNS. In Unbound, this is done through, for example, Forward Zones.
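For example, in Unbound a forward zone that sends everything under a domain to a specific server looks roughly like this (the names and addresses are made up):

forward-zone:
  name: "internal.example.org"
  forward-addr: 192.0.2.53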

DNS server delegation has at least three uses that I know of. First, you can use it to insert entire internal TLD zones into the view that clients have. People use various top level names for these zones, such as .internal, .kvm, .sandbox (our choice), and so on. In all cases you have some authoritative servers for these zones and you need to direct queries to these servers instead of having your queries go to the root nameservers and be rejected.

(Obviously you will be sad if IANA ever assigns your internal TLD to something, but honestly if IANA allows, say, '.internal', we'll have good reason to question their sanity. The usual 'standard DNS environment' replacement for this is to move your internal TLD to be under your organizational domain and then implement split horizon DNS.)

Second, you can use it to splice in internal zones that don't exist in external DNS without going to the full overkill of split horizon authoritative data. If all of your machines live in 'corp.example.org' and you don't expose this to the outside world, you can have your public example.org servers with your public data and your corp.example.org authoritative servers, and you splice in what is effectively a fake set of NS records through DNS server delegation. Related to this, if you want you can override public DNS simply by having an internal and an external DNS server, without split horizon DNS; you use DNS server delegation to point to the internal DNS server for certain zones.

(This can be replaced with split horizon DNS, although maintaining split horizon DNS is its own set of headaches.)

Finally, you can use this to short-cut global DNS resolution for reliability in cases where you might lose external connectivity. For example, there are within-university ('on-campus' in our jargon) authoritative DNS servers for .utoronto.ca and .toronto.edu. We can use DNS server delegation to point these zones at these servers to be sure we can resolve university names even if the university's external Internet connection goes down. We can similarly point our own sub-zone at our authoritative servers, so even if our link to the university backbone goes down we can resolve our own names.

(This isn't how we actually implement this; we have a more complex split horizon DNS setup that causes our resolving DNS servers to have a complete copy of the inside view of our zones, acting as caching secondaries.)

The early Unix history of chown() being restricted to root

By: cks

A few years ago I wrote about the divide in chown() about who got to give away files, where BSD and V7 were on one side, restricting it to root, while System III and System V were on the other, allowing the owner to give them away too. At the time I quoted the V7 chown(2) explanation of this:

[...] Only the super-user may execute this call, because if users were able to give files away, they could defeat the (nonexistent) file-space accounting procedures.

Recently, for reasons, chown(2) and its history were on my mind and so I wondered if the early Research Unixes had always had this, or if a restriction was added at some point.

The answer is that the restriction was added in V6, where the V6 chown(2) manual page has the same wording as V7. In Research Unix V5 and earlier, people can chown(2) away their own files; this is documented in the V4 chown(2) manual page and is what the V5 kernel code for chown() does. This behavior runs all the way back to the V1 chown() manual page, with an extra restriction that you can't chown() setuid files.

(Since I looked it up, the restriction on chown()'ing setuid files was lifted in V4. In V4 and later, a setuid file has its setuid bit removed on chown; in V3 you still can't give away such a file, according to the V3 chown(2) manual page.)

At this point you might wonder where the System III and System V unrestricted chown came from. The surprising to me answer seems to be that System III partly descends from PWB/UNIX, and PWB/UNIX 1.0, although it was theoretically based on V6, has pre-V6 chown(2) behavior (kernel source, manual page). I suspect that there's a story both to why V6 made chown() more restricted and also why PWB/UNIX specifically didn't take that change from V6, but I don't know if it's been documented anywhere (a casual Internet search didn't turn up anything).

(The System III chown(2) manual page says more or less the same thing as the PWB/UNIX manual page, just more formally, and the kernel code is very similar.)

Maybe why OverlayFS had its readdir() inode number issue

By: cks

A while back I wrote about readdir()'s inode numbers versus OverlayFS, which discussed an issue where for efficiency reasons, OverlayFS sometimes returned different inode numbers in readdir() than in stat(). This is not POSIX legal unless you do some pretty perverse interpretations (as covered in my entry), but lots of filesystems deviate from POSIX semantics every so often. A more interesting question is why, and I suspect the answer is related to another issue that's come up, the problem of NFS exports of NFS mounts.

What's common in both cases is that NFS servers and OverlayFS both must create an 'identity' for a file (a NFS filehandle and an inode number, respectively). In the case of NFS servers, this identity has some strict requirements; OverlayFS has a somewhat easier life, but in general it still has to create and track some amount of information. Based on reading the OverlayFS article, I believe that OverlayFS considers this expensive enough to only want to do it when it has to.

OverlayFS definitely needs to go to this effort when people call stat(), because various programs will directly use the inode number (the POSIX 'file serial number') to tell files on the same filesystem apart. POSIX technically requires OverlayFS to do this for readdir(), but in practice almost everyone that uses readdir() isn't going to look at the inode number; they look at the file name and perhaps the d_type field to spot directories without needing to stat() everything.

If there was a special 'not a valid inode number' signal value, OverlayFS might use that, but there isn't one (in either POSIX or Linux, which is actually a problem). Since OverlayFS needs to provide some sort of arguably valid inode number, and since it's reading directories from the underlying filesystems, passing through their inode numbers from their d_ino fields is the simple answer.

(This entry was inspired by Kevin Lyda's comment on my earlier entry.)

Sidebar: Why there should be a 'not a valid inode number' signal value

Because both standards and common Unix usage include a d_ino field in the structure readdir() returns, they embed the idea that the stat()-visible inode number can easily be recovered or generated by filesystems purely by reading directories, without needing to perform additional IO. This is true in traditional Unix filesystems, but it's not obvious that you would do that all of the time in all filesystems. The on disk format of directories might only have some sort of object identifier for each name that's not easily mapped to a relatively small 'inode number' (which is required to be some C integer type), and instead the 'inode number' is an attribute you get by reading file metadata based on that object identifier (which you'll do for stat() but would like to avoid for reading directories).

But in practice if you want to design a Unix filesystem that performs decently well and doesn't just make up inode numbers in readdir(), you must store a potentially duplicate copy of your 'inode numbers' in directory entries.

Keeping notes is for myself too, illustrated (once again)

By: cks

Yesterday I wrote about restarting or redoing something after a systemd service restarts. The non-hypothetical situation that caused me to look into this was that after we applied a package update to one system, systemd-networkd on it restarted and wiped out some critical policy based routing rules. Since I vaguely remembered this happening before, I sighed and arranged to have our rules automatically reapplied on both systems with policy based routing rules, following the pattern I worked out.

Wait, two systems? And one of them didn't seem to have problems after the systemd-networkd restart? Yesterday I ignored that and forged ahead, but really it should have set off alarm bells. The reason the other system wasn't affected was I'd already solved the problem the right way back in March of 2024, when we first hit this networkd behavior and I wrote an entry about it.

However, I hadn't left myself (or my co-workers) any notes about that March 2024 fix; I'd put it into place on the first machine (then the only machine we had that did policy based routing) and forgotten about it. My only theory is that I wanted to wait and be sure it actually fixed the problem before documenting it as 'the fix', but if so, I made a mistake by not leaving myself any notes that I had a fix in testing. When I recently built the second machine with policy based routing I copied things from the first machine, but I didn't copy the true networkd fix because I'd forgotten about it.

(It turns out to have been really useful that I wrote that March 2024 entry because it's the only documentation I have, and I'd probably have missed the real fix if not for it. I rediscovered it in the process of writing yesterday's entry.)

I know (and knew) that keeping notes is good, and that my memory is fallible. And I still let this slip through the cracks for whatever reason. Hopefully the valuable lesson I've learned from this will stick a bit so I don't stub my toe again.

(One obvious lesson is that I should make a note to myself any time I'm testing something that I'm not sure will actually work. Since it may not work I may want to formally document it in our normal system for this, but a personal note will keep me from completely losing track of it. You can see the persistence of things 'in testing' as another example of the aphorism that there's nothing as permanent as a temporary fix.)

Restarting or redoing something after a systemd service restarts

By: cks

Suppose, not hypothetically, that your system is running some systemd based service or daemon that resets or erases your carefully cultivated state when it restarts. One example is systemd-networkd, although you can turn that off (or parts of it off, at least), but there are likely others. To clean up after this happens, you'd like to automatically restart or redo something after a systemd unit is restarted. Systemd supports this, but I found it slightly unclear how you want to do this and today I poked at it, so it's time for notes.

(This is somewhat different from triggering one unit when another unit becomes active, which I think is still not possible in general.)

First, you need to put whatever you want to do into a script and a .service unit that will run the script. The traditional way to run a script through a .service unit is:

[Unit]
....

[Service]
Type=oneshot
RemainAfterExit=True
ExecStart=/your/script/here

[Install]
WantedBy=multi-user.target

(The 'RemainAfterExit' is load-bearing, also.)

To get this unit to run after another unit is started or restarted, what you need is PartOf=, which causes your unit to be stopped and started when the other unit is, along with 'After=' so that your unit starts after the other unit instead of racing it (which could be counterproductive when what you want to do is fix up something from the other unit). So you add:

[Unit]
...
PartOf=systemd-networkd.service
After=systemd-networkd.service

(This is what works for me in light testing. This assumes that the unit you want to re-run after is normally always running, as systemd-networkd is.)

In testing, you don't need to have your unit specifically enabled by itself, although you may want it to be for clarity and other reasons. Even if your unit isn't specifically enabled, systemd will start it after the other unit because of the PartOf=. If the other unit is started all of the time (as is usually the case for systemd-networkd), this effectively makes your unit enabled, although not in an obvious way (which is why I think you should specifically 'systemctl enable' it, to make it obvious). I think you can have your .service unit enabled and active without having the other unit enabled, or even present.
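In practice, checking this looks something like the following; 'netfix-rules.service' is a made up name for the fixup unit sketched above:

systemctl enable netfix-rules.service
# verify that restarting the other unit re-runs our script
systemctl restart systemd-networkd
systemctl status netfix-rules.service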

You can declare yourself PartOf a .target unit, and some stock package systemd units do for various services. And a .target unit can be PartOf a .service; on Fedora, 'sshd-keygen.target' is PartOf sshd.service in a surprisingly clever little arrangement to generate only the necessary keys through a templated 'sshd-keygen@.service' unit.

I admit that the whole collection of Wants=, Requires=, Requisite=, BindsTo=, PartOf=, Upholds=, and so on are somewhat confusing to me. In the past, I've used the wrong version and suffered the consequences, and I'm not sure I have them entirely right in this entry.

Note that as far as I know, PartOf= has those Requires= consequences, where if the other unit is stopped, yours will be too. In a simple 'run a script after the other unit starts' situation, stopping your unit does nothing and can be ignored.

(If this seems complicated, well, I think it is, and I think one part of the complication is that we're trying to use systemd as an event-based system when it isn't one.)

Systemd-resolved's new 'DNS Server Delegation' feature (as of systemd 258)

By: cks

A while ago I wrote an entry about things that resolved wasn't for as of systemd 251. One of those things was arbitrary mappings of (DNS) names to DNS servers, for example if you always wanted *.internal.example.org to query a special DNS server. Systemd-resolved didn't have a direct feature for this and attempting to attach your DNS names to DNS server mappings to a network interface could go wrong in various ways. Well, time marches on and as of systemd v258 this is no longer the state of affairs.

Systemd v258 introduces systemd.dns-delegate files, which allow you to map DNS names to DNS servers independently from network interfaces. The release notes describe this as:

A new DNS "delegate zone" concept has been introduced, which are additional lookup scopes (on top of the existing per-interface and the one global scope so far supported in resolved), which carry one or more DNS server addresses and a DNS search/routing domain. It allows routing requests to specific domains to specific servers. Delegate zones can be configured via drop-ins below /etc/systemd/dns-delegate.d/*.dns-delegate.

Since systemd v258 is very new I don't have any machines where I can actually try this out, but based on the systemd.dns-delegate documentation, you can use this both for domains that you merely want diverted to some DNS server and also domains that you also want on your search path. Per resolved.conf's Domains= documentation, the latter is 'Domains=example.org' (example.org will be one of the domains that resolved tries to find single-label hostnames in, a search domain), and the former is 'Domains=~example.org' (where we merely send queries for everything under 'example.org' off to whatever DNS= you set, a route-only domain).
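Based on my reading of the systemd.dns-delegate documentation, a delegation drop-in looks something like the following; since I haven't been able to test this, treat the section and setting names as my assumptions (the addresses and names are made up):

# /etc/systemd/dns-delegate.d/internal.dns-delegate
[Delegate]
DNS=192.0.2.53
Domains=~internal.example.org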

(While resolved.conf's Domains= officially promises to check your search domains in the order you listed them, I believe this is strictly for a single 'Domains=' setting for a single interface. If you have multiple 'Domains=' settings, for example in a global resolved.conf, a network interface, and now in a delegation, I think systemd-resolved makes no promises.)

Right now, these DNS server delegations can only be set through static files, not manipulated through resolvectl. I believe fiddling with them through resolvectl is on the roadmap, but for now I guess we get to restart resolved if we need to change things. In fact resolvectl doesn't expose anything to do with them, although I believe read-only information is available via D-Bus and maybe varlink.

Given the timing of systemd v258's release relative to Fedora releases, I probably won't be able to use this feature until Fedora 44 in the spring (Fedora 42 is current and Fedora 43 is imminent, which won't have systemd v258 given that v258 was released only a couple of weeks ago). My current systemd-resolved setup is okay (if it wasn't I'd be doing something else), but I can probably find uses for these delegations to improve it.

Why I have a GPS bike computer

By: cks

(This is a story about technology. Sort of.)

Many bicyclists with a GPS bike computer probably have it primarily to record their bike rides and then upload them to places like Strava. I'm a bit unusual in that while I do record my rides and make some of them public (and I've come to value this), it's not my primary reason to have a GPS bike computer. Instead, my primary reason is following pre-made routes.

When I started with my recreational bike club, it was well before the era of GPS bike computers. How you followed (or led) our routes back then was through printed cue sheets, which had all of the turns and so on listed in order, often with additional notes. One of the duties of the leader of the ride was printing out a sufficient number of cue sheets in advance and distributing them to interested parties before the start of the ride. If you were seriously into using cue sheets, you'd use a cue sheet holder (nowadays you can only find these as 'map holders', which is basically the same job); otherwise you might clip the cue sheet to a handlebar brake or gear cable or fold it up and stick it in a back jersey pocket.

Printed cue sheets have a number of nice features, such as giving you a lot of information at a glance. One of them is that a well done cue sheet was and is a lot more than just a list of all of the turns and other things worthy of note; it's an organized, well formatted list of these. The cues would be broken up into sensibly chosen sections, with whitespace between them to make it easier to narrow in on the current one, and you'd lay out the page (or pages) so that the cue or section breaks happened at convenient spots to flip the cue sheet around in cue holders or clips. You'd emphasize important turns, cautions, or other things in various ways. And so on. Some cue sheets even had a map of the route printed on the back.

(You needed to periodically flip the cue sheet around and refold it because many routes had too many turns and other cues to fit in a small amount of printed space, especially if you wanted to use a decently large font size for easy readability.)

Starting in the early 2010s, more and more TBN people started using GPS bike computers or smartphones (cf). People began converting our cue sheet routes to computerized GPS routes, with TBN eventually getting official GPS routes. Over time, more and more members got smartphones and GPS units and there was more and more interest in GPS routes and less and less interest in cue sheets. In 2015 I saw the writing on the wall for cue sheets and the club more or less deprecated them, so in August 2016 I gave in and got a GPS unit (which drove me to finally get a smartphone, because my GPS unit assumed you had one). Cue sheet first routes lingered on for some years afterward, but they're all gone by now; everything is GPS route first.

You can still get cue sheets for club routes (the club's GPS routes typically have turn cues and you can export these into something you can print). But what we don't really have any more is the old school kind of well done, organized cue sheets, and it's basically been a decade since ride leaders would turn up with any printed cue sheets at all. These days it's on you to print your own cue sheet if you need it, and also on you to make a good cue sheet from the basic cue sheet (if you care enough to do so). There are some people who still use cue sheets, but they're a decreasing minority and they probably already had the cue sheet holders and so on (which are now increasingly hard to find). A new rider who wanted to use cue sheets would have an uphill struggle and they might never understand why long time members could be so fond of them.

Cue sheets are still a viable option for route following (and they haven't fundamentally changed). They're just not very well supported any more in TBN because they stopped being popular. If you insist on sticking with them, you still can, but it's not going to be a great experience. I didn't move to a GPS unit because I couldn't possibly use cue sheets any more (I still have my cue sheet holder); I moved because I could see the writing on the wall about which one would be the more convenient, more usable option.

Applications to the (computing) technologies of your choice are left as an exercise for the reader.

PS: As a whole I think GPS bike computers are mostly superior to cue sheets for route following, but that's a different discussion (and it depends on what sort of bicycling you're doing). There are points on both sides.

A Firefox issue and perhaps how handling scaling is hard

By: cks

Over on the Fediverse I shared a fun Firefox issue I've just run into:

Today's fun Firefox bug: if I move my (Nightly) Firefox window left and right across my X display, the text inside the window reflows to change its line wrapping back and forth. I have a HiDPI display with non-integer scaling and some other settings, so I'm assuming that Firefox is now suffering from rounding issues where the exact horizontal pixel position changes its idea of the CSS window width, triggering text reflows as it jumps back and forth by a CSS pixel.

(I've managed to reproduce this in a standard Nightly, although so far only with some of my settings.)

Close inspection says that this isn't quite what's happening, and the underlying problem is happening more often than I thought. What is actually happening is that as I move my Firefox window left and right, a thin vertical black line usually appears and disappears at the right edge of the window (past a scrollbar if there is one). Since I can see it on my HiDPI display, I suspect that this vertical line is at least two screen pixels wide. Under the right circumstances of window width, text size, and specific text content, this vertical black bar takes enough width away from the rest of the window to cause Firefox to re-flow and re-wrap text, creating easily visible changes as the window moves.

A variation of this happens when the vertical black bar isn't drawn but things on the right side of the toolbar and the URL bar area shift left and right slightly as the window is moved horizontally. If the window is showing a scrollbar, the position of the scroll target in the scrollbar moves left and right, with the right side getting ever so slightly wider or returning to being symmetrical. It's easiest to see this if I move the window sideways slowly, which is of course not something I do often (usually I move windows rapidly).

(This may be related to how X has a notion of sizing windows in non-pixel units if the window asks for it. Firefox in my configuration definitely asks for this; it asserts that it wants to be resized in units of 2 (display) pixels both horizontally and vertically. However, I can look at the state of a Firefox window in X and see that the window size in pixels doesn't change between the black bar appearing and disappearing.)

All of this is visible partly because under X and my window manager, windows can redisplay themselves even during an active move operation. If the window contents froze while I dragged windows around, I probably wouldn't have noticed this for some time. Text reflowing as I moved a Firefox window sideways created a quite attention-getting shimmer.

It's probably relevant that I need unusual HiDPI settings and I've also set Firefox's layout.css.devPixelsPerPx to 1.7 in about:config. That was part of why I initially assumed this was a scaling and rounding issue, and why I still suspect that area of Firefox a bit.

(I haven't filed this as a Firefox bug yet, partly because I just narrowed down what was happening in the process of writing this entry.)

What (I think) you need to do basic UDP NAT traversal

By: cks

Yesterday I wished for a way to do native "blind" WireGuard relaying, without needing to layer something on top of WireGuard. I wished for this both because it's the simplest approach for getting through NATs and the one you need in general under some circumstances. The classic and excellent work on all of the complexities of NAT traversal is Tailscale's How NAT traversal works, which also winds up covering the situation where you absolutely have to have a relay. But, as I understand things, in a fair number of situations you can sort of do without a relay and have direct UDP NAT traversal, although you need to do some extra work to get it and you need additional pieces.

Following RFC 4787, we can divide NAT into two categories, endpoint-independent mapping (EIM) and endpoint-dependent mapping (EDM). In EIM, the public IP and port of your outgoing NAT'd traffic depend only on your internal IP and port, not on the destination (IP or port); in EDM they (also) depend on the destination. NAT'ing firewalls normally NAT based on what could be called "flows". For TCP, flows are a real thing; the firewall can specifically identify a single TCP connection and it's difficult to fake one. For UDP, a firewall generally has no idea of what is a valid flow, and the best it can do is accept traffic that comes from the destination IP and port, which in theory is replies from the other end.

This leads to the NAT traffic traversal trick that we can do for UDP specifically. If we have two machines that want to talk to each other on each other's UDP port 51820, the first thing they need is to learn the public IP and port being used by the other machine. This requires some sort of central coordination server as well as the ability to send traffic to somewhere on UDP port 51820 (or whatever port you care about). In the case of WireGuard, you might as well make this a server on a public IP running WireGuard and have an actual WireGuard connection to it, and the discount 'coordination server' can then be basically the WireGuard peer information from 'wg' (the 'endpoint' is the public IP and port you need).

Once the two machines know each other's public IP and port, they start sending UDP port 51820 (or whatever) packets to each other, to the public IP and port they learned through the coordination server. When each of them sends their first outgoing packet, this creates a 'flow' on their respective NAT firewall which will allow the other machine's traffic in. Depending on timing, the first few packets from the other machine may arrive before your firewall has set up its state to allow them in and will get dropped, so each side needs to keep sending until it works or until it's clear that at least one side has an EDM (or some other complication).

(For WireGuard, you'd need something that sets the peer's endpoint to your now-known host and port value and then tries to send it some traffic to trigger the outgoing packets.)
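To make the mechanics concrete, here's a minimal sketch of the dance in Python. The peer's public IP and port are made up; a real version would learn them from the coordination server rather than hard-coding them, and would then hand the working endpoint over to whatever actually uses the port:

import socket

LOCAL_PORT = 51820
PEER = ("198.51.100.7", 51820)   # the other machine's public IP and port (hypothetical)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

# Keep sending so our NAT firewall creates (and keeps) an outgoing 'flow'
# that lets the peer's packets in, and keep listening for the peer doing
# the same thing from their side. Their early packets may get dropped.
for attempt in range(30):
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1500)
    except socket.timeout:
        continue
    if addr == PEER:
        print("got traffic from the peer; the path is open")
        break
else:
    print("no luck; at least one side is probably behind an EDM NAT")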

As covered in Tailscale's article, it's possible to make direct NAT traversal work in some additional circumstances with increasing degrees of effort. You may be lucky and have a local EDM firewall that can be asked to stop doing EDM for your UDP port (via a number of protocols for this), and otherwise it may be possible to feel your way around one EDM firewall.

If you can arrange a natural way to send traffic from your UDP port to your coordination server, the basic NAT setup can be done without needing the deep cooperation of the software using the port; all you need is a way to switch what remote IP and port it uses for a particular peer. Your coordination server may need special software to listen to traffic and decode which peer is which, or you may be able to exploit existing features of your software (for example, by making the coordination server a WireGuard peer). Otherwise, I think you need either some cooperation from the software involved or gory hacks.

Wishing for a way to do 'blind' (untrusted) WireGuard relaying

By: cks

Over on the Fediverse, I sort of had a question:

I wonder if there's any way in standard WireGuard to have a zero-trust network relay, so that two WG peers that are isolated from each other (eg both behind NAT) can talk directly. The standard pure-WG approach has a public WG endpoint that everyone talks to and which acts as a router for the internal WG IPs of everyone, but this involves decrypting and re-encrypting the WG traffic.

By 'talk directly' I mean that each of the peers has the WireGuard keys of the other and the traffic between the two of them stays encrypted with those keys all the way through its travels. The traditional approach to the problem of two NAT'd machines that want to talk to each other with WireGuard is to have a WireGuard router that both of them talk to over WireGuard, but this means that the router sees the unencrypted traffic between them. This is less than ideal if you don't want to trust your router machine, for example because you want to make it a low-trust virtual machine rented from some cloud provider.

Since we love indirection in computer science, you can in theory solve this with another layer of traffic encapsulation (with a lot of caveats). The idea is that all of the 'public' endpoint IPs of WireGuard peers are actually on a private network, and you route the private network through your public router. Getting the private network packets to and from the router requires another level of encapsulation and unless you get very clever, all your traffic will go through the router even if two WireGuard peers could talk directly. Since WireGuard automatically keeps track of the current public IPs of peers, it would be ideal to do this with WireGuard, but I'm not sure that WG-in-WG can have the routing maintained the way we want.

This untrusted relay situation is of course one of the things that 'automatic mesh network on top of WireGuard' systems give you, but it would be nice to be able to do this with native features (and perhaps without an explicit control plane server that machines talk to, although that seems unlikely). As far as I know such systems implement this with their own brand of encapsulation, which I believe requires running their WireGuard stack.

(On Linux you might be able to do something clever with redirecting outgoing WireGuard packets to a 'tun' device connected to a user level program, which then wrapped them up, sent them off, received packets back, and injected the received packets into the system.)
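As a rough illustration of the user level program half of that idea, here's a Python sketch that reads raw IP packets from a 'tun' device, wraps them in UDP to a relay, and injects whatever comes back. The device name and relay address are made up, it needs root, and all of the 'ip tuntap' and policy routing work needed to actually steer WireGuard's outgoing packets into the tun device is left out:

import fcntl
import os
import select
import socket
import struct

TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001
IFF_NO_PI = 0x1000

RELAY = ("203.0.113.9", 4000)    # hypothetical relay host and port

# Create/attach a tun device; packets the kernel routes to it show up as reads.
tun = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tun, TUNSETIFF, struct.pack("16sH", b"wgrelay0", IFF_TUN | IFF_NO_PI))

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("0.0.0.0", 0))

while True:
    readable, _, _ = select.select([tun, udp], [], [])
    if tun in readable:
        packet = os.read(tun, 65535)     # an outgoing packet routed to us
        udp.sendto(packet, RELAY)        # wrap it up and send it off
    if udp in readable:
        packet, _ = udp.recvfrom(65535)  # something the relay sent back
        os.write(tun, packet)            # inject it into the system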

Using systems because you know them already

By: cks

Every so often on the Fediverse, people ask for advice on a monitoring system to run on their machine (desktop or server), and some of the time Prometheus comes up, and when it does I wind up making awkward noises. On the one hand, we run Prometheus (and Grafana) and are happy with it, and I run separate Prometheus setups on my work and home desktops. On the other hand, I don't feel I can recommend picking Prometheus for a basic single-machine setup, despite running it that way myself.

Why do I run Prometheus on my own machines if I don't recommend that you do so? I run it because I already know Prometheus (and Grafana), and in fact my desktops (re)use much of our production Prometheus setup (but they scrape different things). This is a specific instance (and example) of a general thing in system administration, which is that not infrequently it's simpler for you to use something you already know even if it's not necessarily an exact fit (or even a great fit) for the problem. For example, if you're quite familiar with operating PostgreSQL databases, it might be simpler to use PostgreSQL for a new system where SQLite could do perfectly well and other people would find SQLite much simpler. Especially if you have canned setups, canned automation, and so on all ready to go for PostgreSQL, and not for SQLite.

(Similarly, our generic web server hammer is Apache, even if we're doing things that don't necessarily need Apache and could be done perfectly well or perhaps better with nginx, Caddy, or whatever.)

This has a flipside, where you use a tool because you know it even if there might be a significantly better option, one that would actually be easier overall even accounting for needing to learn the new option and build up the environment around it. What we could call "familiarity-driven design" is a thing, and it can even be a confining thing, one where you shape your problems to conform to the tools you already know.

(And you may not have chosen your tools with deep care and instead drifted into them.)

I don't think there's any magic way to know which side of the line you're on. Perhaps the best we can do is be a little bit skeptical about our reflexive choices, especially if we seem to be sort of forcing them in a situation that feels like it should have a simpler or better option (such as basic monitoring of a single machine).

(In a way it helps that I know so much about Prometheus because it makes me aware of various warts, even if I'm used to them and I've climbed the learning curves.)

Apache .htaccess files are important because they enable delegation

By: cks

Apache's .htaccess files have a generally bad reputation. For example, lots of people will tell you that they can cause performance problems and you should move everything from .htaccess files into your main Apache configuration, using various pieces of Apache syntax to restrict what the configuration directives apply to. The result can even be clearer, since various things can be confusing in .htaccess files (eg rewrites and redirects). Despite all of this, .htaccess files are important and valuable because of one property, which is that they enable delegation of parts of your server configuration to other people.

The Apache .htaccess documentation even spells this out in reverse, in When (not) to use .htaccess files:

In general, you should only use .htaccess files when you don't have access to the main server configuration file. [...]

If you operate the server and would be writing the .htaccess file, you can put the contents of the .htaccess in the main server configuration and make your life easier and Apache faster (and you probably should). But if the web server and its configuration aren't managed as a unitary whole by one group, then .htaccess files allow the people managing the overall Apache configuration to safely delegate things to other people on a per-directory basis, using Unix ownership. This can both enable people to do additional things and reduce the amount of work the central people have to do, letting things scale better.
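As a sketch of what this delegation looks like in practice (all of the paths and names here are made up for illustration): the central configuration grants a limited AllowOverride for people's web areas, and each person then manages their own .htaccess within those limits.

# In the main server configuration, managed by the central people:
<Directory "/home/*/public_html">
    # Let per-directory .htaccess files set authentication, indexing,
    # and access control, but nothing else.
    AllowOverride AuthConfig Indexes Limit
    Require all granted
</Directory>

# In ~someuser/public_html/project/.htaccess, maintained by the user:
AuthType Basic
AuthName "Project area"
AuthUserFile /home/someuser/.htpasswd-project
Require valid-user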

(The other thing that .htaccess files allow is dynamic updates without having to restart or reload the whole server. In some contexts this can be useful or important, for example if the updates are automatically generated at unpredictable times.)

I don't think it's an accident that .htaccess files emerged in Apache, because one common environment Apache was initially used in was old fashioned multi-user Unix web servers where, for example, every person with a login on the web server might have their own UserDir directory hierarchy. Hence features like suEXEC, so you could let people run CGIs without those CGIs having to run as the web user (a dangerous thing), and also hence the attraction of .htaccess files. If you have a bunch of (graduate) students with their own web areas, you definitely don't want to let all of them edit your departmental web server's overall configuration.

(Apache doesn't solve all your problems here, at least not in a simple configuration; you're still left with the multiuser PHP problem. Our solution to this problem is somewhat brute force.)

These environments are uncommon today but they're not extinct, at least at universities like mine, and .htaccess files (and Apache's general flexibility) remain valuable to us.

Readdir()'s inode numbers versus OverlayFS

By: cks

Recently I re-read Deep Down the Rabbit Hole: Bash, OverlayFS, and a 30-Year-Old Surprise (via) and this time around, I stumbled over a bit in the writeup that made me raise my eyebrows:

Bash’s fallback getcwd() assumes that the inode [number] from stat() matches one returned by readdir(). OverlayFS breaks that assumption.

I wouldn't call this an 'assumption' so much as 'sane POSIX semantics', although I'm not sure that POSIX absolutely requires this.

As we've seen before, POSIX talks about 'file serial number(s)' instead of inode numbers. The best definition of these is covered in sys/stat.h, where we see that a 'file identity' is uniquely determined by the combination of the inode number and the device ID (st_dev), and POSIX says that 'at any given time in a system, distinct files shall have distinct file identities' while hardlinks have the same identity. The POSIX descriptions of readdir() and dirent.h don't caveat the d_ino file serial numbers from readdir(), so they're implicitly covered by the general rules for file serial numbers.

In theory you can claim that the POSIX guarantees don't apply here since readdir() is only supplying d_ino, the file serial number, not the device ID as well. I maintain that this fails due to a POSIX requirement:

[...] The value of the structure's d_ino member shall be set to the file serial number of the file named by the d_name member. [...]

If readdir() gives one file serial number and a fstatat() of the same name gives another, a plain reading of POSIX is that one of them is lying. Files don't have two file serial numbers, they have one. Readdir() can return duplicate d_ino numbers for files that aren't hardlinks to each other (and I think legitimately may do so in some unusual circumstances), but it can't return something different than what fstatat() does for the same name.

The perverse argument here turns on POSIX's 'at any given time'. You can argue that the readdir() is at one time and the stat() is at another time and the system is allowed to entirely change file serial numbers between the two times. This is certainly not the intent of POSIX's language but I'm not sure there's anything in the standard that rules it out, even though it makes file serial numbers fairly useless, since there's no POSIX way to get a bunch of them at 'a given time' in a way that forces them to be coherent with each other.

So to summarize, OverlayFS has chosen what are effectively non-POSIX semantics for its readdir() inode numbers (under some circumstances, in the interests of performance) and Bash used readdir()'s d_ino in a traditional Unix way that caused it to notice. Unix filesystems can depart from POSIX semantics if they want, but I'd prefer if they were a bit more shamefaced about it. People (ie, programs) count on those semantics.
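If you want to see whether a directory you care about behaves this way, a quick check is to compare the d_ino that readdir() hands back (which is where os.scandir()'s inode() value comes from) against a stat() of the same name; here's a small Python sketch:

import os
import sys

def check(dirpath):
    for entry in os.scandir(dirpath):
        d_ino = entry.inode()    # from readdir()'s d_ino
        st_ino = os.stat(entry.path, follow_symlinks=False).st_ino
        if d_ino != st_ino:
            print(f"{entry.path}: d_ino {d_ino} != st_ino {st_ino}")

check(sys.argv[1] if len(sys.argv) > 1 else ".")

On an ordinary filesystem this should print nothing; on an OverlayFS mount it may not.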

(The truly traditional getcwd() way wouldn't have been a problem, because it predates readdir() having d_ino and so doesn't use it (it stat()s everything to get inode numbers). I reflexively follow this pre-d_ino algorithm when I'm talking about doing getcwd() by hand (cf), but these days you want to use the dirent d_ino and if possible d_type, because they're much more efficient than stat()'ing everything.)
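For the curious, here's a sketch of that stat()-everything approach in Python; it deliberately skips the mount point handling that a real getcwd() implementation has to worry about:

import os

def getcwd_by_hand():
    names = []
    path = "."
    here = os.stat(path)
    while True:
        parent_path = os.path.join(path, "..")
        parent = os.stat(parent_path)
        if (parent.st_dev, parent.st_ino) == (here.st_dev, here.st_ino):
            break                      # '..' is itself, so we're at the root
        # find our name in the parent by stat()'ing every entry
        for entry in os.scandir(parent_path):
            st = entry.stat(follow_symlinks=False)
            if (st.st_dev, st.st_ino) == (here.st_dev, here.st_ino):
                names.append(entry.name)
                break
        else:
            raise OSError("couldn't find the current directory in its parent")
        here, path = parent, parent_path
    return "/" + "/".join(reversed(names))

print(getcwd_by_hand())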

How part of my email handling drifted into convoluted complexity

By: cks

Once upon a time, my email handling was relatively simple. I wasn't on any big mailing lists, so I had almost everything delivered straight to my inbox (both in the traditional /var/mail mbox sense and then through to MH's own inbox folder directory). I did some mail filtering with procmail, but it was all for things that I basically never looked at, so I had procmail write them to mbox files under $HOME/.mail. I moved email from my Unix /var/mail inbox to MH's inbox with MH's inc command (either running it directly or having exmh run it for me). Rarely, I had a mbox file procmail had written that I wanted to read, and at that point I inc'd it either to my MH +inbox or to some other folder.

Later, prompted by wanting to improve my breaks and vacations, I diverted a bunch of mailing lists away from my inbox. Originally I had procmail write these diverted messages to mbox files, then later I'd inc the files to read the messages. Then I found that outside of vacations, I needed to make this email more readily accessible, so I had procmail put them in MH folder directories under Mail/inbox (one of MH's nice features is that your inbox is a regular folder and can have sub-folders, just like everything else). As I noted at the time, procmail only partially emulates MH when doing this, and one of the things it doesn't do is keep track of new, unread ('unseen') messages.

(MH has a general purpose system for keeping track of 'sequences' of messages in a MH folder, so it tracks unread messages based on what is in the special 'unseen' sequence. Inc and other MH commands update this sequence; procmail doesn't.)
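For concreteness, a recipe of roughly the shape I'm describing looks like this (the header test and folder name are made up, and it assumes MAILDIR points at your MH mail directory). The trailing '/.' is what tells procmail to deliver MH style, as numbered messages in a folder directory:

:0
* ^List-Id:.*example-list
inbox/example-list/.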

Along with this procmail setup I wrote a basic script, called mlists, to report how many messages each of these 'mailing list' inboxes had in them. After a while I started diverting lower priority status emails and so on through this system (and stopped reading the mailing lists); if I got a type of email in any volume that I didn't want to read right away during work, it probably got shunted to these side inboxes. At some point I made mlists optionally run the MH scan command to show me what was in each inbox folder (well, for the inbox folders where this was potentially useful information). The mlists script was still mostly simple and the whole system still made sense, but it was a bit more complex than before, especially when it also got a feature where it auto-reset the current message number in each folder to the first message.

A couple of years ago, I switched the MH frontend I used from exmh to MH-E in GNU Emacs, which changed how I read my email in practice. One of the changes was that I started using the GNU Emacs Speedbar, which always displays a count of messages in MH folders and especially wants to let you know about folders with unread messages. Since I had the hammer of my mlists script handy, I proceeded to mutate it to be what a comment in the script describes as "a discount maintainer of 'unseen'", so that MH-E's speedbar could draw my attention to inbox folders that had new messages.

This is not the right way to do this. The right way to do this is to have procmail deliver messages through MH's rcvstore, which as a MH command can update the 'unseen' sequence properly. But using rcvstore is annoying, partly because you have to use another program to add the locking it needs, so at every point the path of least resistance was to add a bit more hacks to what I already had. I had procmail, and procmail could deliver to MH folder directories, so I used it (and at the time the limitations were something I considered a feature). I had a script to give me basic information, so it could give me more information, and then it could do one useful thing while it was giving me information, and then the one useful thing grew into updating 'unseen'.
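For reference, one possible shape for the rcvstore version, using procmail's local lockfile to at least serialize deliveries (the rcvstore path, header test, and folder name are all made up, and this doesn't make the locking annoyances entirely go away):

:0 w: rcvstore.lock
* ^List-Id:.*example-list
| /usr/bin/mh/rcvstore +inbox/example-list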

And since I have all of this, it's not even worth the effort of switching to the proper rcvstore approach and throwing a bunch of it away. I'm always going to want the 'tell me stuff' functionality of my mlists script, so part of it has to stay anyway.

Can I see similarities between this and how various of our system tools have evolved, mutated, and become increasingly complex? Of course. I think it's much the same obvious forces involved, because each step seems reasonable in isolation, right up until I've built a discount environment that duplicates much of rcvstore.

Sidebar: an extra bonus bit of complexity

It turns out that part of the time, I want to get some degree of live notification of messages being filed into these inbox folders. I may not look at all or even many of them, but there are some periodic things that I do want to pay attention to. So my discount special hack is basically:

tail -f .mail/procmail-log |
  egrep -B2 --no-group-separator 'Folder: /u/cks/Mail/inbox/'

(This is a script, of course, and I run it in a terminal window.)

This could be improved in various ways but then I'd be sliding down the convoluted complexity slope and I'm not willing to do that. Yet. Give it a few years and I may be back to write an update.

More on the tools I use to read email affecting my email reading

By: cks

About two years ago I wrote an entry about how my switch from reading email with exmh to reading it in GNU Emacs with MH-E had affected my email reading behavior more than I expected. As time has passed and I've made more extensive customizations to my MH-E environment, this has continued. One of the recent ways I've noticed is that I'm slowly making more and more use of the fact that GNU Emacs is a multi-window editor ('multi-frame' in Emacs terminology) and reading email with MH-E inside it still leaves me with all of the basic Emacs facilities. Specifically, I can create several Emacs windows (frames) and use this to be working in multiple MH folders at the same time.

Back when I used exmh extensively, I mostly had MH pull my email into the default 'inbox' folder, where I dealt with it all at once. Sometimes I'd wind up pulling some new email into a separate folder, but exmh only really giving me a view of a single folder at a time, combined with a system administrator's need to be regularly responding to email, made that a bit awkward. At first my use of MH-E mostly followed that; I had a single Emacs MH-E window (frame) and within that window I switched between folders. But lately I've been creating more new windows when I want to spend time reading a non-inbox folder, and in turn this has made me much more willing to put new email directly into different (MH) folders rather than funnel it all into my inbox.

(I don't always make a new window to visit another folder, because I don't spend long on many of my non-inbox folders for new email. But for various mailing lists and so on, reading through them may take at least a bit of time so it's more likely I'll decide I want to keep my MH inbox folder still available.)

One thing that makes this work is that MH-E itself has reasonably good support for displaying and working on multiple folders at once. There are probably ways to get MH-E to screw this up and run MH commands with the wrong MH folder as the current folder, so I'm careful that I don't try to have MH-E carry out its pending MH operations in two MH-E folders at the same time. There are areas where MH-E is less than ideal when I'm also using command-line MH tools, because MH-E changes MH's global notion of the current folder any time I have it do things like show a message in some folder. But at least MH-E is fine (in normal circumstances) if I use MH commands to change the current folder; MH-E will just switch it back the next time I have it show another message.

PS: On a purely pragmatic basis, another change in my email handling is that I'm no longer as irritated with HTML emails because GNU Emacs is much better at displaying HTML than exmh was. I've actually left my MH-E setup showing HTML by default, instead of forcing multipart/alternative email to always show the text version (my exmh setup). GNU Emacs and MH-E aren't up to the level of, say, Thunderbird, and sometimes this results in confusing emails, but it's better than it was.

(The situation that seems tricky for MH-E is that people sometimes include inlined images, for example screenshots as part of problem reports, and MH-E doesn't always give any indication that it's even omitting something.)

Recently

Some meta-commentary on reading: I’ve been trying to read through my enormous queue of articles saved on Instapaper. In general I love reading stuff from the internet but refuse to do it with a computer or a phone: I don’t want all my stuff to glow!

So, this led me to an abortive trial of the Boox Go 7, an eReader that runs a full Android operating system. Rendering Android interfaces with e-ink was pretty bad, and even though it was possible to run the Instapaper Android app, highlighting text didn’t work and the whole fit & finish was off. I love that most e-ink tablets are really purpose-built computers: this didn’t give me that impression.

So, this month I bought a Kobo Libra Color. Kobo and Instapaper recently announced a partnership and first-class support for Instapaper on the devices.

Overall: it’s better than the Boox experience and definitely better than sending articles to one of my Kindles. My notes so far are:

  • I wish it had highlighting. Pretty confident that it’s on the product roadmap, but for now all it can do is sync, archive, and like articles.
  • The Kobo also integrates directly with Overdrive for local libraries! Amazing and unexpected for me, the vast majority of my books are from the Brooklyn Library or the NYPL.
  • The hardware is pretty decent: the color screen is surprisingly useful because a lot of articles have embedded images. The page turn buttons are a little worse than those on my Kindle Oasis because they’re hinged, so they only work well if you press the top of one button and the bottom of the other. I’ve gotten used to it, but wish they worked via a different mechanism.
  • The first run experience was pretty slick.
  • Annoyingly, it goes to fully ‘off’ mode pretty quickly instead of staying in ‘sleep’ mode, and waking it up from being fully off takes about 15 seconds.

Overall: I think my biggest gripe (no highlighting) will get worked out and this’ll be the perfect internet-reading-without-a-computer device.

Anyway, what’s been good this month?

Reading

In Greece, a homeowner traded their property development rights to a builder and kept equity in place by receiving finished apartments in the resulting building, often one for themselves and one or more additional units for rental income or adult children, rather than a one-time cash payout. Applied to the U.S., that becomes: swap a house for an apartment (or three), in the same location, with special exemptions to the tax for the home sale.

From Millenial American Dream. I love a good scheme to fix the housing crisis and densify cities. Definitely seems like a net-positive, but like all other schemes that require big changes to tax codes, it would take a miracle to actually implement.

In Zitron’s analysis, it’s always bad. It’s bad when they raise too little money because they’ll run out. It’s bad when they raise too much money because it means they need it.

David Crespo’s critique of Ed Zitron is really strong. Honestly Zitron’s writing, though needed in a certain sense, never hits home for me. In fact a lot of AI critique seems overblown. Maybe I’m warming to it a little.

This Martin Fowler article about how ‘if it hurts, do it more often,’ was good. Summarizes some well-tested wisdom.

I had really mixed feelings about ‘The Autistic Half-Century’ by Byrne Hobart. I probably have some personal stake in this - every test I’ve taken puts me on the spectrum and I have some of the classic features, but like Hobart I’ve never been interested in identifying as autistic. But my discomfort with the subject is all knotted-together and hard to summarize.

One facet I can pick out is my feeling that this era is ending, maybe getting replaced with the ADHD half-century. But the internet I grew up on was pseudonymous, text-oriented, and for me, a calm place. The last ten years have been a slow drift toward real names, and then photograph avatars, and now more and more information being delivered by some person talking into the camera, and that feels really bad, man. Heck, not to drill down on 'classic traits', but the number of TikTok videos in which, for some reason, the person doing the talking is also eating at the same time, close to the phone, the mouth-sounds? Like, for a long time it was possible to convey information without thinking about whether you were good-looking or were having a good hair day, and that era is ending because everything is becoming oral culture.

If you believed that Trump winning would mean that everyone who supported him was right to have done so, because they had picked the winner; that the mega-rich AI industry buying its way into all corners of American society would mean that critics of the technology and of using it to displace human labors were not just defeated but meaningfully wrong in their criticisms; that some celebrity getting richer from a crypto rug-pull that ripped off hundreds of thousands of less-rich people would actually vindicate the celebrity’s choice to participate in it, because of how much richer it made them. Imagine holding this as an authentic understanding of how the world works: that the simple binary outcome of a contest had the power to reach back through time and adjust the ethical and moral weight of the contestants’ choices along the way. Maybe, in that case, you would feel differently about what to the rest of us looks like straight-up shit eating.

Defector (here, Albert Burneko) is doing some really good work. See also their article on Hailey Welch and ‘bag culture’.

Altman’s claim of a “Cambrian explosion” rings hollow because any tool built on the perverse incentives of social media is not truly designed with creativity in mind, but addiction. Sora may spark a new wave of digital expression, but it’s just as likely to entrench the same attention economy that has warped our online lives already.

From Parmy Olsen writing about OpenAI Sora in Bloomberg. I think this is a really good article: technology is rightfully judged based on what it actually does, not what it could do or is meant to do, and if AI continues to be used for things that make the world worse, it will earn itself a bad reputation.

Watching

Watching Night On Earth was such a joy. It was tender, laugh-out-loud funny, beautiful.

Youth had me crying the whole time, but I’d still recommend it. Got me into Sun Kil Moon from just a few minutes of Mark Kozelek being onscreen as the guitarist at the fancy hotel.

Listening

My old friends at BRNDA have a hit album on their hands. Kind of punk, kind of Sonic Youth, sort of indescribable, lots of good rock flute on this one.

More good ambient-adjacent instrumental rock.

This album feels next door to some Hop Along tracks. It’s a tie between highlighting this track or Holding On which also has a ton of hit quality.


Elsewhere: in September I did a huge bike trip and posted photos of it over in /photos. Might do a full writeup eventually: we rode the Great Allegheny Passage and the C&O canal in four days of pretty hard riding. Everyone survived, we had an amazing time, and I’m extremely glad that we were all able to make the time to do it. It made me feel so zen for about a week after getting back - I have to do that more often!

Porteur bag 2

Back in May, I wrote about a custom porteur bag that I sewed for use on my bike. That bag served me well on two trips - a solo ride up to Brewster and back, and my semi-yearly ride on the Empire State Trail, from Poughkeepsie to Brooklyn in two days.

But I had a longer ride in the plans for this summer, which I just rode two weeks ago: Pittsburgh to DC, 348 miles in 4 days, with two nights of camping. And at the last minute I decided to make the next version of that bag. Specifically, I wanted to correct three shortcomings of v1:

  • The attachment system was too complex. It had redundant ways to attach to the rack, but the main method was via a shock cord that looped through six grosgrain loops, plus another shock cord that kept it attached to the back of the rack. Plus, hardware that's used for attachment is better if it's less flexible. Bungee cords and shock cords are ultimately pretty flawed as ways to attach things on bikes. My frame bag uses non-stretchy paracord, and most bikepacking setups rely on Voile straps, which have a minimal amount of stretch.
  • The bag had no liner, and the material is plasticky and odd-looking. ECOPAK EPLX has a waterproof coating on the outside that makes it look like a plastic bag instead of something from synthetic fabric.
  • The bag had way too many panels: each side was a panel, plus the bottom, plus the flap. These made the shape of the bag messy.

Version 2

Finished bag

It turned out pretty well. Here’s the gist:

Materials

As you can see, there’s only one built-in way to secure this bag to the bike: a webbing strap that attaches to the two buckles and tightens below the rack. This is definitely a superior mechanism to v1: instead of attaching just the front of the bag to the rack and having to deal with the back of the bag separately, this pulls and tensions the whole bag, including its contents, to the rack. It rattles a lot less and is a lot simpler to attach.

On the bike

Construction

This bag is made like a tote bag. The essential ingredient of a tote bag is a piece of fabric cut like this:

Tote bag construction

The lining is simply the same shape again, sewn the same way and attached to the inside of the bag with the seams facing in the opposite direction, so that the seams of the liner and the outer shell of the bag face each other.

Techniques for building tote bags are everywhere on the internet, so it’s a really nice place to start. Plus, the bag body can be made with just one cut of fabric. In this case the bottom of the bag is Cordura and the top is ECOPAK, so I just tweaked the tote bag construction by adding panels of ECOPAK on the left and right of the first step above.

The risky part of this bag was its height and the zipper: ideally it could both zip and the top of the bag could fold over for water resistance. I didn’t accomplish both goals and learned something pretty important: if you’re building a waterproof bag with a zipper, once it’s zipped it’ll be hard to compress because the zipper keeps in air.

But the end of the tour included a surprise thunderstorm in Washington, DC and the zipper kept the wet out, so I count that as a win! The zipper also makes the bag very functional off the bike - using the same strap as I use to attach it to the bike but using that as a shoulder strap makes it pretty convenient to carry around. This really came in handy when we were moving bikes onto and off of Amtrak trains.

Plans for version three

Mostly kidding - I’m going to actually use use this bag for the next few trips instead of tinkering with it. But I do have thoughts, from this experience:

  • I have mixed feelings about using Cordura next time. It’s what everyone does, and it helps with the abrasion that the bag experiences from the rack. But I have a feeling that ECOPAK would hold up for many thousands of miles by itself, and is lighter and more waterproof than cordura. It could be cool to make the whole bag body from the same material.
  • I think I’ll make the next bag taller, and skip the zipper. I have more faith in folding over the material rather than relying on a waterproof zipper.
  • A good system for tightening the straps still eludes me. This setup included sliplock slide adjusters, but it’s still kind of annoying to get the right angles to pull the bag tight.
  • Again, I didn’t add any external pockets. I don’t think they’re absolutely necessary, but as it is, like the previous version, it’s not really possible to access the contents of this bag on the move. Which is fine because I have other bags that are easier to access - on this trip this bag carried my clothes, camp kit, and backup spare tires, so nothing essential in the middle of the day.