CSIR touts dubious 'Ayurveda' product for diabetes

By: VM

At 6 am on September 13, the CSIR handle on X.com published the following post about an "anti-diabetic medicine" called either "Daiba 250" or "Diabe 250", developed at the CSIR-Indian Institute of Integrative Medicine (IIIM):

Science-led #Ayurveda for #Diabetes Care!@CSIRIIIM's Daiba 250 — an anti-diabetic medicine has been developed with cGMP standards@DrJitendraSingh @moayush @startupindia #AyushmanBharat #StartupIndia #MakeInIndia.#84YearsOfCSIR pic.twitter.com/TfVmIdtlgs

— CSIR, India (@CSIR_IND) September 13, 2025

Its "key features", according to the CSIR, are that it created more than 250 jobs and that Prime Minister Narendra Modi "mentioned the startup" to which it has been licensed in his podcast 'Mann ki Baat'. What of the clinical credentials of Diabe-250, however?

Diabe-250 is being marketed on India-based online pharmacies like Tata 1mg as an "Ayurvedic" over-the-counter tablet "for diabetes support/healthy sugar levels". The listing also claims Diabe-250 is backed by a US patent granted to an Innoveda Biological Solutions Pvt. Ltd. Contrary to the CSIR post calling Diabe-250 "medicine", some listings also carry the disclaimer that it's "a dietary nutritional supplement, not for medicinal use".

("Ayurveda" is within double-quotes throughout this post because, like most products like Diabe-250 in the market that are also licensed by the Ministry of AYUSH, there's no evidence that they're actually Ayurvedic. They may be, they may not be — and until there's credible proof, the Ayurvedic identity is just another claim.)

Second, while e-commerce and brand pages use the spellings "Diabe 250" or "Diabe-250" (with or without the hyphen), the CSIR's social media posts refer to it as "Daiba 250". The latter also describe it as an anti-diabetic developed/produced with the CSIR-IIIM in the context of incubation and licensing. These communications don't constitute clinical evidence but they might be the clearest public basis to link the "Daiba" or "Diabe" spellings with the CSIR.

Multiple product pages also credit Innoveda Biological Solutions Pvt. Ltd. as a marketer and manufacturer. Corporate registry aggregators corroborate the firm's existence (its CIN is U24239DL2008PTC178821). Similarly, the claim that Diabe-250 is backed by a US patent can be traced most directly to US8163312B2, for a "Herbal formulation for prevention and treatment of diabetes and associated complications". Its inventor is listed as one G. Geetha Krishnan and Innoveda Biological Solutions (P) Ltd. is listed as the current assignee.

The patent text describes combinations of Indian herbs for diabetes and some complications. Of course no patent is proof of efficacy for any specific branded product or dose.

The ingredients in Diabe-250 vary by retailer and there's no consistent, quantitative per-tablet composition on public pages. This said, multiple listings name the following ingredients:

  • "Vidanga" (Embelia ribes)
  • "Gorakh buti" (Aerva lanata)
  • "Raj patha" (Cyclea peltata)
  • "Vairi" or "salacia" (often Salacia oblonga), and
  • "Lajalu" (Biophytum sensitivum)

The brand page also asserts a "unique combination of 16 herbs" and describes additional "Ayurveda" staples such as a berberine source, turmeric, and jamun. However, there doesn't appear to be a full label image or a quantitative breakdown of the composition of Diabe-250.

Retail and brand pages also claim Diabe-250 "helps maintain healthy sugar levels", "improves lipid profile/reduces cholesterol", and "reduces diabetic complications", sometimes also including non-glycaemic effects such as "better sleep" and "regular bowel movement". Several pages also include the caveat that it's a "dietary nutritional supplement" and that it's "not for medicinal use". However, none of these sources cites a peer-reviewed clinical trial of Diabe-250 itself.

In fact, there appear to be no peer-reviewed, product-specific clinical trials of Diabe-250 or Daiba-250 in humans; there are also no clinical trial registry records specific to this brand. If such a trial exists and its results were published in a peer-reviewed journal, it hasn't been cited on the sellers' or brand pages or in accessible databases.


Some ingredient classes in Diabe-250 are interesting even if they don't validate Diabe-250 as a finished product. For instance, Salacia spp., especially S. reticulata, S. oblonga, and S. chinensis, are known α-glucosidase inhibitors. In vitro studies and chemistry reviews have also reported that Salacia spp. can be potent inhibitors of maltase, sucrase, and isomaltase.

In one triple-blind, randomised crossover trial in 2023, biscuits fortified with S. reticulata extract reduced HbA1c levels by around 0.25% (2.7 mmol/mol) over three months versus the placebo, with an acceptable safety profile. In post-prandial studies involving healthy volunteers and people with type 2 diabetes, several randomised crossover designs reported lower post-meal glucose and insulin areas under the curve when Salacia extract was co-ingested with carbohydrate.

Similarly, berberine-based nutraceuticals (such as those including Berberis aristata) have shown glycaemic improvements in people with type 2 diabetes in the clinical literature at large, not specific to Diabe-250. However, these effects were often reported for combinations with other compounds, and researchers have indicated they depend strongly on formulation and dose.

Finally, a 2022 systematic review of "Ayurvedic" medicines in people with type 2 diabetes reported heterogeneous evidence, including some promising signals, but also emphasised methodological limitations and the need for randomised controlled trials of higher quality.

Right now, there's no scientific proof in the public domain that Diabe-250 works as advertised, especially in the form of product-specific clinical trials that define its composition, dosage, and endpoints.


In India, Ayurvedic drugs come under the Drugs & Cosmetics Rules 1945. Labelling provisions under Rule 161 require details such as the manufacturer's address, batch number, and manufacturing and expiry dates, while practice guides also note the product license number on the label for "Ayurvedic" drugs. However, several retail pages for Diabe-250 display it as a "dietary nutritional supplement" and add that it's "not for medicinal use", implying that it's being marketed with supplement-style claims rather than as an Ayurvedic "medicine" in the narrow regulatory sense — which runs against the claim in the CSIR post on X.com. Public pages also didn't display an AYUSH license number for Diabe-250. I haven't checked a physical pack.

A well-known study in JAMA in 2008, of "Ayurvedic" products purchased over the internet, found that around 20% of them contained lead, mercury or arsenic, and public-health advisories and case reports that have appeared since have echoed these concerns. This isn't a claim about Diabe-250 specifically but a category-level risk of "Ayurvedic" products sold online, one that is compounded by Diabe-250's unclear composition. The inconsistent naming also opens the door to counterfeit products, which are more likely to be contaminated.

Materials published by the Indian and state governments, including the Ministry of AYUSH, have framed "Ayurveda" as complementary to allopathic medicine. Accordingly, if a person with diabetes chooses to try "Ayurvedic" support, the standard advice is to not discontinue prescribed therapy and to monitor one's glucose, especially if the product contains agents that, like α-glucosidase inhibitors, alter the post-prandial response.

In sum, Diabe-250 is a multi-herb "Ayurvedic" tablet marketed by Innoveda for glycaemic support and often promoted with reference to a related US patent owned by the company. However, patents are not clinical trials and patent offices don't clinically evaluate the drugs described in patent applications. Evidence of efficacy can only come from clinical trials, especially when a product is being touted as "science-led", as the CSIR has done vis-à-vis Diabe-250. But there are no published clinical trials of the product. And while there's some evidence that some of its constituents, particularly Salacia, can reduce post-prandial glucose and effect small changes in HbA1c levels over a few months, there's no product-specific proof.

A cricket beyond politics

By: VM

On September 11, the Supreme Court was asked to urgently hear a petition that sought to cancel the Asia Cup T20 match between India and Pakistan scheduled for September 14 in the UAE. The petition, filed by four law students, claimed that playing the match so soon after the Pahalgam terror attack and Operation Sindoor would demean the sacrifices of armed personnel and was "against national interest".

The Court declined to intervene. "It's a match, let it be," Justice J.K. Maheshwari said, refusing to elevate the petition into a question of constitutional urgency. That refusal, however, doesn't end the matter: the call to stop the match points to the fraught place cricket occupies in India today, where the sport is no longer just a sport but an annex of politics itself.

The petitioners also argued that the Board of Control for Cricket in India (BCCI) must be brought under the Ministry of Youth Affairs and Sports, in line with the new National Sports Governance Act 2025. For many decades the BCCI has prided itself on being a private body, formally outside government control, yet informally intertwined with it through patronage, appointments, and access to resources. Over the years, this hybrid arrangement has allowed political parties to capture the administration of Indian cricket without subjecting it to the mechanisms of accountability under public law. The outcome is a chimaera: an entity that's neither purely autonomous nor transparently regulated.

This political capture has contributed to a situation in which the sport has become indistinguishable from political theatre. If the BCCI were more genuinely independent and if its leadership were less frequently a stepping-stone for politicians, (men's) cricket in India may still have had the ability to separate itself from the ebbs and flows of diplomatic posturing. Instead, the BCCI has invited politics onto the field by making itself an extension of political patronage.

To be sure, cricket has always been more than a game. Since the colonial era, it has carried the weight of identity and nationalism. In The Tao of Cricket, Ashis Nandy argued that cricket in India became a way of playing with colonial inheritance rather than rejecting it. Matches against England in the mid-20th century were arenas where newly independent Indians performed parity with their former rulers. With Pakistan, the sport inherited and refracted the trauma of Partition. Every bilateral series has carried more baggage than bat and ball.

Yet the history of India-Pakistan matches is also one of conviviality. For every moment when politicians have sought to cancel tours, there have been times when cricketing exchanges have thawed frozen relations. India's tours of Pakistan in 2004 and Pakistan's participation in the 1996 World Cup hosted in India were moments when ordinary spectators could cheer a cover drive irrespective of the batsman's passport. The very fact that governments have sometimes chosen to use cricket as a tool of rapprochement suggests that the sport holds a special capacity to transcend political divides.

Sport itself has always sat at the junction of rivalry and fellowship. Aristotle saw games as part of leisure, necessary for the cultivation of civic virtue. The Olympic Truce of ancient Greece, revived in modern times, embodied the idea that contests on the field could suspend contests off of it. The South African example after apartheid, when Nelson Mandela donned a Springbok jersey at the 1995 Rugby World Cup, showed how sport could heal a wounded polity.

Against this backdrop, the call to cancel the India-Pakistan match risks impoverishing cricket of its potential to build bridges. To say that playing Pakistan dishonours Indian soldiers is to treat sport as a mere extension of politics. Sport is not reducible to politics: it's also a space where citizens can experience one another as competitors, not enemies. That distinction matters. A good game of cricket can remind people that beyond the rhetoric of national security, there are human beings bowling yorkers and lofting sixes, acts that spectators from both sides can cheer, grumble about, and analyse over endless replays.

This isn't to deny that politics already suffuses cricket. The selection of venues, the sponsorship deals, the choreography of opening ceremonies — all carry political weight. Nor can one ignore that militant groups have sometimes targeted cricket precisely because of its symbolic importance. But to cancel matches on the grounds that politics exists is to double down on cynicism. It is to concede that no space can remain where ordinary citizens of India and Pakistan might encounter each other beyond the logic of hostility.

The BCCI's long entanglement with political elites makes it harder to resist such calls. When cricket administrators behave like political courtiers, it becomes easier for petitioners to argue that cricket is an extension of the state and must therefore obey the same dictates of foreign policy. But precisely because the BCCI has failed to safeguard cricket's autonomy, the rest of us must insist that the game not be reduced to a political pawn.

The petitioners invoked "national interest" and "national dignity" yet the Constitution of India doesn't enshrine dignity in the form of cancelling sports fixtures. It enshrines dignity through the protection of rights, the pursuit of fraternity, and the preservation of liberty. Article 51 even enjoins the state to foster respect for international law and promote peace. Seen in that light, playing cricket with Pakistan is not an affront to dignity but an affirmation of the constitutional aspiration to fraternity across borders.

If anything undermines dignity, it's the reduction of sport to a theatre of grievance. It's the refusal to allow people an arena where they can cheer together, even if for rival teams. National interest is not served by foreclosing every possible space of conviviality: it's served by demonstrating that India is confident enough in its own constitutional foundations to play, to lose, to win, and to play again.

The Supreme Court was right to dismiss the petition with a simple phrase: "It's a match, let it be." That lightness is what cricket needs in India today. To insist that every over bowled is a statement of geopolitics is to impoverish both politics and cricket.

Is the Higgs boson doing its job?

By: VM

At the heart of particle physics lies the Standard Model, a theory that has stood for nearly half a century as the best description of the subatomic realm. It tells us what particles exist, how they interact, and why the universe is stable at the smallest scales. The Standard Model has correctly predicted the outcomes of several experiments testing the limits of particle physics. Even so, physicists know that it's incomplete: it can't explain dark matter, why matter dominates over antimatter, or why the force of gravity is so weak compared to the other forces. To settle these mysteries, physicists have been conducting very detailed tests of the Model, each of which has either tightened their confidence in a hypothetical explanation or revealed a new piece of the puzzle.

A central character in this story is a subatomic particle called the W boson — the carrier of the weak nuclear force. Without it, the Sun wouldn't shine because particle interactions involving the weak force are necessary for nuclear fusion to proceed. W bosons are also unusual among force carriers: unlike photons (the particles of light), they're massive, about 80 times heavier than a proton. This mass difference — between the massless photon and the massive W boson — arises from a process called the Higgs mechanism. Physicists first proposed the mechanism in 1964 and confirmed it was real when they found the Higgs boson particle at the Large Hadron Collider (LHC) in 2012.

The particles of the Standard Model of particle physics. The W bosons are shown among the force-carrier particles on the right. The photon is denoted γ. The electron (e) and muon (µ) are shown among the leptons. The corresponding neutrino flavours are shown on the bottom row, denoted ν. Credit: Daniel Dominguez/CERN

But finding the Higgs particle was only the beginning. To prove that the Higgs mechanism really works the way the theory says, physicists need to check its predictions in detail. One of the sharpest tests involves how W bosons scatter off each other at high energies. Both photons and W bosons carry one unit of a property called quantum spin, but because the photon is massless it has only two polarisation states, both transverse, while the massive W boson has a third. If a W boson's polarisation points sideways to its direction of travel, it's said to be transverse polarised; if it points along the direction of travel, the W boson is said to be longitudinally polarised. The longitudinal ones are special because their behaviour is directly tied to the Higgs mechanism.

Specifically, if the Higgs mechanism and the Higgs boson don't exist, calculations involving the longitudinal W bosons scattering off of each other quickly give rise to nonsensical mathematical results in the theory. The Higgs boson acts like a regulator in this engine, preventing the mathematics from 'blowing up'. In fact, in the 1970s, the theoretical physicists Benjamin Lee, Chris Quigg, and Hugh Thacker showed that without the Higgs boson, the weak force would become uncontrollably powerful at high energies, leading to the breakdown of the theory. Their work was an important theoretical pillar that justified building the colossal LHC machine to search for the Higgs boson particle.
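
Schematically, and leaving out the detailed calculation, the problem and its cure look like this (with $s$ the square of the collision energy and $v \approx 246$ GeV the electroweak scale set by the Higgs field):

$$\mathcal{A}(W_L W_L \to W_L W_L) \;\sim\; \frac{s}{v^2} \quad \text{(without the Higgs boson)}$$

Because this amplitude keeps growing with $s$, the probabilities it implies eventually exceed 1, which is the 'blowing up' referred to above. Including the Higgs boson's contribution cancels the term that grows with $s$ and leaves a remainder of order $m_H^2/v^2$, which stays bounded. This is how Lee, Quigg, and Thacker concluded that the Higgs boson, or something doing its job, had to appear below roughly 1 TeV.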

The terms Higgs boson, Higgs field, and Higgs mechanism describe related but distinct ideas. The Higgs field is a kind of invisible medium thought to fill all of space. Particles like W bosons and Z bosons interact with this field as they move and through that interaction they acquire mass. This is the Higgs mechanism: the process by which particles that would otherwise be massless become heavy.

The Higgs boson is different: it's a particle that represents a vibration or a ripple in the Higgs field, just as a photon is a ripple in the electromagnetic field. Its discovery in 2012 confirmed that the field is real and not just something that appears in the mathematics of the theory. But discovery alone doesn't prove the mechanism is doing everything the theory demands. To test that, physicists need to look at situations where the Higgs boson's balancing role is crucial.

The scattering of longitudinally polarised W bosons is a good example. Without the Higgs boson, the calculated probabilities of these scatterings grow uncontrollably at higher energies, but with the Higgs boson in the picture, they stay within sensible bounds. Observing longitudinally polarised W bosons behaving as predicted is thus evidence for the particle as well as a check on the field and the mechanism behind it.

Imagine a roller-coaster without brakes. As it goes faster and faster, there's nothing to stop it from flying off the tracks. The Higgs mechanism is like the braking system that keeps the ride safe. Observing longitudinally polarised W bosons in the right proportions is equivalent to checking that the brakes actually work when the roller-coaster speeds up.

Credit: Skyler Gerald

Another path that physicists once considered and that didn't involve a Higgs boson at all was called technicolor theory. Instead of a single kind of Higgs boson giving the W bosons their mass, technicolor proposed a brand-new force. Just as the strong nuclear force binds quarks into protons and neutrons, the hypothetical technicolor force would bind new "technifermion" particles into composite states. These bound states would mimic the Higgs boson's job of giving particles mass, while producing their own new signals in high-energy collisions.

The crucial test to check whether some given signals are due to the Higgs boson or due to technicolor lies in the behaviour of longitudinally polarised W bosons. In the Standard Model, their scattering is kept under control by the Higgs boson's balancing act. In technicolor, by contrast, there is no Higgs boson to cancel the runaway growth. The probability of the scattering of longitudinally polarised W bosons would therefore rise sharply with more energy, often leaving clearly excessive signals in the data.

Thus, observing longitudinally polarised W bosons at levels consistent with the predictions of the Standard Model, and not finding any additional signals, would also strengthen the case for the Higgs mechanism and weaken that for technicolor and other "Higgs-less" theories.

At the Large Hadron Collider, the cleanest way to look for such W bosons is in a phenomenon called vector boson scattering (VBS). In VBS, two protons collide and the quarks inside them emit W bosons. These W bosons then scatter off each other before decaying into lighter particles. The leftover quarks form narrow sprays of particles, or 'jets', that fly far forward.

If the two W bosons happen to have the same electric charge — i.e. both positive or both negative — the process is even more distinctive. This same-sign WW scattering is quite rare, and that's an advantage because it's easier to spot in the debris of particle collisions.

Both ATLAS and CMS, the two giant detectors at the LHC, had previously observed same-sign WW scattering without breaking the signal down by polarisation. In 2021, the CMS collaboration reported the first hint of longitudinal polarisation, but at a statistical significance of only 2.3 sigma, which isn't good enough (particle physicists prefer at least 3 sigma). So after the LHC completed its second run in 2018, collecting data from around 10 quadrillion collisions between protons, the ATLAS collaboration set out to analyse it and deliver the evidence. The group's study was published in Physical Review Letters on September 10.

The layout of the Large Hadron Collider complex at CERN. Protons (p) are pre-accelerated to higher energies in steps — at the Proton Synchrotron (PS) and then the Super Proton Synchrotron (SPS) — before being injected into the LHC ring. The machine then draws two opposing beams of protons from the SPS and accelerates them to nearly the speed of light before colliding them head-on at four locations, under the gaze of the four detectors. ATLAS and CMS are two of them. Credit: Arpad Horvath (CC BY-SA)

The challenge of finding longitudinally polarised W bosons is like finding a particular needle in a very large haystack where most of the needles look nearly identical. So ATLAS designed a special strategy.

When one W boson decays, the result is one electron or muon plus one neutrino. If the W boson is positively charged, for example, it could decay to one anti-electron and one electron-neutrino or to one anti-muon and one muon-neutrino (anti-electrons and anti-muons are positively charged). If the W boson is negatively charged, the products could be one electron and one electron-antineutrino or one muon and one muon-antineutrino. So first, ATLAS zeroed in on events with two electrons, two muons or one of each, both carrying the same electric charge. Neutrinos, however, are really hard to catch and study, so the ATLAS group looked for their absence rather than their presence. The law of conservation of momentum holds in all these particle interactions — which means a neutrino's presence can be inferred when the momenta of the visible particles don't add up the way they should. The missing amount would have been carried away by the neutrino, like money unaccounted for in a ledger.
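
Here's a minimal sketch of that bookkeeping, with made-up momenta rather than real ATLAS data: in the plane transverse to the proton beams, whatever momentum the visible particles fail to balance is attributed to the unseen neutrinos.

```python
import math

# Toy "missing transverse momentum" calculation. The objects and numbers
# below are invented for illustration; this is not ATLAS's reconstruction code.

# (px, py) in GeV for the visible objects in one hypothetical event:
visible = {
    "muon+": (41.0, -12.0),
    "electron+": (28.5, 33.0),
    "jet1": (-55.0, 20.0),
    "jet2": (-10.0, -47.0),
}

# Momentum conservation in the transverse plane means the visible transverse
# momenta plus anything unseen must sum to roughly zero.
sum_px = sum(px for px, _ in visible.values())
sum_py = sum(py for _, py in visible.values())

# Whatever is "missing" is attributed to particles that escaped undetected,
# such as the two neutrinos from the W boson decays.
met_x, met_y = -sum_px, -sum_py
met = math.hypot(met_x, met_y)

print(f"Missing transverse momentum: {met:.1f} GeV, "
      f"pointing at {math.degrees(math.atan2(met_y, met_x)):.0f} degrees")
```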

This analysis also required an event of interest to have at least two jets (reconstructed from streams of particles) with a combined energy above 500 GeV and separated widely in rapidity (which is a measure of their angle relative to the beam). This particular VBS pattern — two electrons/muons, two jets, and missing momentum — is the hallmark of same-sign WW scattering.

Second, even with these strict requirements, impostors can creep in. The biggest source of confusion was WZ production, a process in which another subatomic particle called the Z boson decays invisibly or one of its decay products goes unnoticed, making the event resemble WW scattering. Other sources include electrons whose charges are mismeasured, jets masquerading as electrons/muons, and some quarks producing electrons/muons that slip into the sample. To control for all this noise, the ATLAS group focused on control regions: subsets of events dominated by one distinct kind of noise, which the group could cleanly 'subtract' from the data to reveal same-sign WW scattering, thus also reducing uncertainty in the final results.

Third, and this is where things get nuanced: the differences between transverse and longitudinally polarised W bosons show up in distributions — i.e. how far apart the electrons/muons are in angle, how the jets are oriented, and the energy of the system. But since no single variable could tell the whole story, the ATLAS group combined them using deep neural networks. These machine-learning models were fed up to 20 kinematic variables — including jet separations, particle angles, and missing momentum patterns — and trained to distinguish between three groups (a toy sketch of the idea follows the list below):

(i) Two transverse polarised W bosons;

(ii) One transverse polarised W boson and one longitudinally polarised W boson; and

(iii) Both longitudinally polarised W bosons
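
ATLAS's actual networks and training samples aren't public in this form, so the following is only a toy: a small classifier trained on synthetic 'kinematic variables' drawn from three invented distributions, one per polarisation class. The variable shifts, network size, and library choice (scikit-learn) are all assumptions made for illustration; the point is simply how many weakly discriminating variables get combined into one score per class.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for kinematic variables (jet separation, lepton angles,
# missing momentum, ...). Each class gets a slightly shifted distribution;
# the shifts are invented and carry no physical meaning.
n_per_class, n_vars = 5000, 20
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_vars)),  # TT: both transverse
    rng.normal(loc=0.3, scale=1.0, size=(n_per_class, n_vars)),  # TL: one of each
    rng.normal(loc=0.6, scale=1.0, size=(n_per_class, n_vars)),  # LL: both longitudinal
])
y = np.repeat([0, 1, 2], n_per_class)  # 0 = TT, 1 = TL, 2 = LL

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network mapping the 20 variables to 3 class scores.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("toy accuracy:", round(clf.score(X_test, y_test), 3))
# The per-event class probabilities are the kind of output a real analysis
# would pass on to the subsequent likelihood fit.
print("example event scores (TT, TL, LL):", clf.predict_proba(X_test[:1]).round(3))
```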

Fourth, the group combined the outputs of these neural networks and fit them with a maximum likelihood method. When physicists make measurements, they often don't directly see what they're measuring. Instead, they see data points that could have come from different possible scenarios. A likelihood is a number that tells them how probable the data is in a given scenario. If a model says "events should look like this," they can ask: "Given my actual data, how likely is that?" And the maximum likelihood method will help them decide the parameters that make the given data most likely to occur.

For example, say you toss a coin 100 times and get 62 heads. You wonder: is the coin fair or biased? If it's fair, the chance of exactly 62 heads is small. If the coin is slightly biased (heads with probability 0.62), the chance of 62 heads is higher. The maximum likelihood estimate is to pick the bias, or probability of heads, that makes your actual result most probable. So here the method would say, "The coin's bias is 0.62" — because this choice maximises the likelihood of seeing 62 heads out of 100.
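
In code, the coin example looks like this: a brute-force scan over candidate biases, keeping the one that makes 62 heads out of 100 most probable. (This is just the toy coin calculation from the paragraph above, not anything from the ATLAS fit.)

```python
import math

# Maximum likelihood estimation of a coin's bias, by brute force.
# Observed data: 62 heads in 100 tosses.
heads, tosses = 62, 100

def likelihood(p):
    """Probability of exactly `heads` heads in `tosses` tosses if the coin
    lands heads with probability p (the binomial likelihood)."""
    return math.comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

# Scan candidate biases and keep the one that maximises the likelihood.
candidates = [i / 1000 for i in range(1, 1000)]
best = max(candidates, key=likelihood)

print(f"fair coin (p = 0.50): likelihood = {likelihood(0.5):.4f}")
print(f"best-fit bias: p = {best:.2f}, likelihood = {likelihood(best):.4f}")
# The best-fit bias comes out at 0.62, the value that makes 62 heads most probable.
```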

In their analysis, the ATLAS group used the maximum likelihood method to check whether the LHC data 'preferred' a contribution from longitudinal scattering, after subtracting what background noise and transverse-only scattering could explain.

The results may be a milestone in experimental particle physics. In the September 10 paper, ATLAS reported evidence for longitudinally polarised W bosons in same-sign WW scattering with a significance of 3.3 sigma — close to the 4 sigma expected from the predictions of the Standard Model. This means the data behaved as theory predicted, with no unexpected excess or deficit.

It's also bad news for technicolor theory. By observing longitudinal W bosons at exactly the rates predicted by the Standard Model, and not finding any additional signals, the ATLAS data strengthens the case for the Higgs mechanism providing the check on the W bosons' scattering probability, rather than the technicolor force.

The measured cross-section for events with at least one longitudinally polarised W boson was 0.88 femtobarns, with an uncertainty of 0.3 femtobarns. These figures essentially mean that there were only a few hundred same-sign WW scattering events in the full dataset of around 10 quadrillion proton-proton collisions. The fact that ATLAS could pull this signal out of such a background-heavy environment is a testament to the power of modern machine learning working with advanced statistical methods.

The group was also able to quantify the composition of signals. Among others:

  1. About 58% of events were genuine WW scattering
  2. Roughly 16% were from WZ production
  3. Around 18% arose from irrelevant electrons/muons, charge misidentification or the decay of energetic photons

One way to appreciate the importance of these findings is by analogy: imagine trying to hear a faint melody being played by a single violin in the middle of a roaring orchestra. The violin is the longitudinal signal; the orchestra is the flood of background noise. The neural networks are like sophisticated microphones and filters, tuned to pick out the violin's specific tone. The fact that ATLAS could not only hear it but also measure its volume, and find it matching the score written by the Standard Model, is remarkable.

These results are more than just another tick mark for the Standard Model. They're a direct test of the Higgs mechanism in action. The discovery of the Higgs boson particle in 2012 was groundbreaking but proving that the Higgs mechanism performs its theoretical role requires demonstrating that it regulates the scattering of W bosons. By finding evidence for longitudinally polarised W bosons at the expected rate, ATLAS has done just that.

The results also set the stage for the future. The LHC is currently being upgraded to a form called the High-Luminosity LHC and it will begin operating later this decade, collecting datasets about 10x larger than what the LHC did in its second run. With that much more data, physicists will be able to study differential distributions, i.e. how the rate of longitudinal scattering varies with energy, angle or jet separation. These patterns are sensitive to hitherto unknown particles and forces, such as additional Higgs-like particles or modifications to the Higgs mechanism itself. And even small deviations from the Standard Model's predictions could hint at new frontiers in particle physics.

Indeed, history has often reminded physicists that precision studies can uncover surprises. For example, physicists didn't discover neutrino oscillations by finding a new particle but by noticing that the number of neutrinos arriving from the Sun at detectors on Earth didn't match expectations. Similarly, minuscule mismatches between theory and observation in the scattering of W bosons could someday reveal new physics — and if they do, the seeds will have been planted by studies like that of the ATLAS group.

On the methodological front, the analysis also showcases how particle physics is evolving. 'Classical' analyses once banked on tracking single variables; now, deep learning plays a starring role by combining many variables into a single discriminant, allowing ATLAS to pull the faint signal of longitudinally polarised W bosons from the noise. This approach will only become more important as both datasets and physicists' ambitions expand.

Perhaps the broadest lesson in all this is that science often advances by the unglamorous task of verifying the details. The discovery of the Higgs boson answered one question but opened many others; among them, measuring how it affects the scattering of W bosons is one of the more direct ways to probe whether the Standard Model is complete or just the first chapter of a longer story. Either way, the pursuit exemplifies the spirit of checking, rechecking, testing, and probing until scientists truly understand how nature works at extreme precision.

A danger of GST 2.0

By: VM

Since Union finance minister Nirmala Sitharaman's announcement last week that India's Goods and Services Tax (GST) rates will be rationalised anew from September 22, I've been seeing a flood of pieces all in praise — and why not?

The GST regime has been somewhat controversial since its launch because, despite simplifying compliance for businesses and industry, it increased the costs for consumers. The Indian government exacerbated that pain point by undermining the fiscal federalism of the Union, increasing its own revenues at the expense of the states' and cutting allocations to them.

While there is (informed) speculation that the next Finance Commission will further undercut the devolution of funds to the states, GST 2.0 offers some relief to consumers in the form of making various products more affordable. Populism is popular, after all.

However, increasing affordability isn't always a good thing even if your sole goal is to increase consumption. This is particularly borne out in the food and nutrition domain.

For example, under the new tax regime, from September 22 the GST on pizza bread will slip from 5% to zero, for sourdough pizza bread and maida (refined flour) pizza bread alike. However, because the populace is more aware of maida as an ingredient than of sourdough, and because maida as a result enjoys a greater economy of scale and is less expensive before tax, the demand for maida bread is likely to increase more than the demand for sourdough bread.

This is unfortunate: ideally, sourdough bread should be more affordable — or, alternatively, the two breads should be equally affordable and carry threshold-based front-of-pack labelling. That is to say, liberating consumers to buy new food products, or more of the old ones, without simultaneously empowering them to make more informed choices could tilt demand in favour of unhealthier foods.

Ultimately, the burden of non-communicable diseases in the population will increase, as will consumers' expenses on healthcare, dietary interventions, and so on. I explained this issue in The Hindu on September 9, 2025, and set out solutions that the Indian government must implement in its food regulation apparatus posthaste.

Without these measures, GST 2.0 will likely be bad news for India's dietary and nutritional ambitions.

A tribute to rubidium

By: VM

Rubidium isn’t respectable. It isn’t iron, whose strength built railways and bridges and it isn’t silicon, whose valley became a dubious shrine to progress. Rubidium explodes in water. It tarnishes in air. It’s awkward, soft, and unfit for the neat categories by which schoolteachers tell their students how the world is made. And yet, precisely because of this unruly character, it insinuates itself into the deepest places of science, where precision, control, and prediction are supposed to reign.

For centuries astronomers counted the stars, then engineers counted pendulums and springs — all good and respectable. But when humankind’s machines demanded nanosecond accuracy, it was rubidium, a soft metal that no practical mind would have chosen, that became the metronome of the world. In its hyperfine transitions, coaxed by lasers and microwave cavities, the second is carved more finely than human senses can comprehend. Without rubidium’s unstable grace, GPS collapses, financial markets fall into confusion, trains and planes drift out of sync. The fragile and the explosive have become the custodians of order.

What does this say about the hierarchies of knowledge? Textbooks present a suspiciously orderly picture: noble gases are inert, alkali metals are reactive, and their properties can be arranged neatly in columns of the periodic table, they say. Thus rubidium is placed there like a botanical specimen. But in practice, scientists turned to it not because of its box in a table but because of accidents, conveniences, and contingencies. Its resonance lines happen to fall where lasers can reach them easily. Its isotopes are abundant enough to trap, cool, and measure. The entire edifice of atomic clocks and exotic Bose-Einstein condensates rests not on an inevitable logic of discovery but on this convenient accident. Had rubidium’s levels been slightly different, perhaps caesium or potassium would have played the starring role. Rational reconstruction will never admit this. It prefers tidy sequences and noble inevitabilities. Rubidium, however, laughs at such tidiness.

Take condensed matter. In the 1990s and 2000s, solar researchers sought efficiency in perovskite crystals. These crystals were fragile, prone to decomposition, but again rubidium slipped in: a small ion among larger ones, it stabilised the lattice. A substitution here, a tweak there, and suddenly the efficiency curve rose. Was this progress inevitable? No; it was bricolage: chemists trying one ion after another until the thing worked. And the journals now describe rubidium as if it were always destined to “enhance stability”. But destiny is hindsight dressed as foresight. What actually happened was messy. Rubidium’s success was contingent, not planned.

Then there’s the theatre of optics. Rubidium’s spectral lines at 780 nm and 795 nm became the experimentalist’s playground. When lasers cooled atoms to microkelvin temperatures and clouds of rubidium atoms became motionless, they merged into collective wavefunctions and formed the first Bose-Einstein condensates. The textbooks now call this a triumph of theory, the “inevitable” confirmation of quantum statistics. Nonsense! The condensates weren’t predicted as practical realities — they were curiosities, dismissed by many as impossible in the laboratory. What made them possible was a melange of techniques: magnetic traps, optical molasses, sympathetic cooling. And rubidium, again, happened to be convenient, its transitions accessible, its abundance generous, its behaviour forgiving. Out of this messiness came a Nobel Prize and an entire field. Rubidium teaches us that progress comes not from the logical unfolding of ideas but from playing with elements that allegedly don’t belong.

Rubidium rebukes dogma. It’s neither grand nor noble, yet it controls time, stabilises matter, and demonstrates the strangest predictions of quantum theory. It shows science doesn’t march forward by method alone. It stumbles, it improvises, it tries what happens to be at hand. Philosophers of science prefer to speak of method and rigour yet their laboratories tell a story of messy rooms where equipment is tuned until something works, where grad students swap parts until the resonance reveals itself, where fragile metals are pressed into service because they happen to fit the laser’s reach.

Rubidium teaches us that knowledge is anarchic. It isn’t carved from the heavens by pure reason but coaxed from matter through accidents, failures, and improvised victories. Explosive in one setting, stabilising in another; useless in industry, indispensable in physics — the properties of rubidium are contradictory and it’s precisely this contradiction that makes it valuable. To force it into the straitjacket of predictable science is to rewrite history as propaganda. The truth is less comfortable: rubidium has triumphed where theory has faltered.

And yet, here we are. Our planes and phones rely on rubidium clocks. Our visions of renewable futures lean on rubidium’s quiet strengthening of perovskite cells. Our quantum dreams — of condensates, simulations, computers, and entanglement — are staged with rubidium atoms as actors. An element kings never counted and merchants never valued has become the silent arbiter of our age. Science itself couldn’t have planned it better; indeed, it didn’t plan at all.

Rubidium is the fragment in the mosaic that refuses to fit yet holds the pattern together. It’s the soft yet explosive, fragile yet enduring accident that becomes indispensable. Its lesson is simple: science also needs disorder, risk, and the unruliness of matter to thrive.

Featured image: A sample of rubidium metal. Credit: Dnn87 (CC BY).

Lighting the way with Parrondo’s paradox

By: VM

In science, paradoxes often appear when familiar rules are pushed into unfamiliar territory. One of them is Parrondo’s paradox, a curious mathematical result showing that when two losing strategies are combined, they can produce a winning outcome. This might sound like trickery but the paradox has deep connections to how randomness and asymmetry interact in the physical world. In fact its roots can be traced back to a famous thought experiment explored by the US physicist Richard Feynman, who analysed whether one could extract useful work from random thermal motion. The link between Feynman’s thought experiment and Parrondo’s paradox demonstrates how chance can be turned into order when the conditions are right.

Imagine two games. Each game, when played on its own, is stacked against you. In one, the odds are slightly less than fair, e.g. you win 49% of the time and lose 51%. In another, the rules are even more complex, with the chances of winning and losing depending on your current position or capital. If you keep playing either game alone, the statistics say you will eventually go broke.

But then there’s a twist. If you alternate the games — sometimes playing one, sometimes the other — your fortune can actually grow. This is Parrondo’s paradox, proposed in 1996 by the Spanish physicist Juan Parrondo.

The answer to how combining losing games can result in a winning streak lies in how randomness interacts with structure. In Parrondo’s games, the rules are not simply fair or unfair in isolation; they have hidden patterns. When the games are alternated, these patterns line up in such a way that random losses become rectified into net gains.
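
To see the effect numerically, here's a simulation of the standard textbook pair of Parrondo games. The probabilities below are the usual illustrative choices from that literature, not necessarily the exact numbers alluded to above: each game loses on its own, but alternating them at random wins.

```python
import random

# Standard textbook version of Parrondo's two games; the probabilities are
# the usual illustrative choices, assumed here for demonstration.
EPS = 0.005

def play_A(capital, rng):
    """Game A: a coin slightly worse than fair."""
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_B(capital, rng):
    """Game B: the odds depend on the current capital."""
    if capital % 3 == 0:
        p = 0.10 - EPS  # a bad coin when capital is a multiple of 3
    else:
        p = 0.75 - EPS  # a good coin otherwise
    return capital + (1 if rng.random() < p else -1)

def random_mix(capital, rng):
    """Alternate the two losing games at random."""
    return play_A(capital, rng) if rng.random() < 0.5 else play_B(capital, rng)

def average_final_capital(strategy, rounds=1000, trials=1000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        capital = 0
        for _ in range(rounds):
            capital = strategy(capital, rng)
        total += capital
    return total / trials

print("Game A alone:", average_final_capital(play_A))      # drifts negative
print("Game B alone:", average_final_capital(play_B))      # drifts negative
print("Random mix  :", average_final_capital(random_mix))  # drifts positive
```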

Say there’s a perfectly flat surface in front of you. You place a small bead on it and then you constantly jiggle the surface. The bead jitters back and forth. Because the noise you’re applying to the bead’s position is unbiased, the bead simply wanders around in different directions on the surface. Now, say you introduce a switch that alternates the surface between two states. When the switch is ON, an ice-tray shape appears on the surface. When the switch is OFF, it becomes flat again. This ice-tray shape is special: the cups are slightly lopsided because there’s a gentle downward slope from left to right in each cup. At the right end, there’s a steep wall. If you’re jiggling the surface when the switch is OFF, the bead diffuses a little towards the left, a little towards the right, and so on. When you throw the switch to ON, the bead falls into the nearest cup. Because each cup is slightly tilted towards the right, the bead eventually settles near the steep wall there. Then you move the switch to OFF again.

As you repeat these steps with more and more beads over time, you’ll see they end up a little to the right of where they started. This is Parrondo’s paradox at work. The jittering motion you applied to the surface caused each bead to move randomly. The switch you used to alter the shape of the surface allowed you to expend some energy in order to rectify the beads’ randomness.
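
A bare-bones simulation of this bead-and-ice-tray picture shows the drift. The cup width, the position of each cup's low point, and the size of the jiggle below are arbitrary choices made only to make the effect visible.

```python
import random

# Toy "flashing ratchet": beads diffuse on a flat surface while the switch is
# OFF, then snap to the low point of an asymmetric cup when the switch is ON.
# All parameters are arbitrary, chosen only for illustration.
CUP_WIDTH = 1.0   # each cup spans [k, k+1)
LOW_POINT = 0.9   # the cup's minimum sits near its right wall (fraction of the width)
JIGGLE = 0.15     # step size of the random jitter while the switch is OFF
OFF_STEPS = 40    # how long the surface stays flat before the switch flips ON

def one_cycle(x, rng):
    # Switch OFF: an unbiased random walk on the flat surface.
    for _ in range(OFF_STEPS):
        x += rng.choice([-JIGGLE, +JIGGLE])
    # Switch ON: the bead falls to the minimum of whichever cup it is in,
    # which sits near that cup's right wall because the cup is tilted.
    cup_index = int(x // CUP_WIDTH)
    return (cup_index + LOW_POINT) * CUP_WIDTH

rng = random.Random(42)
beads = [0.0] * 2000
for _ in range(30):
    beads = [one_cycle(x, rng) for x in beads]

print("average bead position after 30 cycles:", round(sum(beads) / len(beads), 2))
# The average drifts to the right even though the jiggling by itself is unbiased.
```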

The reason why Parrondo’s paradox isn’t just a mathematical trick lies in physics. At the microscopic scale, particles of matter are in constant, jittery motion because of heat. This restless behaviour is known as Brownian motion, named after the botanist Robert Brown, who observed pollen grains dancing erratically in water under a microscope in 1827. At this scale, randomness is unavoidable: molecules collide, rebound, and scatter endlessly.

Scientists have long wondered whether such random motion could be tapped to extract useful work, perhaps to drive a microscopic machine. This was Feynman’s thought experiment as well, involving a device called the Brownian ratchet, a.k.a. the Feynman-Smoluchowski ratchet. The Polish physicist Marian Smoluchowski dreamt up the idea in 1912 and Feynman popularised it in a lecture 50 years later, in 1962.

Picture a set of paddles immersed in a fluid, constantly jolted by Brownian motion. A ratchet and pawl mechanism is attached to the paddles (see video below). The ratchet allows the paddles to rotate in one direction but not the other. It seems plausible that the random kicks from molecules would turn the paddles, which the ratchet would then lock into forward motion. Over time, this could spin a wheel or lift a weight.

In one of his famous physics lectures in 1962, Feynman analysed the ratchet. He showed that the pawl itself would also be subject to Brownian motion. It would jiggle, slip, and release under the same thermal agitation as the paddles. When everything is at the same temperature, the forward and backward slips would cancel out and no net motion would occur.

This insight was crucial: it preserved the rule that free energy can’t be extracted from randomness at equilibrium. If motion is to be biased in only one direction, there needs to be a temperature difference between different parts of the ratchet. In other words, random noise alone isn’t enough: you also need an asymmetry, or what physicists call nonequilibrium conditions, to turn randomness into work.

Let’s return to Parrondo’s paradox now. The paradoxical games are essentially a discrete-time abstraction of Feynman’s ratchet. The losing games are like unbiased random motion: fluctuations that on their own can’t produce net gain because the gains cancel out. But when they’re alternated cleverly, they mimic the effect of adding asymmetry. The combination rectifies the randomness, just as a physical ratchet can rectify the molecular jostling when a gradient is present.

This is why Parrondo explicitly acknowledged his inspiration from Feynman’s analysis of the Brownian ratchet. Where Feynman used a wheel and pawl to show how equilibrium noise can’t be exploited without a bias, Parrondo created games whose hidden rules provided the bias when they were combined. Both cases highlight a universal theme: randomness can be guided to produce order.

The implications of these ideas extend well beyond thought experiments. Inside living cells, molecular motors like kinesin and myosin actually function like Brownian ratchets. These proteins move along cellular tracks by drawing energy from random thermal kicks with the aid of a chemical energy gradient. They demonstrate that life itself has evolved ways to turn thermal noise into directed motion by operating out of equilibrium.

Parrondo’s paradox also has applications in economics, evolutionary biology, and computer algorithms. For example, alternating between two investment strategies, each of which is poor on its own, may yield better long-term outcomes if the fluctuations in markets interact in the right way. Similarly, in genetics, when harmful mutations alternate in certain conditions, they can produce beneficial effects for populations. The paradox provides a framework to describe how losing at one level can add up to winning at another.

Feynman’s role in this story is historical as well as philosophical. By dissecting the Brownian ratchet, he demonstrated how deeply the laws of thermodynamics constrain what’s possible. His analysis reminded physicists that intuition about randomness can be misleading and that only careful reasoning could reveal the real rules.

In 2021, a group of scientists from Australia, Canada, France, and Germany wrote in Cancers that the mathematics of Parrondo’s paradox could also illuminate the biology of cancerous tumours. Their starting point was the observation that cancer cells behave in ways that often seem self-defeating: they accumulate genetic and epigenetic instability, devolve into abnormal states, sometimes stop dividing altogether, and often migrate away from their original location and perish. Each of these traits looks like a “losing strategy” — yet cancers that use these ‘strategies’ together are often persistent.

The group suggested that the paradox arises because cancers grow in unstable, hostile environments. Tumour cells deal with low oxygen, intermittent blood supply, attacks by the immune system, and toxic drugs. In these circumstances, no single survival strategy is reliable. A population of only stable tumour cells would be wiped out when the conditions change. Likewise a population of only unstable cells would collapse under its own chaos. But by maintaining a mix, the group contended, cancers achieve resilience. Stable, specialised cells can exploit resources efficiently while unstable cells with high plasticity constantly generate new variations, some of which could respond better to future challenges. Together, the team continued, the cancer can alternate between the two sets of cells so that it can win.

The scientists also interpreted dormancy and metastasis of cancers through this lens. Dormant cells are inactive and can lie hidden for years, escaping chemotherapy drugs that are aimed at cells that divide. Once the drugs have faded, they restart growth. While a migrating cancer cell has a high chance of dying off, even one success can seed a tumor in a new tissue.

On the flip side, the scientists argued that cancer therapy can also be improved by embracing Parrondo’s paradox. In conventional chemotherapy, doctors repeatedly administer strong drugs, creating a strategy that often backfires: the therapy kills off the weak, leaving the strong behind — but in this case the strong are the very cells you least want to survive. By contrast, adaptive approaches that alternate periods of treatment with rest or that mix real drugs with harmless lookalikes could harness evolutionary trade-offs inside the tumor and keep it in check. Just as cancer may use Parrondo’s paradox to outwit the body, doctors may one day use the same paradox to outwit cancer.

On August 6, physicists from Lanzhou University in China published a paper in Physical Review E discussing just such a possibility. They focused on chemotherapy, which is usually delivered in one of two main ways. The first, called the maximum tolerated dose (MTD), uses strong doses given at intervals. The second, called low-dose metronomic (LDM), uses weaker doses applied continuously over time. Each method has been widely tested in clinics and each one has drawbacks.

MTD often succeeds at first by rapidly killing off drug-sensitive cancer cells. In the process, however, it also paves the way for the most resistant cancer cells to expand, leading to relapse. LDM on the other hand keeps steady pressure on a tumor but can end up either failing to control sensitive cells if the dose is too low or clearing them so thoroughly that resistant cells again dominate if the dose is too strong. In other words, both strategies can be losing games in the long run.

The question the study’s authors asked was whether combining these two flawed strategies in a specific sequence could achieve better results than deploying either strategy on its own. This is the sort of situation Parrondo’s paradox describes, even if not exactly. While the paradox is concerned with combining outright losing strategies, the study has discussed combining two ineffective strategies.

To investigate, the researchers used mathematical models that treated tumors as ecosystems containing three interacting populations: healthy cells, drug-sensitive cancer cells, and drug-resistant cancer cells. They applied equations from evolutionary game theory that tracked how the fractions of these groups shifted in different conditions.
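
For a sense of what such models look like, here is a deliberately simple sketch and not the Lanzhou group's actual equations: three competing populations share a carrying capacity, and the dosing schedule decides how hard the drug-sensitive cells are hit. All growth rates, kill rates, and schedule lengths are invented for illustration.

```python
# A toy three-population competition model in the spirit of the study (not the
# authors' equations): healthy (H), drug-sensitive (S), and resistant (R) cells
# share a carrying capacity, and the drug kills only the sensitive cells.
# Every number below is an assumption made for illustration.

def step(pops, dose, dt=0.01):
    H, S, R = pops
    crowd = 1.0 - (H + S + R)            # shared carrying capacity, normalised to 1
    dH = 0.50 * H * crowd                # healthy cells: unaffected by the drug
    dS = 0.60 * S * crowd - dose * S     # sensitive cells: fast-growing but drug-killed
    dR = 0.40 * R * crowd                # resistant cells: slower-growing, drug-proof
    return (H + dt * dH, S + dt * dS, R + dt * dR)

def run(schedule, days=300, dt=0.01):
    """schedule(t) returns the drug dose at time t (in days)."""
    pops = (0.5, 0.3, 0.01)              # initial mix: mostly healthy, few resistant cells
    t = 0.0
    while t < days:
        pops = step(pops, schedule(t), dt)
        t += dt
    return pops

mtd = lambda t: 2.0 if (t % 28) < 7 else 0.0                 # strong pulses, then rest
ldm = lambda t: 0.4                                          # weak but continuous
alternating = lambda t: ldm(t) if (t % 56) < 28 else mtd(t)  # an LDM block, then an MTD block

for name, sched in [("MTD only", mtd), ("LDM only", ldm), ("Alternating", alternating)]:
    H, S, R = run(sched)
    print(f"{name:12s} healthy={H:.2f} sensitive={S:.2f} resistant={R:.2f}")
```

Comparing the final resistant fraction across the three schedules is the kind of question the study's far more detailed models were built to answer.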

The models showed that in a purely MTD strategy, the resistant cells soon took over, and in a purely LDM strategy, the outcomes depended strongly on drug strength but still ended badly. But when the two schedules were alternated, the tumor behaved differently. The more sensitive cells were suppressed but not eliminated while their persistence prevented the resistant cells from proliferating quickly. The team also found that the healthy cells survived longer.

Of course, tumours are not well-mixed soups of cells; in reality they have spatial structure. To account for this, the team put together computer simulations where individual cells occupied positions on a grid; grew, divided or died according to fixed rules; and interacted with their neighbours. This agent-based approach allowed the team to examine how pockets of sensitive and resistant cells might compete in more realistic tissue settings.

Their simulations only confirmed the previous set of results. A therapeutic strategy that alternated between MTD and LDM schedules extended the amount of time before the resistant cells took over and while the healthy cells dominated. When the model started with the LDM phase in particular, the sensitive cancer cells were found to compete with the resistant cancer cells and the arrival of the MTD phase next applied even more pressure on the latter.

This is an interesting finding because it suggests that the goal of therapy may not always be to eliminate every sensitive cancer cell as quickly as possible but, paradoxically, that sometimes it may be wiser to preserve some sensitive cells so that they can compete directly with resistant cells and prevent them from monopolising the tumor. In clinical terms, alternating between high- and low-dose regimens may delay resistance and keep tumours tractable for longer periods.

Then again this is cancer — the “emperor of all maladies” — and in silico evidence from a physics-based model is only the start. Researchers will have to test it in real, live tissue in animal models (or organoids) and subsequently in human trials. They will also have to assess whether certain cancers, and specific combinations of drugs for those cancers, will benefit more (or less) from taking the Parrondo’s paradox route.

As Physics reported on August 6:

[University of London mathematical oncologist Robert] Noble … says that the method outlined in the new study may not be ripe for a real-world clinical setting. “The alternating strategy fails much faster, and the tumor bounces back, if you slightly change the initial conditions,” adds Noble. Liu and colleagues, however, plan to conduct in vitro experiments to test their mathematical model and to select regimen parameters that would make their strategy more robust in a realistic setting.

Challenging the neutrino signal anomaly

By: VM

A gentle reminder before we begin: you're allowed to be interested in particle physics. 😉

Neutrinos are among the most mysterious particles in physics. They are extremely light, electrically neutral, and interact so weakly with matter that trillions of them pass through your body each second without leaving a trace. They are produced in the Sun, nuclear reactors, the atmosphere, and by cosmic explosions. In fact neutrinos are everywhere — yet they're almost invisible.

Despite their elusiveness, they have already upended physics. In the late 20th century, scientists discovered that neutrinos can oscillate, changing from one type to another as they travel, which is something that the simplest version of the Standard Model of particle physics — the prevailing theory of elementary particles — doesn't predict. Because oscillations require neutrinos to have mass, this discovery revealed new physics. Today, scientists study neutrinos for what they might tell us about the universe’s structure and for possible hints of particles or forces yet unknown.

Challenging the neutrino signal anomaly
When neutrinos travel through space, they are known to oscillate between three types. This visualisation plots the composition of neutrinos (of 4 MeV energy) by type at various distances from a nuclear reactor. Credit: Public domain
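As a rough illustration of what the plot above depicts, the standard two-flavour oscillation formula can be evaluated for a 4 MeV reactor antineutrino. The mixing parameters in the sketch below are approximate textbook values assumed for illustration, not numbers taken from this article.

```python
import numpy as np

# Two-flavour electron-antineutrino survival probability (an illustrative sketch;
# sin^2(2θ) and Δm^2 below are assumed approximate values).
def survival_probability(L_m, E_MeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
    # Standard formula: P = 1 - sin^2(2θ) · sin^2(1.27 · Δm²[eV²] · L / E),
    # with L in metres and E in MeV (equivalently km and GeV).
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

for L in (0, 500, 1000, 2000):  # distance from the reactor in metres
    print(f"L = {L:4d} m -> P(nu_e survives) = {survival_probability(L, 4.0):.3f}")
```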

However, detecting neutrinos is very hard. Because they rarely interact with matter, experiments must build massive detectors filled with dense material in the hope that a small fraction of the neutrinos passing through will collide with atoms inside. One way to detect such collisions uses Cherenkov radiation, a bluish glow emitted when a charged particle moves through a medium like water or mineral oil faster than light does in that medium.

(This is allowed. The only speed limit is that of light in vacuum: 299,792,458 m/s.)
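For a sense of scale, the v > c/n condition translates into a minimum kinetic energy for each particle type. The sketch below assumes a refractive index of about 1.47 for mineral oil; the article doesn't quote the exact value.

```python
import math

# Cherenkov threshold sketch: a charged particle radiates only when its speed
# exceeds c/n in the medium. n = 1.47 is an assumed value for mineral oil.
def cherenkov_threshold_ke(rest_mass_MeV, n=1.47):
    beta_threshold = 1.0 / n                          # minimum v/c for Cherenkov light
    gamma = 1.0 / math.sqrt(1.0 - beta_threshold**2)  # corresponding Lorentz factor
    return (gamma - 1.0) * rest_mass_MeV              # kinetic-energy threshold in MeV

print(f"electron: {cherenkov_threshold_ke(0.511):.2f} MeV")   # roughly 0.19 MeV
print(f"muon:     {cherenkov_threshold_ke(105.66):.1f} MeV")  # roughly 38 MeV
```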

The MiniBooNE experiment at Fermilab used a large mineral-oil Cherenkov detector. When neutrinos from the Booster Neutrino Beamline struck atomic nuclei in the mineral oil, the interaction released charged particles, which sometimes produced rings of Cherenkov radiation (like ripples) that the detector recorded. In MiniBooNE’s data, detection events were classified by the type of light ring produced. An "electron-like" event was one that looked like it had been caused by an electron. But because photons interacting in the oil can also produce nearly identical rings, the detector couldn’t always tell the difference. A "muon-like" event, on the other hand, had the distinctive ring pattern of a muon, a subatomic particle like the electron but about 200 times heavier, which travels in a straighter, longer track. To be clear, these labels described the detector’s view; they didn’t guarantee which particle was actually present.

MiniBooNE began operating in 2002 to test an anomaly that had been reported at the LSND experiment at Los Alamos. LSND had recorded more "electron-like" events than predicted, especially at low energies below about 600 MeV. This came to be called the "low-energy excess" and has become one of the most puzzling results in particle physics. It raised the possibility that neutrinos might be oscillating into a hitherto unknown neutrino type, sometimes called the sterile neutrino — or it might have been a hint of unexpected processes that produced extra photons. Since MiniBooNE couldn't reliably distinguish electrons from photons, the mystery remained unresolved.

To address this, scientists built the MicroBooNE experiment at Fermilab. It uses a very different technology: the liquid argon time-projection chamber (LArTPC). In a LArTPC, charged particles streak through an ultra-pure mass of liquid argon, leaving a trail of ionised atoms in their wake. An applied electric field causes these trails to drift towards fine wires, where they are recorded. At the same time, the argon emits light that provides the timing of the interaction. This allows the detector to reconstruct interactions in three dimensions with millimetre precision. Crucially, it lets physicists see where the particle shower begins, so they can tell whether it started at the interaction point or some distance away. This capability prepared MicroBooNE to revisit the "low-energy excess" anomaly.

MicroBooNE also had broader motivations. With an active mass of about 90 tonnes of liquid argon inside a 170-tonne cryostat, and 8,256 wires in its readout planes, it was the largest LArTPC in the US when it began operating. It served as a testbed for the much larger detectors that scientists are developing for the Deep Underground Neutrino Experiment (DUNE). And it was also designed to measure the rate at which neutrinos interacted with argon atoms, to study nuclear effects in neutrino scattering, and to contribute to searches for rare processes such as proton decay and supernova neutrino bursts.

(When a star goes supernova, it releases waves upon waves of neutrinos before it releases photons. Scientists were able to confirm this when the star Sanduleak -69 202 exploded in 1987.)

Challenging the neutrino signal anomaly
This image, released on February 24, 2017, shows Supernova 1987a (centre) surrounded by dramatic red clouds of gas and dust within the Large Magellanic Cloud. This supernova, first discovered on February 23, 1987, blazed with the power of 100 million Suns. Since that first sighting, SN 1987A has continued to fascinate astronomers with its spectacular light show. Caption and credit: NASA, ESA, R. Kirshner (Harvard-Smithsonian Centre for Astrophysics and Gordon and Betty Moore Foundation), and M. Mutchler and R. Avila (STScI)

Initial MicroBooNE analyses using partial data had already challenged the idea that MiniBooNE’s excess was due to extra electron-like interactions. However, the collaboration didn’t cover the full range of parameters until recently. On August 21, MicroBooNE published results from five years of operations, corresponding to 1.11 × 10²¹ protons on target, about a 70% increase over previous analyses. This complete dataset, together with higher sensitivity and better modelling, has provided the most decisive test so far of the anomaly.

MicroBooNE recorded neutrino interactions from the Booster Neutrino Beamline using its LArTPC, which operated at about 87 K inside a cryostat. Charged particles from neutrino interactions produced ionisation electrons that drifted across the detector and were recorded by the wires. Simultaneous flashes of argon scintillation light, seen by photomultiplier tubes, gave the precise time of each interaction.

In neutrino physics, a category of events grouped by what the detector sees in the final state is called a channel. Researchers call it a signal channel when it matches the kind of event they are specifically looking for, as opposed to background signals from other processes. With MicroBooNE, the team stayed on the lookout for two signal channels: (i) one electron, no visible protons and no pions (abbreviated as 1e0p0π) and (ii) one electron, at least one proton above 40 MeV and no pions (1eNp0π). These categories reflect what MiniBooNE would've seen as electron-like events while exploiting MicroBooNE's ability to identify protons.

One important source of background noise the team had to cut from the data was cosmic rays — high-energy particles from outer space that strike Earth’s atmosphere, creating particle showers that can mimic neutrino signals. In 2017, MicroBooNE added a suite of panels around the detector to tag cosmic rays passing through it. For the full dataset, the panels cut an additional 25.4% of background noise in the 1e0p0π channel while preserving 98.9% of signal events.

Challenging the neutrino signal anomaly
When a cosmic-ray proton collides with a molecule in the upper atmosphere, it produces a shower of particles that includes pions, muons, photons, neutrons, electrons, and positrons. Credit: SyntaxError55 (CC BY-SA)

In the final analysis, the MicroBooNE data showed no evidence of an anomalous excess of electron-like events. When both channels were combined, the observed events matched the expectations of the Standard Model of particle physics well. The agreement was especially strong in the 1e0p0π channel.

In the 1eNp0π channel, MicroBooNE actually detected slightly fewer events than the Model predicted: 102 events v. 134. This shortfall of about 24% is not enough to claim a new effect but enough to draw attention. Rather than confirming MiniBooNE’s excess, however, it suggests there's some tension in the models the scientists use to simulate how neutrinos and argon atoms interact. Argon has a large and complex nucleus, which makes accurate predictions challenging. The scientists have in fact stated in their paper that the deficit may reflect these uncertainties rather than new physics.

The new MicroBooNE results have far-reaching consequences. Foremost, they reshape the sterile-neutrino debate. For two decades, the LSND and MiniBooNE anomalies had been cited together as signs that neutrinos were oscillating into a previously undetected state. By showing that MiniBooNE's 'extra' events were not caused by excess electron neutrinos, MicroBooNE casts doubt on the simplest explanation: sterile neutrinos.

As a result, theoretical models that once seemed straightforward now face strong tension. While more complex scenarios remain possible, the easy explanation is no longer viable.

Challenging the neutrino signal anomaly
The MicroBooNE cryostat inside which the LArTPC is placed. Credit: Fermilab

Second, they demonstrate the maturity of the LArTPC technology. The MicroBooNE team successfully operated a large detector for years, maintaining the argon's purity and low-noise electronics required for high-resolution imaging. Its performance validates the design choices for larger detectors like DUNE, which use similar technology but at kilotonne scales. The experiment also showcases innovations such as cryogenic electronics, sophisticated purification systems, protection against cosmic rays, and calibration with ultraviolet lasers, proving that such systems can deliver reliable data over long periods of operation.

Third, the modest deficit in the 1eNp0π channel points to the importance of better understanding neutrino-argon interactions. Argon's heavy nucleus produces complicated final states where protons and neutrons may scatter or be absorbed, altering the visible event. These nuclear effects can lead to mismatches between simulation and data (possibly including the 24% deficit in the 1eNp0π signal channel). For DUNE, which will also use argon as its target, improving these models is critical. MicroBooNE’s detailed datasets and sideband constraints will continue to inform these refinements.

Fourth, the story highlights the value of complementary detector technologies. MiniBooNE’s Cherenkov detector recorded more events but couldn’t tell electrons from photons; MicroBooNE’s LArTPC recorded fewer events but with much greater clarity. Together, they show how one experiment can identify a puzzle and another can test it with a different method. This multi-technology approach is likely to continue as experiments worldwide cross-check anomalies and precision measurements.

Finally, the MicroBooNE results show how science advances. A puzzling anomaly inspired new theories, new technology, and a new experiment. After five years of data-taking and with the most complete analysis yet, MicroBooNE has said that the MiniBooNE anomaly was not due to electron-neutrino interactions. The anomaly itself remains unexplained, but the field now has a sharper focus. Whether the cause lies in photon production, detector effects or actually new physics, the next generation of experiments can start on firmer footing.

GST 2.0 + WordPress.com

By: VM
GST 2.0 + WordPress.com

Union finance minister Nirmala Sitharaman announced sweeping changes to the GST rates on September 3. However, I think the rate for software services (HSN 99831) will remain unchanged at 18%. This is a bummer because every time I renew my WordPress.com site or purchase software over the internet in rupees, the total cost increases by almost a fifth.

The disappointment is compounded by the fact that WordPress.com and many other software service providers offer adjusted rates for users in India to offset the country's lower purchasing power per capita. For example, the lowest WordPress and Ghost plans by WordPress.com and MagicPages.co, respectively, cost $4 and $12 a month. But for users in India, the WordPress.com plan costs Rs 200 a month while MagicPages.co offers a Rs 450 per month plan, both with the same feature set — a big difference. The 18% GST, however, wipes out some, though not all, of these gains.

Paying for software services over the internet when they're billed in dollars rather than rupees isn't much different. While GST doesn't apply, the rupee-to-dollar rate has become abysmal. [Checks] Rs 88.14 to the dollar at 11 am. Ugh.

I had also hoped for a GST rate cut on software services because if content management software in particular became more affordable, more people would be able to publish on the internet.

What does it mean to interpret quantum physics?

By: VM
What does it mean to interpret quantum physics?

The United Nations has designated 2025 the International Year of Quantum Science and Technology. Many physics magazines and journals have taken the opportunity to publish more articles on quantum physics than they usually do, and that has meant quantum physics research has often been on my mind. Nirmalya Kajuri, an occasional collaborator, an assistant professor at IIT Mandi, and an excellent science communicator, recently asked other physics teachers on X.com how much time they spend teaching the interpretations of quantum physics. His question and the articles I’ve been reading inspired me to write the following post. I hope it’s useful in particular to people like me, who are interested in physics but didn’t formally train to study it.


Quantum physics is often described as the most successful theory in science. It explains how atoms bond, how light interacts with matter, how semiconductors and lasers work, and even how the sun produces energy. With its equations, scientists can predict experimental results with astonishing precision — up to 10 decimal places in the case of the electron’s magnetic moment.

In spite of this extraordinary success, quantum physics is unusual compared to other scientific theories because it doesn’t tell us a single, clear story about what reality is like. The mathematics yields predictions that have never been contradicted within their tested domain, yet it leaves open the question of what the world is actually doing behind those numbers. This is what physicists mean when they speak of the ‘interpretations’ of quantum mechanics.

In classical physics, the situation is more straightforward. Newton’s laws describe how forces act on bodies, leading them to move along definite paths. Maxwell’s theory of electromagnetism describes electric and magnetic fields filling space and interacting with charges. Einstein’s relativity shows space and time are flexible and curve under the influence of matter and energy. These theories predict outcomes and provide a coherent picture of the world: objects have locations, fields have values, and spacetime has shape. In quantum mechanics, the mathematics works perfectly — but the corresponding picture of reality is still unclear.

The central concept in quantum theory is the wavefunction. This is a mathematical object that contains all the information about a system, such as an electron moving through space. The wavefunction evolves smoothly in time according to the Schrödinger equation. If you know the wavefunction at one moment, you can calculate it at any later moment using the equation. But when a measurement is made, the rules of the theory change. Instead of continuing smoothly, the wavefunction is used to calculate probabilities for different possible outcomes, and then one of those outcomes occurs.

For instance, if an electron has a 50% chance of being detected on the left and a 50% chance of being detected on the right, the experiment will yield either left or right, never both at once. The mathematics says that before the measurement, the electron exists in a superposition of left and right, but after the measurement only one is found. This peculiar structure, where the wavefunction evolves deterministically between measurements but then seems to collapse into a definite outcome when observed, has no counterpart in classical physics.
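The 50-50 example can be written down directly. The snippet below is a generic illustration of the Born rule, not a simulation of any particular experiment.

```python
import numpy as np

# Born-rule sketch: an equal superposition of |left> and |right>, measured many times.
rng = np.random.default_rng(0)

psi = np.array([1.0, 1.0]) / np.sqrt(2)    # amplitudes for |left> and |right>
probs = np.abs(psi) ** 2                   # Born rule: probability = |amplitude|^2

outcomes = rng.choice(["left", "right"], size=10_000, p=probs)
counts = {o: int((outcomes == o).sum()) for o in ("left", "right")}
print(counts)  # roughly 5,000 each; every single run gives one outcome, never both
```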

The puzzles arise because it’s not clear what the wavefunction really represents. Is it a real physical wave that somehow ‘collapses’? Is it merely a tool for calculating probabilities, with no independent existence? Is it information in the mind of an observer rather than a feature of the external world? The mathematics doesn’t say.

The measurement problem asks why the wavefunction collapses at all and what exactly counts as a measurement. Superposition raises the question of whether a system can truly be in several states at once or whether the mathematics is only a convenient shorthand. Entanglement, where two particles remain linked in ways that seem to defy distance, forces us to wonder whether reality itself is nonlocal in some deep sense. Each of these problems points to the fact that while the predictive rules of quantum theory are clear, their meaning is not.

Over the past century, physicists and philosophers have proposed many interpretations of quantum mechanics. The most traditional is often called the Copenhagen interpretation, illustrated by the Schrödinger’s cat thought experiment. In this view, the wavefunction is not real but only a computational tool. In many Copenhagen-style readings, the wavefunction is a device for organising expectations while measurement is taken as a primitive, irreducible step. The many-worlds interpretation offers a different view that denies the wavefunction ever collapses. Instead, all possible outcomes occur, each in its own branch of reality. When you measure the electron, there is one version of you that sees it on the left and another version that sees it on the right.

In Bohmian mechanics, particles always have definite positions guided by a pilot wave that’s represented by the wavefunction. In this view, the randomness of measurement outcomes arises because we can’t know the precise initial positions of the particles. There are also objective collapse theories that take the wavefunction as real but argue that it undergoes genuine, physical collapse triggered randomly or by specific conditions. Finally, an informational approach called QBism says the wavefunction isn’t about the world at all but about an observer’s expectations for experiences upon acting on the world.

Most interpretations reproduce the same experimental predictions (objective-collapse models predict small, testable deviations) but tell different stories about what the world is really like.

It’s natural to ask why interpretations are needed at all if they don’t change the predictions. Indeed, many physicists work happily without worrying about them. To build a transistor, calculate the energy of a molecule or design a quantum computer, the rules of standard quantum mechanics suffice. Yet interpretations matter for several reasons, especially because they shape our philosophical understanding of what kind of universe we live in.

They also influence scientific creativity because some interpretations suggest directions for new experiments. For example, objective collapse theories predict small deviations from the usual quantum rules that can, at least in principle, be tested. Interpretations also matter in education. Students taught only the Copenhagen interpretation may come away thinking quantum physics is inherently mysterious and that reality only crystallises when it’s observed. Students introduced to many-worlds alone may instead think of the universe as an endlessly branching tree. The choice of interpretation moulds the intuition of future physicists. At the frontiers of physics, in efforts to unify quantum theory with gravity or to describe the universe as a whole, questions about what the wavefunction really is become unavoidable.

In research fields that apply quantum mechanics to practical problems, many physicists don’t think about interpretation at all. A condensed-matter physicist studying superconductors uses the standard formalism without worrying about whether electrons are splitting into multiple worlds. But at the edges of theory, interpretation plays a major role. In quantum cosmology, where there are no external observers to perform measurements, one needs to decide what the wavefunction of the universe means. How we interpret entanglement, i.e. as a real physical relation versus as a representational device, colours how technologists imagine the future of quantum computing. In quantum gravity, the question of whether spacetime itself can exist in superposition renders interpretation crucial.

Interpretations also matter in teaching. Instructors make choices, sometimes unconsciously, about how to present the theory. One professor may stick to the Copenhagen view and tell students that measurement collapses the wavefunction and that that’s the end of the story. Another may prefer many-worlds and suggest that collapse never occurs, only branching universes. A third may highlight information-based views, stressing that quantum mechanics is really about knowledge and prediction rather than about what exists independently. These different approaches shape the way students understand quantum mechanics, both as a tool and as a worldview. For some, quantum physics will always appear mysterious and paradoxical. For others, it will seem strange but logical once its hidden assumptions are made clear.

Interpretations also play a role in experiment design. Objective collapse theories, for example, predict that superpositions of large objects should spontaneously collapse. Experimental physicists are now testing whether quantum superpositions survive for increasingly massive molecules or for small mechanical devices, precisely to check whether collapse really happens. Interpretations have also motivated tests of Bell’s inequalities, which show that no local theory with “hidden variables” can reproduce the correlations predicted by quantum mechanics. The scientists who conducted these experiments confirmed entanglement is a genuine feature of the world, not a residue of the mathematical tools we use to study it — and won the Nobel Prize for physics in 2022. Today, entanglement is exploited in technologies such as quantum cryptography. Without the interpretative debates that forced physicists to take these puzzles seriously, such developments may never have been pursued.

The fact that some physicists care deeply about interpretation while others don’t reflects different goals. Those who work on applied problems or who need to build devices don’t have to care much. The maths provides the answers they need. Those who are concerned with the foundations of physics, with the philosophy of science or with the unification of physical theories care very much, because interpretation guides their thinking about what’s possible and what’s not. Many physicists switch back and forth, ignoring interpretation when calculating in the lab but discussing many-worlds or informational views over chai.

Quantum mechanics is unique among physical theories in this way. Few chemists or engineers spend time worrying about the ‘interpretation’ of Newtonian mechanics or thermodynamics because these theories present straightforward pictures of the world. Quantum mechanics instead gives flawless predictions but an under-determined picture. The search for interpretation is the search for a coherent story that links the extraordinary success of the mathematics to a clear vision of what the world is like.

To interpret quantum physics is therefore to move beyond the bare equations and ask what they mean. Unlike classical theories, quantum mechanics doesn’t supply a single picture of reality along with its predictions. It leaves us with probabilities, superpositions, and entanglement, and it remains ambiguous about what these things really are. Some physicists insist interpretation is unnecessary; to others it’s essential. Some interpretations depict reality as a branching multiverse, others as a set of hidden particles, yet others as information alone. None has won final acceptance, but all try to close the gap between predictive success and conceptual clarity.

In daily practice, many physicists calculate without worrying, but in teaching, in probing the limits of the theory, and in searching for new physics, interpretations matter. They shape not only what we understand about the quantum world but also how we imagine the universe we live in.

A transistor for heat

By: VM
A transistor for heat

Quantum technologies and the prospect of advanced, next-generation electronic devices have been maturing at an increasingly rapid pace. Both research groups and governments around the world are investing more attention in this domain.

India for example mooted its National Quantum Mission in 2023 with a decade-long outlay of Rs 6,000 crore. One of the Mission’s goals, in the words of IISER Pune physics professor Umakant Rapol, is “to engineer and utilise the delicate quantum features of photons and subatomic particles to build advanced sensors” for applications in “healthcare, security, and environmental monitoring”.

On the science front, as these technologies become better understood, scientists have been paying increasing attention to managing and controlling heat in them. These technologies often rely on quantum physical phenomena that appear only at extremely low temperatures and are so fragile that even a small amount of stray heat can destabilise them. In these settings, scientists have found that traditional methods of handling heat — mainly by controlling the vibrations of atoms in the devices’ materials — become ineffective.

Instead, scientists have identified a promising alternative: energy transfer through photons, the particles of light. And in this paradigm, instead of simply moving heat from one place to another, scientists have been trying to control and amplify it, much like how transistors and amplifiers handle electrical signals in everyday electronics.

Playing with fire

Central to this effort is the concept of a thermal transistor. This device resembles an electrical transistor but works with heat instead of electrical current. Electrical transistors amplify or switch currents, allowing the complex logic and computation required to power modern computers. Creating similar thermal devices would represent a major advance, especially for technologies that require very precise temperature control. This is particularly true in the sub-kelvin temperature range where many quantum processors and sensors operate.

A transistor for heat
This circuit diagram depicts an NPN bipolar transistor. When a small voltage is applied between the base and emitter, electrons are injected from the emitter into the base, most of which then sweep across into the collector. The end result is a large current flowing through the collector, controlled by the much smaller current flowing through the base. Credit: Michael9422 (CC BY-SA)

Energy transport at such cryogenic temperatures differs significantly from normal conditions. Below roughly 1 kelvin, atomic vibrations no longer carry most of the heat. Instead, electromagnetic fluctuations — ripples of energy carried by photons — dominate the conduction of heat. Scientists channel these photons through specially designed, lossless wires made of superconducting materials. They keep these wires below their superconducting critical temperatures, allowing only photons to transfer energy between the reservoirs. This arrangement enables careful and precise control of heat flow.

One crucial phenomenon that allows scientists to manipulate heat in this way is negative differential thermal conductance (NDTC). NDTC defies common intuition. Normally, decreasing the temperature difference between two bodies reduces the amount of heat they exchange. This is why a glass of water at 50° C in a room at 25° C will cool faster than a glass of water at 30° C. In NDTC, however, reducing the temperature difference between two connected reservoirs can actually increase the heat flow between them.

NDTC arises from a detailed relationship between temperature and the properties of the material that makes up the reservoirs. When physicists harness NDTC, they can amplify heat signals in a manner similar to how negative electrical resistance powers electrical amplifiers.
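A toy calculation can make NDTC feel less counterintuitive. In the sketch below, the photonic heat flow between two reservoirs is taken to scale with the difference of their squared temperatures, weighted by an impedance-matching factor, and each reservoir's resistance follows a VRH-like exponential law. Every functional form and number here is an illustrative assumption, not the model in the paper.

```python
import numpy as np

# Toy NDTC sketch (assumed forms, not the paper's equations): heat flow
# ~ matching(T_hot, T_cold) * (T_hot^2 - T_cold^2), with a VRH-like
# resistance R(T) = exp((T0 / T)^(1/4)) and matching = 4*R1*R2 / (R1 + R2)^2.
def resistance(T, T0=50.0):
    return np.exp((T0 / T) ** 0.25)

def photon_heat_flow(T_hot, T_cold):
    R1, R2 = resistance(T_hot), resistance(T_cold)
    matching = 4 * R1 * R2 / (R1 + R2) ** 2   # peaks at 1 when the resistances are equal
    return matching * (T_hot**2 - T_cold**2)

T_hot = 0.30  # kelvin
for T_cold in (0.05, 0.10, 0.20, 0.25, 0.29):
    print(f"T_cold = {T_cold:.2f} K -> heat flow ~ {photon_heat_flow(T_hot, T_cold):.4f} (arb. units)")
```

In this toy, raising the cold side from 0.05 K to 0.10 K increases the heat flow even though the temperature difference has shrunk, because the improved impedance matching more than compensates; that non-monotonic stretch is the NDTC signature.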

A ‘circuit’ for heat

In a new study, researchers from Italy have designed and theoretically modelled a new kind of ‘thermal transistor’ that they have said can actively control and amplify how heat flows at extremely low temperatures for quantum technology applications. Their findings were published recently in the journal Physical Review Applied.

To explore NDTC experimentally, the researchers studied reservoirs made of a disordered semiconductor material that exhibited a transport mechanism called variable range hopping (VRH). An example is neutron-transmutation-doped germanium. In VRH materials, the electrical resistance at low temperatures depends very strongly, sometimes exponentially, on temperature.

This attribute makes it possible to tune their impedance, a property that controls the material’s resistance to energy flow, simply by adjusting temperature. That is, how well two reservoirs made of VRH materials exchange heat can be controlled by tuning the impedance of the materials, which in turn can be controlled by tuning their temperature.

In the new study, the researchers reported that impedance matching played a key role. When the reservoirs’ impedances matched perfectly (when their temperatures became equal), the efficiency with which they transferred photonic heat reached a peak. As the materials’ temperatures diverged, heat flow dropped. In fact, the researchers wrote that there was a temperature range, especially as the colder reservoir’s temperature rose to approach that of the warmer one, within which the heat flow increased even as the temperature difference shrank. This effect forms the core of NDTC.

The research team, associated with the NEST initiative at the Istituto Nanoscienze-CNR and Scuola Normale Superiore, both in Pisa in Italy, have proposed a device they call the photonic heat amplifier. They built it using two VRH reservoirs connected by superconducting, lossless wires. One reservoir was kept at a higher temperature and served as the source of heat energy. The other reservoir, called the central island, received heat by exchanging photons with the warmer reservoir.

A transistor for heat
The proposed device features a central island at temperature T1 that transfers heat currents to various terminals. The tunnel contacts to the drain and gate are positioned at heavily doped regions of the yellow central island, highlighted by a grey etched pattern. Each arrow indicates the positive direction of the heat flux. The substrate is maintained at temperature Tb, the gate at Tg, and the drain at Td. Credit: arXiv:2502.04250v3

The central island was also connected to two additional metallic reservoirs named the “gate” and the “drain”. These points operated with the same purpose as the control and output terminals in an electrical transistor. The drain stayed cold, allowing the amplified heat signal to exit the system from this point. By adjusting the gate temperature, the team could modulate and even amplify the flow of heat between the source and the drain (see image below).

To understand and predict the amplifier’s behaviour, the researchers developed mathematical models for all forms of heat transfer within the device. These included photonic currents between VRH reservoirs, electron tunnelling through the gate and drain contacts, and energy lost as vibrations through the device's substrate.

(Tunnelling is a quantum mechanical phenomenon in which an electron has a small chance of passing through a thin barrier instead of going around it.)

Raring to go

By carefully selecting the device parameters — including the characteristic temperature of the VRH material, the source temperature, resistances at the gate and drain contacts, the volume of the central island, and geometric factors — the researchers said they could tailor the device for different amplification purposes.

They reported two main operating modes. The first was called the ‘current modulation amplifier’. In this configuration, the device amplified small variations in thermal input at the gate: small oscillations in the gate heat current produced much larger oscillations, up to 15 times greater, in the photon current between the source and the central island and in the drain current, according to the paper. This amplification was efficient down to 20 millikelvin, matching the ultracold conditions required in quantum technologies. The output range of heat current was similarly broad, showing the device’s suitability for amplifying heat signals.

The second mode was called the ‘temperature modulation amplifier’. Here, slight changes of only a few millikelvin in the gate temperature, the team wrote, caused the output temperature of the central island to swing by as much as 3.3 times the change in the input. The device could also handle input temperature ranges over 100 millikelvin. This performance reportedly matched or surpassed other temperature amplifiers already reported in the scientific literature. The researchers also noted that this mode could be used to pre-amplify signals in bolometric detectors used in astronomy telescopes.

An important figure of merit for practical use is the relaxation time, i.e. how quickly the device returns to its original state after one operation, ready for the next. The amplifier in both configurations showed relaxation times between microseconds and milliseconds. According to the researchers, this speed resulted from the device’s low thermal mass and efficient heat channels. Such a fast response could make it suitable for detecting and amplifying thermal signals in real time.

The researchers wrote that the amplifier also maintained good linearity and low distortion across various inputs. In other words, the output heat signal changed proportionally to the input heat signal and the device didn’t add unwanted changes, noise or artifacts to the input signal. Its noise-equivalent power values were also found to rival the best available solid-state thermometers, indicating low noise levels.

Approaching the limits

For all these promising results, realising the device involves some significant practical challenges. For instance, NDTC depends heavily on precise impedance matching. Real materials inevitably have imperfections, including those due to imperfect fabrication and environmental fluctuations. Such deviations could lower the device’s heat transfer efficiency and reduce the operational range of NDTC.

The system also banked on lossless superconducting wires being kept well below their critical temperatures. Achieving and maintaining these ultralow temperatures requires sophisticated and expensive refrigeration infrastructure, which adds to the experimental complexity.

Fabrication also demands very precise doping and finely tuned resistances for the gate and drain terminals. Scaling production to create many devices or arrays poses major technical difficulties. Integrating numerous photonic heat amplifiers into larger thermal circuits risks unwanted thermal crosstalk and signal degradation, a risk compounded by the extremely small heat currents involved.

Furthermore, the fully photonic design offers benefits such as electrical isolation and long-distance thermal connections. However, it also approaches fundamental physical limits. Thermal conductance caps the maximum possible heat flow through photonic channels. This limitation could restrict how much power the device is able to handle in some applications.

Then again, many of these challenges are typical of cutting-edge research in quantum devices, and highlight the need for detailed experimental work to realise and integrate photonic heat amplifiers into operational quantum systems.

If they are successfully realised for practical applications, photonic heat amplifiers could transform how scientists manage heat in quantum computing and nanotechnologies that operate near absolute zero. They could pave the way for on-chip heat control, for computers that autonomously stabilise their own temperature, and for thermal logic operations. Redirecting or harvesting waste heat could also improve efficiency and significantly reduce noise — a critical barrier in ultra-sensitive quantum devices like quantum computers.

Featured image credit: Lucas K./Unsplash.

The Hyperion dispute and chaos in space

By: VM
The Hyperion dispute and chaos in space

I believe my blog’s subscribers did not receive email notifications of some recent posts. If you’re interested, I’ve listed the links to the last eight posts at the bottom of this edition.

When reading around for my piece yesterday on the wavefunctions of quantum mechanics, I stumbled across an old and fascinating debate about Saturn’s moon Hyperion.

The question of how the smooth, classical world around us emerges from the rules of quantum mechanics has haunted physicists for a century. Most of the time the divide seems easy: quantum laws govern atoms and electrons while planets, chairs, and cats are governed by the laws of Newton and Einstein. Yet there are cases where this distinction is not so easy to draw. One of the most surprising examples comes not from a laboratory experiment but from the cosmos.

In the 1990s, Hyperion became the focus of a deep debate about the nature of classicality, one that quickly snowballed into the so-called Hyperion dispute. It showed how different interpretations of quantum theory could lead to apparently contradictory claims, and how those claims can be settled by making their underlying assumptions clear.

Hyperion is not one of Saturn’s best-known moons but it is among the most unusual. Unlike round bodies such as Titan or Enceladus, Hyperion has an irregular shape, resembling a potato more than a sphere. Its surface is pocked by craters and its interior appears porous, almost like a sponge. But the feature that caught physicists' attention was its rotation. Hyperion does not spin in a steady, predictable way. Instead, it tumbles chaotically. Its orientation changes in an irregular fashion as it orbits Saturn, influenced by the gravitational pulls of Saturn and Titan, which is a moon larger than Mercury.

In physics, chaos does not mean complete disorder. It means a system is sensitive to its initial conditions. For instance, imagine two weather models that start with almost the same initial data: one says the temperature in your locality at 9:00 am is 20.000° C, the other says it's 20.001° C. That seems like a meaningless difference. But because the atmosphere is chaotic, this difference can grow rapidly. After a few days, the two models may predict very different outcomes: one may show a sunny afternoon and the other, thunderstorms.

This sensitivity to initial conditions is often called the butterfly effect — it's the idea that the flap of a butterfly’s wings in Brazil might, through a chain of amplifications, eventually influence the formation of a tornado in Canada.

Hyperion behaves in a similar way. A minuscule difference in its initial spin angle or speed grows exponentially with time, making its future orientation unpredictable beyond a few months. In classical mechanics this is chaos; in quantum mechanics, those tiny initial uncertainties are built in by the uncertainty principle, and chaos amplifies them dramatically. As a result, predicting its orientation more than a few months ahead is impossible, even with precise initial data.
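The butterfly effect is easy to see in a toy chaotic system. The sketch below uses the logistic map purely to illustrate how a difference of one part in a hundred thousand blows up after a few dozen steps; it is not a model of Hyperion's spin.

```python
# Sensitivity to initial conditions in the logistic map (illustration only).
def logistic_trajectory(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000)   # two starting points differing by 0.00001
b = logistic_trajectory(0.20001)

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.6f}")  # grows until it saturates
```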

To astronomers, this was a striking case of classical chaos. But to a quantum theorist, it raised a deeper question: how does quantum mechanics describe such a macroscopic, chaotic system?

Why Hyperion interested quantum physicists is rooted in that core feature of quantum theory: the wavefunction. A quantum particle is described by a wavefunction, which encodes the probabilities of finding it in different places or states. A key property of wavefunctions is that they spread over time. A sharply localised particle will gradually smear out, with a nonzero probability of it being found over an expanding region of space.

For microscopic particles such as electrons, this spreading occurs very rapidly. For macroscopic objects, like a chair, an orange or you, the spread is usually negligible. The large mass of everyday objects makes the quantum uncertainty in their motion astronomically small. This is why you don’t have to be worried about your chai mug being in two places at once.
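The standard free-particle result for a Gaussian wave packet makes the contrast concrete: the width grows as σ(t) = σ₀√(1 + (ħt/(2mσ₀²))²). The initial widths below are assumed round numbers for illustration.

```python
import math

HBAR = 1.054571817e-34  # J·s

# Free Gaussian wave-packet spreading: sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2).
def spread(mass_kg, sigma0_m, t_s):
    return sigma0_m * math.sqrt(1 + (HBAR * t_s / (2 * mass_kg * sigma0_m**2)) ** 2)

print(f"electron (1 nm wide) after 1 second:      {spread(9.11e-31, 1e-9, 1.0):.2e} m")  # tens of kilometres
print(f"microgram grain (1 um wide) after 1 year: {spread(1e-9, 1e-6, 3.15e7):.6e} m")   # essentially unchanged
```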

Hyperion is a macroscopic moon, so you might think it falls clearly on the classical side. But this is where chaos changes the picture. In a chaotic system, small uncertainties get amplified exponentially fast. A variable called the Lyapunov exponent measures this sensitivity. If Hyperion begins with a minuscule uncertainty in its orientation, chaos will magnify that uncertainty at an exponential rate. In quantum terms, this means the wavefunction describing Hyperion’s orientation will not spread slowly, as for most macroscopic bodies, but at full tilt.

In 1998, the Polish-American theoretical physicist Wojciech Zurek calculated that within about 20 years, the quantum state of Hyperion should evolve into a superposition of macroscopically distinct orientations. In other words, if you took quantum mechanics seriously, Hyperion would be “pointing this way and that way at once”, just like Schrödinger’s famous cat that is alive and dead at once.
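The shape of this estimate can be sketched with a back-of-envelope calculation. The Lyapunov time and the initial angular uncertainty below are assumed order-of-magnitude inputs, chosen only to show how an exponential gets from quantum scales to macroscopic ones in decades rather than aeons; they are not Zurek's actual figures.

```python
import math

# Back-of-envelope amplification argument (all inputs are assumed orders of magnitude):
# a tiny initial angular uncertainty grows as exp(t / t_lyapunov) until it spans
# macroscopically distinct orientations (~1 radian).
t_lyapunov_days = 100.0          # assumed chaotic e-folding time for Hyperion's tumbling
initial_uncertainty_rad = 1e-32  # assumed quantum-scale initial angular uncertainty

t_days = t_lyapunov_days * math.log(1.0 / initial_uncertainty_rad)
print(f"time to reach ~1 radian of spread: ~{t_days / 365:.0f} years")  # comes out at a couple of decades
```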

This startling conclusion raised the question: why do we not observe such superpositions in the real Solar System?

Zurek's answer to this question was decoherence. Say you're blowing a soap bubble in a dark room. If no light touches it, the bubble is just there, invisible to you. Now shine a torchlight on it. Photons from the torch will scatter off the bubble and enter your eyes, letting you see its position and colour. But here's the catch: every photon that bounces off the bubble also carries away a little bit of information about it. In quantum terms, the bubble’s wavefunction becomes entangled with all those photons.

If the bubble were treated purely quantum mechanically, you could imagine a strange state where it was simultaneously in many places in the room — a giant superposition. But once trillions of photons have scattered off it, each carrying “which path?” information, the superposition is effectively destroyed. What remains is an apparent mixture of "bubble here" or "bubble there", and to any observer the bubble looks like a localised classical object. This is decoherence in action: the environment (the sea of photons here) acts like a constant measuring device, preventing large objects from showing quantum weirdness.

For Hyperion, decoherence would be rapid. Interactions with sunlight, Saturn’s magnetospheric particles, and cosmic dust would constantly ‘measure’ Hyperion’s orientation. Any coherent superposition of orientations would be suppressed almost instantly, long before it could ever be observed. Thus, although pure quantum theory predicts Hyperion’s wavefunction would spread into cat-like superpositions, decoherence explains why we only ever see Hyperion in a definite orientation.

Thus Zurek argued that decoherence is essential to understand how the classical world emerges from its quantum substrate. To him, Hyperion provided an astronomical example of how chaotic dynamics could, in principle, generate macroscopic superpositions, and how decoherence ensures these superpositions remain invisible to us.

Not everyone agreed with Zurek’s conclusion, however. In 2005, physicists Nathan Wiebe and Leslie Ballentine revisited the problem. They wanted to know: if we treat Hyperion using the rules of quantum mechanics, do we really need the idea of decoherence to explain why it looks classical? Or would Hyperion look classical even without bringing the environment into the picture?

To answer this, they did something quite concrete. Instead of trying to describe every possible property of Hyperion, they focused on one specific and measurable feature: the part of its spin that pointed along a fixed axis, perpendicular to Hyperion’s orbit. This quantity — essentially the up-and-down component of Hyperion’s tumbling spin — was a natural choice because it can be defined both in classical mechanics and in quantum mechanics. By looking at the same feature in both worlds, they could make a direct comparison.

Wiebe and Ballentine then built a detailed model of Hyperion’s chaotic motion and ran numerical simulations. They asked: if we look at this component of Hyperion’s spin, how does the distribution of outcomes predicted by classical physics compare with the distribution predicted by quantum mechanics?

The result was striking. The two sets of predictions matched extremely well. Even though Hyperion’s quantum state was spreading in complicated ways, the actual probabilities for this chosen feature of its spin lined up with the classical expectations. In other words, for this observable, Hyperion looked just as classical in the quantum description as it did in the classical one.

From this, Wiebe and Ballentine drew a bold conclusion: that Hyperion doesn't require decoherence to appear classical. The agreement between quantum and classical predictions was already enough. They went further and suggested that this might be true more broadly: perhaps decoherence is not essential to explain why macroscopic bodies, the large objects we see around us, behave classically.

This conclusion went directly against the prevailing view of quantum physics as a whole. By the early 2000s, many physicists believed that decoherence was the central mechanism that bridged the quantum and classical worlds. Zurek and others had spent years showing how environmental interactions suppress the quantum superpositions that would otherwise appear in macroscopic systems. To suggest that decoherence was not essential was to challenge the very foundation of that programme.

The debate quickly gained attention. On one side stood Wiebe and Ballentine, arguing that simple agreement between quantum and classical predictions for certain observables was enough to resolve the issue. On the other stood Zurek and the decoherence community, insisting that the real puzzle was more fundamental: why we never observe interference between large-scale quantum states.

At this time, the Hyperion dispute wasn't just about a chaotic moon. It was about how we could define ‘classical behavior’ in the first place. For Wiebe and Ballentine, classical meant “quantum predictions match classical ones”. For Zurek et al., classical meant “no detectable superpositions of macroscopically distinct states”. The difference in definitions made the two sides seem to clash.

But then, in 2008, physicist Maximilian Schlosshauer carefully analysed the issue and showed that the two sides were not actually talking about the same problem. The apparent clash arose because Zurek and Wiebe-Ballentine had started from essentially different assumptions.

Specifically, Wiebe and Ballentine had adopted the ensemble interpretation of quantum mechanics. In everyday terms, the ensemble interpretation says, “Don't take the quantum wavefunction too literally.” That is, it does not describe the “real state” of a single object. Instead, it's a tool to calculate the probabilities of what we will see if we repeat an experiment many times on many identical systems. It’s like rolling dice. If I say the probability of rolling a 6 is 1/6, that probability does not describe the dice themselves as being in a strange mixture of outcomes. It simply summarises what will happen if I roll a large collection of dice.

Applied to quantum mechanics, the ensemble interpretation works the same way. If an electron is described by a wavefunction that seems to say it is “spread out” over many positions, the ensemble interpretation insists this does not mean the electron is literally smeared across space. Rather, the wavefunction encodes the probabilities for where the electron would be found if we prepared many electrons in the same way and measured them. The apparent superposition is not a weird physical reality, just a statistical recipe.

Wiebe and Ballentine carried this outlook over to Hyperion. When Zurek described Hyperion’s chaotic motion as evolving into a superposition of many distinct orientations, he meant this as a literal statement: without decoherence, the moon’s quantum state really would be in a giant blend of "pointing this way” and “pointing that way”. From his perspective, there was a crisis because no one ever observes moons or chai mugs in such states. Decoherence, he argued, was the missing mechanism that explained why these superpositions never show up.

But under the ensemble interpretation, the situation looks entirely different. For Wiebe and Ballentine, Hyperion’s wavefunction was never a literal “moon in superposition”. It was always just a probability tool, telling us the likelihood of finding Hyperion with one orientation or another if we made a measurement. Their job, then, was simply to check: do these quantum probabilities match the probabilities that classical physics would give us? If they do, then Hyperion behaves classically by definition. There is no puzzle to be solved and no role for decoherence to play.

This explains why Wiebe and Ballentine concentrated on comparing the probability distributions for a single observable, namely the component of Hyperion’s spin along a chosen axis. If the quantum and classical results lined up — as their calculations showed — then from the ensemble point of view Hyperion’s classicality was secured. The apparent superpositions that worried Zurek were never taken as physically real in the first place.

Zurek, on the other hand, was addressing the measurement problem. In standard quantum mechanics, superpositions are physically real. Without decoherence, there is always some observable that could reveal the coherence between different macroscopic orientations. The puzzle is why we never see such observables registering superpositions. Decoherence provided the answer: the environment prevents us from ever detecting those delicate quantum correlations.

In other words, Zurek and Wiebe-Ballentine were tackling different notions of classicality. For Wiebe and Ballentine, classicality meant the match between quantum and classical statistical distributions for certain observables. For Zurek, classicality meant the suppression of interference between macroscopically distinct states.

Once Schlosshauer spotted this difference, the apparent dispute went away. His resolution showed that the clash was less over data than over perspectives. If you adopt the ensemble interpretation, then decoherence indeed seems unnecessary, because you never take the superposition as a real physical state in the first place. If you are interested in solving the measurement problem, then decoherence is crucial, because it explains why macroscopic superpositions never manifest.

The overarching takeaway is that, from the quantum point of view, there is no single definition of what constitutes “classical behaviour”. The Hyperion dispute forced physicists to articulate what they meant by classicality and to recognise the assumptions embedded in different interpretations. Depending on your personal stance, you may emphasise the agreement of statistical distributions or you may emphasise the absence of observable superpositions. Both approaches can be internally consistent — but they also answer different questions.

For school students who are reading this story, the Hyperion dispute may seem obscure. Why should we care about whether a distant moon’s tumbling motion demands decoherence or not? The reason is that the moon provides a vivid example of a deep issue: how do we reconcile the strange predictions of quantum theory with the ordinary world we see?

In the laboratory, decoherence is an everyday reality. Quantum computers, for example, must be carefully shielded from their environments to prevent decoherence from destroying fragile quantum information. In cosmology, decoherence plays a role in explaining how quantum fluctuations in the early universe influenced the structure of galaxies. Hyperion showed that even an astronomical body can, in principle, highlight the same foundational issues.


Recent posts:

1. The guiding light of KD45

2. What on earth is a wavefunction?

3. The PixxelSpace constellation conundrum

4. The Zomato ad and India’s hustle since 1947

5. A new kind of quantum engine with ultracold atoms

6. Trade rift today, cryogenic tech yesterday

7. What keeps the red queen running?

8. A limit of ‘show, don’t tell’

Towards KD45

By: VM
Towards KD45

On the subject of belief, I’m instinctively drawn to logical systems that demand consistency, closure, and introspection. And the KD45 system among them exerts a special pull. It consists of the following axioms:

  • K (closure): If you believe an implication and you believe the antecedent, then you believe the consequent. E.g. if you believe “if X then Y” and you believe X, then you also believe Y.
  • D (consistency): If you believe X, you don’t also believe not-X (i.e. X’s negation).
  • 4 (positive introspection): If you believe X, then you also believe that you believe X, i.e. you’re aware of your own beliefs.
  • 5 (negative introspection): If you don’t believe X, then you believe that you don’t believe X, i.e. you know what you don’t believe.

Thus, KD45 pictures a believer who never embraces contradictions, who always sees the consequences of what they believe, and who is perfectly aware of their own commitments. It’s the portrait of a mind that’s transparent to itself, free from error in structure, and entirely coherent. There’s something admirable in this picture. In moments of near-perfect clarity, it seems to me to describe the kind of believer I’d like to be.
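For concreteness, here is a toy way to check the K and D conditions over a finite set of beliefs, with introspection represented crudely as beliefs about beliefs. It is only an illustrative sketch of what the axioms demand, not a real modal-logic implementation.

```python
# Toy finite check of the K (closure) and D (consistency) conditions.
# Beliefs are strings; "~X" stands for not-X and "B:X" for "I believe that I believe X".
def closed_under_K(beliefs, implications):
    # K: if "if X then Y" is believed and X is believed, Y must also be believed.
    return all(consequent in beliefs
               for antecedent, consequent in implications
               if antecedent in beliefs)

def consistent_D(beliefs):
    # D: never believe both X and not-X.
    return not any(("~" + b) in beliefs for b in beliefs)

beliefs = {"rain", "wet_ground", "B:rain"}       # positive introspection about "rain"
implications = [("rain", "wet_ground")]          # believed implication "if rain then wet_ground"

print("K holds:", closed_under_K(beliefs, implications))            # True
print("D holds:", consistent_D(beliefs))                            # True
print("D after adding ~rain:", consistent_D(beliefs | {"~rain"}))   # False: contradiction detected
```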

Yet the attraction itself throws up a paradox. KD45 is appealing precisely because it abstracts away from the conditions in which real human beings actually think. In other words, its consistency is pristine because it’s idealised. It eliminates the compromises, distractions, and biases that animate everyday life. To aspire to KD45 is therefore to aspire to something constantly unattainable: a mind that’s rational at every step, free of contradiction, and immune to the fog of human psychology.

My attraction to KD45 is tempered by an equal admiration for Bayesian belief systems. The Bayesian approach allows for degrees of confidence and recognises that belief is often graded rather than binary. To me, this reflects the world as we encounter it — a realm of incomplete evidence, partial understanding, and evolving perspectives.

I admire Bayesianism because it doesn’t demand that we ignore uncertainty. It compels us to face it directly. Where KD45 insists on consistency, Bayesian thinking insists on responsiveness. I update beliefs not because they were previously incoherent but because new evidence has altered the balance of probabilities. This system thus embodies humility, my admission that no matter how strongly I believe today, tomorrow may bring evidence that forces me to change my mind.

The world, however, isn’t simply uncertain: it’s often contradictory. People hold opposing views, traditions preserve inconsistencies, and institutions are riddled with tensions. This is why I’m also drawn to paraconsistent logics, which allow contradictions to exist without collapsing. If I stick to classical logic, I’ll have to accept everything if I also accept a contradiction. One inconsistency causes the entire system to explode. Paraconsistent theories reject that explosion and instead allow me to live with contradictions without being consumed by them.

This isn’t an endorsement of confusion for its own sake but a recognition that practical thought must often proceed even when the data is messy. I can accept, provisionally, both “this practice is harmful” and “this practice is necessary”, and work through the tension without pretending I can neatly resolve the contradiction in advance. To deny myself this capacity is not to be rational — it’s to risk paralysis.

Finally, if Bayesianism teaches humility and paraconsistency teaches tolerance, the AGM theory of belief revision teaches discipline. Its core idea is that beliefs must be revised when confronted by new evidence, and that there are rational ways of choosing what to retract, what to retain, and what to alter. AGM speaks to me because it bridges the gap between the ideal and the real. It allows me to acknowledge that belief systems can be disrupted by facts while also maintaining that I can manage disruptions in a principled way.

That is to say, I don’t aspire to avoid the shock of revision but to absorb it intelligently.

Taken together, my position isn’t a choice of one system over another. It’s an attempt to weave their virtues together while recognising their limits. KD45 represents the ideal that belief should be consistent, closed under reasoning, and introspectively clear. Bayesianism represents the reality that belief is probabilistic and always open to revision. Paraconsistent logic represents the need to live with contradictions without succumbing to incoherence. AGM represents the discipline of revising beliefs rationally when evidence compels change.

A final point about aspiration itself. To aspire to KD45 isn’t to believe I will ever achieve it. In fact, I acknowledge I’m unlikely to desire complete consistency at every turn. There are cases where contradictions are useful, where I’ll need to tolerate ambiguity, and where the cost of absolute closure is too high. If I deny this, I’ll only end up misrepresenting myself.

However, I’m not going to be complacent either. I believe it’s important to aspire even if what I’m trying to achieve is going to be perpetually out of reach. By holding KD45 as a guiding ideal, I hope to give shape to my desire for rationality even as I expect to deviate from it. The value lies in the direction, not the destination.

Therefore, I state plainly (he said pompously):

  • I admire the clarity of KD45 and treat it as the horizon of rational belief
  • I embrace the flexibility of Bayesianism as the method of navigating uncertainty
  • I acknowledge the need for paraconsistency as the condition of living in a world of contradictions
  • I uphold the discipline of AGM belief revision as the art of managing disruption
  • I aspire to coherence but accept that my path will involve noise, contradiction, and compromise

In the end, the point isn’t to model myself after one system but to recognise the world demands several. KD45 will always represent the perfection of rational belief but I doubt I’ll ever get there in practice — not because I think I can’t but because I know I will choose not to in many matters. To be rational is not to be pure. It is to balance ideals with realities, to aspire without illusion, and to reason without denying the contradictions of life.

What on earth is a wavefunction?

By: VM
What on earth is a wavefunction?

If you drop a pebble into a pond, ripples spread outward in gentle circles. We all know this sight, and it feels natural to call them waves. Now imagine being told that everything — from an electron to an atom to a speck of dust — can also behave like a wave, even though they are made of matter and not water or air. That is the bold claim of quantum mechanics. The waves in this case are not ripples in a material substance. Instead, they are mathematical entities known as wavefunctions.

At first, this sounds like nothing more than fancy maths. But the wavefunction is central to how the quantum world works. It carries the information that tells us where a particle might be found, what momentum it might have, and how it might interact. In place of neat certainties, the quantum world offers a blur of possibilities. The wavefunction is the map of that blur. The peculiar thing is, experiments show that this 'blur' behaves as though it is real. Electrons fired through two slits make interference patterns as though each one went through both slits at once. Molecules too large to see under a microscope can act the same way, spreading out in space like waves until they are detected.

So what exactly is a wavefunction, and how should we think about it? That question has haunted physicists since the early 20th century and it remains unsettled to this day.

In classical life, you can say with confidence, "The cricket ball is here, moving at this speed." If you can't measure it, that's your problem, not nature's. In quantum mechanics, it is not so simple. Until a measurement is made, a particle does not have a definite position in the classical sense. Instead, the wavefunction stretches out and describes a range of possibilities. If the wavefunction is sharply peaked, the particle is most likely near a particular spot. If it is wide, the particle is spread out. Squaring the wavefunction's magnitude gives the probability distribution you would see in many repeated experiments.
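As a toy illustration of that last sentence, here's a discretised sketch with a made-up wavefunction, showing how squaring the magnitude yields a probability; nothing here is tied to any particular experiment.

```
import numpy as np

x = np.linspace(-5, 5, 1001)               # a one-dimensional grid of positions
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(2j * x)   # an arbitrary complex wavefunction
prob = np.abs(psi)**2                      # squared magnitude
prob /= prob.sum() * dx                    # normalise so the total probability is 1

window = (x >= -1) & (x <= 1)
print("chance of finding the particle between -1 and 1:",
      round(float(prob[window].sum() * dx), 2))   # roughly 0.84 for this choice
```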

If this sounds abstract, remember that the predictions are tangible. Interference patterns, tunnelling, superpositions, entanglement — all of these quantum phenomena flow from the properties of the wavefunction. It is the script that the universe seems to follow at its smallest scales.

To make sense of this, many physicists use analogies. Some compare the wavefunction to a musical chord. A chord is not just one note but several at once. When you play it, the sound is rich and full. Similarly, a particle’s wavefunction contains many possible positions (or momenta) simultaneously. Only when you press down with measurement do you "pick out" a single note from the chord.

Others have compared it to a weather forecast. Meteorologists don't say, "It will rain here at exactly 3:07 pm." They say, "There’s a 60% chance of showers in this region." The wavefunction is like nature's own forecast, except it is more fundamental: it is not our ignorance that makes it probabilistic, but the way the universe itself behaves.

Mathematically, the wavefunction is found by solving the Schrödinger equation, which is a central law of quantum physics. This equation describes how the wavefunction changes in time. It is to quantum mechanics what Newton’s second law (F = ma) is to classical mechanics. But unlike Newton's law, which predicts a single trajectory, the Schrödinger equation predicts the evolving shape of probabilities. For example, it can show how a sharply localised wavefunction naturally spreads over time, just like a drop of ink disperses in water. The difference is that the spreading is not caused by random mixing but by the fundamental rules of the quantum world.
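One concrete solution of the Schrödinger equation shows this spreading directly: the textbook free Gaussian wavepacket, whose width grows as sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2). The sketch below assumes an electron and an initial width of 1 nm, neither of which comes from this article.

```
import math

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837e-31      # kg, electron mass
sigma0 = 1e-9            # m, assumed initial spread of the wavepacket

def width(t):
    """Width of a free Gaussian wavepacket after t seconds."""
    return sigma0 * math.sqrt(1 + (hbar * t / (2 * m_e * sigma0**2))**2)

for t in (0.0, 1e-12, 1e-9):
    print(f"t = {t:.0e} s -> width = {width(t)*1e9:.1f} nm")
```

Within a picosecond the packet is dozens of times wider: the 'ink drop' dispersal described above, driven by the equation itself rather than by any random mixing.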

But does that mean the wavefunction is real, like a water wave you can touch, or is it just a clever mathematical fiction?

There are two broad camps. One camp, sometimes called the instrumentalists, argues the wavefunction is only a tool for making predictions. In this view, nothing actually waves in space. The particle is simply somewhere, and the wavefunction is our best way to calculate the odds of finding it. When we measure, we discover the position, and the wavefunction 'collapses' because our information has been updated, not because the world itself has changed.

The other camp, the realists, argues that the wavefunction is as real as any energy field. If the mathematics says a particle is spread out across two slits, then until you measure it, the particle really is spread out, occupying both paths in a superposed state. Measurement then forces the possibilities into a single outcome, but before that moment, the wavefunction's broad reach isn't just bookkeeping: it's physical.

This isn't an idle philosophical spat. It has consequences for how we interpret famous paradoxes like Schrödinger's cat — supposedly "alive and dead at once until observed" — and for how we understand the limits of quantum mechanics itself. If the wavefunction is real, then perhaps macroscopic objects like cats, tables or even ourselves can exist in superpositions in the right conditions. If it is not real, then quantum mechanics is only a calculating device, and the world remains classical at larger scales.

The ability of a wavefunction to remain spread out is tied to what physicists call coherence. A coherent state is one where the different parts of the wavefunction stay in step with each other, like musicians in an orchestra keeping perfect time. If even a few instruments go off-beat, the harmony collapses into noise. In the same way, when coherence is lost, the wavefunction's delicate correlations vanish.

Physicists measure this 'togetherness' with a parameter called the coherence length. You can think of it as the distance over which the wavefunction's rhythm remains intact. A laser pointer offers a good everyday example: its light is coherent, so the waves line up across long distances, allowing a sharp red dot to appear even all the way across a lecture hall. By contrast, the light from a torch is incoherent: the waves quickly fall out of step, producing only a fuzzy glow. In the quantum world, a longer coherence length means the particle's wavefunction can stay spread out and in tune across a larger stretch of space, making the object more thoroughly delocalised.

However, coherence is fragile. The world outside — the air, the light, the random hustle of molecules — constantly disturbs the system. Each poke causes the system to 'leak' information, collapsing the wavefunction's delicate superposition. This process is called decoherence, and it explains why we don't see cats or chairs spread out in superpositions in daily life. The environment 'measures' them constantly, destroying their quantum fuzziness.

One frontier of modern physics is to see how far coherence can be pushed before decoherence wins. For electrons and atoms, the answer is "very far". Physicists have found their wavefunctions can stretch across micrometres or more. They have also demonstrated coherence in molecules containing thousands of atoms, though keeping such large molecules coherent has been much more difficult. For larger solid objects, it's harder still.

Physicists often talk about expanding a wavefunction. What they mean is deliberately increasing the spatial extent of the quantum state, making the fuzziness spread wider, while still keeping it coherent. Imagine a violin string: if it vibrates softly, the motion is narrow; if it vibrates with larger amplitude, it spreads. In quantum mechanics, expansion is more subtle but the analogy holds: you want the wavefunction to cover more ground not through noise or randomness but through genuine quantum uncertainty.

Another way to picture it is as a drop of ink released into clear water. At first, the drop is tight and dark. Over time, it spreads outward, thinning and covering more space. Expanding a quantum wavefunction is like speeding up this spreading process, but with a twist: the cloud must remain coherent. The ink can't become blotchy or disturbed by outside currents. Instead, it must preserve its smooth, wave-like character, where all parts of the spread remain correlated.

How can this be done? One way is to relax the trap that's being used to hold the particle in place. In physics, the trap is described by a potential, which is just a way of talking about how strong the forces are that pull the particle back towards the centre. Imagine a ball sitting in a bowl. The shape of the bowl represents the potential. A deep, steep bowl means strong restoring forces, which prevent the ball from moving around. A shallow bowl means the forces are weaker. That is, if you suddenly make the bowl shallower, the ball is less tightly confined and can explore more space. In the quantum picture, reducing the stiffness of the potential is like flattening the bowl, which allows the wavefunction to swell outward. If you later return the bowl to its steep form, you can catch the now-broader state and measure its properties.
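Here's a minimal sketch of that bowl-flattening using the textbook harmonic-trap relation, where the ground-state spread is sqrt(hbar / (2*m*omega)); the mass and trap frequencies are illustrative guesses for a particle of roughly this size, not values from the experiment discussed below.

```
import math

hbar = 1.054571817e-34        # J*s
m = 1.1e-18                   # kg, a rough guess for a ~100 nm silica sphere

def ground_state_width(f_hz):
    """Ground-state position spread in a harmonic trap of frequency f_hz."""
    omega = 2 * math.pi * f_hz
    return math.sqrt(hbar / (2 * m * omega))

for f in (100e3, 50e3, 25e3):  # assumed trap frequencies in Hz
    print(f"{f/1e3:>5.0f} kHz trap -> spread = {ground_state_width(f)*1e12:.1f} pm")
```

Quartering the trap frequency doubles the spread, which is the flatter-bowl effect, and the numbers land in the same tens-of-picometres ballpark as the figures quoted below.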

The challenge is to do this fast and cleanly, before decoherence destroys the quantum character. And you must measure in ways that reveal quantum behaviour rather than just classical blur.

This brings us to an experiment reported on August 19 in Physical Review Letters, conducted by researchers at ETH Zürich and their collaborators. It seems the researchers have achieved something unprecedented: they prepared a small silica sphere, only about 100 nm across, in a nearly pure quantum state and then expanded its wavefunction beyond the natural zero-point limit. This means they coherently stretched the particle's quantum fuzziness farther than the smallest quantum wiggle that nature usually allows, while still keeping the state coherent.

To appreciate why this matters, let's consider the numbers. The zero-point motion of their nanoparticle — the smallest possible movement even at absolute zero — is about 17 picometres (one picometre is a trillionth of a meter). Before expansion, the coherence length was about 21 pm. After the expansion protocol, it reached roughly 73 pm, more than tripling the initial reach and surpassing the ground-state value. For something as massive as a nanoparticle, this is a big step.

The team began by levitating a silica nanoparticle in an optical tweezer, created by a tightly focused laser beam. The particle floated in an ultra-high vacuum at a temperature of just 7 K (-266° C). These conditions reduced outside disturbances to almost nothing.

Next, they cooled the particle's motion close to its ground state using feedback control. By monitoring its position and applying gentle electrical forces through the surrounding electrodes, they damped its jostling until only a fraction of a quantum of motion remained. At this point, the particle was quiet enough for quantum effects to dominate.

The core step was the two-pulse expansion protocol. First, the researchers switched off the cooling and briefly lowered the trap's stiffness by reducing the laser power. This allowed the wavefunction to spread. Then, after a carefully timed delay, they applied a second softening pulse. This sequence cancelled out unwanted drifts caused by stray forces while letting the wavefunction expand even further.

Finally, they restored the trap to full strength and measured the particle's motion by studying how it scattered light. Repeating this process hundreds of times gave them a statistical view of the expanded state.

The results showed that the nanoparticle's wavefunction expanded far beyond its zero-point motion while still remaining coherent. The coherence length grew more than threefold, reaching 73 ± 34 pm. Per the team, this wasn't just noisy spread but genuine quantum delocalisation.

More strikingly, the momentum of the nanoparticle had become 'squeezed' below its zero-point value. In other words, while uncertainty over the particle's position increased, that over its momentum decreased, in keeping with Heisenberg's uncertainty principle. This kind of squeezed state is useful because it's especially sensitive to feeble external forces.
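Here's a toy check of that trade-off, assuming an ideal minimum-uncertainty Gaussian state; the real state carries extra noise, so treat this only as a sketch of the principle.

```
hbar = 1.054571817e-34

dx0 = 17e-12              # m, roughly the zero-point spread quoted above
dp0 = hbar / (2 * dx0)    # the matching minimum-uncertainty momentum spread

stretch = 73 / 17         # the position spread grew roughly this much
dx, dp = dx0 * stretch, dp0 / stretch

print(f"momentum spread falls from {dp0:.1e} to {dp:.1e} kg*m/s")
print(f"uncertainty product stays at hbar/2: {dx*dp:.2e} vs {hbar/2:.2e}")
```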

The data matched theoretical models that considered photon recoil to be the main source of decoherence. Each scattered photon gave the nanoparticle a small kick, and this set a fundamental limit. The experiment confirmed that photon recoil was indeed the bottleneck, not hidden technical noise. The researchers have suggested using dark traps in future — trapping methods that use less light, such as radio-frequency fields — to reduce this recoil. With such tools, the coherence lengths can potentially be expanded to scales comparable to the particle's size. Imagine a nanoparticle existing in a state that spans its own diameter. That would be a true macroscopic quantum object.

This new study pushes quantum mechanics into a new regime. Thus far, large, solid objects like nanoparticles could be cooled and controlled, but their coherence lengths stayed pinned near the zero-point level. Here, the researchers were able to deliberately increase the coherence length beyond that limit, and in doing so showed that quantum fuzziness can be engineered, not just preserved.

The implications are broad. On the practical side, delocalised nanoparticles could become extremely sensitive force sensors, able to detect faint electric or gravitational forces. On the fundamental side, the ability to hold large objects in coherent, expanded states is a step towards probing whether gravity itself has quantum features. Several theoretical proposals suggest that if two massive objects in superposition can become entangled through their mutual gravity, it would prove gravity must be quantum. To reach that stage, experiments must first learn to create and control delocalised states like this one.

The possibilities for sensing in particular are exciting. Imagine a nanoparticle prepared in a squeezed, delocalised state being used to detect the tug of an unseen mass nearby or to measure an electric field too weak for ordinary instruments. Some physicists have speculated that such systems could help search for exotic particles such as certain dark matter candidates, which might nudge the nanoparticle ever so slightly. The extreme sensitivity arises because a delocalised quantum object is like a feather balanced on a pin: the tiniest push shifts it in measurable ways.

There are also parallels with past breakthroughs. The Laser Interferometer Gravitational-wave Observatories, which detect gravitational waves, rely on manipulating quantum noise in light to reach unprecedented sensitivity. The ETH Zürich experiment has extended the same philosophy into the mechanical world of nanoparticles. Both cases show that pushing deeper into quantum control could yield technologies that were once unimaginable.

But beyond the technologies also lies a more interesting philosophical edge. The experiment strengthens the case that the wavefunction behaves like something real. If it were only an abstract formula, could we stretch it, squeeze it, and measure the changes in line with theory? The fact that researchers can engineer the wavefunction of a many-atom object and watch it respond like a physical entity tilts the balance towards reality. At the least, it shows that the wavefunction is not just a mathematical ghost. It's a structure that researchers can shape with lasers and measure with detectors.

There are also of course the broader human questions. If nature at its core is described not by certainties but by probabilities, then philosophers must rethink determinism, the idea that everything is fixed in advance. Our everyday world looks predictable only because decoherence hides the fuzziness. But under carefully controlled conditions, that fuzziness comes back into view. Experiments like this remind us that the universe is stranger, and more flexible, than classical common sense would suggest.

The experiment also reminds us that the line between the quantum and classical worlds is not a brick wall but a veil — thin, fragile, and possibly removable in the right conditions. And each time we lift it a little further, we don't just see strange behaviour: we also glimpse sensors more sensitive than ever, tests of gravity's quantum nature, and perhaps someday, direct encounters with macroscopic superpositions that will force us to rewrite what we mean by reality.

On the PixxelSpace constellation

By: VM
On the PixxelSpace constellation

The announcement that a consortium led by PixxelSpace India will design, build, and operate a constellation of 12 earth-observation satellites marks a sharp shift in how India approaches large space projects. The Indian National Space Promotion and Authorisation Centre (IN-SPACe) awarded the project after a competitive process.

What made headlines was that the winning bid asked for no money from the government. Instead, the group — which includes Piersight Space, SatSure Analytics India, and Dhruva Space — has committed to invest more than Rs 1,200 crore of its own resources over the next four to five years. The constellation will carry a mix of advanced sensors, from multispectral and hyperspectral imagers to synthetic aperture radar, and it will be owned and operated entirely by the private side of the partnership.

PixxelSpace has said the zero-rupee bid is a conscious decision to support the vision of building an advanced earth-observation system for India and the world. The companies have also expressed confidence that they will recover their investment over time by selling high-value geospatial data and services in India and abroad. IN-SPACe's chairman has called this a major endorsement of the future of India’s space economy.

Of course the benefits for India are clear. Once operational, the constellation should reduce the country’s reliance on foreign sources of satellite imagery. That will matter in areas like disaster management, agriculture planning, and national security, where delays or restrictions on outside data can have serious consequences. Having multiple companies in the consortium brings together strengths in hardware, analytics, and services, which could create a more complete space industry ecosystem. The phased rollout will also mean technology upgrades can be built in as the system grows, without heavy public spending.

Still, the arrangement raises difficult questions. In practice, this is less a public–private partnership than a joint venture. I assume the state will provide its seal of approval, policy support, and access to launch and ground facilities. If it does offer policy support, it will have to explain why that support is vouchsafed for this collaboration rather than extended to the industry as a whole. I have also heard that IN-SPACe will 'collate' demand within the government for the constellation's products and help meet it.

Without assuming a fiscal stake, however, the government is left with less leverage to set terms or enforce priorities, especially if the consortium's commercial goals don't always align with national needs. It's worth asking why the government issued an official request-for-proposal if it didn't intend to assume a stake, and whether the Rs-350-crore soft loan IN-SPACe originally offered for the project will still be available, be repurposed, or be quietly withdrawn.

I think the pitch will also test public oversight. IN-SPACe will need stronger technical capacity, legal authority, procedural clarity, and better public communication to monitor compliance without frustrating innovation. Regulations on remote sensing and data-sharing will probably have to be updated to cover a fully commercial system that sells services worldwide. Provisions that guarantee government priority access in emergencies and that protect sensitive imagery will have to be written clearly into law and contracts. Infrastructure access, from integration facilities to launch slots, must be managed transparently to avoid bottlenecks or perceived bias.

The government's minimal financial involvement saves public money but it also reduces long-term control. If India repeats this model, it should put in place new laws and safeguards that define how sovereignty, security, and public interest are to be protected when critical space assets are run by private companies. Without such steps, the promise of cost-free expansion could instead lead to new dependencies that are even harder to manage in future.

Featured image credit: Carl Wang/Unsplash.

The Zomato ad and India's hustle since 1947

By: VM
The Zomato ad and India's hustle since 1947

In contemporary India, corporate branding has often aligned itself with nationalist sentiment, adopting imagery such as the tricolour, Sanskrit slogans or references to ancient achievements to evoke cultural pride. Marketing narratives frequently frame consumption as a patriotic act, linking the choice of a product with the nation's progress or "self-reliance". This fusion of commercial messaging and nationalist symbolism serves both to capitalise on the prevailing political mood and to present companies as partners in the nationalist project. An advertisement in The Times of India on August 15, which describes the work of nation-building as a "hustle", is a good example.

The Zomato ad and India's hustle since 1947

I remember that in our second year of undergraduate studies in engineering college, my class had a small-minded and vindictive professor. He repeatedly picked on one particular classmate to the extent that, as resentment between the two escalated, the professor's actions in one arguably innocuous matter resulted in the student being suspended for a semester. The student eventually fell short of the credits he needed to graduate and had to spend six more months redoing many of the same classes. Today, he is a successful researcher in Europe, having gone on to acquire a graduate degree followed by a PhD from some of the best research institutes in the world.

When we were chatting a few years ago about our batch's decadal reunion that was coming up, we thought it would be a good idea to attend and, there, rub my friend's success in this professor's face. We really wanted to do it because we wanted him to know how petty he had been. But as we discussed how we'd orchestrate this moment, it dawned on us that we'd also be signalling that our achievements don't amount to more than those necessary to snub him, as if to say they have no greater meaning or purpose. We eventually dropped the idea. At the reunion itself, my friend simply ignored the professor.

India may appear today to have progressed well past Winston Churchill's belief, expressed in the early 1930s, that Indians were unfit to govern themselves, but to advertise as Zomato has is to imply that his view remains on our minds and animates the purpose of what we're trying to do. It is a juvenile and frankly resentful attitude that also hints at a more deep-seated lack of contentment. The advertisement's achievement of choice is the Chandrayaan 3 mission, its Vikram lander lit dramatically by sunlight and earthlight and photographed by the Pragyan rover. The landing was a significant achievement, but to claim that that above all else describes contemporary India is also to dismiss the evident truth that a functional space organisation and a democracy in distress can coexist within the same borders. One neither carries nor excuses the other.

In fact, it's possible to argue that ISRO's success is at least partly a product of the unusual circumstances of its creation and its privileged place in the administrative structure. Founded by a scientist who worked directly with Jawaharlal Nehru — bypassing the bureaucratic hurdles faced by most others — ISRO was placed under the purview of the prime minister, ensuring it received the political attention, resources, and exemptions that are not typically available to other ministries or public enterprises. In this view, ISRO's achievements are insulated from the broader fortunes of the country and can't be taken as a reliable proxy for India's overall 'success'.

The question here is: to whose words do we pay attention? Obviously not those of Churchill: his prediction is nearly a century old. In fact, as Ramachandra Guha sets out in the prologue of India After Gandhi (which I'm currently rereading), they seem in their particular context to be untempered and provocative.

In the 1940s, with Indian independence manifestly round the corner, Churchill grumbled that he had not become the King's first minister in order to preside over the liquidation of the British Empire. A decade previously he had tried to rebuild a fading political career on the plank of opposing self-government for Indians. After Gandhi's 'salt satyagraha' of 1930 in protest against taxes on salt, the British government began speaking with Indian nationalists about the possibility of granting the colony dominion status. This was vaguely defined, with no timetable set for its realization. Even so, Churchill called the idea 'not only fantastic in itself but criminally mischievous in its effects'. Since Indians were not fit for self-government, it was necessary to marshal 'the sober and resolute forces of the British Empire' to stall any such possibility.

In 1930 and 1931 Churchill delivered numerous speeches designed to work up, in most unsober form, the constituency opposed to independence for India. Speaking to an audience at the City of London in December 1930, he claimed that if the British left the subcontinent, then an 'army of white janissaries, officered if necessary from Germany, will be hired to secure the armed ascendancy of the Hindu'.

This said, Guha continues later in the prologue:

The forces that divide India are many. … But there are also forces that have kept India together, that have helped transcend or contain the cleavages of class and culture, that — so far, at least — have nullified those many predictions that India would not stay united and not stay democratic. These moderating influences are far less visible. … they have included individuals as well as institutions.

Indeed, reading through the history of independent India, from the 1940s and '50s filled with hope and ambition, through the turmoil of the '60s and '70s and the Emergency, followed by economic downturn and liberalisation, and finally to the rise of Hindu nationalism, it has been clear that the work of the "forces that have kept India together" is unceasing. Earlier, the Constitution's framework, with its guarantees of rights and democratic representation, provided a common political anchor. Regular elections, a free press, and an independent judiciary reinforced faith in the system even as the linguistic reorganisation of states reduced separatist tensions. National institutions such as the armed forces, civil services, and railways fostered a sense of shared identity across disparate regions.

Equally, integrative political movements and leaders — including the All India Kisan Sabha, trade union federations like INTUC and AITUC, the Janata Party coalition of 1977, Akali leaders in Punjab in the post-1984 period, the Mazdoor Kisan Shakti Sangathan, and so on, as well as Lal Bahadur Shastri, Govind Ballabh Pant, C. Rajagopalachari, Vinoba Bhave, Jayaprakash Narayan, C.N. Annadurai, Atal Bihari Vajpayee, and so on — operated despite sharp disagreements largely within constitutional boundaries, sustaining the legitimacy of the Union. Today, however, most of these "forces" are directed at a more cynical cause of disunity: a nationalist ideology that has repeatedly defended itself with deceit, evasion, obfuscation, opportunism, pietism, pretence, subterfuge, vindictiveness, and violence.

In this light, to claim we have "just put in the work, year after year", as if to suggest India has only been growing from strength to strength, rather than lurching from one crisis to the next and of late becoming a little more balkanised as a result, is plainly disingenuous — and yet entirely in keeping with the alignment of corporate branding with nationalist sentiment, which is designed to create a climate in which criticism of corporate conduct is framed as unpatriotic. When companies wrap themselves in the symbols of the nation and position their products or services as contributions to India's progress, questioning their practices risks being cast as undermining that progress. This can blunt scrutiny of resource over-extraction, environmental degradation, and exploitative labour practices by accusing dissenters of obstructing development.

Aggressively promoting consumption and consumerism ("fuel your hustle"), which drives profits but also deepens social inequalities in the process, is recast as participating in the patriotic project of economic growth. When corporate campaigns subtly or explicitly endorse certain political agendas, their association with national pride can normalise those positions and marginalise alternative views. In this way, the fusion of commerce and nationalism builds market share while fostering a superficial sense of national harmony, even as it sidelines debates on inequality, exclusion, and the varied experiences of different communities within the nation.

Make a Web Font Subset

Related to the Faceclick Emoji picker two entries ago, I've learned how to make custom subsets of fonts and package them as Web Fonts (and how to use and debug them).

Recreating the US/* time zone situation

Yesterday, I wrote about screwing something up, so naturally THE ONE came out. I forgot to add my usual advice for them to suck down a bag of burgers from that one burger joint in Seattle.

Anyway, there are some technical aspects to this. I got to wondering... why did I pick "US/Pacific", anyway? Did I go to lengths to pick out a time zone name that (as countless posters have reminded us) has been marked as "backwards" about as long as Debian has existed? Don't I know better?

I figured the easiest way to understand this was to re-enact the sequence of events that happens when installing a Debian 12 box. I also happened to have an older .iso file from that series that was just right for creating a quick VM, so I stood it up and went through the install process.

At some point it asks where you are. I picked "United States" since that is in fact where I am. A few minutes later, you get a time zone chooser, and oh hey, would you look at this?

Text-mode chooser listing a bunch of zone names from the US/* group: Eastern, Central, Mountain, Pacific (highlighted), Alaska, Hawaii, Arizona, East Indiana, Samoa

It gave me those options... and ONLY those options. If you don't see what you want, you're told to back up to "choose language" and "... select a country that uses the desired time zone (the country where you live or are located)".

With nothing wrong, I proceeded, as I imagine most people would.

So right there, D12 was letting us pick zones that were going to get the "yeet" treatment in D13.

Next, I wondered how it got into the Postgres config. Did I really put it there on all of my systems which have it installed? I don't remember doing that. So, I finished the Debian install and installed postgres from apt, then looked in the config file, and oh look.

Screenshot of grepping for "US/" in the postgresql.conf which finds log_timezone and timezone= set to it... running in a UTM virtual machine

It's done automatically. I suspected something like this, but had to be sure before I started pointing fingers.

...

Finally, for whoever wondered if I did "SELECT 1" and went on with life, no you jackass, it's a testing machine, and I don't use it for production stuff. I use it for nerding out about temperatures in and around some property in the family. It's the least important Debian box in my life, so it went first. It's where I find problems like this so that I don't inflict them on the prod machine... you know, the one that's serving this very post.

Normally when you load up the "thermo" status page, it gives you about the last 24 or so hours of data. It varies depending on how big your screen is. The output is a bunch of (time_t, temp) tuples, and the browser runs some JS gunk to render that into the viewer's local time, and then it draws a graph.

This is not special.

What's a little unusual is that it also has a "history" mode where you can say "show me the data for 2024-01-01" or whatever. The problem with a request like that is ... 2024-01-01, where, exactly? For that, the database needs a time zone, and it always had a setting that matched the machine's location, and that's perfect for these purposes. Also, we (family) don't look at that view all that often, so nobody noticed it was acting strange.

So, the other day, I was wondering exactly how hot it got on a particular day, and finally went to the history view. Then I noticed it was going from 5 PM to about 5 PM again, and knew that was some UTC-related fuckery. The mapping of 4 or 5 PM local time being midnight UTC has been beaten into my head from countless outages which were caused by some idiot ad product acquisition that started serving ads full-tilt at midnight UTC since "oh hey, this campaign is active now and we have ZERO IMPRESSIONS so far!" ... but I digress.

I was confounded by other work that's been going on to reduce exposures on my web servers that flipped them to logging in UTC because /etc/localtime and /usr/share/zoneinfo didn't exist in their little chroot-ish environment. I thought that shift had screwed up the CGI programs that turn the SQL gunk into JSON gunk for the browsers. That distracted me from the actual problem for a few minutes.

That's when my notes from the day of the upgrade came up, and I found my own "huh, that's weird" notation about the Postgres time zone thing. I had honestly forgotten about it, since, yes, I had bigger fish to fry. I was physically on site visiting family, and the Debian upgrade was a small part of a bigger trip. I'm mostly there to spend time with them, not to play with the damn computer. So, when some stupid nerd stuff happened, I noted it, but didn't shelve the rest of the usual activities to go spaz out over whatever this particular thing was.

Because, again, the primary use case ("what have the temps been like around here over the past day or so") *was working fine*.

Also... something else DID break on the box when I did that upgrade, and it broke hard. I went and dealt with that, and forgot all about the TZ thing. I haven't written about the other problem because I'm honestly a little gunshy after the whole atop thing earlier this year. What I have to say about the thing that broke isn't going to be nice, because it's a load of horse shit that it broke in the first place.

But, the devs behind that project *also* owe me nothing, and if they can live with themselves, foisting off their particular annoyances on the world and justifying it as okay in their bug reporting system, it's not like they're going to listen to me. Seriously, when you find that other people have reported a problem well before you and the powers that be have brushed them off, there is no reason to pile on. They obviously don't give a shit, and my "+1" isn't going to change that.

Could I have done better for this one derpy Debian box that's used by a couple of family members to look at temperatures that didn't actually have a problem after I solved the main one? Of course I could... but WHY? And... how? I spend enough time on computer shit as it is.

I could have gone "hey siri, remind me to look at that stupid postgres time zone thing tomorrow". I didn't.

Now, "SELECT 1" (THE ONE?), you tell me how you'd be any better at this.

Debian 13, Postgres, and the US/* time zones

If you're running Debian 12 and Postgres, you're in or around the Americas, and you're planning on upgrading to Debian 13, you might hit a fun little snag. I did when I did my first (testing) box:

2025-08-29 06:28:12.038 GMT [220328] LOG:  invalid value for parameter "log_timezone": "US/Pacific"
2025-08-29 06:28:12.038 GMT [220328] LOG:  invalid value for parameter "TimeZone": "US/Pacific"
2025-08-29 06:28:12.038 GMT [220328] FATAL:  configuration file "/etc/postgresql/15/main/postgresql.conf" contains errors

At the time, I went "WTF?" and just commented it out to get it running again. I had bigger fish to fry... and just kind of forgot about it. Everything seemed fine.

Here it is two weeks later and I just realized that one of my little history viewing tools for my temperature tracking stuff is generating graphs that run from 5 PM to 4 PM. In other words, it's showing me a UTC-based day, not a US/Pacific (PDT at the moment) day. 5 PM here during the summer is midnight UTC.

It took far too long to realize what was going on here. So I went, okay, uh, US/Los_Angeles? No. It's now America/Los_Angeles.
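For what it's worth, here's a quick way to check which names a given box still recognizes, assuming Python 3.9+ with the zoneinfo module; it only reports what the system tzdata exposes to Python, not what Postgres will accept.

```
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

for name in ("US/Pacific", "America/Los_Angeles"):
    try:
        ZoneInfo(name)
        print(f"{name}: ok")
    except ZoneInfoNotFoundError:
        print(f"{name}: not found (probably lives in the tzdata-legacy package)")
```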

The worst part about this is that it didn't get so much as a mention in the Debian 13 release notes. I read through that document before going for it and never encountered it. Indeed, even now, you won't find "tzdata" or "zone" in it.

Normally when weird stuff like this happens, someone who lives on the bleeding edge runs into it and reports it. That's what happened with the xz kerfuffle: some person running the latest stuff hit some anomalous readings and went digging. Here, too, a few people hit the problem well before I did.

They hit it in 2023. US/* was moved to tzdata-legacy, and a few people asked for something that would pop up during upgrades. I guess it didn't happen.

So, as a public service, here you go. This is your notification.

Making the most of a dumb fax switcher box in the old days

Back in the days of analog phones and fax machines, I used to get asked to come up with solutions to random problems that people were having with them. One of them involved a single phone line, a "fax switcher", an actual fax machine, and not wanting to wake other people up when a fax was involved. Here's how that went.

Friends of the family had a good-sized house and they ran some kind of business that had started getting faxes at all hours. They had a single phone line and didn't feel like getting another one for whatever reason. They wanted to be able to get a fax without having all of the other house phones ringing at all hours of the day and night when it's "meant for the machine".

The dad of the family bought this "fax switcher" box, but couldn't figure out exactly how to tie it in. Somehow, they found out that I had done some small-time phone wiring adjustments and called me over to help.

The box worked like this: it would pick up the phone and would just listen to it for a few seconds. If it heard the CNG tone from a calling fax machine (beeeep!), then it would punch the call through to the "FAX" port on the back. It did this by putting the right flavor of voltage jazz on that port to make the machine see it and pick up. Then it connected the line port to the fax port and just hung out until the call was over.

This same box had a "VOICE" port on the back, and in the event it didn't hear the soothing tones of a calling fax machine, it would time out and would push its locally-generated ring voltage down that port instead. If you had some phones plugged into that, they'd ring, and you'd know to pick them up.

(I should note that the box played a nasty fake ringback tone to the caller during this phase... and yes, since it answered, it supervised, and the caller was going to pay for the call, even if nobody was home!)

The question was: how to get the rest of the house (regular phones) "behind" this box so they wouldn't ring when the call actually came in? Their 1960s house had all of its phone jacks wired in a way that wasn't unusual for the time but which made it ugly for this problem.

One very long length of cord came from the demarc box out back and stopped off at every jack. The installer actually just stripped back the outer insulation, then the inner insulation on the primary pair (red/green), and looped the now-bare copper around the terminal screws on the jacks. They did all of this without ever cutting the wire. Very clever.

I suggested we could just cut it in half at the first jack (in the kitchen), so the box would be the first thing on the line and the whole rest of the house would sit behind it. That didn't fly, since it meant the switcher box (and probably the actual fax machine) would have to go in the kitchen, too. They wanted all of the mess out of the way in the big bedroom on the first floor, and wouldn't you know it, that was the *last* jack on the line. That seemed like a pain but it turned out to be rather helpful.

The solution ended up being something kind of nasty, but it did work. That one long cord actually contained two pairs in case the residents ever wanted a second line. That is, in addition to the red/green, they also had a yellow/black pair in there, just hanging out, not doing anything. It didn't "stop off" at any of the jacks, but it was in fact present and ran uninterrupted from end to end.

I figured, okay, let's "cut the line in half" in the kitchen jack, but do it in such a way that it bridges the incoming line (from the outside box) to the yellow/black. Then it'll travel all the way around the house, untouched, until it lands at the last jack in their bedroom.

Three-position phone splitter: L1/L2/L1+L2

Then I just wired up that jack to have the yellow/black on line 2, and bought a goofy little splitter thing from Radio Shack to make sense of it. This was a six dollar plastic piece that would split a two-line jack into two single-line jacks and a pass-through.

So, position 1 on this thing had line 1 as primary. Position 2 on this thing had line 2 as primary. Position 3 on this thing was another "four wires, two lines", just like the jack it plugged into.

The fax switcher's LINE jack was connected to position 2 (yellow/black, being fed from the bridge behind the kitchen jack) and its VOICE jack was connected to position 1. In so doing, the box itself drove the rest of the house "backwards" and allowed those other phones to operate.

The best part of doing it this way is that I didn't have to go around to open up the other jacks in the rest of the house. They were just fine staying on red/green.

This worked and they loved it, but I was mildly concerned. Their whole telco setup was now reliant on this dumb little box staying plugged in and working exactly as I had set it up. If it came unplugged or something else bad happened to it, they'd have no phone service anywhere in the house. (I'm not sure if it was smart enough to short together the VOICE and LINE jacks on power failure. It was random consumer-grade plastic junk.)

For that reason, I also brought them a tiny little "patch cable" thing (that Radio Shack also sold for some reason) and told them "if you ever remove this box, you HAVE TO connect positions 1 and 2 with this little cord". I sure hope they remembered that.

I can't even imagine what happened if they ever sold that house. I'm sure someone came along later and screamed at the maniac that rigged it up that way.

Well, uh, hi, I'm that maniac, I guess.

Knowing what I know now over 30 years later, I think I would have left a note in the wall for the next person to find it. "Hey, this is why we did this, and you just need to patch red & green back together in this one place and you'll be back to where Ma Bell left it originally". The person who came along later would still be miffed, but at least they'd know why someone went and did something that bizarre.

A Navajo weaving of an integrated circuit: the 555 timer

The noted Diné (Navajo) weaver Marilou Schultz recently completed an intricate weaving composed of thick white lines on a black background, punctuated with reddish-orange diamonds. Although this striking rug may appear abstract, it shows the internal circuitry of a tiny silicon chip known as the 555 timer. This chip has hundreds of applications in everything from a sound generator to a windshield wiper controller. At one point, the 555 was the world's best-selling integrated circuit with billions sold. But how did the chip get turned into a rug?

"Popular Chip" by Marilou Schultz.
Photo courtesy of First American Art Magazine.

The 555 chip is constructed from a tiny flake of silicon with a layer of metallic wiring on top. In the rug, this wiring is visible as the thick white lines, while the silicon forms the black background. One conspicuous feature of the rug is the reddish-orange diamonds around the perimeter. These correspond to the connections between the silicon chip and its eight pins. Tiny golden bond wires—thinner than a human hair—are attached to the square bond pads to provide these connections. The circuitry of the 555 chip contains 25 transistors, silicon devices that can switch on and off. The rug is dominated by three large transistors, the filled squares with a pattern inside, while the remaining transistors are represented by small dots.

The weaving was inspired by a photo of the 555 timer die taken by Antoine Bercovici (Siliconinsider); I suggested this photo to Schultz as a possible subject for a rug. The diagram below compares the weaving (left) with the die photo (right). As you can see, the weaving closely follows the actual chip, but there are a few artistic differences. For instance, two of the bond pads have been removed, the circuitry at the top has been simplified, and the part number at the bottom has been removed.

A comparison of the rug (left) and the original photograph (right). Dark-field image of the 555 timer is courtesy of Antoine Bercovici.

Antoine took the die photo with a dark field microscope, a special type of microscope that produces an image on a black background. This image emphasizes the metal layer on the top of the die. In comparison, a standard bright-field microscope produced the image below. When a chip is manufactured, regions of silicon are "doped" with impurities to create transistors and resistors. These regions are visible in the image below as subtle changes in the color of the silicon.

The RCA CA555 chip. Photo courtesy of Tiny Transistors.

In the weaving, the chip's design appears almost monumental, making it easy to forget that the actual chip is microscopic. For the photo below, I obtained a version of the chip packaged in a metal can, rather than the typical rectangle of black plastic. Cutting the top off the metal can reveals the tiny chip inside, with eight gold bond wires connecting the die to the pins of the package. If you zoom in on the photo, you may recognize the three large transistors that dominate the rug.

The 555 timer die inside a metal-can package, with a penny for comparison. Click this image (or any other) for a larger version.

The artist, Marilou Schultz, has been creating chip rugs since 1994, when Intel commissioned a rug based on the Pentium as a gift to AISES (American Indian Science & Engineering Society). Although Schultz learned weaving as a child, the Pentium rug was a challenge due to its complex pattern and lack of symmetry; a day's work might add just an inch to the rug. This dramatic weaving was created with wool from the long-horned Navajo-Churro sheep, colored with traditional plant dyes.

"Replica of a Chip", created by Marilou Schultz, 1994. Wool. Photo taken at the National Gallery of Art, 2024.

For the 555 timer weaving, Schultz experimented with different materials. Silver and gold metallic threads represent the aluminum and copper in the chip. The artist explains that "it took a lot more time to incorporate the metallic threads," but it was worth the effort because "it is spectacular to see the rug with the metallics in the dark with a little light hitting it." Aniline dyes provided the black and lavender colors. Although natural logwood dye produces a beautiful purple, it fades over time, so Schultz used an aniline dye instead. The lavender colors are dedicated to the weaver's mother, who passed away in February; purple was her favorite color.

Inside the chip

How does the 555 chip produce a particular time delay? You add external components—resistors and a capacitor—to select the time. The capacitor is filled (charged) at a speed controlled by the resistor. When the capacitor gets "full", the 555 chip switches operation and starts emptying (discharging) the capacitor. It's like filling a sink: if you have a large sink (capacitor) and a trickle of water (large resistor), the sink fills slowly. But if you have a small sink (capacitor) and a lot of water (small resistor), the sink fills quickly. By using different resistors and capacitors, the 555 timer can provide time intervals from microseconds to hours.
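To make the sink analogy concrete, here's a small calculation using the standard astable-mode formulas found in 555 datasheets; the component values are arbitrary examples and don't correspond to anything in the rug or the die photo.

```
import math

def astable(r1_ohm, r2_ohm, c_farad):
    """Standard 555 astable timing: charge through R1+R2, discharge through R2."""
    t_high = math.log(2) * (r1_ohm + r2_ohm) * c_farad  # capacitor filling 1/3 -> 2/3
    t_low = math.log(2) * r2_ohm * c_farad              # capacitor emptying 2/3 -> 1/3
    return t_high, t_low, 1 / (t_high + t_low)

t_high, t_low, freq = astable(10e3, 100e3, 10e-6)       # 10 kohm, 100 kohm, 10 uF
print(f"high for {t_high*1e3:.0f} ms, low for {t_low*1e3:.0f} ms, about {freq:.2f} Hz")
```

Small resistors and capacitors push the same formulas into the microsecond range; very large ones stretch the period toward hours, which is the span mentioned above.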

I've constructed an interactive chip browser that shows how the regions of the rug correspond to specific electronic components in the physical chip. Click on any part of the rug to learn the function of the corresponding component in the chip.


For instance, two of the large square transistors turn the chip's output on or off, while the third large transistor discharges the capacitor when it is full. (To be precise, the capacitor goes between 1/3 full and 2/3 full to avoid issues near "empty" and "full".) The chip has circuits called comparators that detect when the capacitor's voltage reaches 1/3 or 2/3, switching between emptying and filling at those points. If you want more technical details about the 555 chip, see my previous articles: an early 555 chip, a 555 timer similar to the rug, and a more modern CMOS version of the 555.

Conclusions

The similarities between Navajo weavings and the patterns in integrated circuits have long been recognized. Marilou Schultz's weavings of integrated circuits make these visual metaphors into concrete works of art. This connection is not just metaphorical, however; in the 1960s, the semiconductor company Fairchild employed numerous Navajo workers to assemble chips in Shiprock, New Mexico. I wrote about this complicated history in The Pentium as a Navajo Weaving.

This work is being shown at SITE Santa Fe's Once Within a Time exhibition (running until January 2026). I haven't seen the exhibition in person, so let me know if you visit it. For more about Marilou Schultz's art, see The Diné Weaver Who Turns Microchips Into Art, or A Conversation with Marilou Schultz on YouTube.

Many thanks to Marilou Schultz for discussing her art with me. Thanks to First American Art Magazine for providing the photo of her 555 rug. Follow me on Mastodon (@kenshirriff@oldbytes.space), Bluesky (@righto.com), or RSS for updates.

Hardware review: ergonomic mouse Logitech Lift

# Introduction

In addition to my regular computer mouse, at the end of 2024 I bought a Logitech Lift, a wireless ergonomic vertical mouse.  This was the first time I had used such a mouse, and although I regularly use a trackball, the experience is really different.

=> https://www.logitech.com/en-gb/shop/p/lift-vertical-ergonomic-mouse.910-006475 Logitech.com : Lift product

I wanted to write this article to give some feedback about this device: I enjoy it a lot and I cannot really go back to a regular mouse now.

# Specifications

The mouse runs on a single AA / LR6 battery which, after nine months of heavy daily use, is still reported as 30% charged.

The Lift connects using Bluetooth, but Logitech provides a small USB dongle for a perfect "out of the box" experience with any operating system.  The dongle can be stored inside the mouse when travelling or when not in use.  There is a small button on the bottom of the mouse and three LEDs, which let you switch the mouse between different computers: two over Bluetooth and one through the dongle.  The first profile is always the dongle.  This allows you to connect the mouse to two different computers over Bluetooth and switch between them, which works very well in practice.

As for the buttons, there is nothing fancy about the standard two; there are also easily reachable "back / next" buttons and one button to cycle the laser resolution / sensitivity.  The wheel is excellent, precise, and easy to use; if you give it a good flick it will spin a lot without going into a free-wheel mode like some other wheels, which is super handy for scrolling through a huge chunk of text.

Due to its design the mouse is not ambidextrous, but Logitech makes versions for both left-handed and right-handed users.

# Experience

The first week with the mouse was really weird, I was switching back and forth with my old Steel Series mouse because I was less accurate and not used to it.

After a week, I became used to holding it, moving it, and it was a real joy and source of fun to go on the computer to use this mouse :)

Then, without noticing, I started using it exclusively.  A few months later, I realized I had not used the previous mouse in a long time and gave it a try.  It was a terrible experience: I was surprised by how poorly it fit in my hand.  I disconnected it, and it has been stored in a box ever since.

It is hard to describe the feeling of this ergonomic mouse, as the hand position is really different, but it is so much more enjoyable that I do not plan to use a non-ergonomic mouse ever again.

I was reluctant to use a wireless mouse at first, but not having to deal with the cable acting as a "spring" is really appreciable.

I can definitely play video games with this mouse, except nervous FPS (maybe with some training?).

# Conclusion

The price tag could be a blocker for many, but at the same time a mouse is an essential peripheral when using your computer.  If you feel some pain in your hand when using your computer mouse, maybe give ergonomic mice a try.

URL filtering HTTP(S) proxy on Qubes OS

# Preamble

This article was first published as a community guide on Qubes OS forum.  Both are kept in sync.

=> https://forum.qubes-os.org/t/url-filtering-https-proxy/35846

# Introduction

This guide is meant for users who want to allow a qube to reach some websites but not the whole Internet, and who face the issue that the Qubes firewall does not work well for DNS names whose IPs change often.

⚠️ This guide is for advanced users who understand what a HTTP(s) proxy is, and how to type commands or edit files in a terminal.

The setup will create a `sys-proxy-out` qube that will define a list of allowed domains, and use qvm-connect-tcp to allow client qubes to use it as a proxy. Those qubes could have no netvm, but still reach the filtered websites.

I based it on debian 12 xfce, so it's easy to set up and will be supported long term.

# Use case

* an offline qube that needs to reach a particular website
* a web browsing qube restricted to a list of websites
* mix multiple netvm / VPNs into a single qube

# Setup the template

* Install debian-12-xfce template
* Make a clone of it, let's call it debian-12-xfce-squid
* Start the qube and open a terminal
* Type `sudo apt install -y squid`
* Delete and replace `/etc/squid/squid.conf` with this content (the default file is not suitable at all)

```
acl localnet src 127.0.0.1/32

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl permit_list dstdomain "/rw/config/domains.txt"
http_access allow localnet permit_list

http_port 3128

cache deny all
logfile_rotate 0
coredump_dir /var/spool/squid
```

The configuration file only allows the proxy to be used for ports 80 and 443, and disables cache (which would only apply to port 80).

Close the template, you are done with it.

# Setup an out proxy qube

This step could be repeated multiple times, if you want to have multiple proxies with different lists of domains.

* Create a new qube, let's call it `sys-proxy-out`, based on the template you configured above (`debian-12-xfce-squid` in the example)
* Configure its firewall to allow the destination `*` and port TCP 443, and also `*` and port TCP 80 (this covers basic needs for doing http/https). This is an extra safety to be sure the proxy will not use another port.
* Start the qube
* Configure the domain list in `/rw/config/domains.txt` with this format:

```
# for a single domain
domain.example

# for all direct subdomains of qubes-os.org, including qubes-os.org itself
# this works for doc.qubes-os.org for instance, but not foo.doc.qubes-os.org
.qubes-os.org
```

ℹ️ If you change the file, reload with `sudo systemctl reload squid`.

ℹ️ If you want to check that squid started correctly, type `systemctl status squid`.  You should see that it is active and that there are no errors in the log lines.

⚠️ If you have a line with a domain included by another line, squid will not start as it considers it an error! For instance `.qubes-os.org` includes `doc.qubes-os.org`.

⚠️ As far as I know, it is only possible to allow a hostname or a one-level wildcard of that hostname, so you at least need to know the depth of the hostname. If you want to allow `anything.anylevel.domain.com`, you could use `dstdom_regex` instead of `dstdomain`, but it seems to be a regular source of configuration problems and should not be needed by most users.
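
For illustration only (my own sketch, not part of the original guide; `domain.example` is a placeholder), a regex-based ACL in `/etc/squid/squid.conf` could look like this:

```
# hypothetical alternative to the dstdomain ACL above:
# allow domain.example and its subdomains at any depth, case-insensitively
acl permit_regex dstdom_regex -i (^|\.)domain\.example$
http_access allow localnet permit_regex
```

If you try this, keep the allow rule restricted to `localnet`, as in the main configuration.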

In dom0, using the "Qubes Policy Editor" GUI, create a new file named 50-squid (or edit the file `/etc/qubes/policy.d/50-squid.policy`) and append the configuration lines that you need to adapt from the following example:

```
qubes.ConnectTCP +3128 MyQube @default allow target=sys-proxy-out
qubes.ConnectTCP +3128 MyQube2 @default allow target=sys-proxy-out
```

This will allow qubes `MyQube` and `MyQube2` to use the proxy from `sys-proxy-out`. Adapt to your needs here.

# How to use the proxy

Now that the proxy is set up and `MyQube` is allowed to use it, a few more things are required:

* Start qube `MyQube`
* Edit `/rw/config/rc.local` to add `qvm-connect-tcp ::3128`
* Configure http(s) clients to use `localhost:3128` as a proxy

It's possible to define the proxy user-wide, so it should be picked up by all running programs, using this:

```
mkdir -p /home/user/.config/environment.d/
cat <<EOF > /home/user/.config/environment.d/proxy.conf
all_proxy=http://127.0.0.1:3128/
EOF
```

# Going further

## Using a disposable qube for the proxy

The sys-proxy-out could be a disposable. In order to proceed:

* mark sys-proxy-out as a disposable template in its settings
* create a new disposable qube using sys-proxy-out as a template
* adapt the dom0 rule to have the new disposable qube name in the target field

## Checking logs

In the proxy qube, you can check all requests in `/var/log/squid/access.log`; filter with `grep TCP_DENIED` to see denied requests, which can be useful for adapting the domain list.
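
For example, a minimal way to watch denied requests as they happen (assuming the default Debian log location mentioned above):

```
sudo tail -f /var/log/squid/access.log | grep TCP_DENIED
```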

## Test the proxy

### Check allowed domains are reachable

From the http(s) client qube, you can try this command to see if the proxy is working:

```
curl -x http://localhost:3128 https://a_domain_you_allowed/
```

If the output is not `curl: (56) CONNECT tunnel failed, response 403` then it's working.

### Check non-allowed domains are denied

Use the same command as above, but with a domain you did not allow:

```
curl -x http://localhost:3128 https://a_domain_you_did_not_allow/
```

The output should be `curl: (56) CONNECT tunnel failed, response 403`.

### Verify nothing is getting cached

In the qube `sys-proxy-out`, inspect `/var/spool/squid/`: it should be empty. If it is not, please report it here, as this should not happen.

Some log files exist in `/var/log/squid/`; if you don't want any hints about queried domains, configure squid accordingly. Privacy-specific tweaks are beyond the scope of this guide.

What I think about when I think about Claude Code

Writing code with Claude Code doesn’t feel the same as any other way of writing code.

Quick background: it’s an AI tool that you install and run in your terminal in a code repository. It’s a little like a chat interface on the command line, but mainly you type what you want, and then Claude churns away for a few minutes, looking up docs on the web, checking through your files, making a to-do list, writing code, running tests and so on, until it’s done.

This is not the same as other AI code-writing tools such as Github Copilot (as previously discussed (2023)) which is a joy, and you stride along 20 auto-generated lines at a go, but is ultimately just very very good auto-complete.

No, spending a morning coding with Claude Code is different.

You just loop one minute composing a thoughtful paragraph to the agent, and three minutes waiting, gazing out the window contemplating the gentle breeze on the leaves, the distant hum of traffic, the slow steady unrelenting approach of that which comes for us all.


Yes yes other terminal-based coding agents are available. Claude Code made it work first and it’s the one I’ve used most.


Writing that “thoughtful paragraph”…

The trick with Claude Code is to give it large, but not too large, extremely well defined problems.

(If the problems are too large then you are now vibe coding… which (a) frequently goes wrong, and (b) is a one-way street: once vibes enter your app, you end up with tangled, write-only code which functions perfectly but can no longer be edited by humans. Great for prototyping, bad for foundations.)

So the experience is that, before you write, you gaze into space, building castles in the imagination, visualising in great detail the exact contours of the palisades and battlements, the colours of the fluttering flags, turning it round in your head, exploring how it all fits together. Then you have to narrate it all in clear paragraphs, anticipating Claude’s potential misunderstandings, stating precisely the future that you want.

You can and should think hard about your exact intent: here’s a wonderful (and long) case study (taylor.town) and you can see there are pages and pages and pages of careful design and specification documents, before Claude is even allowed to touch code.

Claude Code didn’t work well for me the first few times I used it. I asked for too much or too little. It takes a while to calibrate your seven-league boots.


So the rhythm is slower versus the regular way.

I’m interested in the subjective feeling of coding (2023) because (to me) firmware feels like precision needlework, nested parentheses feel like being high up, etc.

I think a lot of this is about breath?

Conventionally: I’m sure I hold my breath when I’m midway through typing a conditional, just a little. The rhythm of my breath takes on the rhythm of writing code.

Many years ago, Linda Stone observed email apnea (2014):

I noticed, almost immediately, that once I started to work on email, I was either shallow breathing or holding my breath.

She studied it. 80% of people experienced compromised breathing working on email (the 20% who didn’t had, in their regular lives, been taught breathing techniques, and were unconsciously managing it).

BUT, "cumulative breath holding contributes to stress-related diseases. The body becomes acidic" – there’s feedback; when you shorten your breath, even if the cause was not initially stress, you become stressed.

WHEREAS:

With Claude Code, I don’t have that metronome shortening my breath. I do not subject myself to “code apnea.”

So it becomes calm, contemplative.


New job concept: a hold music composer for the 3 minute waits while Claude Code is Thinking…

Analogy: elevator music.

I’ve been reading about the company Muzak, the subscription music company founded by George Owen Squier in 1934. The History of Muzak:

In the early 1920s, Squier discovered a method of transmitting information via electrical wires and realized that this new method could be used to distribute music.

But:

Even in the 1930s, music licensing was a difficult beast to tame. At the time, music played on the radio was broadcast live, while recorded music was only licensed for personal use at home on gramophones.

And so Muzak boiled the ocean and simply recorded their own music, hundreds of musicians over the 1930s, "sometimes capturing as many as twelve tracks in a day."

And then piped music into:

  • factories
  • restaurants
  • hotels
  • elevators: "It was a fairly common practice to play music in elevators to both soothe passengers and pass the time since elevators were not as smooth or as fast as they are today."

Music has a psychological effect, promoted by Muzak in the 1950s:

The basic concept of Stimulus Progression is the idea of changing the styles and tempos of music to counterbalance and influence the natural rhythms of the human body. Studies showed that employee production would dip during certain times of the day; before and after lunch, for example. Muzak playlists were then programmed against those patterns by playing more upbeat music during the slower times of the day, and vice versa.

Anyway.

Muzak, elevator music, has a reputation for being bland and beige.

But it is functional: Stimulus Progression, see. (Calm shoppers buy more.)

And it conceives of the elevator as a space to be filled with music; for all its liminality it is a space which we inhabit and do not simply pass across.

And so: when Claude Code is elevating my code, we should not be waiting… we should fill the space!

ChatGPT now has the ability to change the accent colour of the chat UI. Same same. Give me light! Give me sound!


A Social History of Background Music (2017):

In the 70s, Brian Eno sat in an airport for a few hours waiting for a flight and was annoyed by the canned background music. In 1978 he produced Ambient 1: Music For Airports, a mellow, experimental soundscape intended to relax listeners.

Who will be the Brian Eno of coding agent hold music?

Music for Claude Coding.


I also use Claude Code in the process of writing normal words.

Code is text, words are text. So they built it for code but it can work just the same.

As you can see in my colophon I keep a lot of notes going back a couple decades, and these notes are a big folder of Markdown text documents. (I use iA Writer these days.)

So I pop open the root directory in the terminal and init Claude Code.

Then I say: "please look over the 30-40 most recent files in the blog posts folder and - concentrating on the ones that aren’t like finished posts (because I will have published those) - give me half a dozen ideas of what to write a blog post about today"

I don’t use it to do any actual writing. I prefer my words to be my own. But it’s neat to riff over my own notes like this.


So you don’t actually sit and do nothing for 3-4 minutes.

While it works, Claude runs commands on your computer which do anything from editing code and searching the web to, uh, deleting all files in your home directory (it can make mistakes). Fortunately it asks each time for permission. And you respond each time from a menu:

  • Yes
  • Yes and don’t even ask me next time
  • No but here’s what to try instead

So your inner loop interaction with Claude Code is approval, course correction, and Claude accelerating in autonomy and power as your approvals accrete.

It’s a loop built around positive reinforcement and forward motion. And, because of this, you personally end up building a lot of trust in Claude and its ability to plan and execute.

What you want to do but absolutely MUST NOT do is start Claude Code with the flag --dangerously-skip-permissions which slams it into yolo mode.

Don’t do it! But you know you want to.


Then of course you want to put Claude Code in control of everything else.

e.g. Claude on the web can now deal with spreadsheets.

So could we give it a Hugging Face robot arm and stick the arm on Roomba and let it loose in my front room?

claude "tidy my house" --dangerously-skip-permissions

Claude Code when pls

Pneumatic elevators

I’m on the tube a bunch right now (cooking something new and borrowing desks, thanks!) and one of the frustrating bits of the commute is going from street level to underground. Escalators are slooooow.

(Whereas being static on trains is fine as I can tap blog posts with my thumbs while standing/sitting. Evidence A: you’re reading it.)

So I wonder if there’s a radically quicker way to descend.

Falling would be quickest (unaccelerated), but then there’s stopping.

A net would be difficult because of standing up after. You’d get hit by the next person while you were untangling. So individual transit would be quicker but overall throughput lower because you need to add buffer time.

But maybe jets of compressed air could help?

So what I’m imagining is a pit that you step into and simply drop, with AI-controlled jets of compressed air all the way down that

  • control your attitude (no tumbling pls)
  • rapidly decelerate you at the end
  • and direct you off to the side (rotating around the base clockwise, person by person, to avoid collisions) to step away and walk to the platform.

An alternative to air jets:

Sufficiently powerful magnets can also spontaneously create magnetism in flesh (or other nonferrous material) because electron orbitals are current loops. This is diamagnetism.

e.g. in 1997 Nobel laureate Andre Geim put a live frog in a 16 tesla magnetic field and made it float: "all one needs to levitate a frog is a magnetic field 1,000 to 10,000 times stronger than a refrigerator magnet, or 10 times stronger than an MRI machine."

So superconductors and 16T magnets could be used for horizontal underground tunnels too.

Although anything metallic like smartphones and earrings, well I don’t know what would happen. Like bullets in a rifle probably.

Let’s stick with air jets.


We should be using AI for weird new physics (2022). Why not free-fall pits with too-hard-to-model-by-humans AI-controlled air jets? WHY NOT? Cowards if we don’t, that’s what I think.

So Simon Willison always asks AI models to draw a pelican riding a bicycle. It gives him a way to track performance. Here are pelicans for the first 6 months of 2025.

My personal benchmark is to ask deep research AI agents to give me an R&D plan and investment deck for a space elevator.

(A space elevator is an interesting task because it requires breakthroughs in material science but is not fundamentally impossible, and the investment viability requires a reach into the future, to show when it becomes profitable, so the challenge is to break it up into steps where each is viable in its own right and de-risks the next. So it’s multidisciplinary and complicated, but this kind of breadth-first, highly parallel search through the idea maze is precisely what AI should be good at.)

I feel like the threshold for AGI is not whether the AI can do the task, but whether it can show it to be so economically inevitable that it happens immediately via capitalism.

If a space elevator to the Kármán line is too much of a stretch then I would settle for a pneumatic elevator to the Northern line.



The destination for AI interfaces is Do What I Mean

David Galbraith has a smart + straightforward way to frame how AI will change the user interface.

First he imagines taking prompts and wrapping them up as buttons:

The best prompts now are quite simple, leaving AI to handle how to answer a question. Meanwhile AI chat suffers from the same problem from command lines to Alexa - how can i remember what to ask? Only this time the problem is exacerbated by the fact that AI is capable of practically anything making the task less one of remembering commands but a creative one of coming up with great questions or delivering an interface to discover them and then wrapping the resulting prompts as buttons.

(Which honestly would be amazing on its own: I have a few prompts I use regularly including Diane, my transcription assistant, and I have nowhere to keep them or run them or share them except for text files and my terminal history.)

And then he uses the concept of buttons to explain how a full AI interface can be truly different:

AI buttons are different from, say Photoshop menu commands in that they can just be a description of the desired outcome rather than a sequence of steps (incidentally why I think a lot of agents’ complexity disappears). For example Photoshop used to require a complex sequence of tasks (drawing around elements with a lasso etc.) to remove clouds from an image. With AI you can just say ‘remove clouds’ and then create a remove clouds button. An AI interface is a ‘semantic interface’.

Aha!

The buttons concept is not essential for this insight (though it’s necessary for affordances); the final insight is what matters.

I would perhaps say “intent” rather than “semantic.”

i.e. the user expresses the intent to "remove clouds" and then, today, is required to follow interface bureaucracy to achieve that. AI removes the bureaucracy.

And then: there are some intents which are easy to say but can’t be simply met using the bureaucracy of interface elements like buttons, drop-downs, swipes and lists. There are cognitive ergonomic limits to the human interface with software; with hardware there are physical limits to the control panel too. This constrains what we can do with our products as much as if they didn’t have that functionality at all.

So removing the interface bureaucracy is not about simplicity but about increasing expressiveness and capability.


What does it look like if we travel down the road of intent-maxing?

There’s a philosophy from the dawn of computing, DWIM a.k.a. Do What I Mean (Wikipedia).

Coined by computer scientist Warren Teitelman in 1966 and here explained by Larry Masinter in 1981: DWIM "embodies a pervasive philosophy of user interface design."

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Yes!

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.


This perspective sheds some light on why OpenAI + others are chasing the mythical Third Device (The Verge). (Maybe it’s a hat.)

A DWIM AI-powered UI needs maximum access to context (to interpret the user and also for training) and to get as close as possible to the point of intent.

btw I’m not convinced the answer looks like One Device To Rule Them All but that’s another story.


It’s interesting to consider what a philosophy of Do What I Mean might lead to in a physical environment rather than just phones and PCs, say with consumer hardware.

Freed from interface bureaucracy, you want to optimise for capturing user intent with ease, expressiveness, and resolution – very different from the low bandwidth interface paradigm of jabbing single fingers at big buttons.

So I’ve talked before about high bandwidth computer input, speculatively in terms of Voders, pedals, and head cursors (2021) or more pragmatically with voice, gesture, and gaze for everything.

But honestly as a vision you can’t do better than Put-That-There (1982!!!!) by the Architecture Machine Group at MIT.

Here’s a short video demo: multimodal voice + pointing with a big screen and two-way conversation.

Like, let’s just do that?

(One observation is that I don’t think this necessarily leads to a DynamicLand-style programmable environment; Put-That-There works as a multimodal intent interface even without end-user programming.)


Anyway.

"Remove clouds."



Many Hard Leetcode Problems are Easy Constraint Problems

In my first interview out of college I was asked the change counter problem:

Given a set of coin denominations, find the minimum number of coins required to make change for a given number. IE for USA coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day.

int: total;
array[int] of int: values = [10, 9, 1];
array[index_set(values)] of var 0..: coins;

constraint sum (c in index_set(coins)) (coins[c] * values[c]) == total;
solve minimize sum(coins);

You can try this online here. It'll give you a prompt to put in total and then give you successively-better solutions:

coins = [0, 0, 37];
----------
coins = [0, 1, 28];
----------
coins = [0, 2, 19];
----------
coins = [0, 3, 10];
----------
coins = [0, 4, 1];
----------
coins = [1, 3, 0];
----------

Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function subject to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems.1 Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.
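
As a rough illustration of that claim (my own sketch, not from the original post; it assumes the `z3-solver` Python package and hard-codes total = 37), the same change-counting model in Z3 looks like this:

```python
# pip install z3-solver
from z3 import Ints, Optimize, Sum, sat

values = [10, 9, 1]
total = 37

opt = Optimize()
coins = Ints("c10 c9 c1")                 # how many of each denomination to use
opt.add([c >= 0 for c in coins])          # coin counts can't be negative
opt.add(Sum([c * v for c, v in zip(coins, values)]) == total)
opt.minimize(Sum(coins))                  # use as few coins as possible

if opt.check() == sat:
    m = opt.model()
    print([m[c] for c in coins])          # optimal answer: [1, 3, 0], i.e. 10 + 9 + 9 + 9
```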

More examples

This was a question in a different interview (which I thankfully passed):

Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.

It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem:

array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
var int: buy;
var int: sell;
var int: profit = prices[sell] - prices[buy];

constraint sell > buy;
constraint profit > 0;
solve maximize profit;

Reminder, link to trying it online here. While working at that job, one interview question we tested out was:

Given a list, determine if three numbers in that list can be added or subtracted to give 0?

This is a satisfaction problem, not an optimization problem: we don't need the "best" answer, any answer will do. We eventually decided against it for being too tricky for the engineers we were targeting. But it's not tricky in a solver:

include "globals.mzn";
array[int] of int: numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
array[index_set(numbers)] of var {0, -1, 1}: choices;

constraint sum(n in index_set(numbers)) (numbers[n] * choices[n]) = 0;
constraint count(choices, -1) + count(choices, 1) = 3;
solve satisfy;

Okay, one last one, a problem I saw last year at Chipy AlgoSIG. Basically they pick some leetcode problems and we all do them. I failed to solve this one:

Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.

example from leetcode link

The "proper" solution is a tricky thing involving tracking lots of bookkeeping states, which you can completely bypass by expressing it as constraints:

array[int] of int: numbers = [2,1,5,6,2,3];

var 1..length(numbers): x; 
var 1..length(numbers): dx;
var 1..: y;

constraint x + dx <= length(numbers);
constraint forall (i in x..(x+dx)) (y <= numbers[i]);

var int: area = (dx+1)*y;
solve maximize area;

output ["(\(x)->\(x+dx))*\(y) = \(area)"]

There's even a way to automatically visualize the solution (using vis_geost_2d), but I didn't feel like figuring it out in time for the newsletter.

Is this better?

Now if I actually brought these questions to an interview, the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solver runtimes are unpredictable and almost always slower than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver.

The real advantage of solvers, though, is how well they handle new constraints. Take the stock picking problem above. I can write an O(n²) algorithm in a few minutes and the O(n) algorithm if you give me some time to think. Now change the problem to

Maximize the profit by buying and selling up to max_sales stocks, but you can only buy or sell one stock at a given time and you can only hold up to max_hold stocks at a time?

That's a way harder problem to write even an inefficient algorithm for! While the constraint problem is only a tiny bit more complicated:

include "globals.mzn";
int: max_sales = 3;
int: max_hold = 2;
array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
array [1..max_sales] of var int: buy;
array [1..max_sales] of var int: sell;
array [index_set(prices)] of var 0..max_hold: stocks_held;
var int: profit = sum(s in 1..max_sales) (prices[sell[s]] - prices[buy[s]]);

constraint forall (s in 1..max_sales) (sell[s] > buy[s]);
constraint profit > 0;

constraint forall(i in index_set(prices)) (stocks_held[i] = (count(s in 1..max_sales) (buy[s] <= i) - count(s in 1..max_sales) (sell[s] <= i)));
constraint alldifferent(buy ++ sell);
solve maximize profit;

output ["buy at \(buy)\n", "sell at \(sell)\n", "for \(profit)"];

Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking.


Update for the Internet

This was sent as a weekly newsletter, which is usually on topics like software history, formal methods, unusual technologies, and the theory of software engineering. You can subscribe here:


  1. Because my dad will email me if I don't explain this: "leetcode" is slang for "tricky algorithmic interview questions that have little-to-no relevance in the actual job you're interviewing for." It's from leetcode.com

The Angels and Demons of Nondeterminism

Greetings everyone! You might have noticed that it's September and I don't have the next version of Logic for Programmers ready. As penance, here's ten free copies of the book.

So a few months ago I wrote a newsletter about how we use nondeterminism in formal methods. The overarching idea:

  1. Nondeterminism is when multiple paths are possible from a starting state.
  2. A system preserves a property if it holds on all possible paths. If even one path violates the property, then we have a bug.

An intuitive model of this is that, when faced with a nondeterministic choice, the system always makes the worst possible choice. This is sometimes called demonic nondeterminism and is favored in formal methods because we are paranoid to a fault.

The opposite would be angelic nondeterminism, where the system always makes the best possible choice. A property then holds if any possible path satisfies that property.1 This is not as common in FM, but it still has its uses! "Players can access the secret level" or "We can always shut down the computer" are reachability properties, that something is possible even if not actually done.

In broader computer science research, I'd say that angelic nondeterminism is more popular, due to its widespread use in complexity analysis and programming languages.

Complexity Analysis

P is the set of all "decision problems" (basically, boolean functions) that can be solved in polynomial time: there's an algorithm that's worst-case in O(n), O(n²), O(n³), etc.2 NP is the set of all problems that can be solved in polynomial time by an algorithm with angelic nondeterminism.3 For example, the question "does list l contain x" can be solved in O(1) time by a nondeterministic algorithm:

fun is_member(l: List[T], x: T): bool {
  if l == [] {return false};

  guess i in 0..<(len(l)-1);
  return l[i] == x;
}

Say we call is_member([a, b, c, d], c). The best possible choice would be to guess i = 2, which would correctly return true. Now call is_member([a, b], d). No matter what we guess, the algorithm correctly returns false. Ergo, O(1). NP stands for "Nondeterministic Polynomial".

(And I just now realized something pretty cool: you can say that P is the set of all problems solvable in polynomial time under demonic nondeterminism, which is a nice parallel between the two classes.)

Computer scientists have proven that angelic nondeterminism doesn't give us any more "power": there are no problems solvable with AN that aren't also solvable deterministically. The big question is whether AN is more efficient: it is widely believed, but not proven, that there are problems in NP but not in P. Most famously, "Is there any variable assignment that makes this boolean formula true?" A polynomial AN algorithm is again easy:

fun SAT(f(x1, x2, …: bool): bool): bool {
   N = num_params(f)
   for i in 1..=num_params(f) {
     guess x_i in {true, false}
   }

   return f(x_1, x_2, …)
}

The best deterministic algorithms we have to solve the same problem are worst-case exponential in the number of boolean parameters. This is a really frustrating problem because real computers don't have angelic nondeterminism, so problems like SAT remain hard. We can solve most "well-behaved" instances of the problem in reasonable time, but the worst-case instances get intractable real fast.

Means of Abstraction

We can directly turn an AN algorithm into a (possibly much slower) deterministic algorithm, such as by backtracking. This makes AN a pretty good abstraction over what an algorithm is doing. Does the regex (a+b)\1+ match "abaabaabaab"? Yes, if the regex engine nondeterministically guesses that it needs to start at the third letter and make the group aab. How does my PL's regex implementation find that match? I dunno, backtracking or NFA construction or something, I don't need to know the deterministic specifics in order to use the nondeterministic abstraction.
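
To make the "compile the guess away" idea concrete, here is my own toy Python sketch (not from the newsletter) in which the angelic `guess` is replaced by deterministic exhaustive search: try every choice and accept if any branch succeeds.

```python
from itertools import product

def is_member(lst, x):
    # Deterministic version of the angelic is_member:
    # "guess i" becomes "try every index and accept if any guess works".
    return any(lst[i] == x for i in range(len(lst)))

def sat_brute_force(f, n):
    # Deterministic version of the angelic SAT sketch:
    # try all 2^n assignments instead of guessing the right one (exponential!).
    return any(f(*bits) for bits in product([False, True], repeat=n))

print(is_member(["a", "b", "c", "d"], "c"))          # True
print(sat_brute_force(lambda p, q: p and not q, 2))  # True: p=True, q=False works
```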

Neel Krishnaswami has a great definition of 'declarative language': "any language with a semantics has some nontrivial existential quantifiers in it". I'm not sure if this is identical to saying "a language with an angelic nondeterministic abstraction", but they must be pretty close, and all of his examples match:

  • SQL's selects and joins
  • Parsing DSLs
  • Logic programming's unification
  • Constraint solving

On top of that I'd add CSS selectors and planner's actions; all nondeterministic abstractions over a deterministic implementation. He also says that the things programmers hate most in declarative languages are features "that expose the operational model": constraint solver search strategies, Prolog cuts, regex backreferences, etc. Which again matches my experiences with angelic nondeterminism: I dread features that force me to understand the deterministic implementation. But they're necessary, since P probably != NP and so we need to worry about operational optimizations.

Eldritch Nondeterminism

If you need to know the ratio of good/bad paths, the number of good paths, or probability, or anything more than "there is a good path" or "there is a bad path", you are beyond the reach of heaven or hell.


  1. Angelic and demonic nondeterminism are duals: angelic returns "yes" if some choice: correct and demonic returns "no" if !all choice: correct, which is the same as some choice: !correct

  2. Pet peeve about Big-O notation: O(n²) is the set of all algorithms that, for sufficiently large problem sizes, grow no faster than quadratically. "Bubblesort has O(n²) complexity" should be written Bubblesort in O(n²), not Bubblesort = O(n²)

  3. To be precise, solvable in polynomial time by a Nondeterministic Turing Machine, a very particular model of computation. We can broadly talk about P and NP without framing everything in terms of Turing machines, but some details of complexity classes (like the existence of "weak NP-hardness") kinda need Turing machines to make sense. 

Logical Duals in Software Engineering

(Last week's newsletter took too long and I'm way behind on Logic for Programmers revisions so short one this time.1)

In classical logic, two operators F/G are duals if F(x) = !G(!x). Three examples:

  1. x || y is the same as !(!x && !y).
  2. <>P ("P is possibly true") is the same as ![]!P ("not P isn't definitely true").
  3. some x in set: P(x) is the same as !(all x in set: !P(x)).

(1) is just a version of De Morgan's Law, which we regularly use to simplify boolean expressions. (2) is important in modal logic but has niche applications in software engineering, mostly in how it powers various formal methods.2 The really interesting one is (3), the "quantifier duals". We use lots of software tools to either find a value satisfying P or check that all values satisfy P. And by duality, any tool that does one can do the other, by seeing if it fails to find/check !P. Some examples in the wild:

  • Z3 is used to solve mathematical constraints, like "find x, where f(x) >= 0". If I want to prove a property like "f is always positive", I ask z3 to solve "find x, where !(f(x) >= 0)" and see if that is unsatisfiable (see the sketch after this list). This use case powers a LOT of theorem provers and formal verification tooling.
  • Property testing checks that all inputs to a code block satisfy a property. I've used it to generate complex inputs with certain properties by checking that all inputs don't satisfy the property and reading out the test failure.
  • Model checkers check that all behaviors of a specification satisfy a property, so we can find a behavior that reaches a goal state G by checking that all states are !G. Here's TLA+ solving a puzzle this way.3
  • Planners find behaviors that reach a goal state, so we can check if all behaviors satisfy a property P by asking it to reach goal state !P.
  • The problem "find the shortest traveling salesman route" can be broken into some route: distance(route) = n and all route: !(distance(route) < n). Then a route finder can find the first, and then convert the second into a some and fail to find it, proving n is optimal.
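
As a small illustration of the Z3 bullet above (my own sketch, not from the newsletter), take f(x) = x² + 1 and ask Z3 for a counterexample to "f is always positive"; an unsat answer means no counterexample exists, so the universal property holds:

```python
from z3 import Int, Not, Solver, unsat

x = Int("x")
f = x * x + 1                 # f(x) = x^2 + 1, positive for every integer x

# Duality: "all x: f(x) > 0" holds iff "some x: !(f(x) > 0)" is unsatisfiable.
s = Solver()
s.add(Not(f > 0))

print(s.check() == unsat)     # True: no counterexample, so f is always positive
```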

Even cooler to me is when a tool does both finding and checking, but gives them different "meanings". In SQL, some x: P(x) is true if we can query for P(x) and get a nonempty response, while all x: P(x) is true if all records satisfy the P(x) constraint. Most SQL databases allow for complex queries but not complex constraints! You got UNIQUE, NOT NULL, REFERENCES, which are fixed predicates, and CHECK, which is one-record only.4

Oh, and you got database triggers, which can run arbitrary queries and throw exceptions. So if you really need to enforce a complex constraint P(x, y, z), you put in a database trigger that queries some x, y, z: !P(x, y, z) and throws an exception if it finds any results. That all works because of quantifier duality! See here for an example of this in practice.

Duals more broadly

"Dual" doesn't have a strict meaning in math, it's more of a vibe thing where all of the "duals" are kinda similar in meaning but don't strictly follow all of the same rules. Usually things X and Y are duals if there is some transform F where X = F(Y) and Y = F(X), but not always. Maybe the category theorists have a formal definition that covers all of the different uses. Usually duals switch properties of things, too: an example showing some x: P(x) becomes a counterexample of all x: !P(x).

Under this definition, I think the dual of a list l could be reverse(l). The first element of l becomes the last element of reverse(l), the last becomes the first, etc. A more interesting case: the dual of a K -> set(V) map is the V -> set(K) map. IE the dual of lived_in_city = {alice: {paris}, bob: {detroit}, charlie: {detroit, paris}} is city_lived_in_by = {paris: {alice, charlie}, detroit: {bob, charlie}}. This preserves the property that x in map[y] <=> y in dual[x].
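
A quick Python sketch of that map dual (my own illustration, not from the newsletter):

```python
from collections import defaultdict

lived_in_city = {"alice": {"paris"}, "bob": {"detroit"}, "charlie": {"detroit", "paris"}}

# The dual of a K -> set(V) map is the V -> set(K) map.
city_lived_in_by = defaultdict(set)
for person, cities in lived_in_city.items():
    for city in cities:
        city_lived_in_by[city].add(person)

# The defining property: x in map[y] <=> y in dual[x].
assert all(person in city_lived_in_by[city]
           for person, cities in lived_in_city.items()
           for city in cities)
print(dict(city_lived_in_by))  # e.g. {'paris': {'alice', 'charlie'}, 'detroit': {'bob', 'charlie'}}
```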


  1. And after writing this I just realized this is a partial retread of a newsletter I wrote a couple months ago. But only a partial retread! 

  2. Specifically, "linear temporal logics" are modal logics, so eventually P ("P is true in at least one state of each behavior") is the same as saying !always !P ("not P isn't true in all states of all behaviors"). This is the basis of liveness checking.

  3. I don't know for sure, but my best guess is that Antithesis does something similar when their fuzzer beats videogames. They're doing fuzzing, not model checking, but it has the same purpose: checking that complex state spaces don't have bugs. Making the bug "we can't reach the end screen" can make a fuzzer output a complete end-to-end run of the game. Obvs a lot more complicated than that but that's the general idea at least. 

  4. For CHECK to constrain multiple records you would need to use a subquery. Core SQL does not support subqueries in CHECK. It is an optional database "feature outside of core SQL" (F671), which Postgres does not support.

Workshop: ChatGPT for Engineers – An example of developing and implementing a lossy audio encoder/decoder (ChatGPT, Python, C, GitHub)

Description: An intensive, hands-on workshop for engineers in which participants use ChatGPT (GPT-5 thinking) and ChatGPT Codex (research preview) to design and implement an efficient psychoacoustic audio encoder/decoder: first a prototype in Python, then a port to plain C.

You need your own equipment for this workshop. The requirements are specified in detail at the bottom of the page. As the organisers, we cannot provide computers or the required ChatGPT licences!

Detailed description: In the first part we present the theoretical foundations of perceptual audio coding: the basics of psychoacoustics (masking), time-frequency transforms (DCT, windowing), quantisation, and entropy coding. Then, through guided iterative prompting of the GPT-5 thinking or GPT-5 pro model (for participants with a Team or Pro subscription), we develop a prototype encoder/decoder pair in Python (framing, DCT, a simple psychoacoustic model, lossless coding of the bitstream, bit allocation).

In the second part we set up a GitHub repository, add tests, and use ChatGPT Codex to port the code to C (NOT C++), together with automatic refactoring and automatic code documentation. The workshop requires your own laptop with Linux or Windows with WSL installed; the emphasis is on building a prototype that can serve as a deliverable at TRL 3-4 or higher.

Difficulty: Advanced

Recommended prior knowledge: Basic use of ChatGPT; Linux or Windows with WSL (Windows Subsystem for Linux, plain Windows is not supported!); Git and GitHub; basic Python (NumPy/SciPy); basic knowledge of signal processing (FFT/DCT) and entropy coding, or the desire to learn these basics; basics of the C language.

Target audience: Engineers and researchers in computer science, electrical engineering, mechanical engineering, and physics; software developers; HPC users.

Participant limit: 15

Skills gained at the training:

• Understanding the fundamental concepts of perceptual audio coding (masking, critical bands).
• Familiarity with the DCT + lossless coding pipeline: framing, windowing, quantisation, entropy coding.
• Design and implementation of a working audio encoder-decoder pair in Python.
• Using ChatGPT-5 (thinking/pro) for iterative prototyping, explanation, and code review.
• Porting Python code to ANSI C with ChatGPT Codex, basic optimisations, and testing.
• Working with Git/GitHub (branching, pull requests)

Course outline: Introduction → theoretical foundations → guided development of the Python prototype → code organisation and tests → GitHub → port to C with ChatGPT Codex → optimisation/refactoring → wrap-up and recommendations.

Location (physical): Fakulteta za elektrotehniko - Multimedia hall, Tržaška cesta 25, 1000 Ljubljana

Note: the conditions for participation are:

1. Your own laptop and your own headphones. The laptop must have a working installation of Linux or Windows with WSL2. Check that it works with the code you will find at the bottom of the page (GitHub). The code we will write will be Linux-specific to keep the work simple. Windows 10 users with WSL2 may have trouble getting real-time audio to work, so we will also provide an export to WAV files that you can play back on headphones, so the work will still be possible. If you want, you can try installing GWSL - in the Microsoft Store choose "Trial", which is fully equivalent to the paid version: https://opticos.github.io/gwsl/

2. Your own ChatGPT subscription of at least the Plus tier (Team or Pro are of course fine too). If you have a shared subscription, make sure it will not block your work!

At the bottom of the page you will find test code with which you can check whether audio works for you. Test it before you register!

Organizer:

Lecturers:

Janez Perš
Researcher and mentor at UL FE; areas: signal processing, computer vision, development of research software, and the use of large language models in engineering projects.
janez.pers@fe.uni-lj.si

Janez Križaj
Janez Križaj is a researcher at the Fakulteta za elektrotehniko, Univerza v Ljubljani. His research areas are deep learning, computer vision, biometrics, face recognition, pattern recognition, and image processing.
janez.krizaj@fe.uni-lj.si

Invitation to the SLING presentation for the public sector

How to make public administration efficient using supercomputing infrastructure and artificial intelligence

You are invited to a free online event where you will learn how supercomputing (HPC) and artificial intelligence (AI) can help you improve public administration services. Greater computing capacity is already available in the Slovenian supercomputing network SLING. You will need it for faster processing and linking of large volumes of data and documents, running complex analyses, modelling, simulations and visualisations, artificial intelligence (AI) applications, development of large language models (LLM), and high-performance data analytics (HPDA).

When: 11 September 2025, 11:00-12:30

Location: ZOOM

At the webinar you will:

  • learn about the advantages of supercomputing infrastructure for faster processing and linking of the large volumes of data and documents generated by various public-sector organisations,
  • find out how to access technology environments and supercomputing infrastructure in Slovenia and Europe free of charge,
  • find out how to use the support of the National Competence Centre SLING free of charge.

Event programme:

  • Presentation of the National Competence Centre SLING, dr. Jan Jona Javoršek, Inštitut Jožef Stefan
  • Demonstration of access to the test environments of the Slovenian supercomputing network SLING, doc. dr. Ratko Pilipović, Fakulteta za računalništvo in informatiko, Univerza v Ljubljani
  • Practical use case: Conversational systems and large language models in public administration, dr. Mladen Borovič, Fakulteta za elektrotehniko, računalništvo in informatiko, Univerza v Mariboru
  • Practical use case: Using artificial intelligence to analyse investigative data, izr. prof. dr. Niko Lukač, Fakulteta za elektrotehniko, računalništvo in informatiko, Univerza v Mariboru
  • Practical use case: Monitoring the state of spatial environments by fusing heterogeneous data sources and streams, prof. dr. Domen Mongus, Fakulteta za elektrotehniko, računalništvo in informatiko, Univerza v Mariboru
  • Discussion: How NCC SLING and the public administration can cooperate in the field of supercomputing and artificial intelligence.

Attendance is free of charge. Due to the limited number of participants, advance registration for the event is MANDATORY.

All registered participants will receive a reminder by e-mail with the access link one day before the training starts.

You are warmly invited.

Event organisers: Ministrstvo za digitalno preobrazbo, Ministrstvo za javno upravo, Nacionalni kompetenčni center SLING

REGISTRATION

Workshop: Supercomputing Basics

Description: In this workshop we will get to know the structure of compute clusters and the software running on them, and run our first jobs. You will learn to distinguish between login nodes, compute nodes, and data storage systems. You will learn about the role of the operating system, the Slurm middleware, and user applications. You will connect to the login nodes, transfer files to and from the supercomputer, run jobs that process video recordings, and monitor job execution.

Difficulty: Basic

Language: Slovenian

Date: 23 September 2025, 10:00-15:00

Participant limit: 30

Virtual location: ZOOM

Target audience: researchers, engineers, students, anyone who needs more computing resources for their work

Skills gained at the training:

  • Understanding how supercomputers work and how they are built
  • Using the SLURM middleware
  • Basic use of software environments and containers
  • Managing files and running jobs
  • Basic video processing

Organizer:

FRI logo

Lecturers:

Name: Davor Sluga
Description: https://fri.uni-lj.si/sl/o-fakulteti/osebje/davor-sluga
E-mail: davor.sluga@fri.uni-lj.si
Name: Ratko Pilipović
Description: https://www.fri.uni-lj.si/sl/o-fakulteti/osebje/ratko-pilipovic
E-mail: ratko.pilipovic@fri.uni-lj.si

 


Workshop: Let's Use Supercomputers!

Description: The workshop is intended for researchers, engineers, students, and anyone else who has realised that they need more computing resources than ordinary computers can offer. The workshop will take place as part of the IEEE ERK 2025 conference.

In the workshop we will get to know the Slovenian supercomputing infrastructure and the ways to access it. We will work on one of the supercomputing clusters: we will connect to a login node, transfer files to and from the supercomputer, and submit jobs and monitor their execution via the Slurm middleware.

The workshop is free of charge. Bring your own laptop; we will provide you with credentials for the supercomputing cluster.

Language: Slovenian

Difficulty: Basic

Participant limit: 15

Date: Friday, 26 September 2025, 9:00-12:00

Location (physical): Hotel Bernardin, Portorož

Recommended prior knowledge: /

Target audience: researchers, engineers, students, anyone who needs more computing resources for their work

Skills gained at the training:

  • Understanding how supercomputers work and how they are built
  • Using the SLURM middleware
  • Basic use of software environments and containers
  • Managing files and running jobs
  • Basic video processing

Organizers:

  • The IEEE ERK 2025 conference and

FRI logo

Lecturers:

Name: Davor Sluga
Description: https://fri.uni-lj.si/sl/o-fakulteti/osebje/davor-sluga
E-mail: davor.sluga@fri.uni-lj.si
Name: Ratko Pilipović
Description: https://www.fri.uni-lj.si/sl/o-fakulteti/osebje/ratko-pilipovic
E-mail: ratko.pilipovic@fri.uni-lj.si

 


TICKET MACHINE CONFUSION, OR WHY I'M STANDING IN LINE AGAIN

The previous story about a journey from Maribor to Ljubljana reminded me of my own situation with ticket counters and ticket machines.

Various construction works have been going on at and around Ljubljana's main station for a long time. On the day I'm describing, they were replacing the railway overpass above Dunajska, so many trains from Ljubljana towards Kamnik started their journey only at the Ljubljana Šiška station, a good fifteen-minute walk from the main station. But not every train started in Šiška. Some Kamnik trains still picked up passengers at the main station.

That day I was returning home from Ljubljana. Since I would have had to wait a few more minutes for the bus, I went to the railway station a stone's throw away and checked on the ticket machine when the next passenger train to Trzin would leave Ljubljana. Perfect: in just under fifteen minutes. I buy a ticket; at stations with a staffed ticket counter you are required to buy your ticket before boarding the train (except on weekends for some reason, when it's better to buy it on the train ?????). I glance at the departure board to check which platform the train leaves from. It isn't listed. That's when I realise the ticket machine lied to me and sold me a ticket for a train that doesn't exist. It sold me the "Ljubljana Šiška - Trzin Mlake" train as the "Ljubljana - Trzin Mlake" train and happily took my money.

No matter, I think. I can't catch the train on foot, but I can sort it out at the counter. Surely they can refund me. The clerk at the counter looks at my ticket and confirms my suspicion that the train I bought a ticket for really does start at a different station than the one printed on the ticket. When I ask for a refund, she coldly refuses: tickets bought at the ticket machine cannot be refunded. She adds that in a similar situation she could have refunded me, if only I had bought the ticket from her at the window.

She also adds that the ticket is valid for the whole day and that I can use it on a later train to Trzin. But when I ask her when such a later service might actually run, she replies that the last train to Trzin is the very one I'm holding the ticket for, and it leaves in five minutes from a station a good kilometre away.

So I can kiss my money goodbye. The general terms and conditions of the ticket machines really do say the following:

> The ticket machine offers the user the valid timetable, which does not show or highlight special circumstances in traffic, such as delays and replacement road transport due to track works or irregularities in traffic.

> Before purchasing a ticket, the user must check the current traffic situation at www.slo-zeleznice.si or in the Grem z vlakov app under the Delays and Disruptions tab, or in the passenger notices at the railway station.

> The time displayed on the ticket machines is for information only.

> /…/

> For tickets purchased via ticket machines, the fare is not refunded (except in cases of problems with the machine's operation).

> Before purchasing or confirming the purchase of a ticket, the user is obliged to check the timetable and the current traffic situation in the timetable search on the SŽ-PP website or at the railway station.

> /…/

> Once the ticket has been printed and payment made, refunds and corrections of the printed ticket are no longer possible.

> The user must obtain a new, valid ticket for the desired route.

Let me add that at the railway station there is no indication of which trains will depart from Šiška instead of the Ljubljana main station. The ticket machine is therefore really only usable in combination with a mobile phone, with which the passenger checks the actual state of train departures, because according to the general terms the ticket machines are, of course, allowed to lie. What is the point of ticket machines, then, if I need a mobile phone just to use one (to check whether the train the machine is offering me even exists)?

Well, that day I went home by bus and bought a new ticket for it. From now on I no longer buy from the ticket machine. I just squeeze into the queue of dissatisfied passengers at the only domestic-traffic ticket window at the main passenger railway station in the country.

Just the day before yesterday I spent a good half dozen minutes waiting in line in the ticket hall of the Ljubljana station, into which they have scattered a few more ticket machines, which of course stood unused (why would anyone go to the counters to use a ticket machine; they would do better to put them on the platform), only for the clerk to kindly mention that at the counter she can only sell me a full-price ticket, whereas if I buy on the train with cash, I get a "weekend discount". Most interesting. Since I remember I only have a fifty-euro note, I suggest she change it for smaller notes at the counter so I won't cause the conductor trouble making change. She won't. Fine.

On the train, then, I mention the weekend discount and the situation at the counter to the conductor. He has to navigate through several menus on his handheld sales terminal to select the discount, accept my fifty-euro note and change it with smaller notes from his own wallet, and finally he announces that I have saved a whole TWENTY CENTS. Instead of €5.40 for the train to Trebnje, I got a triumphant Slovenian Railways discount and paid only €5.20.

Given how long and eventful train and bus journeys can be, they deserve their own regular column on Rdeča pesa. You have just read a story sent to us by reader Anton. 

If you have an anecdote from a train or bus, send it to us in a private message or to rdecepese@gmail.com. Do you know someone who surely has an anecdote in store? Tag them in the comments below.

The post ZMEDA S KARTOMATI ALI ZAKAJ SPET ČAKAM V VRSTI first appeared on Rdeča Pesa.

(VIDEO) PEBBLES OF SOLIDARITY WITH PALESTINE

In Maribor, on Trg svobode, a solidarity action of laying pebbles is taking place this week. By Friday they want to lay 20,000 of them, as many as the number of children the Israeli army has killed in Gaza. 

They especially appeal to teachers and educators. The start of the school year is an excellent opportunity to remind their pupils and students of what is happening in Gaza and to encourage them to join this gesture of solidarity themselves. They can bring their own painted stones, and stones are also available right at the site. 

Organiser Anamarija Nađ described for Rdeča pesa how you can take part (more in the video). Urška Breznik from Pekarna Magdalenske mreže stressed why it is necessary to carry out actions for Palestine and warned that the government has not come close to doing enough to end the genocide in Gaza.

The post (VIDEO) KAMENČKI SOLIDARNOSTI S PALESTINO first appeared on Rdeča Pesa.

IN SOLIDARITY WITH JOURNALISTS IN GAZA!

Reporters Without Borders have called for the protection of journalists in Gaza and warned that, given the rate at which the Israeli army is killing journalists in Gaza, there will soon be no one left to keep you informed about what is happening.

This is what they wrote: "For 23 months now, the Israeli authorities have not allowed journalists outside Gaza independent access to the Palestinian territory, a situation unprecedented in modern warfare. The journalists there, who are in the best position to tell the truth, are facing displacement and starvation. To date, the Israeli army has killed at least 210 journalists. Many more have been wounded and face constant death threats for doing their job: bearing witness. This is a direct attack on press freedom and the right to information."

Rdeča pesa joins the protest.

The post SOLIDARNO Z NOVINARJI V GAZI! first appeared on Rdeča Pesa.

A CAPITALISM-FRIENDLY SCHOOL: Will the reforms improve our schools?

Schools have once again opened their doors to 190,000 primary school pupils and 88,000 secondary school students. The new school year brings quite a few novelties and changes: from the roll-out of new curricula (these will come into force next school year), which put "competences" such as entrepreneurship and digital literacy front and centre, to changes in legislation (the revised Act on the Financing of Education and the Primary School Act) that are supposed to protect teachers from parents who threaten lawsuits and cross the school threshold with lawyers in tow. https://bit.ly/45Gckxx

It seems that our school system has more and more problems year after year, and this is no coincidence. It is yet another consequence of the dismantling of public services, to which education also belongs. In parallel, the new changes mean adapting to the needs of the capitalist labour market.

The fact that school is the main ideological apparatus of the capitalist state is demonstrated by the fact that almost all the changes so far have brought school even closer to the capitalist system; these are not novelties but adaptations to capitalism, which expects school to raise obedient, diligent, flexible, competent workers. The task of education workers is to prepare, raise and educate them for this. As the White Paper on Education puts it: "the changing circumstances of the labour market mean that to stay employed one must constantly acquire new skills and knowledge, otherwise an individual may drop out of the labour market."

The claim that teachers instil in pupils (and most often believe themselves), that school is about "lifelong learning", simply means that future workers must constantly adapt to the demands of the labour market. School teaching is shifting ever more towards training, which is most evident in the transfer of emphasis from knowledge to so-called skills.

Despite all the harmful liberal reforms, the prevailing view is still that school "softens social differences" and operates on the principle of meritocracy. Teachers are supposed to be mediators of so-called social mobility. In doing so they of course reinforce the harmful logic that society is meritocratic and create the impression that we live in a society without class differences, in which pupils' success (and failure) depends solely on hard work, diligence and good grades.

The truth, unfortunately, is different. As soon as a child enters the school process, they are assigned a role. These roles are most often handed out by teachers themselves, frequently without their being aware of it. Assigning roles means sorting different parts of the population and preparing them for the capitalist labour market. Slovenia is among the countries with one of the highest rates of social segregation in the choice of secondary school. Children of better-educated parents, who often also have higher incomes, enrol far more often in gimnazija programmes, since their parents provide additional material support (study infrastructure, tutoring, access to technology). These parents also place greater value on education and have high expectations of school performance, and they can support the child financially through gimnazija, so the children do not need to look for a trade straight away. Children from families with lower socio-economic status more often enrol in vocational programmes, since it matters to them to find a job as soon as possible, a job with which they often help support the family. https://bit.ly/3HTXc6C

This is reflected not only in academic achievement but also in the so-called "hidden curriculum", in plain words upbringing, which is increasingly retreating from school premises into the domain of the family, in contrast to the socialised upbringing of socialist society. Education workers already observe that educational action is withdrawing from schools. They attribute their failure to intervene to their own powerlessness, claiming it is not in line with school legislation and house rules. Upbringing in schools has been reduced to precisely defined rulebooks (which supposedly protect teachers legally) and to disciplining in various, often bureaucratic ways that have no benevolent educational effect in the long run and that most often hit, and thereby discipline, pupils from weak social backgrounds. Teachers thus hand out disciplinary measures for "inappropriate and rebellious behaviour", while failing to act on serious problems such as violent outbursts, sexism and reactionary thinking.

Despite the reproduction of class inequality through the educational process, school does offer opportunities to raise class consciousness and to intervene in the class struggle. But this is possible only by building democratic relations between education workers and their pupils and by establishing a solidarity that will first do away with the hierarchies inside school buildings.

The post A CAPITALISM-FRIENDLY SCHOOL: Will the reforms improve our schools? first appeared on Rdeča Pesa.

MARIBOR, BLED, THE MEDITERRANEAN SEA – ALL FOR A FREE PALESTINE!

In recent weeks the Israeli occupier has been tightening its murderous grip on Gaza even harder. Palestinians are dying en masse from the bullets and bombs of the Zionist terrorist organisation called the "Israel Defense Forces", and increasingly also from the famine that is a direct consequence of the complete air, land and sea blockade of this part of Palestine. The famine, officially confirmed last week by United Nations bodies, has already claimed at least 333 Palestinian lives, among them at least 126 children.

With the aim of breaking the blockade, a new caravan of ships set out for Gaza today. Whereas the first two attempts involved individual vessels, this time more than fifty ships are setting sail! The first left the port of Barcelona today, and many more will join them on the way across the Mediterranean. Before the departure of the ships in the caravan, a mass rally attended by as many as 40 thousand people took place yesterday in Genoa. It is worth adding that the port of Barcelona is the same harbour through which thousands of members of the international brigades arrived during the Spanish Civil War to join the fight against fascism. Today new fighters against fascism, which is showing its bloody face in occupied Palestine, are sailing out of it.

We will be following closely on Rdeča pesa the voyage of the international flotilla named "Global Sumud Flotilla", which includes volunteers from more than forty countries. If you want to join the activities of this organisation, you can do so via the link in the comment we are publishing under this post.

The new stages of the international struggle for a free Palestine will not unfold only in the Mediterranean these days. Tomorrow, Monday 1 September, actions will take place at two ends of Slovenia where each of us can contribute to spreading and strengthening solidarity with the Palestinians and their liberation struggle.

At 12:00 on Maribor's Trg svobode a multi-day action of laying pebbles in memory of the murdered children of Gaza will begin. The aim is to gather 20,000 pebbles on the square by 5 September, the number of children the Zionist forces have murdered so far. Kindergartens, primary and secondary schools, and everyone who wishes to express solidarity with the Palestinians in this way are especially invited to take part. Also tomorrow at 12:00, in front of Festivalna dvorana Bled, where the Bled Strategic Forum (BSF) is opening, there will be a protest entitled "EU sokriva, krvave roke skriva" ("The EU is complicit, hiding its bloody hands"). Links to both events can be found in the comments under this post.

Despite the unimaginable crimes of the Zionist occupiers and the silence of the world's political leaders, this chain of solidarity with the Palestinians and their struggle has reached villages and towns, centre and periphery, and working people all over the world are proudly flying Palestinian flags and making their contribution to the common struggle for peace, solidarity and a free Palestine, from the river to the sea. Palestine has become the Spain of our generation: their fight against the fascist phalanges is our fight against the Zionist occupiers. In this show of solidarity and in the struggle for a free Palestine we will persevere together until final victory.

One world, one struggle!

The post MARIBOR, BLED, THE MEDITERRANEAN SEA – ALL FOR A FREE PALESTINE! first appeared on Rdeča Pesa.

THE SHAMELESSNESS OF CAPITALISTS KNOWS NO BOUNDS, OR HOW THE COMPANY MELAMIN IS SUING THE FAMILY OF A DEAD WORKER

The media are reporting on a bizarre case in which a company where a worker died in a workplace accident is suing his relatives. The Kočevje chemical company Melamin is suing the family of a worker who died three years ago in an accident at work, when workers wrongly combined incompatible chemicals. Seven people died in the explosion. Although the criminal investigation found that the company had not acted in accordance with regulations, the company is now shifting the responsibility onto the deceased worker. It is demanding 60 thousand euros from the family.

The largest owner of Melamin is the Austrian company Panta Rhei Beratungs, controlled by Aleš Štrancar (who, among other things, owns the media outlet Domovina) and who in recent days drew public attention with a post in which he compared Prime Minister Golob to Mussolini.

That the company failed to ensure basic conditions for safe work was also established by the Ministry of the Environment and the Labour Inspectorate. According to their findings, the accident happened as a consequence of deficient safety measures and poor organisation of work in the company. In addition, the workers were reportedly not trained for safe work and lacked proper instructions on handling chemical substances.

Although on paper there are checklists and procedures that should at least minimally protect workers from injuries and hazards at work, in practice they are all Greek to the employers. According to the World Health Organization, which together with the International Labour Organization tracks the health of workers, at the last count in 2016 almost two million people died from occupational diseases, accidents at work and work-related illnesses, and almost 90 million fell ill because of their work.

As the Melamin case shows, capitalists like to shift their responsibility onto workers even when the outcome is fatal. We have already reported on a similar case, when capitalist machinations, privatisation and penny-pinching led to the railway disaster in Greece. The worker who allegedly routed two trains travelling in opposite directions onto the same track was quickly charged with negligent homicide, even though many had been warning about safety problems on the railways for decades.

The Melamin case confirms once again that the capitalist legal system works in favour of capitalists. Although the police concluded their investigation in February 2023 and filed criminal complaints with the Ljubljana District State Prosecutor's Office against four people, including director Srečko Štefanič, the judicial investigation has still not begun. Protracted court proceedings have thus enabled the current situation, in which a businessman can intimidate the relatives of a deceased worker and embitter their lives with his damages claim, as if the death of a loved one were not painful enough. The rich have money for court proceedings, so they can wear down and exhaust ordinary people and even win legally in situations for which they are obviously to blame.

The largest workplace accident in Slovenia, the one at Melamin, is not an aberration; under capitalism the destruction of physical and mental health and premature deaths happen every day. Meanwhile the businessmen and the rich who benefit from the capitalist system enjoy themselves on their yachts and in their villas and get to live long into old age.

Capitalism cannot be regulated or improved; it will always find a way to exploit workers and nature. It is founded on the exploitation, maiming and destruction of the majority. That is why we must abolish it!

The post THE SHAMELESSNESS OF CAPITALISTS KNOWS NO BOUNDS, OR HOW THE COMPANY MELAMIN IS SUING THE FAMILY OF A DEAD WORKER first appeared on Rdeča Pesa.

YOU KNOW WHAT HAPPENED TO ME ON THE TRAIN THE OTHER DAY?

Given how long and eventful journeys by train and bus can be, they deserve their own regular column on Rdeča pesa. Today we publish the first instalment, written by two members of our editorial team. Then we pass the ball to you, dear reader.

If you have an anecdote from a train or a bus, send it to us in a private message or to rdecepese@gmail.com. Do you know someone who is bound to have an anecdote in store? Tag them in the comments below.

"THEY DID THIS TO EMPTY THE TRAINS!"

Maribor railway station is in complete disarray on Sunday morning, but I still manage to make my way without trouble to the only open ticket window. There are two people ahead of me and just under seven minutes to spare. It should go through; and if push comes to shove, I'll go find that ticket machine. I'm not so old that a machine scares me, but I prefer human contact. I still think the lady behind the counter is considerably faster than I am at that machine, hunting for the right train.

Soon another Maribor rdečepesnik joins me; we are heading to Ljubljana for a picnic with the rest of the editorial team. The train is simply the more ecological and more comfortable choice, and since they started the roadworks it has also been the faster one. Well, if you pay extra and take the ICS, of course; the ordinary passenger trains are another category. These weekend trips used to be very cheap, and for us from Maribor the train was worth it, even though the journey takes long and you are tied to the timetable. Now, with the higher prices, it no longer really pays off, especially if two or more of us are travelling.

"Two returns for the ICS to Ljubljana," I say to the woman at the counter, and she issues us two tickets for 20 euros. Even though we were expecting a price rise, the price shocks us, but what can you do: we pay and hurry to the train. A train to Celje, then a transfer to a bus to Ljubljana. A luxury journey, supposedly worth that premium surcharge.

Luckily the train isn't crowded, and we easily find seats with a table in the middle. And, surprisingly, the wifi even works today. If you're on the train, you might as well use the time to, say, look over a text on Google Docs, and if the Slovenske železnice wifi won't connect, you're in trouble.

The train sets off, the conductor soon comes by, and we hand him our tickets.

"These are some strange tickets you've got."

"What? What's wrong with them?"

"These 20-euro tickets. Next time get yourselves an IZLETka. It costs 15 euros and gives you unlimited rides across the whole of Slovenia."

"Oh no, that's the ticket she gave us when I asked for Ljubljana."

The conductor shrugs and shakes his head. "We used to have 75% off at weekends. To encourage the use of public transport. Now the state has apparently run out of money and they told us to manage on our own. So they gave us this IZLETka, so there's at least something."

We nod. We know they raised the prices.

The conductor is still visibly worked up: "Raised them? They raised them by 300%! You know why they did it? To empty the trains. First they fill them, then they empty them; you couldn't make it up." He waves his hand and sways off down the carriage.

In Celje we transfer to a considerably less comfortable bus, but since we're not travelling at rush hour we reach the railway station a few minutes earlier than if we had gone by train. At least something.

P. S. On the way back a car ride to Maribor comes up. And if you have to choose between three hours on a passenger train or just under two hours by car… you choose the car, even though you paid three times too much for your ticket.

The post YOU KNOW WHAT HAPPENED TO ME ON THE TRAIN THE OTHER DAY? first appeared on Rdeča Pesa.

GUEST PEN: HOW THE ISRAELI IMPERIALISTS, WITH THE HELP OF WESTERN ELITES, ARE STARVING GAZA

Since Israel's response to the Hamas attacks of 7 October 2023, over 100,000 civilians have lost their lives in Gaza. The United Nations report that 70 percent of all infrastructure has been destroyed, which has driven 1.9 million Palestinians into bare survival. That is around 90 percent of Gaza's entire population. Safe zones now make up only 12 percent of the territory. Forced displacement on such a scale is unprecedented in recent history.

According to the United Nations Integrated Food Security Phase Classification, the situation in Gaza can ever more clearly be defined as a humanitarian crisis of epic proportions, as all-encompassing famine, disease and malnutrition are directly leading to the deaths of great numbers of people. The United Nations declare famine, or humanitarian catastrophe, when at least 20 percent of households in an area face a complete absence of food and/or cannot meet basic needs, and death and destruction are evident. The prevalence of acute malnutrition exceeds 30 percent, and mortality rates exceed the level of two deaths per ten thousand inhabitants.

96 percent, or approximately 2.15 million Palestinians, face acute food insecurity. One in three Palestinians can go without food for days at a time. Almost half a million inhabitants face the most severe form of hunger as defined by the Integrated Food Security Phase Classification. Around 20,000 children are receiving medical treatment for acute malnutrition, and dozens have already died. One in three children under the age of five suffers from acute malnutrition.

For the fifth month in a row, Israel has been preventing humanitarian convoys with critical supplies from entering Gaza. Even the partial easing of the total war after 19 May this year did not lift the restrictions on certain critical goods, such as fuel or cooking gas. Without fuel there is no electricity supply, which is needed for all manner of medical equipment to function.

On the other hand, things also go wrong when humanitarian aid does reach the destitute. It is often distributed by assorted gangs, or by the Gaza Humanitarian Foundation itself, an American non-governmental organisation known for recruiting mercenaries who openly shoot at Palestinian civilians. Since July this year, more than 700 people have died at the so-called humanitarian zones where aid is distributed, owing to the actions of Israel, the Gaza Humanitarian Foundation and a clan linked to ISIS extremists. According to the independent outlet The Grayzone, the Gaza Humanitarian Foundation is said to have received a sizeable share of its funding from the Israeli intelligence service Mossad and the Israeli Ministry of Defense. In Gaza the foundation employs two private mercenary firms, UG Solutions and Safe Reach Solutions, which are linked to Philip Reilly, a former agent of the American Central Intelligence Agency (CIA).

The genocide in Gaza has exposed all the hypocrisy of the Western ruling classes, which support Israel financially, militarily and in every other way. The mass protests in support of Palestine that have flared up across Europe and the US over the past two years have forced politicians into compromises. Macron, Merz, Starmer and Trump have all admitted that the situation in Gaza has gone too far. France, Britain and Canada are now threatening to recognise Palestine at the UN General Assembly in September, while cynically boasting that they have provided the Palestinians with badly needed aid. All these steps are merely a drop in the ocean: Israel remains unsanctioned and unhindered in carrying out its depraved mission.

The liberal idols, after all these years, still advocate a two-state solution that would grant Israel the right to 78 percent of the entire territory, while a Palestinian state would keep the remaining 22. Historically this has already happened and it did not work. The only way for Palestinians to win their freedom is the complete dismantling of the Zionist project called Israel. And without a decisive blow against the entire Western imperialist infrastructure, that simply will not be possible. It is our duty to support every effort to do away with this unjust status quo.

This guest pen was written by Matej Trontelj.

The post GUEST PEN: HOW THE ISRAELI IMPERIALISTS, WITH THE HELP OF WESTERN ELITES, ARE STARVING GAZA first appeared on Rdeča Pesa.

WE RECOMMEND: Are we on the way to a sixth great mass extinction?

We are sharing excerpts from an article on the link between the release of large quantities of carbon dioxide and mass extinctions. The article is a summary drawn from the book "The Story of CO2 Is the Story of Everything: A Planetary Experiment", published by Allen Lane on 26 August this year. You can read the original article in English at the link in the comment.

"If we keep emitting carbon dioxide into the atmosphere in the quantities we are emitting now, it could lead to a new great extinction on the planet." Daniel Rothman studies the behaviour of the planet's carbon cycle in Earth's deep past, especially in those rare instances when a threshold was crossed, the cycle spun out of control and regained equilibrium only after hundreds of thousands of years. Given that all life on Earth is carbon-based, these extreme disturbances of the carbon cycle manifest themselves as "mass extinctions", the name by which they are better known.

Worryingly, over the past few decades geologists have discovered that many, if not most, of the mass extinctions in Earth's history, including by far the worst one, were caused not by asteroids, as they had expected, but by volcanic eruptions that engulfed entire continents and released catastrophic quantities of CO2 into the air and the oceans.

If enough CO2 is released into the system at once and the vital carbon cycle is knocked too far off balance, a kind of planetary outage can occur, in which processes intrinsic to the Earth take over and act as a positive feedback loop that releases much more carbon into the system. These subsequent releases of carbon would send the planet into a devastating 100,000-year "recovery" before it regained equilibrium.

That is because the carbon cycle has been absorbing the steady flow of CO2 emitted by volcanoes for millions of years, as carbon moves between air and ocean, is recycled through the biosphere and eventually returns to the geology. That is the carbon cycle. But if we interrupt this planetary process by overloading it, in a geologically short span of time, with a quantity of CO2 so enormous that it exceeds the Earth's capacity, we can trigger a runaway reaction far more destructive than whatever catastrophe set the whole event off in the first place.

There are only a few known ways of releasing gigatonnes of carbon from the Earth's crust into the atmosphere. On the one hand there are the eruptions of large igneous provinces, recurring roughly every 50 million years; on the other there is industrial capitalism, which, as far as we know, has happened only once. Yet although our planet is solid and resilient to all the unimaginable blows it regularly receives, once every 50-100 million years catastrophic events do occur. These are the great mass extinctions, when conditions on the Earth's surface deteriorate everywhere to such a degree that they exceed the adaptability of almost all complex forms of life.

In the history of animal life, this devastation has five times reached (and in one case far exceeded) the somewhat arbitrary threshold of 75% of species on Earth going extinct, thereby earning the status of a "great mass extinction". In the palaeontological community they are known as the Big Five. The most recent of the Big Five struck the planet 66 million years ago. It was a global catastrophe sufficient to end the age of the dinosaurs.

Compared to that, the destruction we humans are causing is relatively mild; perhaps less than 10%. Well, at least for now. According to an influential study by palaeobiologist Anthony Barnosky, published in the journal Nature in 2011, if we continue at the current pace of extinction we could, within three centuries to 11,330 years from today, jump from the (still horrifying) rank of a minor mass extinction to a sixth great mass extinction, one that future geologists will find hard to distinguish from an asteroid impact. Given how catastrophic humanity's impact on the biosphere already is, it is frightening to think that the peak of our mass extinction may still lie ahead of us.

When we talk about emissions, it is not only the quantity of CO2 entering the system that matters, but also the flux. If we add a great deal over a very long time, the planet can adjust. But if we add more than a great deal in a short time, the biosphere can short-circuit.

Unfortunately, the rate at which humans are now releasing CO2 into the oceans and the atmosphere far exceeds the planet's ability to keep up. We are currently in the early stage of a system failure. If we carry on like this for long enough, we may get to see what an actual failure really means."

Link to the original article: https://bit.ly/41hPE4b

The post WE RECOMMEND: Are we on the way to a sixth great mass extinction? first appeared on Rdeča Pesa.

TRAFFIC JAMS ON THE ROADS, MORE EXPENSIVE TRAINS AND BUSES, AND A PRESIDENT IN A HELICOPTER

Many of us spent a considerable part of this summer in motorway congestion. We got intimately acquainted with the landscape along the Slovenske Konjice–Dramlje section and sweated and suffocated on the Ljubljana ring road. This harmed both the environment and our health.

As we saw in recent days, prominent political representatives have no such problems. If the road is jammed and they need to pop over from their seaside holiday at short notice, they simply order a helicopter ride and enjoy the view of the kilometre-long snakes of metal stretching along the motorways below them. Thus the President of the Republic, Nataša Pirc Musar, flew from the coast to the state ceremony in Beltinci (and back) in a police helicopter. The President's office explains that they opted for this because air transport "largely eliminates the risks associated with road congestion and other unforeseen events."

And how does our capitalist state make sure that working people, too, "avoid the risks associated with road congestion"? By raising the prices of single, daily and weekly tickets for integrated public passenger transport (buses and trains) and by abolishing the affordable weekend tickets. If a weekend rail trip from Maribor to the coast used to cost you between a good four and six euros, the price has now risen to between €12.40 and €18.60. And a ticket for distances of up to 5 kilometres has gone up by 15.4%.

Because of insufficient investment in rail over the past few decades and the incursion of private initiative into intercity and some urban bus services, both are already ill-matched, in timetable and in coverage, to society's needs. With the price increases described above, which we have already reported on at length, yet another step has been taken in the wrong direction. In the direction of public services financed largely by their users, which by its very nature increases economic and social inequality. In the direction of a society where the individual prevails over the collective. In the direction of a society racing even faster towards environmental catastrophe.

The director of DARS, Andrej Ribič, attributes the congestion problems to our having invested too little in road infrastructure in the past. The truth is exactly the opposite. We have poured too much money into (motor)ways, on the assumption that just one more lane would save us from traffic collapse. And we have devoted far too little funding to public passenger transport, which is both a social good and one of the best weapons in the fight against the climate crisis, whose effects we are all already feeling.

The post TRAFFIC JAMS ON THE ROADS, MORE EXPENSIVE TRAINS AND BUSES, AND A PRESIDENT IN A HELICOPTER first appeared on Rdeča Pesa.

Nix meetup

NOTICE:
Unfortunately, the event has been cancelled.

Interested in Nix and NixOS? Would you like to meet other users and enthusiasts in Ljubljana? Join the Nix user community at the first Nix User Group Slovenia meetup!

Whether you are an experienced user or just starting to explore, everyone is welcome. You will exchange experiences, talk about projects and find inspiration.

There is no agenda, just good company and conversations in the spirit of Nix.

The post Nix meetup first appeared on Računalniški muzej.

Victory Day

On 18 March 2025, those of us passing the junction of Celovška and Tivolska streets witnessed an unusual opening of a new sports complex. The festive ceremony took place in a building that quite obviously was not finished. Jan Plestenjak and Magnifico performed in a kind of steel hangar: under a huge roof structure with solar panels hastily scattered across it, beneath which broad slabs of concrete were still drying. The distinguished guests entered this skeleton across a gravel lot, past industrial fans and rough grey walls shot through with iron beams; behind them construction machinery was still parked, and safety tape snaked around the site. From an ad hoc stage boomed the voice of Zoran Janković: "Today is a day of victory, a day of success. My heart is full because of my colleagues, who have once again brought a project to completion." To completion, how? Can he not see that we are still standing on a building site? Watching this spectacle, we could only conclude that they had evidently announced the opening too soon and had to go through with it even though key elements of the project were still missing.

But a few weeks later we were still marvelling at the progress of the Ilirija renovation. The construction machines had been re-parked, the safety tape had disappeared, a couple of Ikea catering tables had turned up on the plaza, but everything else remained unchanged. And after several months of this state of affairs it finally dawned on us: this actually is the finished building! We had seen correctly: the Ilirija pool complex is nothing but a megalomaniac steel skeleton wrapped around voluminous concrete slabs. Somewhere in there, between the cement and the iron grates, there is supposedly also an Olympic pool, but that is the last thing you would expect in such a structure. The building in fact most resembles an airport hangar, except that instead of cargo planes it has … building material parked inside.

The new Ilirija is an oversized trade-fair roof under which the catalogue offering of Slovenian construction contractors is on display: ramps, platforms, staircases, colonnades, scaffolds, facades, sheet metal …

All of this could not be further from what the municipality promised in the years before the plan was realised. The new Ilirija pool complex is not a "tribute to the Slovenian swimming tradition": Stanko Bloudek's iconic façade now kneels humbly beneath the megalomaniac shell, reduced to something like a newsstand. Nor is it an "ideal venue for Slovenian sport", since the swimmers, put on display to passing traffic like fish in an aquarium, are catered for below standard. Even less is Ilirija a "link between the city centre and Tivoli Park"; if anything, the structure introduces a crude divide between two parts of the city, and the "passage" under the sheet-metal enclosure imposes itself threateningly on the people of Ljubljana like an Israeli army checkpoint. The building was not created for Ilirija's old users; it was created despite their opposition. It is not founded on cultural heritage: to build it, the municipality had to adopt controversial decrees that gutted heritage-protection standards. Nor is it a product of the architectural profession, but the opposite of all its expert warnings: in a flash of collective social responsibility, Slovenian architects even boycotted the City of Ljubljana's tender. The profession was united in the conviction that there should be no construction in Tivoli Park at all and that it would be better to renovate Bloudek's old pool complex, since an indoor pool does not belong in this setting. In the end the municipality had to find an architectural strikebreaker from Innsbruck, Peter Lorenz, who, as if to mock the domestic profession, drew up a plan for "the most spectacular building in Ljubljana", a piazza "where there can be sports events, a boxing tournament, presentations of new cars, perhaps political speeches …"

The architectural creature on Tivolska street can only be understood once we listen to its financiers and designers. The building, a senseless eyesore for the city's inhabitants, is for the construction contractors the most beautiful thing in the world. The title of the promotional video that the company Makro 5 released after the 2023 press conference says it all: "Ilirija Sports Centre – the largest concrete pour in the company's history!" In it, to pompous, inspirational music, we watch from the air as waterfalls of concrete pour into the excavation pit and turn it into a grey lake, while Zoran Janković comments with satisfaction: "1,500 cubic metres! Congratulations!" If the building strikes you as an example of construction hyperproduction, you are not mistaken, for that is Ilirija's actual purpose: to cram the maximum quantity of building material onto the allotted plot. The greater the quantity of goods the contractor sells to the client, and the more annexes it manages to crap onto the contract, the better the business; and with the references thus gained it can land new, even bigger projects. In this way Rajko Žigante's company Makro 5 tripled its value in a few years through its deals with Janković: after the 27-million-euro Center Rog and the 62-million-euro Ilirija, its excavators are moving on to the 108-million-euro athletics stadium in Šiška. Their public-private partnership is going so well that Ljubljana itself has become a showroom for the builder's services. Companies do not exist to supply Ljubljana with buildings; Ljubljana exists so that companies have business.

Such a conclusion will not be news to anyone who has followed the transition processes since the 1990s. From day one Slovenia has been ruled by builders: the "red directors" who appropriated the means of production of the common state and turned them into their private source of profits and political power … An early example of this transitional mode of rule was the mythological "uncle in the background", Janez Zemljarič. Whoever controlled our construction conglomerates also had influence over the media and the political parties, whose growth increasingly depended on large infrastructure projects. From the first days of building the motorway cross to the construction of TEŠ 6, Slovenian elites have been bound together by the concrete business. After independence, the construction industry flourished for a decade and a half almost exclusively on the back of public investment, so its giants also became the country's main employers and drivers of economic growth. Thus we got a caste of construction barons, Ivan Zidar, Dušan Černigoj, Hilda Tovšak and others, who could secure large projects while inflating prices and, with impunity, draining workers, subcontractors and, not least, their own companies.

The first cycle of construction expansion ended with the 2008 crisis, after which the key domestic companies landed one after another in bankruptcy proceedings and their bosses in corruption scandals. The spectacular public trials of Tovšak and Zidar in fact depicted a historic break within the Slovenian ruling class: the traumatic departure of the old guard of domestic capitalists, whose gigantic complexes of fixed capital could not survive the neoliberal turn towards a new, lean state that no longer builds or invests. Their fall thoroughly reshuffled the political cards and opened the door to a decade of new political faces and entrepreneurial upstarts, none of whom, however, managed to occupy the position of hegemon. For a while we even believed that after the crisis new industries, technological innovations and different development strategies might arrive and carry the Slovenian economy beyond the dictate of concrete.

That speculation is over. The decade-long lull was followed by a post-Covid investment boom, a wave of state and municipal spending feeding on low interest rates and European cohesion funds. In a departure from neoliberal dogma, the European Union sought to revive economic growth at all costs by loosening public finances, and in Slovenia that growth once again materialised in the form of concrete. When the European Central Bank opened the tap, the cranes and mixers returned too. With the new cycle of public procurement, from the second rail track to hydroelectric plants and cultural and sports infrastructure, a new construction elite has sprouted as well: Petrič's Kolektor, Darijo Južna's CGP, the Riko of Janez Škrabec and Jozo Dragan and, of course, Žigante's Makro 5 … Small, relatively boutique companies hooked themselves up to the new channels of financing and overnight became influential hubs of power.

The ruling class has been reshaped, the business players reshuffled, even the generations have changed, but one thing has remained the same: the central point around which Slovenian politics will keep coalescing is building material.

That is why Ilirija is not just another big deal; it is, to paraphrase Janković, a monument to victory: the great return of the construction barons, who weathered the crisis and came back stronger than ever. The bizarre opening we witnessed was, in this light, the only logical one: it was not a first look at a new, usable building but an exhibition of construction services for future buyers. More than that, it was a kind of passing of the baton: the previous generation of clients (Kučan, Turk, Kocjančič …) was invited onto the gravel building site together with the new one (Han, Boštjančič, Props, the Svoboda MPs and Tina Gaber). And hovering above them all was the smile of Zoran Janković, the only "red director" to have survived the crisis of the construction-political cartel and to have steered it from its twilight in Stožice to a new life in Ilirija. Don't worry, Janković knows he has won. His Ilirija is in fact conceived as the city patriarch's pointed message to all his opponents, a middle finger raised at the profession, civil society, the former users and the rest of the city's inhabitants. The domestic architects are resisting? Here, we'll bring in a strikebreaker from Austria who shits on your opinions! You wanted the heritage restored? Here, we've stuck Bloudek's entrance inside a sheet-metal hangar! What, you'd like to keep the park? Here's a concrete colossus in the middle of the lawn! You think the costs are too high? Here's another few tens of millions of euros in annexes! Let it be known who's boss in this city. The new ruling class has arrived, and it will rule even more arrogantly than the old one.


The tribulations of sorting out the Ljubljana passenger hub

It all began in 2002, after the planners of the Ljubljana railway station visited the city of Lille in French Flanders (Lille has a million and a half inhabitants and is an important junction of railway lines from the big French cities, London, Amsterdam and Brussels).

"Wouldn't it be nice to have a railway station like that in Ljubljana!"

The idea was born. An international competition for the Ljubljana Passenger Centre (Potniški center Ljubljana, PCL) was carried out and a partner was found for a public-private partnership: Emonika. Slovenske železnice (Slovenian Railways) contributed land to the partnership, representing a 22-percent share in Emonika, while the Hungarian partner TriGranit was to finance the largest entertainment and shopping centre in Slovenia and build the railway and bus stations at its own expense (the 2007 agreement). Because Slovenian Railways needed money, soon after the company agreement was signed they sold most of their stake in Emonika (the land!!!) to the Hungarian partner for 19 million, keeping only a 3-percent share in the company. Several amendments and additions to the zoning plan for the PCL area were subsequently adopted, mostly at the initiative and in the interest of the Hungarian partner. Despite this, in 2014 TriGranit (later Granit Polus) filed a claim with the arbitration court in Vienna to annul the company agreement and withdrew from the project.

Slovenian Railways wanted to continue with the Emonika project. They found two new potential partners: the Romanian Prime Kapital and the South African consortium Mas Real Estate. But the Hungarians, majority owners of the land in the PCL area, refused to sell it, and negotiations with the two new suitors collapsed.

In 2018, the two state institutions, the Slovenian Infrastructure Directorate (Direkcija RS za infrastrukturo, DRSI) and Slovenian Railways, decided that, in agreement with the state and the City of Ljubljana (MOL), they would themselves finance and build the railway and bus stations (estimated investment: 50 to 60 million euros) and the infrastructure part (estimated at 20 to 30 million euros).

At the end of 2020, the Minister of Infrastructure, Jernej Vrtovec, on behalf of the Slovenian government, signed a memorandum on investment in Emonika with Mendota Invest, a company indirectly owned by the Hungarian bank OTP (and the successor to Granit Polus), with Slovenian Railways and with the City of Ljubljana (MOL): Mendota Invest is to finance the commercial part of Emonika (estimated investment: 250 million euros), while the state is to finance the Ljubljana Passenger Centre (PCL) (estimated investment: 137 million euros). The memorandum envisages that the Hungarian Mendota will buy the remaining stake in the company (the land!!!) from Slovenian Railways for 3 million euros.

The Hungarian company will thus become the sole owner¹ of the most strategically important piece of land in Ljubljana, across which all international and regional railway lines run!!!

Lowering the tracks, or a bypass for rail freight

In December 2020, the government and the City of Ljubljana decided that the railway hub would be put underground. Although the professional community played no part in the decision, the new railway station is to be built in such a way that the freight tracks can be routed through the station's basement a full 20 metres below ground. Ljubljana's deputy mayor, the architect Koželj, opposes the idea of lowering the lines on two levels, with the freight railway 20 metres underground and the passenger line above it at 10 metres underground. At the same time he accepts the variant in which only the freight line is lowered (in which case passenger traffic stays at ground level). The new railway station is being designed for precisely that solution:

"I was in constant contact with the German engineers Vössing and Verpro, who took part in preparing the study of variants for the Ljubljana railway hub. Already at the first reviews it became clear that lowering all the tracks, that is, the passenger lines of the new station as well as the freight ones, which was supposed to free up extensive land for development between the centre and Bežigrad, would be difficult to carry out, and thinking about it is almost entirely utopian. The freight line will simply have to be lowered; we really have no other option. The zoning plan specifies the depth and route of the freight tunnel. But we cannot wait another decade for the final decision while the city remains without a proper railway and bus station." (January 2021)

The view of Dušan Mes, director general of Slovenian Railways, however, is that a bypass line must be built for freight traffic, and that efficient passenger connections with Ljubljana's surroundings must also be provided with fast "shuttle" services. What is needed is a comprehensive approach to the entire railway hub.

The curse of European money

At the end of December 2020 we received, through the media, the first word of the European Union's new long-term budget together with the pandemic recovery funds. Slovenia stood to receive a good 8 billion euros from this package. There was no public discussion about how this development money would be spent in Slovenia. The draft documents on the allocation of funds were classified. The national plan adopted by the government was not ambitious enough for Slovenia to justify receiving all the funds earmarked for it. The government and the city had the opportunity to obtain funding for the entire project of resolving the railway hub (LŽV), which would have been of primary importance for the capital of our country. Yet, even though no unified strategy for resolving the Ljubljana railway hub has been adopted, what was included in the national plan was the upgrade of the Primorska line from Brezovica to Ljubljana.

We, the authors, assessed the upgrade of the railway with noise barriers from Brezovica to Ljubljana, which runs through Tivoli Park and will continue in stages all the way to Divača, as the most harmful project on the list of EU-financed projects: "The representatives of the government and the city have short memories. Only four months have passed since the government-municipal decision to put the railway hub underground, yet they have included in the programme of European funds the upgrade of the Ljubljana–Brezovica–Borovnica railway section, which will have to be demolished once the tracks are lowered."

As it appears, the logic behind these decisions is this: the European money is there, so it must be spent. Even if on haphazard interventions that harm the city and will weigh on its further development for a long time to come.

"First the upgrade, then the lowering"

As late as September 2021, the then Minister of Infrastructure, Jernej Vrtovec, said that despite the nearly 68-million-euro upgrade of the railway section between Ljubljana and Brezovica, currently under way, the lowering of the line in the city centre was still in the state's plans. So was the Tivoli connector (Tivolski lok).

By December 2021, however, the state had abandoned the Tivoli connector: five years of design work turned out to be a labour of Sisyphus. In the state's Vision 2050+, the Tivoli connector is to be replaced by a western bypass between Dolgi most and Vižmarje.

And the haphazard, ill-considered planning of individual sections, without a previously agreed goal or a clear vision of the desired development (to which the upgrade of the Primorska railway and the sale of state land to the Hungarian TriGranit, now Mendota Invest, also belong), continues.

The Emonika project

In December 2021, the company Mendota abandoned the idea of building a huge shopping and entertainment centre above the railway tracks. What remains of the Emonika project is an office tower (hotel) on the corner of Dunajska cesta and Trg OF, with a building along Masarykova cesta (south Emonika), and a residential complex on the corner of Dunajska and Vilharjeva (north Emonika). Apart from its location, the Emonika project no longer has any connection with the PCL and the railway.

Until one of the possible solutions for removing rail freight traffic from the city is realised, that traffic, a projected 450 trains a day, will run at street level through the new railway station. This requires four additional railway tracks, squeezed between north and south Emonika. Also planned are the widening of the existing underpass on Dunajska cesta, the construction of a new outsized underpass on Šmartinska cesta and an underpass on Parmova ulica. Preparation of the designs for these interventions is already in full swing.

The Ljubljana Passenger Centre (PCL)

In November 2021, Dnevnik presented a new design for the new railway station which, although it departs from the winning competition entry of 2008 and from the valid zoning plan, is being prepared without a public competition. Deputy mayor Koželj described the PCL solution:

"A bridge needs to be built over the tracks, where passengers can wait for their train in a comfortable station hall and descend to the platform by escalator or lift. On the other side, this bridge is connected to the central bus station on Vilharjeva cesta, where they can transfer to a bus or take a taxi. All the while they stay dry and warm in decent, civilised surroundings. /…/ The waiting hall is the connecting link between passengers entering the level from the railway station and those arriving from street level. It is a modern 'gate' (like a departure gate at an airport, ed.) where passengers wait comfortably in a heated space for their public transport."

And how will the new PCL be connected to the LPP city bus network? The nearest existing stops, at Bavarski dvor or by the Gospodarsko razstavišče exhibition centre, are about ten minutes' walk away! Passengers would be far better served by a location on Masarykova cesta in front of the railway station, where, however, a new LPP stop would have to be built. Solutions for connecting the new PCL with the LPP have still not been presented to the public.

As for the traffic arrangement around the PCL, Fabiani's ring is still in play; it runs along Masarykova cesta past the railway station, and almost all international and regional bus lines feed into it. Access to the bus station on Vilharjeva cesta is to run via a loop through the Dunajska underpass and then along Vilharjeva cesta, with departures along Vilharjeva cesta and through the Šmartinska underpass onto Masarykova cesta.

The national spatial plan and the municipal spatial plan

The ministry and the Infrastructure Directorate (DRSI) have published the above-mentioned Vision 2050+ for the Ljubljana railway hub and the PCL, which the government took note of on 18 November last year. Drafts are being prepared for the National Spatial Plan (DPN) for upgrading the existing railway lines. For freight traffic and the high-speed line, a western bypass is planned that would link the Primorska and Gorenjska lines with a tunnel under Rožnik.

Even though detailed designs for the PCL and for upgrades of individual line segments are already being drawn up, the DRSI only began preparing the technical groundwork for the National Spatial Plan (DPN, the legal basis for producing the designs) this year. The first technical studies and proposed routings of the hub's lines will be ready in 2023/24. The core of the comprehensive concept is to be the removal of freight traffic from the city's surface. Today there is still no decision on the routing of the tracks, and not even the studies on protection from noise and from the health-damaging emissions of rail and road traffic have been carried out.

The strategic part of the valid municipal spatial plan of the City of Ljubljana (OPN MOL) also announces the connection of a western bypass for freight and high-speed rail to the Gorenjska railway. West of Rožnik, a branch is envisaged to split off eastwards, running in a tunnel on one or two levels through the city centre and continuing to the Moste freight station and on to Zalog.

The spatial act also states that a northern bypass is not acceptable, but that the routing of the high-speed and freight lines through the city can be changed. The city's vital interests are unimpeded urban development, the reconnection of a whole that the railway lines have "cut" into five parts, protection from noise and protection from health-damaging emissions (which are already the second most important cause of premature death among Ljubljana's residents). Rail freight traffic must run outside densely populated urban areas!

A proposal for a comprehensive solution of the Ljubljana railway hub

We, the architects and urban planners Peter Kerševan and Milan Kovač, have for years been studying in depth the problems of the Ljubljana railway hub and the Ljubljana Passenger Centre. In 2018 we presented our research and reflections to the public in the brochure Drugi tir Koper–Divača – kako naprej? : ljubljansko železniško vozlišče in Fabianijev obroč, and in 2021 in the brochure Zelena Ljubljana ali razkosana betonska džungla. We have condensed our findings into a Proposal for a comprehensive solution of the Ljubljana railway hub.

Since it is accepted that a northern bypass railway can no longer be realised because of the dense settlement of the area, the length of the route, the water protection zone and so on, we have proposed, as the bypass for rail freight traffic, a south-eastern route between Dolgi most and Zalog. The route is proposed across already degraded land parallel to the southern motorway ring, outside the Natura 2000 area, without flood risk, outside built-up areas and on a suitable water protection zone. The poor load-bearing ground is not a problem for construction, and the geology for a tunnel is known from the motorway.

For passenger rail traffic we propose lowering the railway in the PCL area. Such an arrangement would allow northern and southern Ljubljana to be joined and the city to develop unimpeded along the east-west axis. The underpasses on Dunajska and Šmartinska would no longer be needed. A city that the railway today carves into five pieces would become connected again.

Inside the motorway ring, the Gorenjska railway would be lowered in line with the 2009 design by the German consultants Vössing & Vepro and modernised to meet international standards for the InterCity network. The Kamnik railway would connect to the Gorenjska railway outside the motorway ring and, inside the ring, be lowered along its current route to the PCL, linking the Stožice sports centre and Jože Pučnik Airport with the city. The Dolenjska railway would join the bypass. The Primorska railway for passenger traffic could either join the south-eastern bypass or be lowered along its existing route to the PCL inside the motorway ring.

A comprehensive solution of the Ljubljana railway hub also affects the railway and road infrastructure projects and the upgrade of the railway station and the new bus station. The railway station would not need a deck on the first floor above the tracks. Passengers would enter the station area through the existing 178-year-old listed heritage building, the pride of the city, which ranks among the oldest preserved railway station buildings in Europe.

The great majority of international and intercity buses arrive on Fabiani's ring. Access to the bus station should therefore be from Fabiani's ring at ground level. That would allow passengers to transfer directly between the bus station, the railway station and the city. The "loop" onto Vilharjeva cesta is unnecessary. Transfer to the city bus lines would be arranged on Masarykova cesta.

And now what?

The state and the City of Ljubljana should strain every muscle to finalise the strategy for the Ljubljana railway hub as soon as possible (the target years of 2030 and 2050 push an issue that should have been resolved long ago into an unacceptably distant future), and all further projects must then strictly follow the agreed goal. At the moment we are witnessing utterly irresponsible and uncoordinated section-by-section planning: railway and road infrastructure in the PCL area is being designed and built, plans are being prepared for the construction of the railway and bus stations, the current legally protected station building has, with the consent of the ZVKDS heritage office, had its designated use changed to a hotel, the railway overpass on Dunajska cesta will shortly be demolished and widened, an outsized underpass will be built on Šmartinska cesta, and land along Masarykova cesta, urgently needed for the development of the railway and its connection with the LPP, is being sold off … Will all these works turn out to be temporary? If the answer is yes, we are witnessing an extremely wasteful use of public money. If the answer is no, they will severely constrain and steer any further solutions.

All these activities are proceeding without a National Spatial Plan, which is the basis for any design work, and everything is proceeding without public participation. What is happening in the PCL area lacks a legal basis and is unlawful, and in the opinion of deputy mayor Janez Koželj it is "premature": "Nothing has been decided about the routing of freight and passenger traffic through the PCL, neither about a northern nor a southern bypass line, nor about a full or partial lowering of the tracks."

Will rail freight traffic, 450 trains a day, run through the Ljubljana Passenger Centre and, with its harmful emissions, cause illness and death for present and future generations of residents, or will it bypass the city centre along a south-eastern bypass line? Questions this important should be decided by the public in a referendum.

"There is still time for a referendum," says Mr Janez Koželj. We hope this is not a barefaced lie and a deception of the citizens.

The article was originally written and published at the end of 2022. It was edited by Kaja Lipnik Vehovar.

Testing of anabolic androgenic steroids (NLZOH)

Following a successful pilot phase in the first half of 2025, regular testing of anabolic androgenic steroids (AAS) is now available in Slovenia, free of charge and anonymous for all users.

The purpose of the project is to analyse substances banned in sport and other substances that enhance physical performance and appearance, and to establish their presence in products sold both over the counter and on the black market, which provides the basis for preparing harm reduction, prevention and awareness-raising measures.

For each sample submitted, users receive:

  • The results of a laboratory analysis of the content of active ingredients and possible contaminants
  • A counselling conversation in which a questionnaire is used to assess the manner of use and motivation, and safer use is encouraged

WHERE are AAS (anabolic androgenic steroid) samples accepted?

LJUBLJANA

Address: NLZOH, Grablovičeva 44, 1000 Ljubljana

Contact: testiranje.aas@nlzoh.si, 068 164 409

Thursdays according to the schedule, 17:00-20:00

MARIBOR

Address: NLZOH, Prvomajska 1, 2000 Maribor

Contact: testiranje.aas@nlzoh.si, 068 164 409

First Thursday of the month, 17:00-20:00

WHEN does sample intake take place?

Samples are accepted at pre-scheduled sessions. To submit samples at a session you must REGISTER at least 1 day in advance. For further information we can be reached at testiranje.aas@nlzoh.si or by phone on 068 164 409. The number of places per session is limited!

HOW do I register for a session?

You register for testing via the link, where you choose one of the available sessions. You do not need to fill in the "Name and surname" field with your real name; you may enter a nickname or any other label, since sample submission is anonymous. You must, however, enter an email address, as you will need it to change or cancel your appointment. If you are unable to attend the testing, please cancel your appointment in time. The link for cancelling or changing the appointment is in the confirmation email you receive upon registering. You may bring at most two samples to a testing session!

Sessions in 2025

  • 4 September, LJUBLJANA
  • 2 October, MARIBOR
  • 13 November, LJUBLJANA
  • 4 December, MARIBOR

WHICH samples can you submit?

  • Liquids (ampoules): at least 1 ml (if possible, still sealed)
  • Tablets: at least 2 pieces (because the active ingredient may be unevenly distributed)

A quantitative result is possible only for substances included in the LIST. If you have further questions about submitting samples, write to testiranje.aas@nlzoh.si

WHAT do you receive?

  • A laboratory analysis: purity, active ingredient content, contamination
  • Advice on safer use and on the risks

The RESULTS of the analyses are available:

  • by phone on 068 164 409
  • in person by arrangement

WHY take part?

  • 20-50% of samples tested abroad were incorrectly labelled
  • Possible presence of harmful impurities or incorrect concentrations
  • Testing reduces the risk to your health and the risk of side effects
  • The service is anonymous, free of charge and available in Slovenia

If you have further questions about submitting samples, write to testiranje.aas@nlzoh.si

The post Testing of anabolic androgenic steroids (NLZOH) appeared first on DrogArt.

Obvestilo o novi lokaciji DrogArta v Ljubljani

We are delighted to announce that Združenje DrogArt Ljubljana is moving to a new location!

From 15 September you will find us in Spodnja Šiška at:

Jezerska ulica 1, 1000 Ljubljana

At the new location we will be able to run our activities in an even friendlier, more comfortable and higher-quality way, and support you with any questions related to harm reduction and healthy choices.

During the move, from 1 to 12 September, our activities will run on an adjusted basis:

  • Info point – closed
  • Anonymous testing of psychoactive substances – running on an adjusted schedule. From September we will already be accepting samples at the new location (Jezerska ulica 1) on an adjusted schedule – Mondays 14:00 – 18:00 and Fridays 16:00 – 18:00
  • Youth day centre – closed
  • Outreach work – paused
  • Counselling – available by individual arrangement with the counsellors

We look forward to your visit and to spending time together in the new premises!

The DrogArt team


My Brand (MDMA)

Active substances

MDMA (162 mg)

Additional description

At more than 1.5 mg of MDMA per kg of body weight, unwanted effects such as jaw clenching, muscle cramps, panic reactions and epileptic seizures appear more quickly. In the days after taking larger doses of MDMA, worsened depression, poor concentration, sleep disturbances, loss of appetite and a feeling of severe listlessness can occur. The symptoms subside after a few days.

Follow the risk-reduction guidelines below!

Disclaimer: The amount of MDMA in a tablet is for information only and can differ substantially between tablets with the same logo and colour.

Test date

29.8.2025

Risk reduction

  • Adjust the dose to your weight and experience. The literature gives an MDMA dose of 1–1.5 mg/kg of body weight, which for a person weighing 60 kg is 60–90 mg.
  • Be very careful if you are using MDMA for the first time or if you do not know how pure your MDMA is. The effects can vary widely and some people feel the negative effects (both physical and psychological) much more intensely. Always start with a small dose (e.g. a quarter of an ecstasy tablet or a light dose of MDMA crystals) and wait at least 2 hours.
  • Take regular breaks from dancing.
  • Drink up to half a litre of an isotonic drink per hour if you are dancing, and less if you are not.
  • Do not mix different drugs with each other, and do not mix them with medicines.
  • Do not take different tablets in one night.
  • Eat properly and get enough sleep during the week.
  • Take breaks between uses of MDMA (2–3 months between one use and the next).
  • If you notice problems that could be related to ecstasy use, seek help.


2-MMC sold as "sladoled" (3-MMC) in Ljubljana

Active substances

2-MMC (approximately 99%)

Additional description

The results of an analysis carried out at the National Laboratory of Health, Environment and Food showed that a sample sold in Ljubljana as 3-MMC contained its analogue 2-MMC.

2-MMC is an analogue of 3-MMC and 4-MMC, but according to user reports it does not appear to have pronounced stimulant effects. Current information is based almost entirely on user reports, which describe even stronger craving (the urge to take another dose) than is typical for 3-MMC.

The likely reason is a somewhat weaker stimulant and empathogenic effect compared with 3-MMC and 4-MMC, which users then try to make up for by redosing and taking larger amounts, which in turn increases the risk of health complications.

It is the least researched analogue, so little other information has been collected on its effects and possible risks.

Test date

4.7.2025

Risk reduction

  • This year we have detected a larger number of fake products being sold as "sladoled" (3-MMC). If you use it, you are therefore especially advised to make use of the anonymous drug-checking service.

The sample was collected by DrogArt's info point as part of the anonymous collection of samples of psychoactive substances. The analysis was carried out by the National Laboratory of Health, Environment and Food. The notice was prepared by the National Institute of Public Health.


Cocaine with a high levamisole content in the Ljubljana area

Active substances

Cocaine

Levamisole (40%)

Additional description

The results of an analysis carried out at the National Laboratory of Health, Environment and Food showed that a sample bought as cocaine in the Ljubljana area contains 40% levamisole, with the remainder being cocaine.

Levamisole is a medicine used in veterinary practice to treat parasitic infections. In humans, the following adverse effects of levamisole have been described: nausea, diarrhoea, dizziness, fever, insomnia, headache, cramps and general malaise. Levamisole can reduce the number of white blood cells and weaken the immune system, which also exposes the individual to a greater risk of developing dangerous infections.

Test date

22.8.2024

Risk reduction

If you decide to use cocaine, follow these harm-reduction guidelines:

  • Always use your own equipment. Sharing equipment can transmit infectious diseases such as hepatitis C. With regular use the nasal lining becomes thinner and bleeds more easily; that blood can remain on snorting equipment and, if it is infected, the virus can be passed on to the next person who uses it.
  • Do not mix cocaine with other drugs. Mixing drugs carries a much greater risk of life-threatening complications, because when substances are combined the rules that apply to each drug on its own often no longer hold.
  • Test the substance before taking it. Cocaine is very often cut with dangerous active substances (other drugs, medicines). You can avoid taking dangerously adulterated cocaine by having it tested in time. You can read more about substance testing here.
  • If a burning or pressing pain in the chest appears during or after cocaine use, possibly spreading into the left arm or neck, call for medical help immediately: there is a strong possibility of a heart attack, which, if untreated, can end in death.
  • Do not use too much or too often. Regular cocaine use can lead to serious physical and mental health problems and to strong psychological dependence. If you notice that you have trouble cutting down or stopping, you can seek help at our counselling service. More information can be found here.

More information about cocaine is available at www.kokain.si

The sample was collected by Stigma as part of the anonymous collection of samples of psychoactive substances. The analysis was carried out by the National Laboratory of Health, Environment and Food. The notice was prepared by the National Institute of Public Health.


PayPal's problems not yet fully resolved

The problems that descended on PayPal last week, which led to payments worth 10 billion euros being blocked, have not yet been completely resolved. PayPal users in Europe are still greeted by a warning that their displayed balance may be incorrect because of a "temporary problem with direct debits", and PayPal is still sending its users emails containing the same warning. These are the after-effects of last week's failure of its security systems. PayPal's fraud-detection systems, which prevent malicious transactions from being submitted, failed, so millions of payment requests poured into banks between 25 and 27 August. The banks' systems for spotting suspicious transactions flagged them, and the banks blocked PayPal payments en masse; in practice these payments work by PayPal debiting its users' cards. PayPal is reassuring its users that they need not do anything and that it will fix the problem as quickly as possible. It is also asking them not to call its customer-support lines, which are currently overloaded.

A lunar eclipse is coming this weekend

This weekend brings one of this year's eclipses, as the Moon passes through the Earth's shadow. It will enter the umbra, so a total lunar eclipse will hang in the sky for a good hour. It will be best and fully visible in Asia and Australia; in Slovenia it will be visible from moonrise. The Moon will rise in the east (80°) and during the total phase will be at most 8° above the horizon, so the best views will be from unlit, isolated high ground. The eclipse begins on Sunday at 17:28 Slovenian time, when the Moon enters the penumbra; at 18:27 it touches the umbra and at 19:30 it is fully eclipsed. The total phase ends at 20:52. In Slovenia the Moon rises at 20:03, so we will be able to watch the second half of the event. Partly cloudy weather is forecast for Sunday, so we can keep our fingers crossed that the eclipse will be clearly visible. This will also be the last total lunar eclipse visible from our region for the next three years; the next one will not come until New Year's Eve 2028. In the current eclipse cycle there will also be a partial solar eclipse on 21 September, but it will be visible only from the southern Pacific, Antarctica and New Zealand.

Tesla lied about not having the data

Last month a court in Miami found Tesla partly responsible for a traffic accident in which one person was seriously injured and another lost their life. The court placed the majority of the blame on the driver, with Tesla's share set at one third, because the driver had relied on advertising claiming that the Model S was capable of autonomous driving. We have now learned that during the proceedings Tesla tried to conceal the existence of data that the vehicle captured just before the crash and at the moment of impact. Tesla lied that it did not have the data. The claim itself is highly unusual, since in the past Tesla has been quick and happy to hand over such data whenever doing so cleared it of blame. This time, the company insisted, the data was not on its servers. The vehicle itself could also hold the data, but depending on the damage from a crash it cannot always be recovered. At first it seemed that the wrecked Tesla had no stored data, but then the hacker and X user @greentheonly retrieved it from the vehicle. He told The Washington Post that it had been obvious all along that the data was there; it only had to be read out. It turned out that the data was also on Tesla's servers, where it had apparently been marked for deletion. Once Greentheonly found it on the vehicle, it was, surprisingly, found at Tesla as well. Attorney Joel Smith apologised, saying that Tesla had been clumsy in handling the data but had not done anything wrong on purpose. It was supposedly a great coincidence that the data could not be found: a perfect storm, he said. Tesla allegedly never considered hiding it and was simply convinced the data did not exist, which is why it was all the more pleased when the hacker informed it of the data's existence, after which the company found it too. This evasiveness also contributed to the verdict finding Tesla partly liable. The judge did not, however, find evidence that Tesla withheld the data deliberately. Tesla must nevertheless reimburse the plaintiffs for the costs they incurred in obtaining the data on their own.

Windows 11 25H2 upgrade already available for the most impatient

Microsoft has released the fifth version of Windows 11, named 25H2 under the current naming scheme, into the Release Preview Channel. This is the last stage before general release, and anyone who explicitly asks for it can already install it. No major changes are expected in the weeks remaining before it moves to the general channel; most of the bugs were already ironed out over recent months while beta testers were running the version. Unlike the previous version, 24H2, which brought quite a few new features, 25H2 will be much lighter on novelties. It is what Microsoft calls an enablement package: it adds quite a few features under the hood, but they remain locked, and when Microsoft activates a given feature in the future, a short update unlocks it. Such a package also removes the odd feature: Microsoft has confirmed that PowerShell 2.0 and the Windows Management Instrumentation Command-line (WMIC) are on their way out, while business editions will (finally!) get the option to remove the preinstalled Windows 11 apps. A few new things can still be found on closer inspection. The Start menu has been slightly reworked; the infamous Settings app (which incompletely replaced the Control Panel) gains more functions and AI-assisted search; computers that no longer boot will be rescued by Quick Machine Recovery; and the search tool gains semantic search (e.g. "the annual report I wrote last month"). Anyone who wants to install the new version can join the Release Preview channel.

WhatsApp vulnerability threatens iPhone and Mac owners

Meta has confirmed that current versions of the WhatsApp clients for iOS and macOS contain a serious hole that can be exploited without any user interaction (zero click). The attack uses the vulnerability CVE-2025-55177 to trick the operating system into installing malware downloaded over an internet connection. The automatic execution is particularly problematic, since users do not have to click any link for the attack to be triggered. Affected are WhatsApp for iOS before version 2.25.21.73, WhatsApp Business for iOS before 2.25.21.78 and WhatsApp for Mac before 2.25.21.78. Meta has confirmed that the vulnerability is not merely theoretical, as attackers have already exploited it in the wild. Active attacks have also been confirmed by the Security Lab at Amnesty International, which in the past uncovered, for example, the police installing surveillance software on journalists' phones in Serbia. Meta is therefore urging all users to update their apps immediately. Anyone unsure whether their device has already been attacked can perform a factory reset.

Platforms in the UK are verifying users' ages in different ways

Platforms in Great Britain have begun verifying the age of their users, as the Online Safety Act requires of them. Steam has decided that access to content unsuitable for minors will require a valid credit card number. In Great Britain only adults can obtain credit cards, so Valve has elegantly offloaded the age check onto the banking system, which already performs it. Some users complained about this, but the advantages for Valve are obvious: the company explained that entering a credit card number into the platform greatly reduces the sharing of user profiles, since hardly anyone shares a profile with an active payment method attached. Most other platforms have opted for less rigorous methods, such as checking age against a photo of the user. Reddit, Bluesky and Discord did so, and were then left wondering how the system could be fooled with a photo from a video game. Once they patched that hole, people simply turned to VPNs to virtually relocate outside Great Britain. Because the law also applies to foreign platforms with a significant number of users in the UK, those platforms want to change this. In the US, 4chan has filed a lawsuit asking the American courts to confirm that the British regulator Ofcom has no jurisdiction to act against American companies. Ofcom has, for example, already imposed a fine on 4chan, which the site does not intend to pay.

After only a few weeks, experts are already leaving Meta's superintelligence lab

Top artificial-intelligence experts in Silicon Valley are receiving offers running into the hundreds of millions of dollars, but even that is not enough to keep them at Meta. In at least one case even a ten-figure offer was not enough, and several of the most prominent hires have already left Meta Superintelligence Labs after just a few months. Avi Verma joined Meta from OpenAI and left the company after about a month; Ethan Knight did the same. Rishabh Agarwal joined Meta after leaving xAI and is now leaving Meta too, returning to Canada. Chaya Nayak was director of generative-AI products at Meta and, after a considerably longer career there of ten years, is moving to OpenAI. Meta's public-relations department is trying to present the departures as unimportant, tying itself in knots over whether those leaving are key members of the core team or merely other experts. In reality that does not matter. Mark Zuckerberg bet everything on artificial intelligence this year when he founded the new superintelligence unit, or lab, and flooded it with money, yet conditions in the department are said to be fairly chaotic.

Samsung and SK Hynix will need American licences to make memory in China

The US Department of Commerce has removed Intel Semiconductor (Dalian), Samsung China Semiconductor and SK hynix Semiconductor (China) from its list of validated chip manufacturers. The companies have thereby lost their preferential treatment for the use of chipmaking technology (their validated end user, or VEU, status). Until now they did not need American licences to build production capacity or to import, transfer and export their products, so they could, for example, manufacture memory in China practically without restriction. That is now over. Samsung and SK Hynix are the world's largest manufacturers of DRAM and NAND chips; although headquartered in South Korea, they do much of their manufacturing in China. Samsung's factory in Xian alone produces around 40 percent of the world's NAND chips, and SK Hynix makes 40 percent of DRAM chips in Wuxi. Intel, which is likewise on the list, sold its production capacity there to SK Hynix in the spring. The two companies now have 120 days to obtain the appropriate licences so that they can continue manufacturing in China in the future. What exactly the unpredictable American administration will demand in exchange for the licences is not yet known. Recall the unusual condition that Nvidia and AMD agreed to this month: they will pay 15 percent of the revenue from sales of their chips (H20 and MI308) into the federal budget.

The sentence that rules them all

With the rise of large language models a new class of threats has emerged, because malicious text can simply be planted in documents or other sources that these models receive as input. Such text tricks the models into either not behaving as the user expects or behaving contrary to the wishes and restrictions of their authors. Researchers at the computer-security company Palo Alto Networks have shown that sufficiently long, half-intelligible sentences are enough to break the shackles. If a prompt is made as long as possible and, ideally, written in broken language with poor grammar, language models can be fooled into slipping past their guardrails. All publicly available models from the big companies have such guardrails, because those companies cannot afford to have their models produce hate speech, illegal pornography or instructions for making explosives. They rely on the robustness of the guardrails, which is not absolute. The researchers analysed the fact that training does not prevent harmful responses from being generated; it only greatly reduces the probability that this happens (the refusal-affirmation logit gap). They showed that this fact can be exploited. The reason lies in the fact that, appearances notwithstanding, these models cannot think or understand, so they also do not understand the concept of harmfulness, even though they can label some content as harmful. The models merely look for the most probable continuation of the text. They avoid producing harmful content because during training they learn that such text earns very few points. This can be subverted by piling up run-on sentences without punctuation: punctuation is what re-engages the filters and scores harmful responses negatively, and if there is none, the instructions simply pile up. Billy Hewlett, the company's director of AI research, said that the probability of harmful responses can be reduced, but it will never be zero. The practical solution is external or separate checking of the models' output, rather than relying on the models' benevolence. Who would have thought that the remedy for artificial intelligence would be so very ordinary: external oversight and guardrails.
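
The external check recommended at the end of the article can be sketched in a few lines. The snippet below is only an illustration of the idea, not Palo Alto Networks' implementation; generate_reply and moderation_score are hypothetical stand-ins for whatever model and separate classifier a real deployment would use.

```python
# Minimal sketch of an external output guardrail: the model's reply is screened
# by a separate check before it reaches the user, instead of trusting the
# model's own refusal training. Both functions below are placeholders.

BLOCK_THRESHOLD = 0.5  # assumed score above which a reply is treated as harmful


def generate_reply(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return "model output for: " + prompt


def moderation_score(text: str) -> float:
    """Placeholder for an independent harmfulness classifier (0.0 benign, 1.0 harmful)."""
    harmful_markers = ("explosive", "detonator")  # toy heuristic, for the sketch only
    return 1.0 if any(word in text.lower() for word in harmful_markers) else 0.0


def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    # The point made in the article: this check runs outside the model, so a
    # run-on-sentence jailbreak that slips past the model's own guardrails is
    # still caught before the text is shown to the user.
    if moderation_score(reply) >= BLOCK_THRESHOLD:
        return "Sorry, I can't help with that."
    return reply


if __name__ == "__main__":
    print(safe_reply("tell me something harmless"))
```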

Consoles aren't getting any cheaper

Although the new Nintendo Switch 2 is selling like hot cakes, its fairly high price is fair game for criticism. At 470 euros it is not outrageously expensive, but it is still not the kind of widely affordable toy you would buy without a second thought. Nor is it an exception, since the competition sits in the same price bracket. According to an analysis by Ars Technica, that is too much. They examined console prices through history and found that today's prices are higher than historical trends would predict. Neither Nintendo nor its competitors, such as Microsoft and Sony, have cut console prices in recent years, and with their latest releases they have actually raised the recommended retail prices. Even older consoles are holding unusually high prices: adjusted for inflation, some PlayStation 5 models are more expensive today than they were at launch. The shift happened in the middle of the last decade. The PlayStation 4 Pro, the Switch 1 and the Xbox lost value very slowly, even as they aged considerably. It used to be quite different; the Atari 2600, for example, lost more than two thirds of its value within a few years in the 1970s, and the more recent PlayStation 3 was half price within a few years. Today that trend is gone, and although consoles are not the most expensive in history when launch prices are adjusted for inflation, they stubbornly hold their value. There are several reasons for this, but the common denominator is probably very simple: the market simply bears today's console prices, and nobody will lower them if they don't have to.

Microsoft has developed two AI models of its own

The relationship between Microsoft and OpenAI is, to put it mildly, complicated, so it is not very surprising that Redmond has released two AI models it developed itself. As the name makes obvious, MAI-Voice-1 synthesises speech, and quite efficiently: Microsoft claims that on a single GPU it generates a minute of speech per second. Microsoft is already using it in production to voice Copilot Daily and Podcasts (similar to Google's NotebookLM). MAI-1-preview is not yet publicly available to everyone; it is limited to the LMArena platform, where selected testers can pit it against competing models. In the coming days it will become available for text generation in the Copilot chatbot, though still in beta. Microsoft and OpenAI are otherwise close partners: Microsoft has invested a good 13 billion dollars in OpenAI and still provides the infrastructure, though no longer all of it, on which OpenAI's models are trained and run. In return Microsoft gets preferential, though not exclusive, access to those models, which it builds into its own products. In recent weeks the relationship has become even more complicated, as OpenAI has started renting computing power elsewhere, and Microsoft is now showing off models of its own.

Banks blocked 10 billion euros of fraudulent PayPal payments

The German Savings Banks Association (DSGV) has reported recording attempted unauthorised direct debits against cards linked to PayPal accounts. Roughly 10 billion euros were involved, which the banks blocked before the money could end up in the wrong hands. The culprit appears to have been PayPal's fraud-detection system, which failed. Payments were halted as early as Monday, when the first reports of suspicious transactions appeared. The DSGV confirmed that the debits were unauthorised, while PayPal told Reuters that a temporary service disruption had affected the processing of some transactions, which is about the most diplomatic statement possible. The problem has since been identified and fixed. The incident mainly affected payments in Germany, but the consequences were felt in other countries as well. In Slovenia, the banking association recommends changing passwords and enabling two-factor authentication.

DOGE stored the personal data of millions of Americans on an unprotected server

A whistleblower from the US Department of Government Efficiency (DOGE), which Elon Musk himself ran for several months, has revealed that its staff uploaded the entire database of Social Security numbers to a publicly accessible server. In the US these serve as a unique identification number, for want of anything better and more universal. There is as yet no evidence of unauthorised access to the database, but the mere fact that it was possible is extremely worrying: it would be the largest exposure of personal data ever, with catastrophic consequences for security, and it would enable identity theft. The whistleblower, Charles Borges, states that the database contains names, addresses, dates of birth, Social Security numbers and other personal data. The data was copied from the Numerical Identification System (NUMIDENT) database to a nominally internal server that was unprotected and so badly configured that it allowed access from outside. Internet access was not even logged, so in theory anyone at DOGE could have accessed the server. The database contains around 550 million records, covering everyone who has ever been issued such a number. According to the whistleblower's complaint, DOGE's head of IT said when the database was set up that operational speed mattered more than the potential risks he was taking on. Why exactly DOGE needed its own copy of the NUMIDENT database is not known.

2025-08-25 teletext in north america

I have an ongoing fascination with "interactive TV": a series of efforts, starting in the 1990s and continuing today, to drag the humble living room television into the world of the computer. One of the big appeals of interactive TV was adoption: the average household had a TV long before the average household had a computer. So, it seems like interactive TV services should have proliferated before personal computers, at least by the logic that many in the industry followed at the time.

This wasn't untrue! In the UK, for example, Ceefax was a widespread success by the 1980s. In general, TV-based teletext systems were pretty common in Europe. In North America, they never had much of an impact---but not for lack of trying. In fact, there were multiple competing efforts at teletext in the US and Canada, and it may very well have been the sheer number of independent efforts that sunk the whole idea. But let's start at the beginning.

The BBC went live with Ceefax in 1974, the culmination of years of prototype development and test broadcasts over the BBC network. Ceefax was quickly joined by other teletext standards in Europe, and the concept enjoyed a high level of adoption. This must have caught the attention of many in the television industry on this side of the ocean, but it was Bonneville International that first bit [1]. Its premier holding, KSL-TV of Salt Lake City, has an influence larger than its name suggests: KSL was carried by an extensive repeater network and reached a large portion of the population throughout the Mountain States. Because of the wide reach of KSL and the even wider reach of the religion that relied on Bonneville for communications, Bonneville was also an early innovator in satellite distribution of television and data. These were ingredients that made for a promising teletext network, one that could quickly reach a large audience and expand to broader television networks through satellite distribution.

KSL applied to the FCC for an experimental license to broadcast teletext in addition to its television signal, and received it in June of 1978. I am finding some confusion in the historical record over whether KSL adopted the BBC's Ceefax protocol or the competing ORACLE, used in the UK by the independent broadcasters. A 1982 paper on KSL's experiment confusingly says they used "the British CEEFAX/Oracle," but then in the next sentence the author gives the first years of service for Ceefax and ORACLE the wrong way around, so I think it's safe to say that they were just generally confused. I think I know the reason why: in the late '70s, the British broadcasters were developing something called World System Teletext (WST), a new common standard based on aspects of both Ceefax and ORACLE. Although WST wasn't quite final in 1978, I believe that what KSL adopted was actually a draft of WST.

That actually hints at an interesting detail which becomes important to these proposals: in Europe, where teletext thrived, there were usually not very many TV channels. The US's highly competitive media landscape led to a proliferation of different TV networks, along with plenty of local operations. It was a far cry from the UK, for example, where 1982 saw the introduction of a fourth channel called, well, Channel 4. By contrast, Salt Lake City viewers with cable were picking from over a dozen channels in 1982, and that wasn't an especially crowded media market. This difference in the industry, between a few major nationwide channels and a longer list of often local ones, has had widespread ramifications for how UK and US television technology evolved.

One of them is that, in the UK, space in the VBI to transmit data became a hotly contested commodity. By the '80s, obtaining a line of the VBI on any UK network to use for your new datacasting scheme involved a bidding war with your potential competitors, not unlike the way spectrum was allocated in the US. Teletext schemes were made and broken by the outcomes of these auctions. Over here, there was a long list of television channels and on most of them only a single line of the VBI was in use for data (line 21 for closed captions). You might think this would create fertile ground for VBI-based services, but it also posed a challenge: the market was extensively fractured. You could not win a BBC or IBA VBI allocation and then have nationwide coverage; you would have to negotiate such a deal with a long list of TV stations and then likely provide your own infrastructure for injecting the signal.

In short, this seems to be one of the main reasons for the huge difference in teletext adoption between Europe and North America: throughout Europe, broadcasting tended to be quite centralized, which made it difficult to get your foot in the door but very easy to reach a large customer base once you had. In the US, it was easier to get started, but you had to fight for each market area. "Critical mass" was very hard to achieve [2].

Back at KSL, $40,000 (~$200,000 today) bought a General Automation computer and Tektronix NTSC signal generator that made up the broadcast system. The computer could manage as many as 800 pages of 20x32 teletext, but KSL launched with 120. Texas Instruments assisted KSL in modifying thirty television sets with a new decoder board and a wired remote control for page selection. This setup, very similar to teletext sets in Europe, nearly doubled the price of the TV set. This likely would have become a problem later on, but for the pilot stage, KSL provided the modified sets gratis to their 30 test households.

One of the selling points of teletext in Europe was its ability to provide real-time data. Things like sports scores and stock quotations could be quickly updated in teletext, and news headlines could make it to teletext before the next TV news broadcast. Of course, collecting all that data and preparing it as teletext pages required either a substantial investment in automation or a staff of typists. At the pilot stage, KSL opted for neither, so much of the information that KSL provided was out-of-date. It was very much a prototype. Over time, KSL invested more in the system. In 1979, for example, KSL partnered with the National Weather Service to bring real-time weather updates to teletext---all automatically via the NWS's computerized system called AFOS.

At that time, KSL was still operating under an experimental license, one that didn't allow them to onboard customers beyond their 30-set test market. The goal was to demonstrate the technology and its compatibility with the broader ecosystem. In 1980, the FCC granted a similar experimental license to CBS affiliated KMOX in St. Louis, who started a similar pilot effort using a French system called Antiope. Over the following few years, the FCC allowed expansion of this test to other CBS affiliates including KNXT in Los Angeles. To emphasize the educational and practical value of teletext (and no doubt attract another funding source), CBS partnered with Los Angeles PBS affiliate KCET who carried their own Teletext programming with a characteristic slant towards enrichment. Meanwhile, in Chicago, station WFLD introduced a teletext service called Keyfax, built on Ceefax technology as a joint venture with Honeywell and telecom company Centel. Despite the lack of consumer availability, teletext was becoming a crowded field---and for the sake of narrative simplicity I am leaving out a whole set of other North American ventures right now.

In 1983, there were at least a half dozen stations broadcasting teletext based on British or French technology, and yet, there were zero teletext decoders on the US market. Besides their use of an experimental license, the teletext pilot projects were constrained by the need for largely custom prototype decoders integrated into customers' television sets. Broadcast executives promised the price could come down to $25, but the modifications actually available continued to cost in the hundreds. The director of public affairs at KSL, asked about this odd conundrum of a nearly five-year-old service that you could not buy, pointed out that electronics manufacturers were hesitant to mass produce an inexpensive teletext decoder as long as it was unclear which of several standards would prevail. The reason that no one used teletext, then, was in part the sheer number of different teletext efforts underway. And, of course, things were looking pretty evenly split: CBS had fully endorsed the French-derived system, and was a major nationwide TV network. But most non-network stations with teletext projects had gone the British route. In terms of broadcast channels, it was looking about 50/50.

Further complicating things, teletext proper was not the only contender. There was also videotex. The terminology has become somewhat confused, but I will stick to the nomenclature used in the 1980s: teletext services used a continuous one-way broadcast of every page and decoders simply displayed the requested page when it came around in the loop. Videotex systems were two-way, with the customer using a phone line to request a specific page which was then sent on-demand. Videotex systems tended to operate over telephone lines rather than television cable, but were frequently integrated into television sets. Videotex is not as well remembered as teletext because it was a massive commercial failure, with the very notable exception of the French Minitel.
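
To make the distinction concrete, here is a toy sketch of how a decoder gets a page under each model. It is purely illustrative: the page numbers and the Python functions are invented, not taken from any real teletext or videotex system.

```python
# Toy contrast between the two delivery models: a teletext decoder waits for
# the page to come around in the broadcast loop, while a videotex terminal
# asks the head end for exactly the page it wants.
import itertools

PAGES = {100: "Index", 101: "News headlines", 200: "Sports scores"}


def teletext_fetch(wanted: int) -> str:
    """One-way broadcast: every page is sent in a repeating loop and the
    decoder simply displays the requested page when it comes around."""
    for slot, page in enumerate(itertools.cycle(sorted(PAGES))):
        if page == wanted:
            return f"page {wanted} shown after waiting {slot + 1} broadcast slots"


def videotex_fetch(wanted: int) -> str:
    """Two-way: the terminal requests a specific page over the phone line and
    the head end sends just that page on demand."""
    return f"page {wanted} sent immediately: {PAGES[wanted]}"


print(teletext_fetch(200))
print(videotex_fetch(200))
```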

But in the '80s they didn't know that yet, and the UK had its own videotex venture called Prestel. Prestel had the backing of the Post Office, because they ran the telephones and thus stood to make a lot of money off of it. For the exact same reason, US telephone company GTE bought the rights to the system in the US.

Videotex is significantly closer to "the internet" in its concept than teletext, and GTE was entering a competitive market. By 1981, Radio Shack had already been selling a videotex terminal for several years: a machine originally developed as the "AgVision" for use with an experimental Kentucky agricultural videotex service and then offered nationwide. This creates an amusing irony: teletext services existed but it was very difficult to obtain a decoder to use them. Radio Shack was selling a videotex client nationwide, but what service would you use it with? In practice, the "TRS-80 Videotex," as the AgVision came to be known, was used mostly as a client for CompuServe and Dow Jones. Neither of these were actually videotex services, using neither the videotex UX model nor the videotex-specific features of the machine. The TRS-80 Videotex was reduced to just a slightly weird terminal with a telephone modem, and never sold well until Radio Shack beefed it up into a complete microcomputer and relaunched it as the TRS-80 Color Computer.

Radio Shack also sold a backend videotex system, and apparently some newspapers bought it in an effort to launch a "digital edition." The only one to achieve long-term success seems to have been StarText, a service of the Fort Worth Star-Telegram. It was popular enough to be remembered by many from the Fort Worth area, but there was little national impact. It was clearly not enough to float sales of the TRS-80 Videotex and the whole thing has been forgotten. Well, with such a promising market, GTE brought its US Prestel service to market in 1982. As the TRS-80 dropped its Videotex ambitions, Zenith launched a US television set with a built-in Prestel client.

Prestel wasn't the only videotex operation, and GTE wasn't the only company marketing videotex in the US. If the British Post Office and GTE thought they could make money off of something, you know AT&T was somewhere around. They were, and in classic AT&T fashion. During the 1970s, the Canadian Communications Research Center developed a vector-based drawing system. Ontario manufacturer Norpak developed a consumer terminal that could request full-color pages from this system using a videotex-like protocol. Based on the model of Ceefax, the CRC designed a system called Telidon that worked over television (in a more teletext-like fashion) or phone lines (like videotex), with the capability of publishing far more detailed graphics than the simple box drawings of teletext.

Telidon had several cool aspects, like the use of a pretty complicated vector-drawing terminal and a flexible protocol designed for interoperability between different communications media. That's the kind of thing AT&T loved, so they joined the effort. With CRC, AT&T developed NABTS, the North American Broadcast Teletext Specification---based on Telidon and intended for one-way broadcast over TV networks.

NABTS was complex and expensive compared to Ceefax/ORACLE/WST based systems. A review of KSL's pilot notes how the $40,000 budget for their origination system compared to the cost quoted by AT&T for an NABTS headend: as much as $2 million. While KSL's estimates of $25 for a teletext decoder had not been achieved, the prototypes were still running cheaper than NABTS clients that ran into the hundreds. Still, the graphical capabilities of NABTS were immediately impressive compared to text-only services. Besides, the extensibility of NABTS onto telephone systems, where pages could be delivered on-demand, made it capable of far larger databases.

When KSL first introduced teletext, they spoke of a scheme where a customer could call a phone number and, via DTMF menus, request an "extended" page beyond the 120 normally transmitted. They could then request that page on their teletext decoder and, at the end of the normal 120 page loop, it would be sent just for them. I'm not sure if that was ever implemented or just a concept. In any case, videotex systems could function this way natively, with pages requested and retrieved entirely by telephone modem, or using hybrid approaches.

NABTS won the support of NBC, who launched a pilot NABTS service (confusingly called NBC Teletext) in 1981 and went into full service in 1983. CBS wasn't going to be left behind, and trialed and launched NABTS (as CBS ExtraVision) at the same time. That was an ignominious end for CBS's actual teletext pilot, which quietly shut down without ever having gone into full service. ExtraVision and NBC Teletext are probably the first US interactive TV services that consumers could actually buy and use.

Teletext was not dead, though. In 1982, Cincinnati station WKRC ran test broadcasts for a WST-based teletext service called Electra. WKRC's parent company, Taft, partnered with Zenith to develop a real US-market consumer WST decoder for use with the Electra service. In 1983, the same year that ExtraVision and CBS Teletext went live, Zenith teletext decoders appeared on the shelves of Cincinnati stores. They were plug-in modules for recent Zenith televisions, meaning that customers would likely also need to buy a whole new TV to use the service... but it was the only option, and seems to have remained that way for the life of US teletext.

I believe that Taft's Electra was the first teletext service to achieve a regular broadcast license. Through the mid 1980s, Electra would expand to more television stations, reaching similar penetration to the videotex services. In 1982, KeyFax (remember KeyFax? it was the one on WFLD in Chicago) had made the pivot from teletext to videotex as well, adopting the Prestel-derived technology from GTE. In 1984, KeyFax gave up on their broadcast television component and became a telephone modem service only. Electra jumped on the now-free VBI lines of WFLD and launched in Chicago. WTBS in Atlanta carried Electra, and then in the biggest expansion of teletext, Electra appeared on SPN---a satellite network that would later become CNBC.

While major networks, and major companies like GTE and AT&T, pushed for the videotex NABTS, teletext continued to have its supporters among independent stations. Los Angeles's KTTV started its own teletext service in 1984, which combined locally-developed pages with national news syndicated from Electra. This seemed like the start of a promising model for teletext across independent stations, but it wasn't often repeated.

Oh, and KSL? at some point, uncertain to me but before 1984, they switched to NABTS.

Let's stop for a moment and recap the situation. Between about 1978 and 1984, over a dozen major US television stations launched interactive TV offerings using four major protocols that fell into two general categories. One of those categories was one-way over television while the other was two-way over telephone or one-way over television with some operators offering both. Several TV stations switched between types. The largest telcos and TV networks favored one option, but it was significantly more expensive than the other, leading smaller operators to choose differently. The hardware situation was surprisingly straightforward in that, within teletext and videotex, consumers only had one option and it was very expensive.

Oh, and that's just the technical aspects. The business arrangements could get even stranger. Teletext services were generally free, but videotex services often charged a service fee. This was universally true for videotex services offered over telephone and often, but not always, true for videotex services over cable. Were the videotex services over cable even videotex? doesn't that contradict the definition I gave earlier? is that why NBC called their videotex service teletext? And isn't videotex over telephone barely differentiated from computer-based services like CompuServe and The Source that were gaining traction at the same time?

I think this all explains the failure of interactive TV in the 1980s. As you've seen, it's not that no one tried. It's that everyone tried, and they were all tripping over each other the entire time. Even in Canada, where the government had sponsored development of the Telidon system ground-up to be a nationwide standard, the influence of US teletext services created similar confusion. For consumers, there were so many options that they didn't know what to buy, and besides, the price of the hardware was difficult to justify with the few stations that offered teletext. The fact that teletext had been hyped as the "next big thing" by newspapers since 1978, and only reached the market in 1983 as a shambled mess, surely did little for consumer confidence.

You might wonder: where was the FCC during this whole thing? In the US, we do not have a state broadcaster, but we do have state regulation of broadcast media that is really quite strict as to content and form. During the late '70s, under those first experimental licenses, the general perception seemed to be that the FCC was waiting for broadcasters to evaluate the different options before selecting a nationwide standard. Given that the FCC had previously dictated standards for television receivers, it didn't seem like that far of a stretch to think that a national-standard teletext decoder might become mandatory equipment on new televisions.

Well, it was political. The long, odd experimental period from 1978 to 1983 was basically a result of FCC indecision. The commission wasn't prepared to approve anything as a national standard, but the lack of approval meant that broadcasters weren't really allowed to use anything outside of limited experimental programs. One assumes that they were being aggressively lobbied by every side of the debate, which no doubt factored into the FCC's 1981 decision that teletext content would be unregulated, and 1982 statements from commissioners suggesting that the FCC would not, in fact, adopt any technical standards for teletext.

There is another factor wrapped up in this whole story, another tumultuous effort to deliver text over television: closed captioning. PBS introduced closed captioning in 1980, transmitting text over line 21 of the VBI for decoding by a set-top box. There are meaningful technical similarities between closed captioning and teletext, to the extent that the two became competitors. Some broadcasters that added NABTS dropped closed captioning because of incompatibility between the equipment in use. This doesn't seem to have been a real technical constraint, and was perhaps more likely cover for a cost-savings decision, but it generated considerable controversy that led to the National Association of the Deaf organizing for closed captioning and against teletext.

The topic of closed captioning continued to haunt interactive TV. TV networks tended to view teletext or videotex as the obvious replacements for line 21 closed captioning, due to their more sophisticated technical features. Of course, the problems that limited interactive TV adoption in general, high cost and fragmentation, made it unappealing to the deaf. Closed captioning had only just barely become well-standardized in the mid-1980s and its users were not keen to give it up for another decade of frustration. While some deaf groups did support NABTS, the industry still set up a conflict between closed captioning and interactive TV that must have contributed to the FCC's cold feet.

In April of 1983, at the dawn of US broadcast teletext, the FCC voted 6-1 to allow television networks and equipment manufacturers to support any teletext or videotex protocol of their choice. At the same time, they declined to require cable networks to carry teletext content from broadcast television stations, making it more difficult for any TV network to achieve widespread adoption [3]. The FCC adopted what was often termed a "market-based" solution to the question of interactive TV.

The market would not provide that solution. It had already failed.

In November of 1983, Time ended their teletext service. That's right, Time used to have a TV network and it used to have teletext; it was actually one of the first on the market. It was also the first to fall, but they had company. CBS and NBC had significantly scaled back their NABTS programs, which were failing to make any money because of the lack of hardware that could decode the service.

On the WST side of the industry, Taft reported poor adoption of Electra and Zenith reported that they had sold very few decoders, so few that they were considering ending the product line. Taft was having a hard time anyway, going through a rough reorganization in 1986 that seems to have eliminated most of the budget for Electra. Electra actually seems to have still been operational in 1992, an impressive lifespan, but it says something about the level of adoption that we have to speculate as to the time of death. Interactive TV services had so little adoption that they ended unnoticed, and by 1990, almost none remained.

Conflict with closed captioning still haunted teletext. There had been some efforts towards integrating teletext decoders into TV sets, by Zenith for example, but in 1990 line 21 closed caption decoding became mandatory. The added cost of a closed captioning decoder, and the similarity to teletext, seems to have been enough for the manufacturing industry to decide that teletext had lost the fight. Few, possibly no teletext decoders were commercially available after that date.

In Canada, Telidon met a similar fate. Most Telidon services were gone by 1986, and it seems likely that none were ever profitable. On the other hand, the government-sponsored, open-standards nature of Telidon meant that it and descendants like NABTS saw a number of enduring niche uses. Environment Canada distributed weather data via a dedicated Telidon network, and Transport Canada installed Telidon terminals in airports to distribute real-time advisories. Overall, the Telidon project is widely considered a failure, but it has had enduring impact. The original vector drawing language, the idea that had started the whole thing, came to be known as NAPLPS, the North American Presentation Level Protocol Syntax. NAPLPS had some conceptual similarities to HTML, as Telidon's concept of interlinking did to the World Wide Web. That similarity wasn't just theoretical: Prodigy, the second largest information service after CompuServe and first to introduce a GUI, ran on NAPLPS. Prodigy is now viewed as an important precursor to the internet, but seen in a different light, it was just another videotex---but one that actually found success.

I know that there are entire branches of North American teletext and videotex and interactive TV services that I did not address in this article, and I've become confused enough in the timeline and details that I'm sure at least one thing above is outright wrong. But that kind of makes the point, doesn't it? The thing about teletext here is that we tried, we really tried, but we badly fumbled it. Even if the internet hadn't happened, I'm skeptical that interactive television efforts would have gotten anywhere without a complete fresh start. And the internet did happen, so abruptly that it nearly killed the whole concept while television carriers were still tossing it around.

Nearly killed... but not quite. Even at the beginning of the internet age, televisions were still more widespread than computers. In fact, from a TV point of view, wasn't the internet a tremendous opportunity? Internet technology and more compact computers could enable more sophisticated interactive television services at lower prices. At least, that's what a lot of people thought. I've written before about Cablesoft and it is just one small part of an entire 1990s renaissance of interactive TV. There's a few major 1980s-era services that I didn't get to here either. Stick around and you'll hear more.

You know what's sort of funny? Remember the AgVision, the first form of the TRS-80? It was built as a client for AGTEXT, a joint project of Kentucky Educational Television (who carried it on the VBI of their television network) and the Kentucky College of Agriculture. At some point, AGTEXT switched over to the line 21 closed captioning protocol and operated until 1998. It was almost the first teletext service, and it may very well have been the last.

[1] There's this weird thing going on where I keep tangentially bringing up Bonnevilles. I think it's just a coincidence of what order I picked topics off of my list but maybe it reflects some underlying truth about the way my brain works. This Bonneville, Bonneville International, is a subsidiary of the LDS Church that owns television and radio stations. It is unrelated, except by being indirectly named after the same person, to the Bonneville Power Administration that operated an early large-area microwave communications network.

[2] There were of course large TV networks in the US, and they will factor into the story later, but they still relied on a network of independent but affiliated stations to reach their actual audience---which meant a degree of technical inconsistency that made it hard to roll out nationwide VBI services. Providing another hint at how centralization vs. decentralization affected these datacasting services, adoption of new datacasting technologies in the US has often been highest among PBS and NPR affiliates, our closest equivalents to something like the BBC or ITV.

[3] The regulatory relationship between broadcast TV stations, cable network TV stations, and cable carriers is a complex one. The FCC's role in refereeing the competition between these different parts of the television industry, which are all generally trying to kill each other off, has led to many odd details of US television regulation and some of the everyday weirdness of the American TV experience. It's also another area where the US television industry stands in contrast to the European television industry, where state-owned or state-chartered broadcasting meant that the slate of channels available to a consumer was generally the same regardless of how they physically received them. Not so in the US! This whole thing will probably get its own article one day.

2025-08-16 passive microwave repeaters

One of the most significant single advancements in telecommunications technology was the development of microwave radio. Essentially an evolution of radar, the middle of the Second World War saw the first practical microwave telephone system. By the time Japan surrendered, AT&T had largely abandoned their plan to build an extensive nationwide network of coaxial telephone cables. Microwave relay offered greater capacity at a lower cost. When Japan and the US signed their peace treaty in 1951, it was broadcast from coast to coast over what AT&T called the "skyway": the first transcontinental telephone lead made up entirely of radio waves. The fact that live television coverage could be sent over the microwave system demonstrated its core advantage. The bandwidth of microwave links, their capacity, was truly enormous. Within the decade, a single microwave antenna could handle over 1,000 simultaneous calls.

Passive repeater at Pioche

Microwave's great capacity, its chief advantage, comes from the high frequencies and large bandwidths involved. The design of microwave-frequency radio electronics was an engineering challenge that was aggressively attacked during the war because microwave frequencies' short wavelengths made them especially suitable for radar. The cavity magnetron, one of the first practical microwave transmitters, was an invention of such import that it was the UK's key contribution to a technical partnership that led to the UK's access to US nuclear weapons research. Unlike the "peaceful atom," though, the "peaceful microwave" spread fast after the war. By the end of the 1950s, most long-distance telephone calls were carried over microwave. While coaxial long-distance carriers such as L-carrier saw continued use in especially congested areas, the supremacy of microwave for telephone communications would not fall until adoption of fiber optics in the 1980s.

The high frequency, and short wavelength, of microwave radio is a limitation as well as an advantage. Historically, "microwave" was often used to refer to radio bands above VHF, including UHF. As RF technology improved, microwave shifted higher, and microwave telephone links operated mostly between 1 and 9 GHz. These frequencies are well beyond the limits of beyond-line-of-sight propagation mechanisms, and penetrate and reflect only poorly. Microwave signals could be received over 40 or 50 miles in ideal conditions, but the two antennas needed to be within direct line of sight. Further complicating planning, microwave signals are especially vulnerable to interference due to obstacles within the "fresnel zone," the region around the direct line of sight through which most of the received RF energy passes.
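
For a sense of scale, the first Fresnel zone can be computed with the standard textbook formula; the sketch below is only an illustration, and the 4 GHz frequency and 45-mile hop are example numbers rather than figures from any particular AT&T route.

```python
# Radius of the first Fresnel zone at a point along a line-of-sight path:
# r = sqrt(lambda * d1 * d2 / (d1 + d2)), with d1 and d2 the distances to the
# two antennas. Obstacles intruding into this region degrade the signal even
# when the direct line of sight is technically clear.
import math


def fresnel_zone_radius(freq_hz: float, d1_m: float, d2_m: float) -> float:
    wavelength = 3e8 / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))


# Midpoint of a 45-mile (~72 km) hop at 4 GHz: the zone is tens of meters
# across, which is why path planning cares about terrain well off the direct ray.
half_path = 72_000 / 2
print(f"{fresnel_zone_radius(4e9, half_path, half_path):.0f} m")
```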

Today, these problems have become relatively easy to overcome. Microwave relays, stations that receive signals and rebroadcast them further along a route, are located in positions of geographical advantage. We tend to think of mountain peaks and rocky ridges, but 1950s microwave equipment was large and required significant power and cooling, not to mention frequent attendance by a technician for inspection and adjustment. This was a tube-based technology, with analog and electromechanical control. Microwave stations ran over a thousand square feet, often of thick hardened concrete in the post-war climate and for more consistent temperature regulation, critical to keeping analog equipment on calibration. Where commercial power wasn't available they consumed a constant supply of diesel fuel. It simply wasn't practical to put microwave stations in remote locations.

In the flatter regions of the country, locating microwave stations on hills gave them appreciably better range with few downsides. This strategy often stopped at the Rocky Mountains.

Illustration from Microflect manual

In much of the American West, telephone construction had always been exceptionally difficult. Open-wire telephone leads had been installed through incredible terrain by the dedication and sacrifice of crews of men and horses. Wire strung over telephone poles proved able to handle steep inclines and rocky badlands, so long as the poles could be set---although inclement weather on the route could make calls difficult to understand. When the first transcontinental coaxial lead was installed, the route was carefully planned to follow flat valley floors whenever possible. This was an important requirement since it was installed mostly by mechanized equipment, heavy machines, which were incapable of navigating the obstacles that the old pole and wire crews had on foot.

The first installations of microwave adopted largely the same strategy. Despite the commanding views offered by mountains on both sides of the Rio Grande Valley, AT&T's microwave stations are often found on low mesas or even at the center of the valley floor. Later installations, and those in the especially mountainous states where level ground was scarce, became more ambitious. At Mt. Rose, in Nevada, an aerial tramway carried technicians up the slope to the roof of the microwave station---the only access during winter when snowpack reached high up the building's walls. Expansion in the 1960s involved increasing use of helicopters as the main access to stations, although roads still had to be graded for construction and electrical service.

These special arrangements for mountain locations were expensive, within the reach of the Long Lines department's monopoly-backed budget but difficult for anyone else, even Bell Operating Companies, to sustain. And the West---where these difficult conditions were encountered the most---also contained some of the least profitable telephone territory, areas where there was no interconnected phone service at all until government subsidy under the Rural Electrification Act. Independent telephone companies and telephone cooperatives, many of them scrappy operations that had expanded out from the manager's personal home, could scarcely afford a mountaintop fortress and a helilift operation to sustain it.

For the telephone industry's many small players, and even the more rural Bell Operating Companies, another property of microwave became critical: with a little engineering, you can bounce it off of a mirror.

Passive repeater at Pioche

James Kreitzberg was, at least as the obituary reads, something of a wunderkind. Raised in Missoula, Montana, he earned his pilot's license at 15 and joined the Army Air Corps as soon as he was allowed. The Second World War came to a close shortly after, and so he went on to the University of Washington, where he studied aeronautical engineering, and then went back home to Montana, taking up work as an engineer at one of the state's largest electrical utilities. His brother, George, had taken a similar path: a stint in the Marine Corps and an aeronautical engineering degree from Oklahoma. While James worked at Montana Power in Butte, George moved to Salem, Oregon, where he started an aviation company that supplemented its cropdusting revenue by modifying Army-surplus aircraft for other uses.

Montana Power operated hydroelectric dams, coal mines, and power plants, a portfolio of facilities across a sparse and mountainous state that must have made communications a difficult problem. During the 1950s, James was involved in an effort to build a new private telephone system connecting the utility's facilities. It required negotiating some type of obstacle, perhaps a mountain pass. James proposed an idea: a reflector.

Because the wavelengths of microwaves are so short, say 10cm, it's practical to build a flat metallic panel that spans multiple wavelengths. Such a panel will function like a reflector or mirror, redirecting microwave energy at an angle equal to the angle at which it arrived. Much like you can redirect a laser using mirrors, you can also redirect a microwave signal. Some early commenters referred to this technique as a "radio mirror," but by the 1950s the use of "active" microwave repeaters with receivers and transmitters had become well established, so by comparison reflectors came to be known as "passive repeaters."

James believed a passive repeater to be a practical solution, but Montana Power lacked the expertise to build one. For a passive repeater to work efficiently, its surface must be very flat and regular, even under varying temperature. Wind loading had to be accounted for, and the face had to be rigid enough not to flex in the wind. Of course, with his education in aeronautics, James knew that similar problems were encountered in aircraft: the need for lightweight metal structures with surfaces that kept an engineered shape. Wasn't he fortunate, then, that his brother owned a shop that repaired and modified aircraft.

I know very little about the original Montana Power installation, which is unfortunate, as it may very well be the first passive microwave repeater ever put into service. What I do know is that in the fall of 1955, James called his brother George and asked if his company, Kreitzberg Aviation, could fabricate a passive repeater for Montana Power. George, he later recounted, said that "I can build anything you can draw." The repeater was made in a hangar on the side of Salem's McNary Field, erected by the flightline as a test, and then shipped in parts to Montana for reassembly in the field. It worked. It worked so well, in fact, that as word of Montana Power's new telephone system spread, other utilities wrote to inquire about obtaining passive repeaters for their own telephone systems.

In 1956, James Kreitzberg moved to Salem and the two brothers formed the Microflect Company. From the sidelines of McNary Field, Microflect built aluminum "billboards" that can still be found on mountain passes and forested slopes throughout the western United States, and in many other parts of the world where mountainous terrain, adverse weather, and limited utilities made the construction of active repeaters impractical.

Passive repeaters can be used in two basic configurations, defined by the angle at which the signal is reflected. In the first case, the reflection angle is around 90 degrees (the sharper the turn, the larger the repeater's projected face toward both paths and the more efficiently it performs). This situation is often encountered when there is an obstacle that the microwave path needs to "maneuver" around. For example, a ridge or even a large structure like a building in between two sites. In the second case, the microwave signal must travel in something closer to a straight line---over a mountain pass between two towns, for example. When the reflection angle is greater than 135 degrees, the use of a single passive repeater becomes inefficient or impossible, so Microflect recommends the use of two. Arranged like a dogleg or periscope, the two repeaters reflect the signal to the side and then onward in the intended direction.

Microflect published an excellent engineering manual with many examples of passive repeater installations along with the signal calculations. You might think that passive repeaters would be so inefficient as to be impractical, especially when more than one was required, but this is surprisingly untrue. Flat aluminum panels are almost perfectly efficient reflectors of microwave energy, and somewhat counterintuitively, passive repeaters can even provide gain.

In an active repeater, it's easy to see how gain is achieved: power is added. A receiver picks up a signal, and then a powered transmitter retransmits it, stronger than it was before. But passive repeaters require no power at all, one of their key advantages. How do they pull off this feat? The design manual explains with an ITU definition of gain that only an engineer could love, but in an article for "Electronics World," Microflect field engineer Ray Thrower provided a more intuitive explanation.

A passive repeater, he writes, functions essentially identically to a parabolic antenna, or a telescope:

Quite probably the difficulty many people have in understanding how the passive repeater, a flat surface, can have gain relates back to the common misconception about parabolic antennas. It is commonly believed that it is the focusing characteristics of the parabolic antenna that gives it its gain. Therefore, goes the faulty conclusion, how can the passive repeater have gain? The truth is, it isn't focusing that gives a parabola its gain; it is its larger projected aperture. The focusing is a convenient means of transition from a large aperture (the dish) to a small aperture (the feed device). And since it is projected aperture that provides gain, rather than focusing, the passive repeater with its larger aperture will provide high gain that can be calculated and measured reliably. A check of the method of determining antenna gain in any antenna engineering handbook will show that focusing does not enter into the basic gain calculation.

We can also think of it this way: the beam of energy emitted by a microwave antenna expands in an arc as it travels, dissipating the "density" of the energy such that a dish antenna of the same size will receive a weaker and weaker signal as it moves further away (this is the major component of path loss, the "dilution" of the energy over space). A passive repeater employs a reflecting surface which is quite large, larger than practical antennas, and so it "collects" a large cross section of that energy for reemission.
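
To put rough numbers on that dilution: the standard free space path loss formula says the loss grows with the square of both distance and frequency. Here is a minimal sketch, a generic formula rather than anything specific to Microflect's paths, with distance in kilometers and frequency in GHz:

// Free space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
public class FreeSpacePathLoss {
    static double fsplDb(double distanceKm, double frequencyGHz) {
        return 20 * Math.log10(distanceKm) + 20 * Math.log10(frequencyGHz) + 92.45;
    }

    public static void main(String[] args) {
        // Doubling the distance adds about 6 dB of loss, i.e. one quarter of the received power.
        System.out.printf("20 km at 6 GHz: %.1f dB%n", fsplDb(20, 6));   // about 134 dB
        System.out.printf("40 km at 6 GHz: %.1f dB%n", fsplDb(40, 6));   // about 140 dB
    }
}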

Projected aperture is the effective "window" of energy seen by the antenna at the active terminal as it views the passive repeater. The passive repeater also sees the antenna as a "window" of energy. If the two are far enough away from one another, they will appear to each other as essentially point sources.

In practice, a passive repeater functions a bit like an active repeater that collects a signal with a large antenna and then reemits it with a smaller directional antenna. To be quite honest, I still find it a bit challenging to intuit this effect, but the mathematics bear it out as well. Interestingly, the effect only occurs when the passive repeater is far enough from either terminal so as to be usefully approximated as a point source. Microflect refers to this as the far field condition. When the passive repeater is very close to one of the active sites, within the near field, it is more effective to consider the passive repeater as part of the transmitting antenna itself, and disregard it for path loss calculations. This dichotomy between far field and near field behavior is actually quite common in antenna engineering (where an "antenna" is often multiple radiating and nonradiating elements within the near field of each other), but it's yet another of the things that gives antenna design the feeling of a dark art.
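
For a rough sense of where the far field begins, the usual antenna-engineering rule of thumb is a distance of about 2D²/λ, where D is the largest dimension of the aperture; Microflect's own criterion may be stated differently, but the scaling is the same. The figures below are illustrative assumptions, not values from the manual:

// Far field (Fraunhofer) distance rule of thumb: d >= 2*D^2/lambda.
public class FarField {
    public static void main(String[] args) {
        double panelDimensionM = 12.0;          // roughly a 40-foot panel (assumed for illustration)
        double lambdaM = 0.299792458 / 6.0;     // wavelength at 6 GHz, about 5 cm
        double farFieldM = 2 * panelDimensionM * panelDimensionM / lambdaM;
        // A large panel has to be kilometers from the terminal before the far-field picture applies,
        // which is why close-in installations are treated as part of the antenna instead.
        System.out.printf("Far field begins around %.1f km from the panel%n", farFieldM / 1000.0);
    }
}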

Illustration from Microflect manual

One of the most striking things about passive repeaters is their size. As a passive repeater becomes larger, it reflects a larger cross section of the RF energy and thus provides more gain. Much like with dish or horn antennas, the size of a passive repeater can be traded off against transmitter power (and the size of the other antennas involved) to design an economical solution. Microflect offered standard sizes ranging from 8'x10' (gain at around 6.175 GHz: 90.95 dB) to 40'x60' (120.48 dB, after a "rough estimate" reduction of 1 dB for the interference effects that can arise when such a short wavelength reflects off such a large panel and invokes multipath effects).
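
Those figures are roughly what you get from the standard aperture gain formula, applied twice because the flat panel acts as both the receiving and the re-radiating aperture. The sketch below is my own back-of-envelope, not Microflect's published method, and it ignores the cosine projection factor for the reflection angle:

// Approximate two-way gain of a flat passive repeater: twice the one-way aperture gain.
// G_oneway(dB) = 10*log10(4*pi*A/lambda^2), with A the panel area and lambda the wavelength.
public class PassiveRepeaterGain {
    static double gainDb(double areaSqFt, double freqGHz) {
        double areaM2 = areaSqFt * 0.09290304;        // square feet to square meters
        double lambdaM = 0.299792458 / freqGHz;       // wavelength in meters
        double oneWayDb = 10 * Math.log10(4 * Math.PI * areaM2 / (lambdaM * lambdaM));
        return 2 * oneWayDb;
    }

    public static void main(String[] args) {
        System.out.printf("8'x10' at 6.175 GHz:  %.1f dB%n", gainDb(80, 6.175));    // roughly 92 dB
        System.out.printf("40'x60' at 6.175 GHz: %.1f dB%n", gainDb(2400, 6.175));  // roughly 121.5 dB
    }
}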

By comparison, a typical active microwave repeater site might provide a gain of around 140dB---and we must bear in mind that dB is a logarithmic unit, so the difference between 121 and 140 is bigger than it sounds. Still, there's a reason that logarithms are used when discussing radio paths... in practice, it is orders of magnitude that make the difference in reliable reception. The reduction in gain from an active repeater to a passive repeater can be made up for with higher-gain terminal antennas and more powerful transmitters. Given that the terminal sites are often at far more convenient locations than the passive repeater, that tradeoff can be well worth it.
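
To make the gap concrete, converting the dB difference back into a power ratio shows how much ground those bigger antennas and transmitters have to make up:

// A decibel difference is 10*log10 of a power ratio, so 19 dB is nearly a factor of 80.
public class DecibelRatio {
    public static void main(String[] args) {
        double gapDb = 140 - 121;                   // active repeater vs. large passive, from above
        double ratio = Math.pow(10, gapDb / 10.0);
        System.out.printf("%.0f dB gap = %.0fx in power%n", gapDb, ratio);  // about 79x
    }
}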

Keep in mind that, as Microflect emphasizes, passive repeaters require no power and very little ("virtually no") maintenance. Microflect passive repeaters were manufactured in sections that bolted together in the field, and the support structures provided for fine adjustment of the panel alignment after mounting. These features made it possible to install passive repeaters by helicopter onto simple site-built foundations, and many are found on mountainsides that are difficult to reach even on foot. Even in less difficult locations, these advantages made passive repeaters less expensive to install and operate than active repeaters. Even when the repeater site was readily accessible, passives were often selected simply for cost savings.

Let's consider some examples of passive repeater installations. Microflect was born of the power industry, and electrical generators and utilities remained one of their best customers. Even today, you can find passive repeaters at many hydroelectric dams. There is a practical need to communicate by telephone between a dispatch center (often at the utility's city headquarters) and the operators in the dam's powerhouse, but the powerhouse is at the base of the dam, often in a canyon where microwave signals are completely blocked. A passive repeater set on the canyon rim, at an angle downwards, solves the problem by redirecting the signal from horizontal to vertical. Such an installation can be seen, for example, at the Hoover Dam. In some sense, these passive repeaters "relocate" the radio equipment from the canyon rim (where the desirable signal path is located) to a more convenient location with the other powerhouse equipment. Because of the short distance from the powerhouse to the repeater, these passives were usually small.

This idea can be extended to relocating en-route repeaters to a more serviceable site. In Glacier National Park, Mountain States Telephone and Telegraph installed a telephone system to serve various small towns and National Park Service sites. Glacier is incredibly mountainous, with only narrow valleys and passes. The only points with long sight ranges tend to be very inaccessible. Mt. Furlong provided ideal line of sight to East Glacier and Essex along highway 2, but it would have been extremely challenging to install and maintain a microwave site on the steep peak. Instead, two passive repeaters were installed near the mountaintop, redirecting the signals from those two destinations to an active repeater installed downslope near the highway and railroad.

This example raises another advantage of passive repeaters: their reduced environmental impact, something that Microflect emphasized as the environmental movement of the 1970s made agencies like the Forest Service (which controlled many of the most appealing mountaintop radio sites) less willing to grant permits that would lead to extensive environmental disruption. Construction by helicopter and the lack of a need for power meant that passive repeaters could be installed without extensive clearing of trees for roads and power line rights of way. They eliminated the persistent problem of leakage from standby generator fuel tanks. Despite their large size, passive repeaters could be camouflaged. Many in national forests were painted green to make them less conspicuous. And while they did have a large surface area, Microflect argued that since they could be installed on slopes rather than requiring a large leveled area, passive repeaters would often fall below the ridge or treeline behind them. This made them less visually conspicuous than a traditional active repeater site that would require a tower. Indeed, passive repeaters are only rarely found on towers, with most elevated off the ground only far enough for the bottom edge to be free of undergrowth and snow.

Other passive repeater installations were less a result of exceptionally difficult terrain and more a simple cost optimization. In rural Nevada, Nevada Bell and a dozen independents and coops faced the challenge of connecting small towns with ridges between them. The need for an active repeater at the top of each ridge, even for short routes, made these rural lines excessively expensive. Instead, such towns were linked with dual passive repeaters on the ridge in a "straight through" configuration, allowing microwave antennas at the towns' existing telephone exchange buildings to reach each other. This was the case with the installation I photographed above Pioche. I have been frustratingly unable to confirm the original use of these repeaters, but from context they were likely installed by the Lincoln County Telephone System to link their "hub" microwave site at Mt. Wilson (with direct sight to several towns) to their site near Caliente.

The Microflect manual describes, as an example, a very similar installation connecting Elko to Carlin. Two 20'x32' passive repeaters on a ridge between the two (unfortunately since demolished) provided a direct connection between the two telephone exchanges.

As an example of a typical use, it might be interesting to look at the manual's calculations for this route. From Elko to the repeaters is 13.73 miles, the repeaters are close enough to each other as to be in near field (and so considered as a single antenna system), and from the repeaters to Carlin is 6.71 miles. The first repeater reflects the signal at a 68 degree angle, then the second reflects it back at a 45 degree angle, for a net change in direction of 23 degrees---a mostly straight route. The transmitter produces 33.0 dBm, both antennas provide a 34.5 dB gain, and the passive repeater assembly provides 88 dB gain (this calculated basically by consulting a table in the manual). That means there is 190 dB of gain in the total system. The 6.71 and 13.73 mile paths add up to 244 dB of free space path loss, and Microflect throws in a few more dB of loss to account for connectors and cables and the less than ideal performance of the double passive repeater. The net result is a received signal of -58 dBm, which is plenty acceptable for a 72-channel voice carrier system. This is all done at a significantly lower price than the construction of a full radio site on the ridge [1].
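
Stringing the quoted figures together as a link budget makes the arithmetic easy to check. This simply restates the manual's numbers; the lumped miscellaneous loss is my assumption, chosen to land on the quoted result:

// Elko-Carlin link budget: transmit power plus gains, minus free space and miscellaneous losses.
public class ElkoCarlinLinkBudget {
    public static void main(String[] args) {
        double txPowerDbm    = 33.0;    // transmitter output
        double antennaGainDb = 34.5;    // each terminal antenna (two of them)
        double passiveGainDb = 88.0;    // double passive repeater, per the manual's table
        double pathLossDb    = 244.0;   // free space loss over the 13.73 and 6.71 mile paths
        double miscLossDb    = 4.0;     // connectors, cables, double-passive inefficiency (assumed)

        double receivedDbm = txPowerDbm + 2 * antennaGainDb + passiveGainDb - pathLossDb - miscLossDb;
        System.out.printf("Received signal: %.0f dBm%n", receivedDbm);  // -58 dBm
    }
}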

The combination of relocating radio equipment to a more convenient location and simply saving money leads to one of the iconic applications of passive repeaters, the "periscope" or "flyswatter" antenna. Microwave antennas of the 1960s were still quite large and heavy, and most were pressurized. You needed a sturdy tower to support one, and then a way to get up the tower for regular maintenance. This led to most AT&T microwave sites using short, squat square towers, often with surprisingly convenient staircases to access the antenna decks. In areas where a very tall tower was needed, it might just not be practical to build one strong enough. You could often dodge the problem by putting the site up a hill, but that wasn't always possible, and besides, good hilltop sites that weren't already taken became harder to find.

When Western Union built out their microwave network, they widely adopted the flyswatter antenna as an optimization. Here's how it works: the actual microwave antenna is installed directly on the roof of the equipment building facing up. Only short waveguides are needed, weight isn't an issue, and technicians can conveniently service the antenna without even fall protection. Then, at the top of a tall guyed lattice tower similar to an AM mast, a passive repeater is installed at a 45 degree angle to the ground, redirecting the signal from the rooftop antenna to the horizontal. The passive repeater is much lighter than the antenna, allowing for a thinner tower, and will rarely if ever need service. Western Union often employed two side-by-side lattice towers with a "crossbar" between them at the top for convenient mounting of reflectors each direction, and similar towers were used in some other installations such as the FAA's radar data links. Some of these towers are still in use, although generally with modern lightweight drum antennas replacing the reflectors.

Passive repeater at Pioche

Passive microwave repeaters experienced their peak popularity during the 1960s and 1970s, as the technology matured and communications infrastructure proliferated. Microflect manufactured thousands of units from their new, larger warehouse, across the street from their old hangar on McNary Field. Microflect's customer list grew to include just about every entity in the Bell System, from Long Lines to Western Electric to nearly all of the BOCs. The list includes GTE, dozens of smaller independent telephone companies, most of the nation's major railroads, and electrical utilities from the original Montana Power to the Tennessee Valley Authority. Microflect repeaters were used by ITT Arctic Services and RCA Alascom in the far north, and overseas by oil companies and telecoms on islands and in mountainous northern Europe.

In Hawaii, a single passive repeater dodged a mountain to connect Lanai City telephones to the Hawaii Telephone Company network at Tantalus on Oahu---nearly 70 miles in one jump. In Nevada, six passive repeaters joined two active sites to connect six substations to the Sierra Pacific Power Company's control center in Reno. Jamaica's first high-capacity telephone network involved 11 passive repeaters, one as large as 40'x60'.

The Rocky Mountains are still dotted with passive repeaters, structures that are sometimes hard to spot but seem to loom over the forest once noticed. In Seligman, AZ, a sun-faded passive repeater looks over the cemetery. BC Telephone installed passive repeaters to phase out active sites that were inaccessible for maintenance during the winter. Passive repeaters were, it turns out, quite common---and yet they are little known today.

First, it cannot be ignored that passive repeaters are most common in areas where communications infrastructure was built post-1960 through difficult terrain. In North America, this means mostly the West [2], far away from the Eastern cities where we think of telephone history being concentrated. Second, the days of passive repeaters were relatively short. After widespread adoption in the '60s, fiber optics began to cut into microwave networks during the '80s and rendered microwave long-distance links largely obsolete by the late '90s. Considerable improvements in cable-laying equipment, not to mention the lighter and more durable cables, made fiber optics easier to install in difficult terrain than coaxial had ever been.

Besides, during the 1990s, more widespread electrical infrastructure, miniaturization of radio equipment, and practical photovoltaic solar systems all combined to make active repeaters easier to install. Today, active repeater systems installed by helicopter with independent power supplies are not that unusual, supporting cellular service in the Mojave Desert, for example. Most passive repeaters have been obsoleted by changes in communications networks and technologies. Satellite communications offer an even more cost effective option for the most difficult installations, and there really aren't that many places left that a small active microwave site can't be installed.

Moreover, little has been done to preserve the history of passive repeaters. In the wake of the 2015 Wired article on the Long Lines network, considerable enthusiasm has been directed towards former AT&T microwave stations, which were mostly preserved by their haphazard transfer to companies like American Tower. Passive repeaters, lacking even the minimal commercial potential of old AT&T sites, were mostly abandoned in place. Because many stand in national forests and other resource management areas, a good number have been demolished as part of site restoration. In 2019, a historic resources report was written on the Bonneville Power Administration's extensive microwave network. It was prepared to address the responsibility that federal agencies have for historical preservation under the National Historic Preservation Act and National Environmental Policy Act, policies intended to ensure that at least the government takes measures to preserve history before demolishing artifacts. The report reads: "Due to their limited features, passive repeaters are not considered historic resources, and are not evaluated as part of this study."

In 1995, Valmont Industries acquired Microflect. Valmont is known mostly for their agricultural products, including center-pivot irrigation systems, but they had expanded their agricultural windmill business into a general infrastructure division that manufactured radio masts and communication towers. For a time, Valmont continued to manufacture passive repeaters as Valmont Microflect, but business seems to have dried up.

Today, Valmont Structures manufactures modular telecom towers from their facility across the street from McNary Field in Salem, Oregon. A Salem local, descended from early Microflect employees, once shared a set of photos on Facebook: a beat-up hangar with a sign reading "Aircraft Repair Center," and in front of it, stacks of aluminum panel sections. Microflect workers erecting a passive repeater in front of a Douglas A-26. Rows of reflector sections beside a Shell aviation fuel station. George Kreitzberg died in 2004, James in 2017. As of 2025, Valmont no longer manufactures passive repeaters.

Illustration from Microflect manual

Postscript

If you are interested in the history of passive repeaters, there are a few useful tips I can give you.

  • Nearly all passive repeaters in North America were built by Microflect, so they have a very consistent design. Locals sometimes confuse passive repeaters with old billboards or even drive-in theater screens; the clearest way to differentiate them is that passive repeaters have a face made up of aluminum modules with deep sidewalls for rigidity and flatness. Take a look at the Microflect manual for many photos.
  • Because passive repeaters are passive, they do not require a radio license proper. However, for site-based microwave licenses, the FCC does require that passive repeaters be included in paths (i.e. a license will be for an active site but with a passive repeater as the location at the other end of the path). These "other location" entries often have names ending in "PR" and their type set to "Passive Repeater."
  • I don't have any straight answer on whether or not any passive repeaters are still in use. It has likely become very rare but there are probably still examples. Two sources suggest that Rachel, NV still relies on a passive repeater for telephone and DSL, but I'm pretty sure this hasn't been true for some years (I can't find any license covering it). I have so far found one active site-based microwave license covering a passive repeater, but it serves a mine that has been closed since the 1980s and I suspect the license has only been renewed due to a second, different path that does not involve a passive. A reader let me know that Industry Canada has some 80 passive repeaters licensed, but I do not know how many (if any) are in active use.
  • For the sake of simplicity I have used "passive repeater" here to refer to microwave reflectors only, but the same term is also used for arrangements of two antennas connected back-to-back. These are much more common in VHF/UHF than in the microwave, although microwave passive repeaters of two parabolic antennas have been used in limited cases.
  • Microflect dominated the US and European market for passive repeaters, but the technology was also used in the Soviet Union, seemingly around the same time. I do not know where it was developed first, or whether it was a case of independent invention. The Soviet examples I have seen use a noticeably different support structure from Microflect, and seem to have been engineered for helicopter hoisting in complete form rather than in parts. Passive repeaters proved very useful in the arctic and so I would assume that the Soviet Union installed quite a few.
  • Most passive repeaters were installed by "classic communications organizations," meaning telephone companies, power utilities, and railroads---industries that used long-distance communications systems since the turn of the century. I have heard of one passive repeater installed by a television studio for an STL link, and there might be others, but I don't think it was common.

[1] If you find these dB gain/loss calculations confusing, you are not alone. It is deceptively simple in a way that was hard for me to learn, and perhaps I will devote an article to it one day.

[2] Although not exclusively, with installations in places like Vermont and Newfoundland where similar constraints applied.

The Modern Job Hunt: Part 1

Ellis knew she needed a walk after she hurried off of Zoom at the end of the meeting to avoid sobbing in front of the group.

She'd just been attending a free online seminar regarding safe job hunting on the Internet. Having been searching since the end of January, Ellis had already picked up plenty of first-hand experience with the modern job market, one rejection at a time. She thought she'd attend the seminar just to see if there were any additional things she wasn't aware of. The seminar had gone well, good information presented in a clear and engaging way. But by the end of it, Ellis was feeling bleak. Goodness gracious, she'd already been slogging through months of this. Hundreds of job applications with nothing to show for it. All of the scams out there, all of the bad actors preying on people desperate for their and their loved ones' survival!

Whiteboard - Job Search Process

Ellis' childhood had been plagued with anxiety and depression. It was only as an adult that she'd learned any tricks for coping with them. These tricks had helped her avoid spiraling into full-on depression for the past several years. One such trick was to stop and notice whenever those first feelings hit. Recognize them, feel them, and then respond constructively.

First, a walk. Going out where there were trees and sunshine: Ellis considered this "garbage collection" for her brain. So she stepped out the front door and started down a tree-lined path near her house, holding on to that bleak feeling. She was well aware that if she didn't address it, it would take root and grow into hopelessness, self-loathing, fear of the future. It would paralyze her, leave her curled up on the couch doing nothing. And it would all happen without any words issuing from her inner voice. That was the most insidious thing. It happened way down deep in a place where there were no words at all.

Once she returned home, Ellis forced herself to sit down with a notebook and pencil and think very hard about what was bothering her. She wrote down each sentiment:

  • This job search is a hopeless, unending slog!
  • No one wants to hire me. There must be something wrong with me!
  • This is the most brutal job search environment I've ever dealt with. There are new scams every day. Then add AI to every aspect until I want to vomit.

This was the first step of a reframing technique she'd just read about in the book Right Kind of Wrong by Amy Edmondson. With the words out, it was possible to look at each statement and determine whether it was rational or irrational, constructive or harmful. Each statement could be replaced with something better.

Ellis proceeded step by step through the list.

  • Yes, this will end. Everything ends.
  • There's nothing wrong with me. Most businesses are swamped with applications. There's a good chance mine aren't even being looked at before they're being auto-rejected. Remember the growth mindset you learned from Carol Dweck. Each application and interview is giving me experience and making me a better candidate.
  • This job market is a novel context that changes every day. That means failure is not only inevitable, it's the only way forward.

Ellis realized that her job hunt was very much like a search algorithm trying to find a path through a maze. When the algorithm encountered a dead end, did it deserve blame? Was it an occasion for shame, embarrassment, and despair? Of course not. Simply backtrack and keep going with the knowledge gained.

Yes, there was truth to the fact that this was the toughest job market Ellis had ever experienced. Therefore, taking a note from Viktor Frankl, she spent a moment reimagining the struggle in a way that made it meaningful to her. Ellis began viewing her job hunt in this dangerous market, her gradual accumulation of survival information, as an act of resistance against it. She now hoped to write all about her experience once she was on the other side, in case her advice might help even one other person in her situation save time and frustration.

While unemployed, she also had the opportunity to employ the search algorithm against entirely new mazes. Could Ellis expand her freelance writing into a sustainable gig, for instance? That would mean exploring all the different ways to be a freelance writer, something Ellis was now curious and eager to do.


Best of…: Classic WTF: We Are Not Meatbots!

Today is Labor Day in the US, a day when we celebrate workers. Well, some of us. This story from the archives is one of the exceptions. Original. --Remy

Sales, as everyone knows, is the mortal enemy of Development.

Their goals are opposite, their people are opposite, their tactics are opposite. Even their credos - developers "Make a good product" but sales will "Do anything to get that money" - are at complete odds.

The company Jordan worked for made a pseudo-enterprise product responsible for everything e-commerce: contacts, inventory, website, shipping, payment...everything. His responsibilities included the inventory package, overseeing the development team, designing APIs, integration testing, and coordinating with the DBAs and sysadmins...you know, everything. One of his team members implemented a website CMS into the product, letting the website design team ignore the content and focus on making it look good.

Care to guess who was responsible for the site content? If you guessed the VP of Sales, congratulations! You win a noprize.

A couple of months passed without incident. Everything was peachy, in fact...that is, until one fateful day when Jordan showed up to find the forty-person stock-and-shipping department clustered in the parking lot.

Jordan parked, crossed the asphalt, and asked one of the less threatening looking warehouse guys, "What's the problem?"

The reply was swift as the entire group unanimously shouted "YOUR F***ING WEBSITE!" Another worker added, "You guys in EYE TEE are so far removed from real life out here. We do REAL WORK, what you guys do from behind your desks?"

Jordan was dumbfounded. What brought this on? For a moment he considered defending his and his team's honor, but decided it wouldn't accomplish much besides getting his face rearranged, and instead replied with a meek "Sure, just let me check into this..." before quickly diving into the nearest entry door.

It didn't take long for Jordan to ascertain that the issue wasn't that the website was down, but that the content of one page in particular, the "About Us" page, had upset the hardworking staff who accomplished what the company actually promised: stocking and shipping the products that they sold on their clients' websites.

After an hour of mediation, it was discovered that the VP of Sales, in a strikingly-insensitive-even-for-him moment, had referred to the warehouse staff as "meatbots." The lively folk who staffed the shipping and stocking departments naturally felt disrespected by being reduced to some stupid sci-fi cloning trope nomenclature. The VP's excuse was simply that he had drunk a couple of beers while he wrote the page text for the website. Oops!

Remarkably, the company (which Jordan left some time later for unrelated reasons) eventually caught up on the backlog of outgoing orders. It took a complete warehouse staff replacement, but they did catch up. Naturally, the VP of Sales is still there, with an even more impressive title.


photo credit: RTD Photography via photopin cc


Error'd: Scamproof

Gordon S. is smarter than the machines. "I can only presume the "Fix Now with AI" button adds some mistakes in order to fix the lack of needed fixes."

"Sorry, repost with the link https://www.daybreaker.com/alive/," wrote Michael R.

And yet again from Michael R., following up with a package mistracker. "Poor DHL driver. I hope he will get a break within those 2 days. And why does the van look like he's driving away from me."

Morgan airs some dirty laundry. "After navigating this washing machine app on holiday and validating my credit card against another app I am greeted by this less than helpful message each time. So is OK okay? Or is the Error in error?
Washing machine worked though."

And finally, scamproof Stuart wondered "Maybe the filter saw the word "scam" and immediately filed it into the scam bucket. All scams include the word "scam" in them, right?"


Representative Line: Springs are Optional

Optional types are an attempt to patch the "billion dollar mistake". When you don't know if you have a value or not, you wrap it in an Optional, which ensures that there is a value (the Optional itself), thus avoiding null reference exceptions. Then you can query the Optional to see if there is a real value or not.

This is all fine and good, and can cut down on some bugs. Good implementations are loaded with convenience methods which make it easy to work with the optionals.
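
For contrast, here's a minimal sketch of what those convenience methods look like when java.util.Optional is used as intended; findDisplayName is a hypothetical lookup, not anything from the submitted code:

import java.util.Optional;

public class OptionalExample {
    // Hypothetical lookup that may or may not find a display name for a user id.
    static Optional<String> findDisplayName(int userId) {
        return userId == 42 ? Optional.of("Alice") : Optional.empty();
    }

    public static void main(String[] args) {
        // map and orElse transform the value and supply a default without ever touching null.
        String greeting = findDisplayName(7)
                .map(name -> "Hello, " + name + "!")
                .orElse("Hello, whoever you are!");
        System.out.println(greeting);
    }
}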

But then, you get code like Burgers found. Which just leaves us scratching our heads:

private static final Optional<Boolean> TRUE = Optional.of(Boolean.TRUE);
private static final Optional<Boolean> FALSE = Optional.of(Boolean.FALSE);

Look, any time you're making constants for TRUE or FALSE, something has gone wrong, and yes, I'm including pre-1999 versions of C in this. It's especially telling when you do it in a language that already has such constants, though; at its core, these lines are saying TRUE = TRUE. Yes, we're wrapping the whole thing in an Optional here, which potentially is useful, but if it is useful, something else has gone wrong.

Burgers works for a large insurance company, and writes this about the code:

I was trying to track down a certain piece of code in a Spring web API application when I noticed something curious. It looked like there was a chunk of code implementing an application-specific request filter in business logic, totally ignoring the filter functions offered by the framework itself and while it was not related to the task I was working on, I followed the filter apply call to its declaration. While I cannot supply the entire custom request filter implementation, take these two static declarations as a demonstration of how awful the rest of the class is.

Ah, of course- deep down, someone saw a perfectly functional wheel and said, "I could make one of those myself!" and these lines are representative of the result.


CodeSOD: The HTML Print Value

Matt was handed a pile of VB .Net code, and told, "This is yours now. I'm sorry."

As often happens, previous company leadership said, "Why should I pay top dollar for experienced software engineers when I can hire three kids out of college for the same price?" The experiment ended poorly, and the result was a pile of bad VB code, which Matt now owned.

Here's a little taste:

// SET IN SESSION AND REDIRECT TO PRINT PAGE
Session["PrintValue"] = GenerateHTMLOfItem();
Response.Redirect("PrintItem.aspx", true);

The function name here is accurate. GenerateHTMLOfItem takes an item ID, generates the HTML output we want to use to render the item, and stores it in a session variable. It then forces the browser to redirect to a different page, where that HTML can then be output.

You may note, of course, that GenerateHTMLOfItem doesn't actually take parameters. That's because the item ID got stored in the session variable elsewhere.

Of course, it's the redirect that gets all the attention here. This is a client side redirect, so we generate all the HTML, shove it into a session object, and then send a message to the web browser: "Go look over here". The browser sends a fresh HTTP request for the new page, at which point we render it for them.

The Microsoft documentation has this to add about the use of Response.Redirect(String, Boolean):

Calling Redirect(String) is equivalent to calling Redirect(String, Boolean) with the second parameter set to true. Redirect calls End which throws a ThreadAbortException exception upon completion. This exception has a detrimental effect on Web application performance. Therefore, we recommend that instead of this overload you use the HttpResponse.Redirect(String, Boolean) overload and pass false for the endResponse parameter, and then call the CompleteRequest method. For more information, see the End method.

I love it when I see the developers do a bonus wrong.

Matt had enough fires to put out that fixing this particular disaster wasn't highest on his priority list. For the time being, he could only add this comment:

// SET IN SESSION AND REDIRECT TO PRINT PAGE
// FOR THE LOVE OF GOD, WHY?!?
Session["PrintValue"] = GenerateHTMLOfItem();
Response.Redirect("PrintItem.aspx", true);

Representative Line: Not What They Meant By Watching "AndOr"

Today's awfulness comes from Tim H, and while it's technically more than one line, it's so representative of the code, and so short that I'm going to call this a representative line. Before we get to the code, we need to talk a little history.

Tim's project is roughly three decades old. It's a C++ tool used for a variety of research projects, and this means that 90% of the people who have worked on it are PhD candidates in computer science programs. We all know the rule of CompSci PhDs and programming: they're terrible at it. It's like the old joke about the farmer who, when unable to find an engineer to build him a cow conveyor, asked a physicist. After months of work, the physicist introduced the result: "First, we assume a perfectly spherical cow in a vacuum…"

Now, this particular function has been anonymized, but it's easy to understand what the intent was:

bool isFooOrBar() {
  return isFoo() && isBar();
}

The obvious problem here is the mismatch between the function name and the actual function behavior- it promises an or operation, but does an and, which the astute reader may note are different things.

I think this offers another problem, though. Even if the function name were correct, given the brevity of the body, I'd argue that it actually makes the code less clear. Maybe it's just me, but isFoo() && isBar() is more clear in its intent than isFooAndBar(). There's a cognitive overhead to adding more symbols that would make me reluctant to add such a function.

There may be an argument about code reuse, but it's worth noting: this function is only ever called in one place.

This particular function is not, itself, all that new. Tim writes:

This was committed as new code in 2010 (i.e., not a refactor). I'm not sure if the author changed their mind in the middle of writing the function or just forgot which buttons on the keyboard to press.

More likely, Tim, is that they initially wrote it as an "or" operation and then discovered that they were wrong and it needed to be an "and". Despite the fact that the function was only called in one place, they opted to change the body without changing the name, because they didn't want to "track down all the places it's used". Besides, isn't the point of a function to encapsulate the behavior?


The C-Level Ticket

Everyone's got workplace woes. The clueless manager; the disruptive coworker; the cube walls that loom ever higher as the years pass, trapping whatever's left of your soul.

But sometimes, Satan really leaves his mark on a joint. I worked Tech Support there. This is my story. Who am I? Just call me Anonymous.


It starts at the top. A call came in from Lawrence Gibbs, the CEO himself, telling us that a conference room printer was, quote, "leaking." He didn't explain it, he just hung up. The boss ordered me out immediately, told me to step on it. I ignored the elevator, racing up the staircase floor after floor until I reached the dizzying summit of C-Town.

The Big Combo (1955)

There's less oxygen up there, I'm sure of it. My lungs ached and my head spun as I struggled to catch my breath. The fancy tile and high ceilings made a workaday schmuck like me feel daunted, unwelcome. All the same, I gathered myself and pushed on, if only to learn what on earth "leaking" meant in relation to a printer.

I followed the signs on the wall to the specified conference room. In there, the thermostat had been kicked down into the negatives. The cold cut through every layer of mandated business attire, straight to bone. The scene was thick with milling bystanders who hugged themselves and traded the occasional nervous glance. Gibbs was nowhere to be found.

Remembering my duty, I summoned my nerve. "Tech Support. Where's the printer?" I asked.

Several pointing fingers showed me the way. The large printer/scanner was situated against the far wall, flanking an even more enormous conference table. Upon rounding the table, I was greeted with a grim sight: dozens of sheets of paper strewn about the floor like blood spatter. Everyone was keeping their distance; no one paid me any mind as I knelt to gather the pages. There were 30 in all. Each one was blank on one side, and sported some kind of large, blotchy ring on the other. Lord knew I drank enough java to recognize a coffee mug stain when I saw one, but these weren't actual stains. They were printouts of stains.

The printer was plugged in. No sign of foul play. As I knelt there, unseen and unheeded, I clutched the ruined papers to my chest. Someone had wasted a tree and a good bit of toner, and for what? How'd it go down? Surely Gibbs knew more than he'd let on. The thought of seeking him out, demanding answers, set my heart to pounding. It was no good, I knew. He'd play coy all day and hand me my pink slip if I pushed too hard. As much as I wanted the truth, I had a stack of unpaid bills at home almost as thick as the one in my arms. I had to come up with something else.

There had to be witnesses among the bystanders. I stood up and glanced among them, seeking out any who would return eye contact. There: a woman who looked every bit as polished as everyone else. But for once, I got the feeling that what lay beneath the facade wasn't rotten.

With my eyes, I pleaded for answers.

Not here, her gaze pleaded back.

I was getting somewhere, I just had to arrange for some privacy. I hurried around the table again and weaved through bystanders toward the exit, hoping to beat it out of that icebox unnoticed. When I reached the threshold, I spotted Gibbs charging up the corridor, smoldering with entitlement. "Where the hell is Tech Support?!"

I froze a good distance away from the oncoming executive, whose voice I recognized from a thousand corporate presentations. Instead of putting me to sleep this time, it jolted down my spine like lightning. I had to think fast, or I was gonna lose my lead, if not my life.

"I'm right here, sir!" I said. "Be right back! I, uh, just need to find a folder for these papers."

"I've got one in my office."

A woman's voice issued calmly only a few feet behind me. I spun around, and it was her, all right, her demeanor as cool as our surroundings. She nodded my way. "Follow me."

My spirits soared. At that moment, I would've followed her into hell. Turning around, I had the pleasure of seeing Gibbs stop short with a glare of contempt. Then he waved us out of his sight.

Once we were out in the corridor, she took the lead, guiding me through the halls as I marveled at my luck. Eventually, she used her key card on one of the massive oak doors, and in we went.

You could've fit my entire apartment into that office. The place was spotless. Mini-fridge, espresso machine, even couches: none of it looked used. There were a couple of cardboard boxes piled up near her desk, which sat in front of a massive floor-to-ceiling window admitting ample sunlight.

She motioned toward one of the couches, inviting me to sit. I shook my head in reply. I was dying for a cigarette by that point, but I didn't dare light up within this sanctuary. Not sure what to expect next, I played it cautious, hovering close to the exit. "Thanks for the help back there, ma'am."

"Don't mention it." She walked back to her desk, opened up a drawer, and pulled out a brand-new manila folder. Then she returned to conversational distance and proffered it my way. "You're from Tech Support?"

There was pure curiosity in her voice, no disparagement, which was encouraging. I accepted the folder and stuffed the ruined pages inside. "That's right, ma'am."

She shook her head. "Please call me Leila. I started a few weeks ago. I'm the new head of HR."

Human Resources. That acronym, which usually put me on edge, somehow failed to raise my hackles. I'd have to keep vigilant, of course, but so far she seemed surprisingly OK. "Welcome aboard, Leila. I wish we were meeting in better circumstances." Duty beckoned. I hefted the folder. "Printers don't just leak."

"No." Leila glanced askance, grave.

"Tell me what you saw."

"Well ..." She shrugged helplessly. "Whenever Mr. Gibbs gets excited during a meeting, he tends to lean against the printer and rest his coffee mug on top of it. Today, he must've hit the Scan button with his elbow. I saw the scanner go off. It was so bright ..." She trailed off with a pained glance downward.

"I know this is hard," I told her when the silence stretched too long. "Please, continue."

Leila summoned her mettle. "After he leaned on the controls, those pages spilled out of the printer. And then ... then somehow, I have no idea, I swear! Somehow, all those pages were also emailed to me, Mr. Gibbs' assistant, and the entire board of directors!"

The shock hit me first. My eyes went wide and my jaw fell. But then I reminded myself, I'd seen just as crazy and worse as the result of a cat jumping on a keyboard. A feline doesn't know any better. A top-level executive, on the other hand, should know better.

"Sounds to me like the printer's just fine," I spoke with conviction. "What we have here is a CEO who thinks it's OK to treat an expensive piece of office equipment like his own personal fainting couch."

"It's terrible!" Leila's gaze burned with purpose. "I promise, I'll do everything I possibly can to make sure something like this never happens again!"

I smiled a gallows smile. "Not sure what anyone can do to fix this joint, but the offer's appreciated. Thanks again for your help."

Now that I'd seen this glimpse of better things, I selfishly wanted to linger. But it was high time I got outta there. I didn't wanna make her late for some meeting or waste her time. I backed up toward the door on feet that were reluctant to move.

Leila watched me with a look of concern. "Mr. Gibbs was the one who called Tech Support. I can't close your ticket for you; you'll have to get him to do it. What are you going to do?"

She cared. That made leaving even harder. "I dunno yet. I'll think of something."

I turned around, opened the massive door, and put myself on the other side of it in a hurry, using wall signs to backtrack to the conference room. Would our paths ever cross again? Unlikely. Someone like her was sure to get fired, or quit out of frustration, or get corrupted over time.

It was too painful to think about, so I forced myself to focus on the folder of wasted pages in my arms instead. It felt like a mile-long rap sheet. I was dealing with an alleged leader who went so far as to blame the material world around him rather than accept personal responsibility. I'd have to appeal to one or more of the things he actually cared about: himself, his bottom line, his sense of power.

By the time I returned to the conference room to face the CEO, I knew what to tell him. "You're right, sir, there's something very wrong with this printer. We're gonna take it out here and give it a thorough work-up."

That was how I was able to get the printer out of that conference room for good. Once it underwent "inspection" and "testing," it received a new home in a previously unused closet. Whenever Gibbs got to jawing in future meetings, all he could do was lean against the wall. Ticket closed.

Gibbs remained at the top, doing accursed things that trickled down to the roots of his accursed company. But at least from then on, every onboarding slideshow included a photo of one of the coffee ring printouts, with the title Respect the Equipment.

Thanks, Leila. I can live with that.


Error'd: 8 Days a Week

"What word can spell with the letters housucops?" asks Mark R. "Sometimes AI hallucinations can be hard to find. Other times, they just kind of stand out..."

"Do I need more disks?" wonders Gordon "I'm replacing a machine which has only 2 GB of HDD. New one has 2 TB, but that may not be enough. Unless Thunar is lying." It's being replaced by an LLM.

"Greenmobility UX is a nightmare" complains an anonymous reader. "Just like last week's submission, do you want to cancel? Cancel or Leave?" This is not quite as bad as last week's.

Cinephile jeffphi rated this film two thumbs down. "This was a very boring preview, cannot recommend."

Malingering Manuel H. muses "Who doesn't like long weekends? Sometimes, one Sunday per week is just not enough, so just put a second one right after the first." I don't want to wait until Oktober for a second Sunday; hope we get one søøn.


A Countable

Once upon a time, when the Web was young, if you wanted to be a cool kid, you absolutely needed two things on your website: a guestbook for people to sign, and a hit counter showing how many people had visited your Geocities page hosting your Star Trek fan fiction.

These days, we don't see them as often, but companies still like to track the information, especially when it comes to counting downloads. So when Justin started on a new team and saw a download count in their analytics, he didn't think much of it at all. Nor did he think much about it when he saw the download count displayed on the download page.

Another thing that Justin didn't think much about was big piles of commits getting merged in overnight, at least not at first. But each morning, Justin needed to pull in a long litany of changes from a user named "MrStinky". For the first few weeks, Justin was too preoccupied with getting his feet under him, so he didn't think about it too much.

But eventually, he couldn't ignore what he saw in the git logs.

docs: update download count to 51741
docs: update download count to 51740
docs: update download count to 51738

And each commit was exactly what the name implied, a diff like:

- 51740
+ 51741

Each time a user clicked the download link, a ping was sent to their analytics system. Throughout the day, the bot "MrStinky" would query the analytics tool, and create new commits that updated the counter. Overnight, it would bundle those commits into a merge request, approve the request, merge the changes, and then redeploy what was at the tip of main.

"But, WHY?" Justin asked his peers.

One of them just shrugged. "It seemed like the easiest and fastest way at the time?"

"I wanted to wire Mr Stinky up to our content management system's database, but just never got around to it. And this works fine," said another.

Much like the rest of the team, Justin found that there were bigger issues to tackle.
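For what it's worth, here is a minimal sketch of the approach one teammate described but never built: read the live number from the analytics store when someone asks for it, rather than committing every increment to the repository and redeploying. Everything here is hypothetical (the endpoint path, the IAnalyticsClient interface, the stub), since the team's actual stack isn't described; it's written as a C# minimal API purely for illustration.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Register whatever actually talks to the analytics tool; the stub below
// stands in for it so the sketch is self-contained.
builder.Services.AddSingleton<IAnalyticsClient, StubAnalyticsClient>();

var app = builder.Build();

// The analytics system already knows the number, so ask it at request time
// instead of committing every increment to git and redeploying overnight.
app.MapGet("/downloads/count", async (IAnalyticsClient analytics) =>
    Results.Ok(await analytics.GetDownloadCountAsync()));

app.Run();

// Hypothetical interface standing in for "query the analytics tool".
public interface IAnalyticsClient
{
    Task<long> GetDownloadCountAsync();
}

// Stub so the example runs on its own; a real implementation would call
// the analytics system's API.
public sealed class StubAnalyticsClient : IAnalyticsClient
{
    public Task<long> GetDownloadCountAsync() => Task.FromResult(51741L);
}

The counter stops being version-controlled history, which is the point: it was never source code to begin with.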


CodeSOD: Copy of a Copy of a

Jessica recently started at a company still using Windows Forms.

Well, that was a short article. Oh, you want more WTF than that? Sure, we can do that.

As you might imagine, a company that's still using Windows Forms isn't going to upgrade any time soon; they've been using an API that's been in maintenance mode for a decade, so clearly they're happy with it.

But they're not too happy- Jessica was asked to track down a badly performing report. This of course meant wading through a thicket of spaghetti code, pointless singletons, and the general sloppiness that is the code base. Some of the code uses Entity Framework for database access; much of it does not.

While it wasn't the report that Jessica was sent to debug, this method caught her eye:

private Dictionary<long, decimal> GetReportDiscounts(ReportCriteria criteria)
{
    Dictionary<long, decimal> rows = new Dictionary<long, decimal>();

    string query = @"select  ii.IID,
        SUM(CASE WHEN ii.AdjustedTotal IS NULL THEN 
        (ii.UnitPrice * ii.Units)  ELSE
            ii.AdjustedTotal END) as 'Costs'
            from ii
                where ItemType = 3
            group by ii.IID
            ";

    string connectionString = string.Empty;
    using (DataContext db = DataContextFactory.GetInstance<DataContext>())
    {
        connectionString = db.Database.Connection.ConnectionString;
    }

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        using (SqlCommand command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@DateStart", criteria.Period.Value.Min.Value.Date);
            command.Parameters.AddWithValue("@DateEnd", criteria.Period.Value.Max.Value.Date.AddDays(1));
            command.Connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    decimal discount = (decimal)reader["Costs"];
                    long IID = (long)reader["IID"];

                    if (rows.ContainsKey(IID))
                    {
                        rows[IID] += discount;
                    }
                    else
                    {
                        rows.Add(IID, discount);
                    }
                }
            }
        }
    }

    return rows;
}

This code constructs a query, opens a connection, runs the query, and iterates across the results, building a dictionary as its result set. The first thing that leaps out is the redundant aggregation: the C# loop re-groups and re-sums by IID, which is exactly what the query's GROUP BY and SUM already did.

It's also notable that the table they're querying is called ii, which is not a result of anonymization- that's actually what they called it. Then there's the fact that they set parameters on the command, @DateStart and @DateEnd, but the query never uses them. And then there's that magic number 3 in the query, which raises its own set of questions.

Then, right beneath that method was one called GetReportTotals. I won't share it, because it's identical to what's above, with one difference:

            string query = @"
select   ii.IID,
                SUM(CASE WHEN ii.AdjustedTotal IS NULL THEN 
                (ii.UnitPrice * ii.Units)  ELSE
                 ii.AdjustedTotal END)  as 'Costs' from ii
				  where  itemtype = 0 
				 group by iid
";

The magic number is now zero.

So, clearly we're in the world of copy/paste programming, but this raises the question: which came first, the 0 or the 3? The answer is neither. GetCancelledInvoices came first.

private List<ReportDataRow> GetCancelledInvoices(ReportCriteria criteria, Dictionary<long, string> dictOfInfo)
{
    List<ReportDataRow> rows = new List<ReportDataRow>();

    string fCriteriaName = "All";

    string query = @"select 
        A long query that could easily be done in EF, or at worst a stored procedure or view. Does actually use the associated parameters";


    string connectionString = string.Empty;
    using (DataContext db = DataContextFactory.GetInstance<DataContext>())
    {
        connectionString = db.Database.Connection.ConnectionString;
    }

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        using (SqlCommand command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@DateStart", criteria.Period.Value.Min.Value.Date);
            command.Parameters.AddWithValue("@DateEnd", criteria.Period.Value.Max.Value.Date.AddDays(1));
            command.Connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    long ID = (long)reader["ID"];
                    decimal costs = (decimal)reader["Costs"];
                    string mNumber = (string)reader["MNumber"];
                    string mName = (string)reader["MName"];
                    DateTime idate = (DateTime)reader["IDate"];
                    DateTime lastUpdatedOn = (DateTime)reader["LastUpdatedOn"];
                    string iNumber = reader["INumber"] is DBNull ? string.Empty : (string)reader["INumber"];
                    long fId = (long)reader["FID"];
                    string empName = (string)reader["EmpName"];
                    string empNumber = reader["EmpNumber"] is DBNull ? string.Empty : (string)reader["empNumber"];
                    long mId = (long)reader["MID"];

                    string cName = dictOfInfo[mId];

                    if (criteria.EmployeeID.HasValue && fId != criteria.EmployeeID.Value)
                    {
                        continue;
                    }

                    rows.Add(new ReportDataRow()
                    {
                        CName = cName,
                        IID = ID,
                        Costs = costs * -1, //Cancelled i - minus PC
                        TimedValue = 0,
                        MNumber = mNumber,
                        MName = mName,
                        BillDate = lastUpdatedOn,
                        BillNumber = iNumber + "A",
                        FID = fId,
                        EmployeeName = empName,
                        EmployeeNumber = empNumber
                    });
                }
            }
        }
    }


    return rows;
}

This is the original version of the method. We can infer this because it actually uses the parameters of DateStart and DateEnd. Everything else just copy/pasted this method and stripped out bits until it worked. There are more children of this method, each an ugly baby of its own, but all alike in their ugliness.

It's also worth noting that the original version does its filtering after pulling data back from the database, instead of putting those criteria in the WHERE clause.

As for Jessica's poor performing report, it wasn't one of these methods. It was, however, another variation on "run a query, then filter, sort, and summarize in C#". By simply rewriting it as a SQL query in a stored procedure that leveraged indexes, performance improved significantly.
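For illustration only, here is a hedged sketch of what pushing that work into the database might look like for the GetReportDiscounts method above. The date column name (ItemDate) and the _connectionString field are hypothetical, since the real schema isn't shown; the point is just that the parameters actually get used and the grouping happens exactly once, in SQL.

private Dictionary<long, decimal> GetReportDiscounts(ReportCriteria criteria)
{
    var rows = new Dictionary<long, decimal>();

    // The date filter and the grouping both live in SQL now.
    const string query = @"select ii.IID,
            SUM(CASE WHEN ii.AdjustedTotal IS NULL
                     THEN ii.UnitPrice * ii.Units
                     ELSE ii.AdjustedTotal END) as Costs
        from ii
        where ii.ItemType = 3
          and ii.ItemDate >= @DateStart   -- hypothetical date column
          and ii.ItemDate <  @DateEnd
        group by ii.IID";

    using (SqlConnection connection = new SqlConnection(_connectionString))
    using (SqlCommand command = new SqlCommand(query, connection))
    {
        // This time the parameters are actually referenced by the query.
        command.Parameters.AddWithValue("@DateStart", criteria.Period.Value.Min.Value.Date);
        command.Parameters.AddWithValue("@DateEnd", criteria.Period.Value.Max.Value.Date.AddDays(1));
        connection.Open();

        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // GROUP BY guarantees one row per IID, so there's nothing to re-sum.
                rows[(long)reader["IID"]] = (decimal)reader["Costs"];
            }
        }
    }

    return rows;
}

A stored procedure plus sensible indexes is the same idea taken one step further, which is roughly the shape of the fix that rescued Jessica's slow report.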


CodeSOD: I Am Not 200

In theory, HTTP status codes should be easy to work with. In the 100s? You're doing some weird stuff and breaking up large requests into multiple sub-requests. 200s? It's all good. 300s? Look over there. 400s? What the hell are you trying to do? 500s? What the hell is the server trying to do?

This doesn't mean people don't endlessly find ways to make it hard. LinkedIn, for example, apparently likes to send 999s if you try and view a page without being logged in. Shopify has invented a few. Apache has added a 218 "This is Fine". And then there's WebDAV, which not only adds new status codes, but adds a whole bunch of new verbs to HTTP requests.

Francesco D sends us a "clever" attempt at handling status codes.

    try {
      HttpRequest.Builder localVarRequestBuilder = {{operationId}}RequestBuilder({{#allParams}}{{paramName}}{{^-last}}, {{/-last}}{{/allParams}}{{#hasParams}}, {{/hasParams}}headers);
      return memberVarHttpClient.sendAsync(
          localVarRequestBuilder.build(),
          HttpResponse.BodyHandlers.ofString()).thenComposeAsync(localVarResponse -> {
            if (localVarResponse.statusCode()/ 100 != 2) {
              return CompletableFuture.failedFuture(getApiException("{{operationId}}", localVarResponse));
            }
            {{#returnType}}
            try {
              String responseBody = localVarResponse.body();
              return CompletableFuture.completedFuture(
                  responseBody == null || responseBody.isBlank() ? null : memberVarObjectMapper.readValue(responseBody, new TypeReference<{{{returnType}}}>() {})
              );
            } catch (IOException e) {
              return CompletableFuture.failedFuture(new ApiException(e));
            }
            {{/returnType}}
            {{^returnType}}
            return CompletableFuture.completedFuture(null);
            {{/returnType}}
      });
    }

Okay, before we get to the status code nonsense, I first have to whine about this templating language. I'm generally of the mind that generated code is a sign of bad abstractions, especially if we're talking about using a text templating engine, like this. I'm fine with hygienic macros, and even C++'s templating system for code generation, because they exist within the language. But fine, that's just my "ok boomer" opinion, so let's get into the real meat of it, which is this line:

localVarResponse.statusCode()/ 100 != 2

"Hey," some developer said, "since success is in the 200 range, I'll just divide by 100, and check if it's a 2, helpfully truncating the details." Which is fine and good, except neither 100s nor 300s represent a true error, especially because if the local client is doing caching, a 304 tells us that we can used the cached version.

For Francesco, treating 300s as an error created a slew of failed requests which shouldn't have failed. It wasn't too difficult to detect- they were at least logging the entire response- but it was frustrating, if only because it seems like someone was more interested in being clever with math than actually writing good software.
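For contrast, here is a minimal sketch of a less clever check, written in C# for illustration rather than in the generator's Java template. The idea: the 2xx range is success, a 304 means reuse what you already have, and everything else is worth surfacing. The method and parameter names are hypothetical.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class StatusHandling
{
    // Fetches a URL, reusing a previously cached body when the server
    // answers 304 Not Modified.
    public static async Task<string> FetchAsync(HttpClient client, string url, string cachedBody)
    {
        using HttpResponseMessage response = await client.GetAsync(url);

        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            // 3xx isn't a failure; a 304 explicitly means "use what you already have".
            return cachedBody;
        }

        if (response.IsSuccessStatusCode)
        {
            // Covers the whole 2xx range, no division tricks required.
            return await response.Content.ReadAsStringAsync();
        }

        // 4xx and 5xx really are errors worth raising.
        throw new HttpRequestException($"Request to {url} failed with status {(int)response.StatusCode}.");
    }
}

(A real caller would also send If-None-Match or If-Modified-Since so the server has a reason to answer with a 304, but that's beside the point here.)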


CodeSOD: Going Crazy

For months, everything at Yusuf's company was fine. Then, suddenly, he came into the office to learn that overnight the log had exploded with thousands of panic messages. No software changes had been pushed, no major configuration changes had been made- just a reboot. What had gone wrong?

This particular function was invoked as part of the application startup:

func (a *App) setupDocDBClient(ctx context.Context) error {
	docdbClient, err := docdb.NewClient(
		ctx,
		a.config.MongoConfig.URI,
		a.config.MongoConfig.Database,
		a.config.MongoConfig.EnableTLS,
	)
	if err != nil {
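		// err is checked, but then thrown away: a failed connection still
		// "succeeds" here, and a.DocDBClient is left nil for everything downstream.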
		return nil
	}

	a.DocDBClient = docdbClient
	return nil
}

This is Go, which passes errors back as part of the return value. You can see an example where docdb.NewClient returns both a client and an err object. At one point in the history of this function, it did the same thing- if connecting to the database failed, it returned the error.

But a few months earlier, an engineer changed it to swallow the error- if an error occurred, it would return nil.

As an organization, they did code reviews. Multiple people looked at this and signed off- or, more likely, multiple people clicked a button to say they'd looked at it, but hadn't.

Most of the time, there weren't any connection issues. But sometimes there were. One reboot had a flaky moment with connecting, and the error was ignored. Later on in execution, downstream modules started failing, which eventually led to a log full of panic level messages.

The change was part of a commit tagged merely: "Refactoring". Something got factored, good and hard, all right.


Error'd: Abort, Cancel, Fail?

low-case jeffphi found "Yep, all kinds of technical errors."


Michael R. reports an off by 900 error.


"It is often said that news slows down in August," notes Stewart , wondering if "perhaps The Times have just given up? Or perhaps one of the biggest media companies just doesn't care about their paying subscribers?"


"Zero is a dangerous idea!" exclaims Ernie in Berkeley .


Daniel D. found one of my unfavorites, calling it "Another classic case of cancel dialog. This time featuring KDE Partition Manager."



Fail? Until next time.

CodeSOD: An Array of Parameters

Andreas found this in a rather large, rather ugly production code base.

private static void LogView(object o)
{
    try
    {
        ArrayList al = (ArrayList)o;
        int pageId = (int)al[0];
        int userId = (int)al[1];

        // ... snipped: Executing a stored procedure that stores the values in the database
    }
    catch (Exception) { }
}

This function accepts an object of any type, except no, it doesn't: it expects that object to be an ArrayList. It then assumes the ArrayList stores its values in a specific order. Note that they couldn't use a type-safe generic collection like List<T> here even if they wanted to- the list (potentially) needs to hold a mix of types.

What they've done here is replace a parameter list with an ArrayList, giving up compile-time type checking for surprising runtime exceptions. And why?

"Well," the culprit explained when Andreas asked about this, "the underlying database may change. And then the function would need to take different parameters. But that could break existing code, so this allows us to add parameters without ever having to change existing code."

"Have you heard of optional arguments?" Andreas asked.

"No, all of our arguments are required. We'll just default the ones that the caller doesn't supply."

And yes, this particular pattern shows up all through the code base. It's "more flexible this way."
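For the record, here is a minimal sketch of the alternative Andreas was getting at: optional parameters let you add arguments later without breaking existing call sites, and every argument keeps its compile-time type. The names here are hypothetical, not from the actual code base.

using System;

internal static class PageAnalytics
{
    // Adding 'referrer' later doesn't break existing callers, and the
    // compiler checks every argument's type.
    public static void LogView(int pageId, int userId, string referrer = "")
    {
        // Stand-in for the stored-procedure call in the original.
        Console.WriteLine($"page={pageId} user={userId} referrer={referrer}");
    }

    private static void Main()
    {
        LogView(42, 7);               // the old two-argument call still compiles
        LogView(42, 7, "newsletter"); // new callers can opt in to the new value
    }
}

Named arguments work the same way if the parameter list keeps growing; none of it requires boxing everything into an ArrayList.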


CodeSOD: Raise VibeError

Ronan works with a vibe coder- an LLM-addicted developer. This is a type of developer that's showing up with increasing frequency. Their common features include: not reading the code the AI generated, not testing the code the AI generated, not understanding the context of the code or how it integrates into the broader program, and absolutely not bothering to follow the company coding standards.

Here's an example of the kind of Python code they were "writing":

if isinstance(o, Test):
    if o.requirement is None:
        logger.error(f"Invalid 'requirement' in Test: {o.key}")
        try:
            raise ValueError("Missing requirement in Test object.")
        except ValueError:
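            # the ValueError raised above is caught right here and discarded,
            # so callers never find out the requirement was missing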
            pass

    if o.title is None:
        logger.error(f"Invalid 'title' in Test: {o.key}")
        try:
            raise ValueError("Missing title in Test object.")
        except ValueError:
            pass

An isinstance check is already a red flag. Even without proper type annotations and type checking (though you should use them), any sort of sane coding is going to avoid situations where your method isn't sure what input it's getting. isinstance isn't a WTF, but it's a hint at something lurking off screen. (Yes, sometimes you do need it, and this may be one of those times, but I doubt it.)

In this case, if the Test object is missing certain fields, we want to log errors about it. That part, honestly, is all fine. There are potentially better ways to express this idea, but the idea is fine.

No, the obvious turd in the punchbowl here is the exception handling. This is pure LLM, in that it's a statistically probable result of telling the LLM "raise an error if the requirement field is missing". The resulting code, however, raises an exception, immediately catches it, and then does nothing with it.

I'd almost think it's a pre-canned snippet that's meant to be filled in, but no- there's no reason a snippet would throw and catch the same error.

Now, in Ronan's case, this has a happy ending: after a few weeks of some pretty miserable collaboration, the new developer got fired. None of "their" code ever got merged in. But they've already got a few thousand AI generated resumes out to new positions…


U.S. Customs Searches of Electronic Devices Rise at Borders

By: Nick Heer

Rajpreet Sahota, CBC News:

U.S. Customs and Border Protection (CBP) has released new data showing a sharp rise in electronic device searches at border crossings.

From April to June alone, CBP conducted 14,899 electronic device searches, up more than 21 per cent from the previous quarter (23 per cent over the same period last year). Most of those were basic searches, but 1,075 were “advanced,” allowing officers to copy and analyze device contents.

U.S. border agents have conducted tens of thousands of searches every year for many years, along a generally increasing trajectory, so this is not necessarily specific to this administration. Unfortunately, as the Electronic Frontier Foundation reminds us, people have few rights at ports of entry, regardless of whether they are a U.S. citizen.

There are no great ways to avoid a civil rights violation, either. As a security expert told the CBC, people with burner devices would be subject to scrutiny because it is obviously not their main device. It stands to reason that someone travelling without any electronic devices at all would also be seen as more suspicious. Encryption is your best bet, but then you may need to have a whole conversation about why all of your devices are encrypted.

The EFF has a pocket guide with your best options.

⌥ Permalink

PetaPixel’s Google Pixel 10 Pro Review

By: Nick Heer

If you, thankfully, missed Google’s Pixel 10 unveiling — and even if you did not — you will surely appreciate PetaPixel’s review of the Pro version of the phone from the perspective of photographers and videographers. This line of phones has long boasted computational photography bonafides over the competition, and I thought this was a good exploration of what is new and not-so-new in this year’s models.

Come for Chris and Jordan; stay for Chris’ “pet” deer.

⌥ Permalink

Typepad Is Shutting Down Next Month

By: Nick Heer

Typepad:

After September 30, 2025, access to Typepad – including account management, blogs, and all associated content – will no longer be available. Your account and all related services will be permanently deactivated.   

I have not thought about Typepad in years, and I am certain I am not alone. That is not a condemnation; Typepad occupies a particular time and place on the web. As with anything hosted, however, users are unfortunately dependent on someone else’s interest in maintaining it.

If you have anything hosted at Typepad, now is a good time to back it up.

⌥ Permalink

Yet Another Article Claiming Music Criticism Lost Its Edge, With a Twist

By: Nick Heer

Kelefa Sanneh, the New Yorker:

[…] In 2018, the social-science blog “Data Colada” looked at Metacritic, a review aggregator, and found that more than four out of five albums released that year had received an average rating of at least seventy points out of a hundred — on the site, albums that score sixty-one or above are colored green, for “good.” Even today, music reviews on Metacritic are almost always green, unlike reviews of films, which are more likely to be yellow, for “mixed/average,” or red, for “bad.” The music site Pitchfork, which was once known for its scabrous reviews, hasn’t handed down a perfectly contemptuous score — 0.0 out of 10 — since 2007 (for “This Is Next,” an inoffensive indie-rock compilation). And, in 2022, decades too late for poor Andrew Ridgeley, Rolling Stone abolished its famous five-star system and installed a milder replacement: a pair of merit badges, “Instant Classic” and “Hear This.”

I have quibbles with this article, which I will get to, but I will front-load this with the twist instead of making you wait — this article is, in effect, Sanneh’s response to himself twenty-one years after popularizing the very concept of poptimism in the New York Times. Sanneh in 2004:

In the end, the problem with rockism isn’t that it’s wrong: all critics are wrong sometimes, and some critics (now doesn’t seem like the right time to name names) are wrong almost all the time. The problem with rockism is that it seems increasingly far removed from the way most people actually listen to music.

Are you really pondering the phony distinction between “great art” and a “guilty pleasure” when you’re humming along to the radio? In an era when listeners routinely — and fearlessly — pick music by putting a 40-gig iPod on shuffle, surely we have more interesting things to worry about than that someone might be lip-synching on “Saturday Night Live” or that some rappers gild their phooey. Good critics are good listeners, and the problem with rockism is that it gets in the way of listening. If you’re waiting for some song that conjures up soul or honesty or grit or rebellion, you might miss out on Ciara’s ecstatic electro-pop, or Alan Jackson’s sly country ballads, or Lloyd Banks’s felonious purr.

Here we are in 2025 and a bunch of the best-reviewed records in recent memory are also some of the most popular. They are well-regarded because critics began to review pop records on the genre’s own terms.

Here is one more bonus twist: the New Yorker article is also preoccupied with criticism of Pitchfork, a fellow Condé Nast publication. This is gestured toward twice in the article. Neither one serves to deflate the discomfort, especially since the second mention is in the context of reduced investment in the site by Condé.

Speaking of Pitchfork, though, the numerical scores of its reviews have led to considerable analysis by the statistics-obsessed. For example, a 2020 analysis of reviews published between 1999 and early 2017 found the median score was 7.03. This is not bad at all, and it suggests the site is most interested in what it considers decent-to-good music, and cannot be bothered to review bad stuff. The researchers also found a decreasing frequency of very negative reviews beginning in about 2010, which fits Sanneh’s thesis. However, it also found fewer extremely high scores. The difference is more subtle — and you should ignore the dot in the “10.0” column because the source data set appears to also contain Pitchfork’s modern reviews of classic records — but notice how many dots are rated above 8.75 from 2004–2009 compared to later years. A similar analysis of reviews from 1999–2021 found a similar convergence toward mediocrity.

As for Metacritic, I had to go and look up the Data Colada article referenced, since the New Yorker does not bother with links. I do not think this piece reinforces Sanneh’s argument very well. What Joe Simmons, its author, attempts to illustrate is that Metacritic skews positive for bands with few aggregated reviews because most music publications are not going to waste time dunking on a nascent band’s early work. I also think Simmons is particularly cruel to a Modern Studies record.

Anecdotally, I do not know that music critics have truly lost their edge. I read and watch a fair amount of music criticism, and I still see a generous number of withering takes. I think music critics, as they become established and busier, recognize they have little time for bad music. Maroon 5 have been a best-selling act for a couple of decades, but Metacritic has aggregated just four reviews of its latest album, because you can just assume it sucks. Your time might be better spent with the great new Water From Your Eyes record.

Even though I am unsure I agree with Sanneh’s conclusion, I think critics should make time and column space for albums they think are bad. Negative reviews are not cruel — or, at least, they should not be — but it is the presence of bad that helps us understand what is good.

⌥ Permalink

The Painful Downfall of Intel

By: Nick Heer

Tripp Mickle and Don Clark, New York Times:

Echoing IBM, Microsoft in 1985 built its Windows software to run on Intel processors. The combination created the “Wintel era,” when the majority of the world’s computers featured Windows software and Intel hardware. Microsoft’s and Intel’s profits soared, turning them into two of the world’s most valuable companies by the mid-1990s. Most of the world’s computers soon featured “Intel Inside” stickers, making the chipmaker a household name.

In 2009, the Obama administration was so troubled by Intel’s dominance in computer chips that it filed a broad antitrust case against the Silicon Valley giant. It was settled the next year with concessions that hardly dented the company’s profits.

This is a gift link because I think this one is particularly worth reading. The headline calls it a “long, painful downfall”, but the remarkable thing about it is that it is short, if anything. Revenue is not always the best proxy for this, but the cracks began to show in the early 2010s when its quarterly growth contracted; a few years of modest growth followed before revenue got clobbered from mid-2020 onward. Every similar company in tech seems to have made a fortune off the combined forces of the covid-19 pandemic and artificial intelligence except Intel.

Tobias Mann, the Register:

For better or worse, the US is now a shareholder in the chipmaker’s success, which makes sense given Intel’s strategic importance to national security. Remember, Intel is the only American manufacturer of leading edge silicon. TSMC and Samsung may be setting up shop in the US, but hell will freeze over before the US military lets either of them fab its most sensitive chips. Uncle Sam awarded Intel $3.2 billion to build that secure enclave for a reason.

Put mildly, The US government needs Intel Foundry and Lip Bu Tan needs Uncle Sam’s cash to make the whole thing work. It just so happens that right now Intel isn’t in a great position to negotiate.

Mann’s skeptical analysis is also worth your time. There is good sense in the U.S. government holding an interest in the success of Intel. Under this president, however, it raises entirely unique questions and concerns.

⌥ Permalink

Tesla Ordered to Pay $200 Million in Punitive Damages Over Fatal Crash

By: Nick Heer

Mary Cunningham, CBS News:

Tesla was found partly liable in a wrongful death case involving the electric vehicle company’s Autopilot system, with a jury awarding the plaintiffs $200 million in punitive damages plus additional money in compensatory damages.

[…]

“What we ultimately learned from that augmented video is that the vehicle 100% knew that it was about to run off the roadway, through a stop sign, through a blinking red light, through a parked car and through a pedestrian, yet did nothing other than shut itself off when the crash was unavoidable,” said Adam Boumel, one of the plaintiffs’ attorneys.

I continue to believe holding manufacturers legally responsible is the correct outcome for failures of autonomous driving technology. Corporations, unlike people, cannot go to jail; the closest thing we have to accountability is punitive damages.

⌥ Permalink

Will Smith’s Concert Crowds Are Real, but A.I. Is Blurring the Lines

By: Nick Heer

Andy Baio:

This minute-long clip of a Will Smith concert is blowing up online for all the wrong reasons, with people accusing him of using AI to generate fake crowds filled with fake fans carrying fake signs. The story’s blown up a bit, with coverage in Rolling Stone, NME, The Independent, and Consequence of Sound.

[…]

But here’s where things get complicated.

The crowds are real. Every person you see in the video above started out as real footage of real fans, pulled from video of multiple Will Smith concerts during his recent European tour.

The lines, in this case, are definitely blurry. This is unlike any previous “is it A.I.?” controversy over crowds I can remember because — and I hope this is more teaser than spoiler — note Baio’s careful word choice in that last quoted paragraph.

⌥ Permalink

Inside the Underground Trade in Flipper Zero Car Attacks

By: Nick Heer

Joseph Cox, 404 Media:

A man holds an orange and white device in his hand, about the size of his palm, with an antenna sticking out. He enters some commands with the built-in buttons, then walks over to a nearby car. At first, its doors are locked, and the man tugs on one of them unsuccessfully. He then pushes a button on the gadget in his hand, and the door now unlocks.

The tech used here is the popular Flipper Zero, an ethical hacker’s swiss army knife, capable of all sorts of things such as WiFi attacks or emulating NFC tags. Now, 404 Media has found an underground trade where much shadier hackers sell extra software and patches for the Flipper Zero to unlock all manner of cars, including models popular in the U.S. The hackers say the tool can be used against Ford, Audi, Volkswagen, Subaru, Hyundai, Kia, and several other brands, including sometimes dozens of specific vehicle models, with no easy fix from car manufacturers.

The Canadian government made headlines last year when it banned the Flipper Zero, only to roll it back in favour of a narrowed approach a month later. That was probably the right call. However, too many — including Hackaday and Flipper itself — were too confident in saying the device could not be used to steal cars. This is demonstrably untrue.

⌥ Permalink

⌥ The U.S.’ Increasing State Involvement in the Tech Industry

By: Nick Heer

The United States government has long had an interest in boosting its high technology sector, with manifold objectives: for soft power, espionage, and financial dominance, at least. It has accomplished this through tax incentives, funding some of the best universities in the world, lax antitrust and privacy enforcement, and — in some cases — direct involvement. The internet began as a Department of Defense project, and the government invests in businesses through firms like In-Q-Tel.

All of this has worked splendidly for them. The world’s technology stack is overwhelmingly U.S.-dependent across the board, from consumers through large businesses and up to governments, even those which are not allies. Apparently, though, it is not enough and the country’s leaders are desperately worried about regulation in Europe and competition from Eastern Asia.

The U.S. Federal Trade Commission:

Federal Trade Commission Chairman Andrew N. Ferguson sent letters today to more than a dozen prominent technology companies reminding them of their obligations to protect the privacy and data security of American consumers despite pressure from foreign governments to weaken such protections. He also warned them that censoring Americans at the behest of foreign powers might violate the law.

[…]

“I am concerned that these actions by foreign powers to impose censorship and weaken end-to-end encryption will erode Americans’ freedoms and subject them to myriad harms, such as surveillance by foreign governments and an increased risk of identity theft and fraud,” Chairman [Andrew] Ferguson wrote.

These letters (PDF) serve as a reminder to, in effect, enforce U.S. digital supremacy around the world. Many of the most popular social networks are U.S.-based and export the country’s interpretation of permissive expression laws around the world, even to countries with different expectations. Occasionally, there will be conflicting policies which may mean country-specific moderation. What Ferguson’s letter appears to be asking is for U.S. companies to be sovereign places for U.S. citizens regardless of where their speech may appear.

The U.S. government is certainly correct to protect the interests of its citizens. But let us not pretend this is not also re-emphasizing the importance to the U.S. government of exporting its speech policy internationally, especially when it fails to adhere to it on its home territory. It is not just the hypocrisy that rankles; it is also the audacity of requiring posts by U.S. users to be treated as a special class, to the extent that E.U. officials enforcing their own laws in their own territory could be subjected to sanctions.

As far as encryption goes, I have yet to see sufficient evidence of a radical departure from previous statements made by this president. When he was running the first time around, he called for an Apple boycott over the company’s refusal to build a special version of iOS to decrypt an iPhone used by a mass shooter. During his first term, Trump demanded Apple decrypt another iPhone in a different mass shooting. After two attempted assassinations last year, Trump once again said Apple should forcibly decrypt the iPhones of those allegedly responsible. It was under his first administration that Apple was dissuaded from launching Advanced Data Protection in the first place. U.S. companies with European divisions recently confirmed they cannot comply with E.U. privacy and security guarantees as they are subject to the provisions of the CLOUD Act enacted during the first Trump administration.

The closest Trump has gotten to changing his stance is in a February interview with the Spectator’s Ben Domenech:

BD: But the problem is he [the British Prime Minister] runs, your vice president obviously eloquently pointed this out in Munich, he runs a nation now that is removing the security helmets on Apple phones so that they can—

DJT: We told them you can’t do this.

BD: Yeah, Tulsi, I saw—

DJT: We actually told him… that’s incredible. That’s something, you know, that you hear about with China.

The red line, it seems, is not at a principled opposition to “removing the security helmet” of encryption, but in the U.K.’s specific legislation. It is a distinction with little difference. The president and U.S. law enforcement want on-demand decryption just as much as their U.K. counterparts and have attempted to legislate similar requirements.

While the U.S. has been reinforcing the supremacy of its tech companies in Europe, it has also been propping them up at home:

Intel Corporation today announced an agreement with the Trump Administration to support the continued expansion of American technology and manufacturing leadership. Under terms of the agreement, the United States government will make an $8.9 billion investment in Intel common stock, reflecting the confidence the Administration has in Intel to advance key national priorities and the critically important role the company plays in expanding the domestic semiconductor industry.

The government’s equity stake will be funded by the remaining $5.7 billion in grants previously awarded, but not yet paid, to Intel under the U.S. CHIPS and Science Act and $3.2 billion awarded to the company as part of the Secure Enclave program. Intel will continue to deliver on its Secure Enclave obligations and reaffirmed its commitment to delivering trusted and secure semiconductors to the U.S. Department of Defense. The $8.9 billion investment is in addition to the $2.2 billion in CHIPS grants Intel has received to date, making for a total investment of $11.1 billion.

Despite its size — 10% of the company, making it the single largest shareholder — this press release says this investment is “a passive ownership, with no Board representation or other governance or information rights”. Even so, this is the U.S. attempting to reassert the once-vaunted position of Intel.

This deal is not as absurd as it seems. It is entirely antithetical to the claimed free market capitalist principles common to both major U.S. political parties but, in particular, espoused by Republicans. It is probably going to be wielded in terrible ways. But I can see at least one defensible reason for the U.S. to treat the integrity of Intel as an urgent issue: geology.

Near the end of Patrick McGee’s “Apple in China” sits a section that will haunt the corners of my brain for a long time. McGee writes that a huge amount of microprocessors — “at least 80 percent of the world’s most advanced chips” — are made by TSMC in Taiwan. There are political concerns with the way China has threatened Taiwan, which can be contained and controlled by humans, and frequent earthquakes, which cannot. Even setting aside questions about control, competition, and China, it makes a lot of sense for there to be more manufacturers of high-performance chips in places with less earthquake potential. (Silicon Valley is also sitting in a geologically risky place. Why do we do this to ourselves?)

At least Intel gets the shine of a Trump co-sign, and when has that ever gone wrong?

Then there are the deals struck with Nvidia and AMD, whereby the U.S. government gets a kickback in exchange for trade. Lauren Hirsch and Maureen Farrell, New York Times:

But some of Mr. Trump’s recent moves appear to be a strong break with historical precedent. In the cases of Nvidia and AMD, the Trump administration has proposed dictating the global market that these chipmakers can have access to. The two companies have promised to give 15 percent of their revenue from China to the U.S. government in order to have the right to sell chips in that country and bypass any future U.S. restrictions.

These moves add up and are, apparently, just the beginning. The U.S. has been a dominant force in high technology in part because of a flywheel effect created by early investments, some of which came from government sources and public institutions. This additional context does not undermine the entrepreneurship that came after, and which has been a proud industry trait. In fact, it demonstrates a benefit of strong institutions.

The rest of the world should see these massive investments as an instruction to build up our own high technology industries. We should not be too proud in Canada to set up Crown corporations that can take this on, and we ought to work with governments elsewhere. We should also not lose sight of the increasing hostility of the U.S. government making these moves to reassert its dominance in the space. We can stop getting steamrolled if we want to, but we really need to want to. We can start small.

Alberta Announces New B.C. Tourism Campaign

By: Nick Heer

Michelle Bellefontaine, CBC News:

“Any publicly funded immunization in B.C. can be provided at no cost to any Canadian travelling within the province,” a statement from the ministry said.

“This includes providing publicly funded COVID-19 vaccine to people of Alberta.”

[…]

Alberta is the only Canadian province that will not provide free universal access to COVID-19 vaccines this fall.

The dummies running our province opened what they called a “vaccine booking system” earlier this month allowing Albertans to “pre-order” vaccines. However, despite these terms having defined meanings, the system did not allow anyone to book a specific day, time, or location to receive the vaccine, nor did it take payments or even show prices. The government’s rationale for this strategy is that it is “intended [to] help reduce waste”.

Now that pricing has been revealed, it sure seems like these dopes want us to have a nice weekend just over the B.C. border. A hotel room for a couple or a family will probably be about the same as the combined vaccination cost. Sure, a couple of meals would cost extra, but it is also a nice weekend away. Sure, it means people who are poor or otherwise unable will likely need to pay the $100 “administrative fee” to get their booster, and it means a whole bunch of pre-ordered vaccines will go to waste thereby undermining the whole point of this exercise. But at least it plays to the anti-vaccine crowd. That is what counts for these jokers.

⌥ Permalink

Jay Blahnik Accused of Creating a Toxic Workplace Culture at Apple

By: Nick Heer

Jane Mundy, writing at the imaginatively named Lawyers and Settlements in December:

A former Apple executive has filed a California labor complaint against Apple and Jay Blahnik, the company’s vice president of fitness technologies. Mandana Mofidi accuses Apple of retaliation after she reported sexual harassment and raised concerns about receiving less pay than her male colleagues.

The Superior Court of California for the County of Los Angeles wants nearly seventeen of the finest United States dollars for a copy of the complaint alone.

Tripp Mickle, New York Times:

But along the way, [Jay] Blahnik created a toxic work environment, said nine current and former employees who worked with or for Mr. Blahnik and spoke about personnel issues on the condition of anonymity. They said Mr. Blahnik, 57, who leads a roughly 100-person division as vice president for fitness technologies, could be verbally abusive, manipulative and inappropriate. His behavior contributed to decisions by more than 10 workers to seek extended mental health or medical leaves of absence since 2022, about 10 percent of the team, these people said.

The behaviours described in this article are deeply unprofessional, at best. It is difficult to square the testimony of a sizeable portion of Blahnik’s team with an internal investigation finding no wrongdoing, but that is what Apple’s spokesperson expects us to believe.

⌥ Permalink

Meta Says Threads Has Over 400 Million Monthly Active Users

By: Nick Heer

Emily Price, Fast Company:

Meta’s Threads is on a roll.

The social networking app is now home to more than 400 million monthly active users, Meta shared with Fast Company on Tuesday. That’s 50 million more than just a few months ago, and a long way from the 175 million it had around its first birthday last summer.

What is even more amazing about this statistic is how non-essential Threads seems to be. I might be in a bubble, but I cannot recall the last time someone sent me a link to a Threads post or mentioned they saw something worthwhile there. I see plenty of screenshots of posts from Bluesky, X, and even Mastodon circulating in various other social networks, but I cannot remember a single one from Threads.

As if to illustrate Threads’ invisibility, Andy Stone, Meta’s communications guy, rebutted a Wall Street Journal story with a couple of posts on X. He has a Threads account, of course, but he posts there only a few times per month.

⌥ Permalink
