A transistor for heat

By: VM
25 August 2025 at 11:49

Quantum technologies and the prospect of advanced, next-generation electronic devices have been maturing at an increasingly rapid pace. Both research groups and governments around the world are investing more attention in this domain.

India for example mooted its National Quantum Mission in 2023 with a decade-long outlay of Rs 6,000 crore. One of the Mission’s goals, in the words of IISER Pune physics professor Umakant Rapol, is “to engineer and utilise the delicate quantum features of photons and subatomic particles to build advanced sensors” for applications in “healthcare, security, and environmental monitoring”.

On the science front, as these technologies become better understood, scientists have been paying increasing attention to managing and controlling heat in them. These technologies often rely on quantum physical phenomena that appear only at extremely low temperatures and are so fragile that even a small amount of stray heat can destabilise them. In these settings, scientists have found that traditional methods of handling heat — mainly by controlling the vibrations of atoms in the devices’ materials — become ineffective.

Instead, scientists have identified a promising alternative: energy transfer through photons, the particles of light. And in this paradigm, instead of simply moving heat from one place to another, scientists have been trying to control and amplify it, much like how transistors and amplifiers handle electrical signals in everyday electronics.

Playing with fire

Central to this effort is the concept of a thermal transistor. This device resembles an electrical transistor but works with heat instead of electrical current. Electrical transistors amplify or switch currents, allowing the complex logic and computation required to power modern computers. Creating similar thermal devices would represent a major advance, especially for technologies that require very precise temperature control. This is particularly true in the sub-kelvin temperature range where many quantum processors and sensors operate.

This circuit diagram depicts an NPN bipolar transistor. When a small voltage is applied between the base and emitter, electrons are injected from the emitter into the base, most of which then sweep across into the collector. The end result is a large current flowing through the collector, controlled by the much smaller current flowing through the base. Credit: Michael9422 (CC BY-SA)

Energy transport at such cryogenic temperatures differs significantly from normal conditions. Below roughly 1 kelvin, atomic vibrations no longer carry most of the heat. Instead, electromagnetic fluctuations — ripples of energy carried by photons — dominate the conduction of heat. Scientists channel these photons through specially designed, lossless wires made of superconducting materials. They keep these wires below their superconducting critical temperatures, allowing only photons to transfer energy between the reservoirs. This arrangement enables careful and precise control of heat flow.

One crucial phenomenon that allows scientists to manipulate heat in this way is negative differential thermal conductance (NDTC). NDTC defies common intuition. Normally, decreasing the temperature difference between two bodies reduces the amount of heat they exchange. This is why a glass of water at 50º C in a room at 25º C will cool faster than a glass of water at 30º C. In NDTC, however, reducing the temperature difference between two connected reservoirs can actually increase the heat flow between them.

NDTC arises from a detailed relationship between temperature and the properties of the material that makes up the reservoirs. When physicists harness NDTC, they can amplify heat signals in a manner similar to how negative electrical resistance powers electrical amplifiers.

A ‘circuit’ for heat

In a new study, researchers from Italy have designed and theoretically modelled a new kind of ‘thermal transistor’ that they have said can actively control and amplify how heat flows at extremely low temperatures for quantum technology applications. Their findings were published recently in the journal Physical Review Applied.

To explore NDTC experimentally, the researchers studied reservoirs made of a disordered semiconductor material that exhibited a transport mechanism called variable range hopping (VRH). An example is neutron-transmutation-doped germanium. In VRH materials, the electrical resistance at low temperatures depends very strongly, sometimes exponentially, on temperature.

This attribute makes it possible to tune their impedance, the property that controls the material’s resistance to energy flow, simply by adjusting their temperature. That is, how well two reservoirs made of VRH materials exchange heat can be controlled by tuning the impedance of the materials, which in turn can be controlled by tuning their temperature.

In the new study, the researchers reported that impedance matching played a key role. When the reservoirs’ impedances matched perfectly (when their temperatures became equal), the efficiency with which they transferred photonic heat reached a peak. As the materials’ temperatures diverged, heat flow dropped. In fact, the researchers wrote that there was a temperature range, especially as the colder reservoir’s temperature rose to approach that of the warmer one, within which the heat flow increased even as the temperature difference shrank. This effect forms the core of NDTC.
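
To get a feel for how these two competing effects produce NDTC, here is a small numerical sketch. It is not the model from the paper: it simply combines the standard single-channel formula for photonic heat flow between two resistors with an illustrative VRH-style resistance law, and the parameters (R0, T0 and the temperatures) are invented for illustration.

```python
# Toy illustration of negative differential thermal conductance (NDTC).
# Not the model from the paper: just the textbook single-channel photonic
# heat-flow formula combined with an illustrative VRH-style resistance law.
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K

def R_vrh(T, R0=1.0, T0=10.0):
    """Illustrative variable-range-hopping resistance, R = R0 * exp(sqrt(T0/T))."""
    return R0 * np.exp(np.sqrt(T0 / T))

def photon_heat_flow(T_hot, T_cold):
    """Net photonic heat flow between two resistors (single-channel limit)."""
    Rh, Rc = R_vrh(T_hot), R_vrh(T_cold)
    matching = 4 * Rh * Rc / (Rh + Rc) ** 2          # peaks at 1 when Rh == Rc
    return matching * np.pi * kB**2 / (12 * hbar) * (T_hot**2 - T_cold**2)

T_hot = 0.200                                # 200 mK source
T_cold = np.linspace(0.020, 0.199, 500)      # sweep the colder reservoir
P = photon_heat_flow(T_hot, T_cold)

peak = T_cold[np.argmax(P)]
print(f"Heat flow peaks at T_cold ~ {peak*1e3:.0f} mK, not at the largest temperature difference.")
# Between ~20 mK and the peak, warming the cold side increases the heat flow
# even though the temperature difference shrinks: that is the NDTC regime.
```

The point of the sketch is only that improved impedance matching, as the two resistances draw closer together, can outpace the shrinking temperature difference over part of the sweep.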

The research team, associated with the NEST initiative at the Istituto Nanoscienze-CNR and Scuola Normale Superiore, both in Pisa in Italy, have proposed a device they call the photonic heat amplifier. They built it using two VRH reservoirs connected by superconducting, lossless wires. One reservoir was kept at a higher temperature and served as the source of heat energy. The other reservoir, called the central island, received heat by exchanging photons with the warmer reservoir.

The proposed device features a central island at temperature T1 that transfers heat currents to various terminals. The tunnel contacts to the drain and gate are positioned at heavily doped regions of the yellow central island, highlighted by a grey etched pattern. Each arrow indicates the positive direction of the heat flux. The substrate is maintained at temperature Tb, the gate at Tg, and the drain at Td. Credit: arXiv:2502.04250v3

The central island was also connected to two additional metallic reservoirs named the “gate” and the “drain”. These points served the same purpose as the control and output terminals in an electrical transistor. The drain stayed cold, allowing the amplified heat signal to exit the system from this point. By adjusting the gate temperature, the team could modulate and even amplify the flow of heat between the source and the drain (see the image above).

To understand and predict the amplifier’s behaviour, the researchers developed mathematical models for all forms of heat transfer within the device. These included photonic currents between VRH reservoirs, electron tunnelling through the gate and drain contacts, and energy lost as vibrations through the device’s substrate.

(Tunnelling is a quantum mechanical phenomenon in which an electron has a small chance of passing through a thin energy barrier instead of having to go over it.)

Raring to go

By carefully selecting the device parameters — including the characteristic temperature of the VRH material, the source temperature, resistances at the gate and drain contacts, the volume of the central island, and geometric factors — the researchers said they could tailor the device for different amplification purposes.

They reported two main operating modes. The first was called ‘current modulation amplifier’. In this configuration, the device amplified small variations in thermal input at the gate: small oscillations in the gate heat current produced much larger oscillations, up to 15 times greater, in the photon current between the source and the central island and in the drain current, according to the paper. This amplification remained efficient down to 20 millikelvin, matching the ultracold conditions required in quantum technologies. The output range of heat current was similarly broad, showing the device’s suitability for amplifying heat signals.

The second mode was called ‘temperature modulation amplifier’. Here, slight changes of only a few millikelvin in the gate temperature, the team wrote, caused the output temperature in the central island to swing by as much as 3.3 times the change in the input. The device could also handle input temperature ranges over 100 millikelvin. This performance reportedly matched or surpassed other temperature amplifiers already reported in the scientific literature. The researchers also noted that this mode could be used to pre-amplify signals in bolometric detectors used in astronomy telescopes.

An important property relevant for practical use is the relaxation time, i.e. how quickly the device returns to its original state after one operation, ready for the next run. In both configurations the amplifier showed relaxation times between microseconds and milliseconds. According to the researchers, this speed resulted from the device’s low thermal mass and efficient heat channels. Such a fast response could make it suitable for detecting and amplifying thermal signals in real time.

The researchers wrote that the amplifier also maintained good linearity and low distortion across various inputs. In other words, the output heat signal changed proportionally to the input heat signal and the device didn’t add unwanted changes, noise or artifacts to the input signal. Its noise-equivalent power values were also found to rival the best available solid-state thermometers, indicating low noise levels.

Approaching the limits

For all these promising results, realising this device involves some significant practical challenges. For instance, NDTC depends heavily on precise impedance matching. Real materials inevitably have imperfections, including those due to imperfect fabrication and environmental fluctuations. Such deviations could lower the device’s heat transfer efficiency and reduce the operational range of NDTC.

The system also depends on the lossless superconducting wires being kept well below their critical temperatures. Achieving and maintaining these ultralow temperatures requires sophisticated and expensive refrigeration infrastructure, which adds to the experimental complexity.

Fabrication also demands very precise doping and finely tuned resistances for the gate and drain terminals. Scaling production to create many devices or arrays poses major technical difficulties. Integrating numerous photonic heat amplifiers into larger thermal circuits risks unwanted thermal crosstalk and signal degradation, a risk compounded by the extremely small heat currents involved.

Furthermore, the fully photonic design offers benefits such as electrical isolation and long-distance thermal connections. However, it also approaches fundamental physical limits. The quantum of thermal conductance caps the maximum possible heat flow through each photonic channel. This limitation could restrict how much power the device is able to handle in some applications.
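
For context, the ceiling in question is the quantum of thermal conductance, a textbook bound rather than a figure from this paper:

\[
G_Q = \frac{\pi k_B^2 T}{6\hbar} \approx 9.5 \times 10^{-13}\ \mathrm{W\,K^{-2}} \times T
\]

At 100 mK, for instance, a single photonic channel can conduct no more than roughly 0.1 pW per kelvin of temperature difference, no matter how well the rest of the device is engineered.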

Then again, many of these challenges are typical of cutting-edge research in quantum devices, and highlight the need for detailed experimental work to realise and integrate photonic heat amplifiers into operational quantum systems.

If they are successfully realised for practical applications, photonic heat amplifiers could transform how scientists manage heat in quantum computing and nanotechnologies that operate near absolute zero. They could pave the way for on-chip heat control, allowing computers to autonomously stabilise their temperature and perform thermal logic operations. Redirecting or harvesting waste heat could also improve efficiency and significantly reduce noise — a critical barrier in ultra-sensitive quantum devices like quantum computers.

Featured image credit: Lucas K./Unsplash.

The Hyperion dispute and chaos in space

By: VM
24 August 2025 at 06:12

I believe my blog’s subscribers did not receive email notifications of some recent posts. If you’re interested, I’ve listed the links to the last eight posts at the bottom of this edition.

When reading around for my piece yesterday on the wavefunctions of quantum mechanics, I stumbled across an old and fascinating debate about Saturn’s moon Hyperion.

The question of how the smooth, classical world around us emerges from the rules of quantum mechanics has haunted physicists for a century. Most of the time the divide seems easy: quantum laws govern atoms and electrons while planets, chairs, and cats are governed by the laws of Newton and Einstein. Yet there are cases where this distinction is not so easy to draw. One of the most surprising examples comes not from a laboratory experiment but from the cosmos.

In the 1990s, Hyperion became the focus of a deep debate about the nature of classicality, one that quickly snowballed into the so-called Hyperion dispute. It showed how different interpretations of quantum theory could lead to apparently contradictory claims, and how those claims can be settled by making their underlying assumptions clear.

Hyperion is not one of Saturn’s best-known moons but it is among the most unusual. Unlike round bodies such as Titan or Enceladus, Hyperion has an irregular shape, resembling a potato more than a sphere. Its surface is pocked by craters and its interior appears porous, almost like a sponge. But the feature that caught physicists’ attention was its rotation. Hyperion does not spin in a steady, predictable way. Instead, it tumbles chaotically. Its orientation changes in an irregular fashion as it orbits Saturn, influenced by the gravitational pulls of Saturn and Titan, which is a moon larger than Mercury.

In physics, chaos does not mean complete disorder. It means a system is sensitive to its initial conditions. For instance, imagine two weather models that start with almost the same initial data: one says the temperature in your locality at 9:00 am is 20.000º C, the other says it’s 20.001º C. That seems like a meaningless difference. But because the atmosphere is chaotic, this difference can grow rapidly. After a few days, the two models may predict very different outcomes: one may show a sunny afternoon and the other, thunderstorms.

This sensitivity to initial conditions is often called the butterfly effect — it’s the idea that the flap of a butterfly’s wings in Brazil might, through a chain of amplifications, eventually influence the formation of a tornado in Canada.

Hyperion behaves in a similar way. A minuscule difference in its initial spin angle or speed grows exponentially with time, making its future orientation unpredictable beyond a few months. In classical mechanics this is chaos; in quantum mechanics, those tiny initial uncertainties are built in by the uncertainty principle, and chaos amplifies them dramatically. As a result, predicting its orientation more than a few months ahead is impossible, even with precise initial data.
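
A toy calculation makes the point concrete. The sketch below is not a model of Hyperion; it simply iterates the chaotic logistic map for two starting values that differ by one part in a billion and watches the difference explode.

```python
# Sensitivity to initial conditions in a simple chaotic map (not a model of
# Hyperion): two trajectories of x -> 4x(1-x) starting a hair apart.
x, y = 0.400000000, 0.400000001          # initial separation: 1e-9

for step in range(1, 41):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")

# The separation grows roughly as exp(lambda * step), with Lyapunov exponent
# lambda = ln 2 for this map, so about 30 steps turn a billionth into an
# order-one difference, after which prediction is hopeless.
```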

To astronomers, this was a striking case of classical chaos. But to a quantum theorist, it raised a deeper question: how does quantum mechanics describe such a macroscopic, chaotic system?

Why Hyperion interested quantum physicists is rooted in that core feature of quantum theory: the wavefunction. A quantum particle is described by a wavefunction, which encodes the probabilities of finding it in different places or states. A key property of wavefunctions is that they spread over time. A sharply localised particle will gradually smear out, with a nonzero probability of it being found over an expanding region of space.

For microscopic particles such as electrons, this spreading occurs very rapidly. For macroscopic objects, like a chair, an orange or you, the spread is usually negligible. The large mass of everyday objects makes the quantum uncertainty in their motion astronomically small. This is why you don’t have to worry about your chai mug being in two places at once.

Hyperion is a macroscopic moon, so you might think it falls clearly on the classical side. But this is where chaos changes the picture. In a chaotic system, small uncertainties get amplified exponentially fast. A quantity called the Lyapunov exponent measures this sensitivity. If Hyperion begins with an orientation with a minuscule uncertainty, chaos will magnify that uncertainty at an exponential rate. In quantum terms, this means the wavefunction describing Hyperion’s orientation will not spread slowly, as it would for most macroscopic bodies, but extremely fast.

In 1998, the Polish-American theoretical physicist Wojciech Zurek calculated that within about 20 years, the quantum state of Hyperion should evolve into a superposition of macroscopically distinct orientations. In other words, if you took quantum mechanics seriously, Hyperion would be “pointing this way and that way at once”, just like Schrödinger’s famous cat that is alive and dead at once.

This startling conclusion raised the question: why do we not observe such superpositions in the real Solar System?

Zurek’s answer to this question was decoherence. Say you’re blowing a soap bubble in a dark room. If no light touches it, the bubble is just there, invisible to you. Now shine a torchlight on it. Photons from the torch will scatter off the bubble and enter your eyes, letting you see its position and colour. But here’s the catch: every photon that bounces off the bubble also carries away a little bit of information about it. In quantum terms, the bubble’s wavefunction becomes entangled with all those photons.

If the bubble were treated purely quantum mechanically, you could imagine a strange state where it was simultaneously in many places in the room — a giant superposition. But once trillions of photons have scattered off it, each carrying “which path?” information, the superposition is effectively destroyed. What remains is an apparent mixture of “bubble here” or “bubble there”, and to any observer the bubble looks like a localised classical object. This is decoherence in action: the environment (the sea of photons here) acts like a constant measuring device, preventing large objects from showing quantum weirdness.

For Hyperion, decoherence would be rapid. Interactions with sunlight, Saturn’s magnetospheric particles, and cosmic dust would constantly ‘measure’ Hyperion’s orientation. Any coherent superposition of orientations would be suppressed almost instantly, long before it could ever be observed. Thus, although pure quantum theory predicts Hyperion’s wavefunction would spread into cat-like superpositions, decoherence explains why we only ever see Hyperion in a definite orientation.

Thus Zurek argued that decoherence is essential to understand how the classical world emerges from its quantum substrate. To him, Hyperion provided an astronomical example of how chaotic dynamics could, in principle, generate macroscopic superpositions, and how decoherence ensures these superpositions remain invisible to us.

Not everyone agreed with Zurek’s conclusion, however. In 2005, physicists Nathan Wiebe and Leslie Ballentine revisited the problem. They wanted to know: if we treat Hyperion using the rules of quantum mechanics, do we really need the idea of decoherence to explain why it looks classical? Or would Hyperion look classical even without bringing the environment into the picture?

To answer this, they did something quite concrete. Instead of trying to describe every possible property of Hyperion, they focused on one specific and measurable feature: the part of its spin that pointed along a fixed axis, perpendicular to Hyperion’s orbit. This quantity — essentially the up-and-down component of Hyperion’s tumbling spin — was a natural choice because it can be defined both in classical mechanics and in quantum mechanics. By looking at the same feature in both worlds, they could make a direct comparison.

Wiebe and Ballentine then built a detailed model of Hyperion’s chaotic motion and ran numerical simulations. They asked: if we look at this component of Hyperion’s spin, how does the distribution of outcomes predicted by classical physics compare with the distribution predicted by quantum mechanics?

The result was striking. The two sets of predictions matched extremely well. Even though Hyperion’s quantum state was spreading in complicated ways, the actual probabilities for this chosen feature of its spin lined up with the classical expectations. In other words, for this observable, Hyperion looked just as classical in the quantum description as it did in the classical one.

From this, Wiebe and Ballentine drew a bold conclusion: that Hyperion doesn’t require decoherence to appear classical. The agreement between quantum and classical predictions was already enough. They went further and suggested that this might be true more broadly: perhaps decoherence is not essential to explain why macroscopic bodies, the large objects we see around us, behave classically.

This conclusion went directly against the prevailing view of quantum physics as a whole. By the early 2000s, many physicists believed that decoherence was the central mechanism that bridged the quantum and classical worlds. Zurek and others had spent years showing how environmental interactions suppress the quantum superpositions that would otherwise appear in macroscopic systems. To suggest that decoherence was not essential was to challenge the very foundation of that programme.

The debate quickly gained attention. On one side stood Wiebe and Ballentine, arguing that simple agreement between quantum and classical predictions for certain observables was enough to resolve the issue. On the other stood Zurek and the decoherence community, insisting that the real puzzle was more fundamental: why we never observe interference between large-scale quantum states.

By this point, the Hyperion dispute wasn’t just about a chaotic moon. It was about how we could define ‘classical behaviour’ in the first place. For Wiebe and Ballentine, classical meant “quantum predictions match classical ones”. For Zurek et al., classical meant “no detectable superpositions of macroscopically distinct states”. The difference in definitions made the two sides seem to clash.

But then, in 2008, physicist Maximilian Schlosshauer carefully analysed the issue and showed that the two sides were not actually talking about the same problem. The apparent clash arose because Zurek and Wiebe-Ballentine had started from essentially different assumptions.

Specifically, Wiebe and Ballentine had adopted the ensemble interpretation of quantum mechanics. In everyday terms, the ensemble interpretation says, “Don’t take the quantum wavefunction too literally.” That is, it does not describe the “real state” of a single object. Instead, it’s a tool to calculate the probabilities of what we will see if we repeat an experiment many times on many identical systems. It’s like rolling dice. If I say the probability of rolling a 6 is 1/6, that probability does not describe the dice themselves as being in a strange mixture of outcomes. It simply summarises what will happen if I roll a large collection of dice.

Applied to quantum mechanics, the ensemble interpretation works the same way. If an electron is described by a wavefunction that seems to say it is “spread out” over many positions, the ensemble interpretation insists this does not mean the electron is literally smeared across space. Rather, the wavefunction encodes the probabilities for where the electron would be found if we prepared many electrons in the same way and measured them. The apparent superposition is not a weird physical reality, just a statistical recipe.

Wiebe and Ballentine carried this outlook over to Hyperion. When Zurek described Hyperion’s chaotic motion as evolving into a superposition of many distinct orientations, he meant this as a literal statement: without decoherence, the moon’s quantum state really would be in a giant blend of “pointing this way” and “pointing that way”. From his perspective, there was a crisis because no one ever observes moons or chai mugs in such states. Decoherence, he argued, was the missing mechanism that explained why these superpositions never show up.

But under the ensemble interpretation, the situation looks entirely different. For Wiebe and Ballentine, Hyperion’s wavefunction was never a literal “moon in superposition”. It was always just a probability tool, telling us the likelihood of finding Hyperion with one orientation or another if we made a measurement. Their job, then, was simply to check: do these quantum probabilities match the probabilities that classical physics would give us? If they do, then Hyperion behaves classically by definition. There is no puzzle to be solved and no role for decoherence to play.

This explains why Wiebe and Ballentine concentrated on comparing the probability distributions for a single observable, namely the component of Hyperion’s spin along a chosen axis. If the quantum and classical results lined up — as their calculations showed — then from the ensemble point of view Hyperion’s classicality was secured. The apparent superpositions that worried Zurek were never taken as physically real in the first place.

Zurek, on the other hand, was addressing the measurement problem. In standard quantum mechanics, superpositions are physically real. Without decoherence, there is always some observable that could reveal the coherence between different macroscopic orientations. The puzzle is why we never see such observables registering superpositions. Decoherence provided the answer: the environment prevents us from ever detecting those delicate quantum correlations.

In other words, Zurek and Wiebe-Ballentine were tackling different notions of classicality. For Wiebe and Ballentine, classicality meant the match between quantum and classical statistical distributions for certain observables. For Zurek, classicality meant the suppression of interference between macroscopically distinct states.

Once Schlosshauer spotted this difference, the apparent dispute went away. His resolution showed that the clash was less over data than over perspectives. If you adopt the ensemble interpretation, then decoherence indeed seems unnecessary, because you never take the superposition as a real physical state in the first place. If you are interested in solving the measurement problem, then decoherence is crucial, because it explains why macroscopic superpositions never manifest.

The overarching takeaway is that, from the quantum point of view, there is no single definition of what constitutes “classical behaviour”. The Hyperion dispute forced physicists to articulate what they meant by classicality and to recognise the assumptions embedded in different interpretations. Depending on your personal stance, you may emphasise the agreement of statistical distributions or you may emphasise the absence of observable superpositions. Both approaches can be internally consistent — but they also answer different questions.

For school students who are reading this story, the Hyperion dispute may seem obscure. Why should we care about whether a distant moon’s tumbling motion demands decoherence or not? The reason is that the moon provides a vivid example of a deep issue: how do we reconcile the strange predictions of quantum theory with the ordinary world we see?

In the laboratory, decoherence is an everyday reality. Quantum computers, for example, must be carefully shielded from their environments to prevent decoherence from destroying fragile quantum information. In cosmology, decoherence plays a role in explaining how quantum fluctuations in the early universe influenced the structure of galaxies. Hyperion showed that even an astronomical body can, in principle, highlight the same foundational issues.


Last eight posts:

1. The guiding light of KD45

2. What on earth is a wavefunction?

3. The PixxelSpace constellation conundrum

4. The Zomato ad and India’s hustle since 1947

5. A new kind of quantum engine with ultracold atoms

6. Trade rift today, cryogenic tech yesterday

7. What keeps the red queen running?

8. A limit of ‘show, don’t tell’

What on earth is a wavefunction?

By: VM
23 August 2025 at 13:00

If you drop a pebble into a pond, ripples spread outward in gentle circles. We all know this sight, and it feels natural to call them waves. Now imagine being told that everything — from an electron to an atom to a speck of dust — can also behave like a wave, even though they are made of matter and not water or air. That is the bold claim of quantum mechanics. The waves in this case are not ripples in a material substance. Instead, they are mathematical entities known as wavefunctions.

At first, this sounds like nothing more than fancy maths. But the wavefunction is central to how the quantum world works. It carries the information that tells us where a particle might be found, what momentum it might have, and how it might interact. In place of neat certainties, the quantum world offers a blur of possibilities. The wavefunction is the map of that blur. The peculiar thing is, experiments show that this ‘blur’ behaves as though it is real. Electrons fired through two slits make interference patterns as though each one went through both slits at once. Molecules too large to see under a microscope can act the same way, spreading out in space like waves until they are detected.

So what exactly is a wavefunction, and how should we think about it? That question has haunted physicists since the early 20th century and it remains unsettled to this day.

In classical life, you can say with confidence, “The cricket ball is here, moving at this speed.” If you can’t measure it, that’s your problem, not nature’s. In quantum mechanics, it is not so simple. Until a measurement is made, a particle does not have a definite position in the classical sense. Instead, the wavefunction stretches out and describes a range of possibilities. If the wavefunction is sharply peaked, the particle is most likely near a particular spot. If it is wide, the particle is spread out. Squaring the wavefunction’s magnitude gives the probability distribution you would see in many repeated experiments.
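
Stated as a formula, this is the standard Born rule. If ψ(x) is the wavefunction of a particle moving along a line, then

\[
P(x) = |\psi(x)|^2, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2 \, \mathrm{d}x = 1,
\]

where P(x) is the probability density of finding the particle near the point x, and the integral simply says the particle must turn up somewhere.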

If this sounds abstract, remember that the predictions are tangible. Interference patterns, tunnelling, superpositions, entanglement — all of these quantum phenomena flow from the properties of the wavefunction. It is the script that the universe seems to follow at its smallest scales.

To make sense of this, many physicists use analogies. Some compare the wavefunction to a musical chord. A chord is not just one note but several at once. When you play it, the sound is rich and full. Similarly, a particle’s wavefunction contains many possible positions (or momenta) simultaneously. Only when you press down with measurement do you “pick out” a single note from the chord.

Others have compared it to a weather forecast. Meteorologists don’t say, “It will rain here at exactly 3:07 pm.” They say, “There’s a 60% chance of showers in this region.” The wavefunction is like nature’s own forecast, except it is more fundamental: it is not our ignorance that makes it probabilistic, but the way the universe itself behaves.

Mathematically, the wavefunction is found by solving the Schrödinger equation, which is a central law of quantum physics. This equation describes how the wavefunction changes in time. It is to quantum mechanics what Newton’s second law (F = ma) is to classical mechanics. But unlike Newton’s law, which predicts a single trajectory, the Schrödinger equation predicts the evolving shape of probabilities. For example, it can show how a sharply localised wavefunction naturally spreads over time, just like a drop of ink disperses in water. The difference is that the spreading is not caused by random mixing but by the fundamental rules of the quantum world.
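
For readers who want to see it written out, the one-dimensional Schrödinger equation is

\[
i\hbar \, \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi,
\]

and for a free particle (V = 0) prepared as a Gaussian packet of initial width σ₀, its solution spreads as

\[
\sigma(t) = \sigma_0 \sqrt{1 + \left( \frac{\hbar t}{2 m \sigma_0^2} \right)^2 }.
\]

The larger the mass m, the slower the spreading, which is why the effect is invisible for everyday objects but rapid for electrons.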

But does that mean the wavefunction is real, like a water wave you can touch, or is it just a clever mathematical fiction?

There are two broad camps. One camp, sometimes called the instrumentalists, argues the wavefunction is only a tool for making predictions. In this view, nothing actually waves in space. The particle is simply somewhere, and the wavefunction is our best way to calculate the odds of finding it. When we measure, we discover the position, and the wavefunction ‘collapses’ because our information has been updated, not because the world itself has changed.

The other camp, the realists, argues that the wavefunction is as real as any energy field. If the mathematics says a particle is spread out across two slits, then until you measure it, the particle really is spread out, occupying both paths in a superposed state. Measurement then forces the possibilities into a single outcome, but before that moment, the wavefunction’s broad reach isn’t just bookkeeping: it’s physical.

This isn’t an idle philosophical spat. It has consequences for how we interpret famous paradoxes like Schrödinger’s cat — supposedly “alive and dead at once until observed” — and for how we understand the limits of quantum mechanics itself. If the wavefunction is real, then perhaps macroscopic objects like cats, tables or even ourselves can exist in superpositions in the right conditions. If it is not real, then quantum mechanics is only a calculating device, and the world remains classical at larger scales.

The ability of a wavefunction to remain spread out is tied to what physicists call coherence. A coherent state is one where the different parts of the wavefunction stay in step with each other, like musicians in an orchestra keeping perfect time. If even a few instruments go off-beat, the harmony collapses into noise. In the same way, when coherence is lost, the wavefunction’s delicate correlations vanish.

Physicists measure this ‘togetherness’ with a parameter called the coherence length. You can think of it as the distance over which the wavefunction’s rhythm remains intact. A laser pointer offers a good everyday example: its light is coherent, so the waves line up across long distances, allowing a sharp red dot to appear even all the way across a lecture hall. By contrast, the light from a torch is incoherent: the waves quickly fall out of step, producing only a fuzzy glow. In the quantum world, a longer coherence length means the particle’s wavefunction can stay spread out and in tune across a larger stretch of space, making the object more thoroughly delocalised.

However, coherence is fragile. The world outside — the air, the light, the random hustle of molecules — constantly disturbs the system. Each poke causes the system to ‘leak’ information, collapsing the wavefunction’s delicate superposition. This process is called decoherence, and it explains why we don’t see cats or chairs spread out in superpositions in daily life. The environment ‘measures’ them constantly, destroying their quantum fuzziness.

One frontier of modern physics is to see how far coherence can be pushed before decoherence wins. For electrons and atoms, the answer is “very far”. Physicists have found their wavefunctions can stretch across micrometres or more. They have also demonstrated coherence with molecules made up of thousands of atoms, though keeping such large objects coherent has been much more difficult. For larger solid objects, it’s harder still.

Physicists often talk about expanding a wavefunction. What they mean is deliberately increasing the spatial extent of the quantum state, making the fuzziness spread wider, while still keeping it coherent. Imagine a violin string: if it vibrates softly, the motion is narrow; if it vibrates with larger amplitude, it spreads. In quantum mechanics, expansion is more subtle but the analogy holds: you want the wavefunction to cover more ground not through noise or randomness but through genuine quantum uncertainty.

Another way to picture it is as a drop of ink released into clear water. At first, the drop is tight and dark. Over time, it spreads outward, thinning and covering more space. Expanding a quantum wavefunction is like speeding up this spreading process, but with a twist: the cloud must remain coherent. The ink can’t become blotchy or disturbed by outside currents. Instead, it must preserve its smooth, wave-like character, where all parts of the spread remain correlated.

How can this be done? One way is to relax the trap that’s being used to hold the particle in place. In physics, the trap is described by a potential, which is just a way of talking about how strong the forces are that pull the particle back towards the centre. Imagine a ball sitting in a bowl. The shape of the bowl represents the potential. A deep, steep bowl means strong restoring forces, which prevent the ball from moving around. A shallow bowl means the forces are weaker. That is, if you suddenly make the bowl shallower, the ball is less tightly confined and can explore more space. In the quantum picture, reducing the stiffness of the potential is like flattening the bowl, which allows the wavefunction to swell outward. If you later return the bowl to its steep form, you can catch the now-broader state and measure its properties.

The challenge is to do this fast and cleanly, before decoherence destroys the quantum character. And you must measure in ways that reveal quantum behaviour rather than just classical blur.

This brings us to an experiment reported on August 19 in Physical Review Letters, conducted by researchers at ETH Zürich and their collaborators. It seems the researchers have achieved something unprecedented: they prepared a small silica sphere, only about 100 nm across, in a nearly pure quantum state and then expanded its wavefunction beyond the natural zero-point limit. This means they coherently stretched the particle’s quantum fuzziness farther than the smallest quantum wiggle that nature usually allows, while still keeping the state coherent.

To appreciate why this matters, let’s consider the numbers. The zero-point motion of their nanoparticle — the smallest possible movement even at absolute zero — is about 17 picometres (one picometre is a trillionth of a metre). Before expansion, the coherence length was about 21 pm. After the expansion protocol, it reached roughly 73 pm, more than tripling the initial reach and surpassing the ground-state value. For something as massive as a nanoparticle, this is a big step.
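
The order of magnitude is easy to check. The sketch below uses assumed, typical values for the silica density and the trap frequency (they are not taken from the paper), so it only reproduces the right ballpark rather than the exact 17 pm.

```python
# Rough order-of-magnitude estimate of the zero-point motion of a levitated
# silica nanosphere. Density and trap frequency are assumed typical values,
# not numbers from the paper.
import math

hbar = 1.054571817e-34            # J s
rho = 2200.0                      # kg/m^3, bulk silica (assumed)
radius = 50e-9                    # m, for a sphere about 100 nm across
omega = 2 * math.pi * 1e5         # rad/s, assumed ~100 kHz trap frequency

mass = rho * (4 / 3) * math.pi * radius**3
x_zp = math.sqrt(hbar / (2 * mass * omega))
print(f"mass ~ {mass:.2e} kg, zero-point motion ~ {x_zp * 1e12:.0f} pm")
# Prints roughly 10 pm, the same order as the ~17 pm quoted in the study;
# the exact figure depends on the real mass and trap frequency.
```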

The team began by levitating a silica nanoparticle in an optical tweezer, created by a tightly focused laser beam. The particle floated in an ultra-high vacuum at a temperature of just 7 K (-266º C). These conditions reduced outside disturbances to almost nothing.

Next, they cooled the particle’s motion close to its ground state using feedback control. By monitoring its position and applying gentle electrical forces through the surrounding electrodes, they damped its jostling until only a fraction of a quantum of motion remained. At this point, the particle was quiet enough for quantum effects to dominate.

The core step was the two-pulse expansion protocol. First, the researchers switched off the cooling and briefly lowered the trap’s stiffness by reducing the laser power. This allowed the wavefunction to spread. Then, after a carefully timed delay, they applied a second softening pulse. This sequence cancelled out unwanted drifts caused by stray forces while letting the wavefunction expand even further.

Finally, they restored the trap to full strength and measured the particle’s motion by studying how it scattered light. Repeating this process hundreds of times gave them a statistical view of the expanded state.

The results showed that the nanoparticle’s wavefunction expanded far beyond its zero-point motion while still remaining coherent. The coherence length grew more than threefold, reaching 73 ± 34 pm. Per the team, this wasn’t just noisy spread but genuine quantum delocalisation.

More strikingly, the momentum of the nanoparticle had become ‘squeezed’ below its zero-point value. In other words, while uncertainty over the particle’s position increased, that over its momentum decreased, in keeping with Heisenberg’s uncertainty principle. This kind of squeezed state is useful because it’s especially sensitive to feeble external forces.
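
This trade-off is just the Heisenberg relation at work:

\[
\Delta x \, \Delta p \ \ge\ \frac{\hbar}{2}.
\]

A squeezed state pushes the momentum uncertainty Δp below its zero-point value while the position uncertainty Δx grows, keeping the product at or above the bound.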

The data matched theoretical models that considered photon recoil to be the main source of decoherence. Each scattered photon gave the nanoparticle a small kick, and this set a fundamental limit. The experiment confirmed that photon recoil was indeed the bottleneck, not hidden technical noise. The researchers have suggested using dark traps in future — trapping methods that use less light, such as radio-frequency fields — to reduce this recoil. With such tools, the coherence lengths can potentially be expanded to scales comparable to the particle’s size. Imagine a nanoparticle existing in a state that spans its own diameter. That would be a true macroscopic quantum object.

This new study pushes quantum mechanics into a new regime. Thus far, large, solid objects like nanoparticles could be cooled and controlled, but their coherence lengths stayed pinned near the zero-point level. Here, the researchers were able to deliberately increase the coherence length beyond that limit, and in doing so showed that quantum fuzziness can be engineered, not just preserved.

The implications are broad. On the practical side, delocalised nanoparticles could become extremely sensitive force sensors, able to detect faint electric or gravitational forces. On the fundamental side, the ability to hold large objects in coherent, expanded states is a step towards probing whether gravity itself has quantum features. Several theoretical proposals suggest that if two massive objects in superposition can become entangled through their mutual gravity, it would prove gravity must be quantum. To reach that stage, experiments must first learn to create and control delocalised states like this one.

The possibilities for sensing in particular are exciting. Imagine a nanoparticle prepared in a squeezed, delocalised state being used to detect the tug of an unseen mass nearby or to measure an electric field too weak for ordinary instruments. Some physicists have speculated that such systems could help search for exotic particles such as certain dark matter candidates, which might nudge the nanoparticle ever so slightly. The extreme sensitivity arises because a delocalised quantum object is like a feather balanced on a pin: the tiniest push shifts it in measurable ways.

There are also parallels with past breakthroughs. The Laser Interferometer Gravitational-wave Observatories, which detect gravitational waves, rely on manipulating quantum noise in light to reach unprecedented sensitivity. The ETH Zürich experiment has extended the same philosophy into the mechanical world of nanoparticles. Both cases show that pushing deeper into quantum control could yield technologies that were once unimaginable.

But beyond the technologies also lies a more interesting philosophical edge. The experiment strengthens the case that the wavefunction behaves like something real. If it were only an abstract formula, could we stretch it, squeeze it, and measure the changes in line with theory? The fact that researchers can engineer the wavefunction of a many-atom object and watch it respond like a physical entity tilts the balance towards reality. At the least, it shows that the wavefunction is not just a mathematical ghost. It’s a structure that researchers can shape with lasers and measure with detectors.

There are also of course the broader human questions. If nature at its core is described not by certainties but by probabilities, then philosophers must rethink determinism, the idea that everything is fixed in advance. Our everyday world looks predictable only because decoherence hides the fuzziness. But under carefully controlled conditions, that fuzziness comes back into view. Experiments like this remind us that the universe is stranger, and more flexible, than classical common sense would suggest.

The experiment also reminds us that the line between the quantum and classical worlds is not a brick wall but a veil — thin, fragile, and possibly removable in the right conditions. And each time we lift it a little further, we don’t just see strange behaviour: we also glimpse sensors more sensitive than ever, tests of gravity’s quantum nature, and perhaps someday, direct encounters with macroscopic superpositions that will force us to rewrite what we mean by reality.

A new kind of quantum engine with ultracold atoms

By: VM
7 August 2025 at 15:30

In conventional ‘macroscopic’ engines like the ones that guzzle fossil fuels to power cars and motorcycles, the fuels are set ablaze to release heat, which is converted to mechanical energy and transferred to the vehicle’s moving parts. In order to perform these functions over and over in a continuous manner, the engine cycles through four repeating steps. There are different kinds of cycles depending on the engine’s design and needs. A common example is the Otto cycle, where the engine’s four steps are: 

1. Adiabatic compression: The piston compresses the air-fuel mixture, increasing its pressure and temperature without exchanging heat with the surroundings

2. Constant volume heat addition: At the piston’s top position, a spark plug ignites the fuel-air mixture, rapidly increasing pressure and temperature while the volume remains constant

3. Adiabatic expansion: The high-pressure gas pushes the piston down, doing work on the piston, which powers the engine

4. Constant volume heat rejection: At the bottom of the piston stroke, heat is expelled from the gas at constant volume as the engine prepares to clear the exhaust gases

So the engine goes 1-2-3-4-1-2-3-4 and so on. This is useful. If you plot the pressure and volume of the fuel-air mixture in the engine on two axes of a graph, you’ll see that at the end of the ‘constant volume heat rejection’ step (no. 4), the mixture is in the same state as it is at the start of the adiabatic compression step (no. 1). The work that the engine does on the vehicle is equal to the difference between the work done during the expansion and compression steps. Engines are designed to meet the cyclical requirement while increasing the amount of work they do for a given fuel and vehicle design.
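
As a concrete, textbook illustration (not a number from the studies discussed below): for an ideal Otto cycle the efficiency depends only on the compression ratio r and the heat-capacity ratio γ of the working gas,

\[
\eta = 1 - r^{\,1-\gamma}, \qquad \text{e.g. } r = 10, \ \gamma = 1.4 \ \Rightarrow\ \eta \approx 1 - 10^{-0.4} \approx 0.60,
\]

so even an idealised petrol engine with a compression ratio of 10 converts only about 60% of the heat released into work.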

It’s easy to understand the value of machines like this. They’re the reason we have vehicles that we can drive in different ways using our hands, legs, and our senses and in relative comfort. As long as we refill the fuel tank once in a while, engines can repeatedly perform mechanical work using their fuel combustion cycles. It’s understandable then why scientists have been trying to build quantum engines. While conventional engines use classical physics to operate, quantum engines are machines that use the ideas of quantum physics. For now, however, these machines remain futuristic because scientists don’t yet understand their working principles well enough. University of Kaiserslautern-Landau professor Artur Widera told me the following in September 2023 after he and his team published a paper reporting that they had developed a new kind of quantum engine:

Just observing the development and miniaturisation of engines from macroscopic scales to biological machines and further potentially to single- or few-atom engines, it becomes clear that for few particles close to the quantum regime, thermodynamics as we use in classical life will not be sufficient to understand processes or devices. In fact, quantum thermodynamics is just emerging, and some aspects of how to describe the thermodynamical aspects of quantum processes are even theoretically not fully understood.

This said, recent advances in ultracold atomic physics have allowed physicists to control substances called quantum gases in the so-called low-dimensional regimes, laying the ground for them to realise and study quantum engines. Two recent studies exemplify this progress: the study by Widera et al. in 2023 and a new theoretical study reported in Physical Review E. Both studies have explored engines based on ultracold quantum gases but have approached the concept of quantum energy conversion from complementary perspectives.

The Physical Review E work investigated a ‘quantum thermochemical engine’ operating with a trapped one-dimensional (1D) Bose gas in the quasicondensate regime as the working fluid — just like the fuel-air mixture in the internal combustion engine of a petrol-powered car. A Bose gas is a quantum system that consists of subatomic particles called bosons. The ‘1D’ simply means they are limited to moving back and forth on a straight line, i.e. a single spatial dimension. This restriction dramatically changes the bosons’ physical and quantum properties.

According to the paper’s single author, University of Queensland theoretical physicist Vijit Nautiyal, the resulting engine can operate on an Otto cycle where the compression and expansion steps — which dictate the work the engine can do — are implemented by tuning how strongly the bosons interact, instead of changing the volume as in a classical engine. In order to do this, the quantum engine needs to exchange not heat with its surroundings but particles. That is, the particles flow from a hot reservoir to the working boson gas, allowing the engine to perform net work.

Energy enters and leaves the system in the A-B and C-D steps, respectively, when the engine absorbs and releases particles from the hot reservoir. The engine consumes work during adiabatic compression (D-A) and performs work during adiabatic expansion (B-C). The difference between these steps is the engine’s net work output. Credit: arXiv:2411.13041v2

Nautiyal’s study focused on the engine’s performance in two regimes: one where the strength of interaction between bosons was suddenly quenched in order to maximise the engine’s power at the cost of its efficiency, and another where the quantum engine operates at maximum efficiency but produces negligible power. Nautiyal has reported doing this using advanced numerical simulations.

The simulations showed that if the engine only used heat but didn’t absorb particles from the hot reservoir, it couldn’t really produce useful energy at finite temperatures. This was because of complicated quantum effects and uneven density in the boson gas. But when the engine was allowed to gain or lose particles from/to the reservoir, it got the extra energy it needed to work properly. Surprisingly, this particle exchange allowed the engine to operate very efficiently even when it ran fast. Usually, engines have to choose between going fast and losing efficiency or going slow and being more efficient. The particle exchange allowed Nautiyal’s quantum thermochemical engine to avoid that trade-off. Letting more particles flow in and out also made the engine produce more energy and be even more efficient.

Finally, unlike regular engines where higher temperature usually means better efficiency, increasing the temperature of the quantum thermochemical engine too much actually lowered its efficiency, speaking to the important role chemical work played in this engine design.

In contrast, the 2023 experimental study — which I wrote about in The Hindu — realised a quantum engine that, instead of relying on conventional heating and cooling with thermal reservoirs, operated by cycling a gas of particles between two quantum states, a Bose-Einstein condensate and a Fermi gas. The process was driven by adiabatic changes (i.e. changes that happen while keeping the entropy fixed) that converted the fundamental difference in total energy distribution arising from the two states into usable work. The experiment demonstrated that this energy difference, called the Pauli energy, constituted a significant resource for thermodynamic cycles.

The theoretical 2025 paper and the experimental 2023 work are intimately connected as complementary explorations of quantum engine operation using ultracold atomic gases. Both have taken advantage of the unique quantum effects accessible in such systems while focusing on distinct energy resources and operational principles.

The 2025 work emphasised the role of chemical work arising from particle exchange in a one-dimensional Bose gas, exploring the balance of efficiency and power in finite-time quantum thermochemical engines. It also provided detailed computational frameworks to understand and optimise these engines. The 2023 experiment, meanwhile, physically realised a related but conceptually different mechanism: moving lithium atoms between two states and converting their Pauli energy into work. This approach highlighted how the fundamental difference between the two states could serve as a direct energy source in place of conventional heat baths, one operating with little to no production of entropy.

Together, these studies broaden the scope of quantum engines beyond traditional heat-based cycles by demonstrating the usefulness of intrinsically quantum energy forms such as chemical work and Pauli energy. Such microscopic ‘machines’ also herald a new class of engines that harness the fundamental laws of quantum physics to convert energy between different forms more efficiently than the best conventional engines can manage with classical physics.

Physics World asked Nautiyal about the potential applications of his work:

… Nautiyal referred to “quantum steampunk”. This term, which was coined by the physicist Nicole Yunger Halpern at the US National Institute of Standards and Technology and the University of Maryland, encapsulates the idea that as quantum technologies advance, the field of quantum thermodynamics must also advance in order to make such technologies more efficient. A similar principle, Nautiyal explains, applies to smartphones: “The processor can be made more powerful, but the benefits cannot be appreciated without an efficient battery to meet the increased power demands.” Conducting research on quantum engines and quantum thermodynamics is thus a way to optimize quantum technologies.

What keeps the red queen running?

By: VM
6 August 2025 at 03:29

AI-generated definition based on ‘Quantitative and analytical tools to analyze the spatiotemporal population dynamics of microbial consortia’, Current Opinion in Biotechnology, August 2022:

The Red Queen hypothesis refers to the idea that a constant rate of extinction persists in a community, independent of the duration of a species’ existence, driven by interspecies relationships where beneficial mutations in one species can negatively impact others.

Encyclopedia of Ecology (second ed.), 2008:

The term is derived from Lewis Carroll’s Through the Looking Glass, where the Red Queen informs Alice that “here, you see, it takes all the running you can do to keep in the same place.” Thus, with organisms, it may require multitudes of evolutionary adjustments just to keep from going extinct.

The Red Queen hypothesis serves as a primary explanation for the evolution of sexual reproduction. As parasites (or other selective agents) become specialized on common host genotypes, frequency-dependent selection favors sexual reproduction (i.e., recombination) in host populations (which produces novel genotypes, increasing the rate of adaptation). The Red Queen hypothesis also describes how coevolution can produce extinction probabilities that are relatively constant over millions of years, which is consistent with much of the fossil record.

Also read: ‘Sexual reproduction as an adaptation to resist parasites (a review).’, Proceedings of the National Academy of Sciences, May 1, 1990.

~

In nature, scientists have found that even very similar strains of bacteria constantly appear and disappear even when their environment doesn’t seem to change much. This is called continual turnover. In a new study in PRX Life, Aditya Mahadevan and Daniel Fisher of Stanford University make sense of how this ongoing change happens, even without big differences between species or dramatic changes in the environment. Their jumping-off point is the red queen hypothesis.

While the hypothesis has usually been used to talk about ‘arms races’, like between hosts and parasites, the new study asked: can continuous red queen evolution also happen in communities where different species or strains overlap a lot in what they do and where there aren’t obvious teams fighting each other?

Mahadevan and Fisher built mathematical models to mimic how communities of microbes evolve over time. These models allowed the duo to simulate what would happen if a population started with just one microbial strain and new strains appeared over time due to random changes in their genes (i.e. mutations). Some of these new strains could invade the community, claim a share of its resources, and survive, while others were driven to extinction.

The models focused especially on ecological interactions, meaning how strains or species affected each other’s survival based on how they competed for the same food.
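
To get a rough feel for how such models behave, here is a minimal caricature — emphatically not Mahadevan and Fisher’s actual model: generalised Lotka-Volterra competition with an asymmetric (i.e. nonreciprocal) interaction matrix, where each round a new mutant strain invades at low abundance and any strain that falls below a threshold goes extinct. The growth rates, coupling statistics, and thresholds below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(42)

def add_strain(A):
    """Append one strain to the interaction matrix A with random, nonreciprocal
    couplings: how the newcomer affects residents differs from how they affect it."""
    n = A.shape[0]
    col = -np.abs(rng.normal(0.5, 0.2, size=(n, 1)))   # newcomer's effect on residents
    row = -np.abs(rng.normal(0.5, 0.2, size=(1, n)))   # residents' effect on the newcomer
    return np.block([[A, col], [row, np.array([[-1.0]])]])  # self-limitation on the diagonal

def run(n_invasions=300, dt=0.05, relax_steps=400, extinct_below=1e-3):
    A = np.array([[-1.0]])        # start from a single self-limiting strain
    x = np.array([1.0])
    richness = []
    for _ in range(n_invasions):
        A = add_strain(A)
        x = np.append(x, 1e-2)                       # a mutant invades at low abundance
        for _ in range(relax_steps):                 # crude Euler relaxation of the dynamics
            x = np.clip(x + dt * x * (1.0 + A @ x), 0.0, None)
        alive = x > extinct_below                    # strains below the threshold go extinct
        A, x = A[np.ix_(alive, alive)], x[alive]
        richness.append(int(alive.sum()))
    return richness

print(run()[-10:])   # last ten richness values: the count stays bounded while the surviving strains keep changing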

When they ran the models, the duo found that even when there were no clear teams (like host v. parasite), communities could enter a red queen phase. The overall number of coexisting strains stayed roughly constant, but which strains were present kept changing, like a continuous evolutionary game of musical chairs.

The continual turnover happened most robustly when strains interacted in a non-reciprocal way. As ICTS biological physicist Akshit Goyal put it in Physics:

… almost every attempt to model evolving ecological communities ran into the same problem: One organism, dubbed a Darwinian monster, evolves to be good at everything, killing diversity and collapsing the community. Theorists circumvented this outcome by imposing metabolic trade-offs, essentially declaring that no species could excel at everything. But that approach felt like cheating because the trade-offs in the models needed to be unreasonably strict. Moreover, for mathematical convenience, previous models assumed that ecological interactions between species were reciprocal: Species A affects species B in exactly the same way that B affects A. However, when interactions are reciprocal, community evolution ends up resembling the misleading fixed fitness landscape. Evolution is fast at first but eventually slows down and stops instead of going on endlessly.

Mahadevan and Fisher solved this puzzle by focusing on a previously neglected but ubiquitous aspect of ecological interactions: nonreciprocity. This feature occurs when the way species A affects species B differs from the way B affects A—for example, when two species compete for the same nutrient, but the competition harms one species more than the other

Next, despite the continual turnover, there was a cap on the number of strains that could coexist. This depended on the number of different resources available and how strains interacted, but as new strains invaded others, some old ones had to go extinct, keeping diversity within limits.

If some strains started off much better (i.e. with higher fitness), over time the evolving competition narrowed these differences and only strains with similar overall abilities managed to stick around.

Finally, if the system got close to being perfectly reciprocal, the dynamics could shift to an oligarch phase in which a few strains dominated most of the population and continual turnover slowed considerably.

Taken together, the study’s main conclusion is that there doesn’t need to be a constant or elaborate ‘arms race’ between predator and prey or dramatic environmental changes to keep evolution going in bacterial communities. Such evolution can arise naturally when species or strains interact asymmetrically as they compete for resources.

Featured image: “Now, here, you see, it takes all the running you can do, to keep in the same place.” Credit: Public domain.

A limit of ‘show, don’t tell’

By: VM
3 August 2025 at 08:24

The virtue of ‘show, don’t tell’ in writing, including in journalism, lies in its power to create a more vivid, immersive, and emotionally engaging reading experience. Instead of simply providing information or summarising events, the technique encourages writers to use evocative imagery, action, dialogue, and sensory details to invite readers into the world of the story.

The idea is that once they’re in there, they’ll do much of the work of staying engaged on their own.

However, perhaps this depends on the world the reader is being invited to enter.

There’s an episode in season 10 of ‘Friends’ where a palaeontologist tells Joey she doesn’t own a TV. Joey is confused and asks, “Then what’s all your furniture pointed at?”

Most of the (textual) journalism of physics I’m seeing these days frames narratives around the application of some discovery or concept. For example, here’s the last paragraph of one of the top articles on Physics World today:

The trio hopes that its technique will help us understand polaron behaviours. “The method we developed could also help study strong interactions between light and matter, or even provide the blueprint to efficiently add up Feynman diagrams in entirely different physical theories,” Bernardi says. In turn, it could help to provide deeper insights into a variety of effects where polarons contribute – including electrical transport, spectroscopy, and superconductivity.

I’m not sure if there’s something implicitly bad about this framing but I do believe it gives the impression that the research is in pursuit of those applications, which in my view is often misguided. Scientific research is incremental, and theories and data often take many turns before they can be stitched together cleanly enough for a technological application in the real world.

Yet I’m also aware that, just like pointing all your furniture at the TV can simplify your decisions about arranging your house, drafting narratives in order to convey the relevance of some research for specific applications can help hold readers’ attention better. Yes, this is a populist approach to the extent that it panders to what readers know they want rather than what they may not know, but it’s useful — especially when the communicator or journalist is pressed for time and/or doesn’t have the mental bandwidth to craft a thoughtful narrative.

But this narrative choice may also imply a partial triumph of “tell, don’t show” over “show, don’t tell”. This is because the narrative has an incentive to restrict itself to communicating whatever physics is required to describe the technology and still be considered complete rather than wade into waters that will potentially complicate the narrative.

A closely related issue here is that a lot of physics worth knowing about — if for no reason other than that it’s a window into scientists’ spirit and ingenuity — is quite involved. (It doesn’t help that it’s also mostly mathematical.) The concepts are simply impossible to show, at least without the liberal use of metaphors and, inevitably, some oversimplification.

Of course, it’s not possible to compare a physics news piece in Physics World with that in The Hindu: the former will be able to show more by telling itself because its target audience is physicists and other scientists, and they will see more detail in the word “polaron” than readers of The Hindu can be expected to. But even if The Hindu’s readers need more showing, I can’t show them the physics without expecting they will be interested in complicated theoretical ideas.

In fact, I’d be hard-pressed to communicate any better than I would by simply telling. Thus my lesson is that ‘show, don’t tell’ isn’t always a virtue. Sometimes what you show can bore or maybe scare readers off, and for reasons that have nothing to do with your skills as a communicator. Obviously the point isn’t to condescend to readers here. Instead, we need to acknowledge that telling is virtuous in its own right, and in the proper context may be the more engaging way to communicate science.

Sharks don’t do math

By: VM
13 July 2025 at 12:42

From ’Sharks hunt via Lévy flights’, Physics World, June 11, 2010:

They were menacing enough before, but how would you feel if you knew sharks were employing advanced mathematical concepts in their hunt for the kill? Well, this is the case, according to new research, which has tracked the movement of these marine predators along with a number of other species as they foraged for prey in the Pacific and Atlantic oceans. The results showed that these animals hunt for food by alternating between Brownian motion and Lévy flights, depending on the scarcity of prey.

Animals don’t use advanced mathematical concepts. This statement encompasses many humans as well because it’s not a statement about intelligence but one about language and reality. You see a shark foraging in a particular pattern. You invent a language to efficiently describe such patterns. And in that language your name for the shark’s pattern is a Lévy flight. This doesn’t mean the shark is using a Lévy flight. The shark is simply doing what makes sense to it, but which we — in our own description of the world — call a Lévy flight.

The Lévy flight isn’t an advanced concept either. It’s a subset of a broader concept called the random walk. Say you’re on a square grid, like a chessboard. You’re standing on one square. You can move only one step at a time. You roll a four-sided die. Depending on the side it lands on, you step one square forwards, backwards, to the right or to the left. The path you trace over time is called a random walk because its shape is determined by the die roll, which is random.

A two-dimensional random walk of 2,500 steps.
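
Here is a minimal sketch of that chessboard-style walk in Python; the four-sided die is simulated with a random number generator, and the step count and seed are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)

def lattice_walk(n_steps: int) -> np.ndarray:
    """Fixed-step random walk on a square grid: each roll of a four-sided 'die'
    moves the walker one square forwards, backwards, left, or right."""
    moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])  # the four allowed steps
    rolls = rng.integers(0, 4, size=n_steps)              # the die rolls
    return np.cumsum(moves[rolls], axis=0)                # the path traced over time

print(lattice_walk(2500)[-1])   # the square you end up on after 2,500 rolls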

There are different kinds of walks depending on the rule that determines the choice of your next step. A Lévy flight is a random walk that varies both the direction of the next step and the length of the step. In the random walk on the chessboard, you took steps of fixed lengths: to the adjacent squares. In a Lévy flight, the direction of the next step is random and the length is picked at random from a Lévy distribution. This is what the distribution looks like:

Probability density functions of the Lévy distribution for different values of the parameter c.

Notice how a small part of each curve (for different values of c in the distribution’s function) has high values and the majority has smaller values. When you pick your step length at random from, say, the red curve, you have higher odds of picking a smaller step length than a longer one. This means in a Lévy flight, most of the step lengths will be short but a small number of steps will be long: short shuffles punctuated by the occasional long jump.
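
And here is a matching sketch of a Lévy-flight-like walk. The directions are picked uniformly at random and the step lengths are drawn from a Pareto distribution, used only as a convenient heavy-tailed stand-in for the Lévy distribution; the exponent and scale are arbitrary:

import numpy as np

rng = np.random.default_rng(0)

def levy_flight(n_steps: int, scale: float = 1.0) -> np.ndarray:
    """2D walk with random directions and heavy-tailed step lengths:
    mostly short steps, occasionally a very long jump."""
    angles = rng.uniform(0, 2 * np.pi, n_steps)              # a random direction each step
    lengths = scale * (1 + rng.pareto(1.5, n_steps))         # heavy-tailed step lengths
    steps = np.column_stack((lengths * np.cos(angles), lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)                          # positions visited over time

print(levy_flight(1000)[-1])   # final position after 1,000 steps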

Sharks and many other animals have been known to follow a Lévy flight when foraging. To quote from an older post:

Research has shown that the foraging path of animals looking for food that is scarce can be modelled as a Lévy flight: the large steps correspond to the long distances towards food sources that are located far apart and the short steps to finding food spread in a small area at each source.

Brownian motion is a more famous kind of random walk. It’s the name for the movement of an object that’s following the Wiener process. This means the object’s path needs to obey the following five rules (from the same post):

(i) Each increment of the process is independent of other (non-overlapping) increments;

(ii) How much the process changes over a period of time depends only on the duration of the period;

(iii) Increments in the process are randomly sampled from a Gaussian distribution;

(iv) The process has a statistical mean equal to zero;

(v) The process’s covariance between any two time points is equal to the lower variance at those two points (variance denotes how quickly the value of a variable is spreading out over time).

Thus Brownian motion models the movement of pollen grains in water, dust particles in the air, electrons in a conductor, and colloidal particles in a fluid, as well as the fluctuation of stock prices, the diffusion of molecules in liquids, and population dynamics in biology. That is, all these processes in disparate domains evolve at least in part according to the rules of the Wiener process.
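
For completeness, here is a minimal sketch of a Brownian-like path built from rules (i)-(iii) above — independent Gaussian increments whose variance scales with the duration of each time step; the step size and count are arbitrary:

import numpy as np

rng = np.random.default_rng(0)

def brownian_path(n_steps: int, dt: float = 0.01) -> np.ndarray:
    """Wiener-process-style path: independent Gaussian increments whose
    variance grows with the duration dt of each step; the mean increment is zero."""
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
    return np.cumsum(increments)

print(brownian_path(10_000)[-1])   # endpoint of one realisation; averaged over many runs, the mean displacement is zero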

Still doesn’t mean a shark understands what a Lévy flight is. By saying “sharks use a Lévy flight”, we also discard in the process how the shark makes its decisions — something worth learning about in order to make more complete sense of the world around us rather than force the world to make sense only in those ways we’ve already dreamt up. (This is all the more relevant now with #sharkweek just a week away.)

I care so much because metaphors are bridges between language and reality. Even if the statement “sharks employ advanced mathematical concepts” doesn’t feature a metaphor, the risk it represents hews close to one that stalks the use of metaphors in science journalism: the creation of false knowledge.

Depending on the topic, it’s not uncommon for science journalists to use metaphors liberally, yet scientists have not infrequently upbraided them for using the wrong metaphors in some narratives or for not alerting readers to the metaphors’ limits. This is not unfair: while I disagree with some critiques along these lines for being too pedantic, in most cases the criticism is warranted. As science philosopher Daniel Sarewitz put it in a 2012 article:

Most people, including most scientists, can acquire knowledge of the Higgs only through the metaphors and analogies that physicists and science writers use to try to explain phenomena that can only truly be characterized mathematically.

Here’s The New York Times: “The Higgs boson is the only manifestation of an invisible force field, a cosmic molasses that permeates space and imbues elementary particles with mass … Without the Higgs field, as it is known, or something like it, all elementary forms of matter would zoom around at the speed of light, flowing through our hands like moonlight.” Fair enough. But why “a cosmic molasses” and not, say, a “sea of milk”? The latter is the common translation of an episode in Hindu cosmology, represented on a spectacular bas-relief panel at Angkor Wat showing armies of gods and demons churning the “sea of milk” to produce an elixir of immortality.

For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

A metaphor is not the thing itself and shouldn’t be allowed to masquerade as such.

Just as well, there are important differences between becoming aware of something and learning it, and a journalist may require metaphors only to facilitate the former. Toeing this line also helps journalists tame the publics’ expectations of them.

Featured image credit: David Clode/Unsplash.

The hidden heatwave

By: VM
13 July 2025 at 10:22

A heatwave is like the COVID-19 virus. During the pandemic, the virus infected and killed many people. When vaccines became available, the mortality rate dropped even though the virus continued to spread. But vaccines weren’t the only way to keep people from dying. The COVID-19 virus was also more likely to kill people who were already unhealthy.

In India, an important cause of people being unhealthy is the state itself. In many places, the roads are poorly laid, kicking up dust loosened by traffic into the air, where it joins the PM2.5 particles emitted by industrial facilities allowed to set up shop near residential and commercial areas without proper emission controls. If this is one extreme — and these experiences are common to a great many Indians — the other is the state’s apathy towards public health. India’s doctor-to-patient ratio is dismal; hospitals are understaffed and under-equipped; drug quality is so uneven as to be a gamble; insurance coverage is iffy and unclear; privatisation is increasing; and the national government’s financial contribution towards public health is in free fall.

For these reasons as well, and not just because of vaccine availability or coverage, the COVID-19 virus killed more people than it should have been able to. A person’s vulnerability to this or any other infection is thus determined by their well-being — which is affected both by explicit factors like a new pathogen in the population and implicit factors like the quality of healthcare they have been able to access.

A heatwave resembles the virus for the same reason: a person’s vulnerability to high heat is determined by their well-being — which in turn is affected by the amount of ambient heat and relative humidity as well as the extent to which they are able to evade the effects of that combination. This weekend, a new investigative effort by a team of journalists at The Hindu (including me) has reported just this fact, but for the first time with ground-zero details that people in general, and perhaps even the Tamil Nadu government itself, have thus far only presumed to be the case. Read it online, in the e-paper or in today’s newspaper.

The fundamental issues are two-pronged. First, Tamil Nadu’s policies on protecting people during heatwaves apply only once the weather department has declared a heatwave. Second, even when no heatwave has been declared, many people — especially the poorer among them — consistently suffer heatwave conditions. (Note: I’m criticising Tamil Nadu here because it’s my state of residence and equally because it’s one of the few states actually paying as much attention to public health, of which heat safety is an important part, as it is to economic growth.)

The net effect is for people to suffer their private but nonetheless very real heatwave conditions without enjoying the support the state has promised for people in these conditions. The criticism also indicts the state for falling short on enforcing other heat-related policies that leave the vulnerable even more stranded.

The corresponding measures include (i) access to clean toilets, a lack of which forces people — but especially women, who can’t urinate in public the way men are known to — to drink less water and suppress their urges to urinate, risking urinary tract infections; (ii) access to clean and cool drinking water, a paucity of which forces people to pay out of their pockets to buy chilled water or beverages, reducing the amount of money they have left for medical expenses as well as risking the ill health that comes with consuming aerated and/or sugary beverages; and (iii) state-built quarters that pay meaningful attention to ventilating living spaces, which, when skipped, exposes people to humidity levels that prevent their bodies from cooling by sweating, rendering them more susceptible to heat-related illnesses.

And as The Hindu team revealed, these forms of suffering are already playing out.

The India Meteorological Department defines a heatwave based on how much the temperature deviates from a historical average. But this is a strictly meteorological definition that doesn’t account for the way class differences create heatwave-like conditions. These conditions kick in as a combination of temperature and humidity, and as the report shows, even normal temperatures can induce them if the relative humidity is high enough and/or if an individual is unable to cool themselves. The state has a significant role to play in the latter. Right now, it needs to abandon the strictly meteorological definition of heatwaves in its policy framework and instead develop a more holistic sociological definition.

Featured image credit: Austin Curtis/Unsplash.

Quantum clock breaks entropy barrier

By: VM
12 July 2025 at 12:21

In physics, the second law of thermodynamics says that a closed system tends to become more disordered over time. This disorder is captured in an entity called entropy. Many devices, especially clocks, are affected by this law because they need to tick regularly to measure time. But every tick creates a bit of disorder, i.e. increases the entropy, and physicists have believed for a long time now that this places a fundamental limit on how precise a clock can be. The more precise you want your clock, the more entropy (and thus more energy) you’ll have to expend.

A study published in Nature Physics on June 2 challenges this wisdom. In it, researchers from Austria, Malta, and Sweden asked if the second law of thermodynamics really set a limit on a clock’s precision and came away, surprisingly, with a design for a new kind of quantum clock that’s more precise than scientists once believed possible for the amount of energy it spends to achieve that precision.

The researchers designed this clock using a spin chain. Imagine a ring made of several quantum sites, like minuscule cups. Each cup can hold an excitation — say, a marble that can hop from cup to cup. This excitation moves around the ring and every time it completes a full circle, the clock ticks once. A spin chain is, broadly speaking, a series of connected quantum systems (the sites) arranged in a ring and the excitation is a subatomic particle or packet of energy that moves from site to site.

In most clocks, every tick is accompanied by the dissipation of some energy and a small increase in entropy. But in the model in the new study, only the last link in the circle, where the last quantum system was linked to the first one, dissipated energy. Everywhere else, the excitation moved without losing energy, like a wave gliding smoothly around the ring. The movement of the excitation in this lossless way through most of the ring is called coherent transport.

The researchers used computer simulations to help them adjust the hopping rates — or how easily the excitation moved between sites — and thus to make the clock as precise as possible. They found that the best setup involved dividing the ring into three regions: (i) in the preparation ramp, the excitation was shaped into a wave packet; (ii) in the bulk propagation phase, the wave packet moved steadily through the ring; and (iii) in the boundary matching phase, the wave packet was reset for the next tick.

The team measured the clock’s precision as the number of ticks it completed before it was one tick ahead or behind a perfect clock. Likewise, team members defined the entropy per tick to be the amount of energy dissipated per tick. Finally, the team compared this quantum clock to classical clocks and other quantum models, which typically show a linear relationship between precision and entropy: e.g. if the precision doubled, the entropy doubled as well.

The researchers, however, found that the precision of their quantum clock grew exponentially with entropy. In other words, if the amount of entropy per tick increased only slightly, the precision increased by a big leap. It was proof that, at least in principle, it’s possible to build a clock to be arbitrarily precise while keeping the system’s entropy down, all without falling afoul of the second law.

That is, contrary to what many physicists thought, the second law of thermodynamics doesn’t strictly limit a clock’s precision, at least not for quantum clocks like this one. The clock’s design allowed it to sidestep the usual trade-off between precision and entropy.

During coherent transport, the process is governed only by the system’s Hamiltonian, i.e. the rules for how energy moves in a closed quantum system. In this regime, the excitation acts like a wave that spreads smoothly and reversibly, without losing any energy or creating any disorder. Imagine a ball rolling on a perfectly smooth, frictionless track: it keeps moving without slowing down or heating up the track. Such lossless motion is impossible to realise in practice in classical systems like the rolling ball, but it’s possible in quantum systems. The tradeoff of course is that the latter are very small and very fragile and thus harder to manipulate.

In the present study, the researchers have proved that it’s possible to build a quantum clock that takes advantage of coherent transport to tick while dissipating very little energy. Their model, the spin chain, uses a Hamiltonian that only allows the excitation to coherently hop to its nearest neighbour. The researchers engineered the couplings between the sites in the preparation ramp part of the ring to shape the excitation into a traveling wave packet that moves predominantly in the forward direction.

This tendency to move in only one direction is further bolstered at the last link, where the last site is coupled to the first. Here, the researchers installed a thermal gradient — a small temperature difference that encouraged the wave to restart its journey rather than be reflected and move backwards through the ring. When the excitation crossed this thermodynamic bias, the clock ticked once and also dissipated some energy.

Three points here. First, remember that this is a quantum system. The researchers are dealing with energy (almost) at its barest, manipulating it directly without having to bother with an accoutrement of matter covering it. In the classical regime, such accoutrements are unavoidable. For example, if you have a series of cups and you want to make an excitation hop through it, you do so with a marble. But while the marble contains the (potential) energy that you want to move through the cups, it also has mass and it dissipates energy whenever it hops into a cup, e.g. it might bounce when it lands and it will release sound when it strikes the cup’s material. So while the marble metaphor earlier might have helped you visualise the quantum clock, remember that the metaphor has limitations.

Second, for the quantum clock to work as a clock, it needs to break time-reversal symmetry (a concept I recently discussed in the context of quasicrystals). Say you remove the thermodynamic bias at the last link of the ring and replace it with a regular link. In this case the excitation will move randomly — i.e. at each step it will randomly pick the cup to move to, forward or backward, and keep going. If you reversed time, the excitation’s path will still be random and just evolve in reverse.

However, the final thermodynamically biased link causes the excitation to acquire a preference for moving in one direction. The system thus breaks time-reversal symmetry because even if you reverse the flow of time, the system will encourage the excitation to move in one direction and one direction only. This in turn is essential for the quantum system to function like a clock. That is, the excitation needs to traverse a fixed number of cups in the spin chain and then start from the first cup. Only between these two stages will the system count off a ‘tick’. Breaking time-reversal symmetry thus turns the device into a clock.

Third, the thermodynamic bias ensures that the jump from the last site to the first is more likely than the reverse, and the entropy is the cost the system pays in order to ensure the jump. Equally, the greater the thermodynamic bias, the more likely the excitation is to move in one direction through the spin chain as well as make the jump in the right direction at the final step. Thus, the greater the thermodynamic bias, the more precise the clock will be.
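
To get a feel for why a stronger bias buys more regular ticks, here is a purely classical toy — emphatically not the coherent spin chain of the paper: a walker hops around a ring, forward with some probability and backward otherwise, and a tick is counted for every net full loop. The ring size, the hopping probabilities, and the precision measure (mean tick interval squared divided by its variance, i.e. roughly the number of ticks before the clock is off by one) are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def tick_stats(forward_prob, n_sites=20, n_ticks=2000):
    """Classical toy clock: a walker on a ring hops forward with probability
    `forward_prob` and backward otherwise; one 'tick' = one net full loop.
    Returns the mean and variance of the number of hops between ticks."""
    intervals, hops, net, next_lap = [], 0, 0, n_sites
    while len(intervals) < n_ticks:
        hops += 1
        net += 1 if rng.random() < forward_prob else -1   # one biased hop around the ring
        if net >= next_lap:                               # completed another net loop: tick
            intervals.append(hops)
            hops = 0
            next_lap += n_sites
    intervals = np.array(intervals)
    return intervals.mean(), intervals.var()

for p in (0.55, 0.75, 0.95):
    mean, var = tick_stats(p)
    print(f"forward bias {p:.2f}: ~{mean**2 / var:.0f} ticks before the clock is off by roughly one")

In this toy, increasing the forward bias (the stand-in for the thermodynamic bias) makes the intervals between ticks far more regular, which is the qualitative point of the precision-entropy trade-off.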

The new study excelled by creating a sufficiently precise clock while minimising the entropy cost.

According to the researchers, its design could help build better quantum clocks, which are important for quantum computers, quantum communication, and ultra-precise measurements of the kind demanded by atomic clocks. The clock’s ticks could also be used to emit single photons at regular intervals — a technology increasingly in demand for its use in quantum networks of the sort China, the US, and India are trying to build.

But more fundamentally, the clock’s design — which confines energy dissipation to a single link and uses coherent transport everywhere else — and that design’s ability to evade the precision-entropy trade-off challenges a longstanding belief that the second law of thermodynamics strictly limits precision.

Featured image credit: Meier, F., Minoguchi, Y., Sundelin, S. et al. Nat. Phys. (2025).

A new beast: antiferromagnetic quasicrystals

By: VM
11 July 2025 at 09:38

Scientists have made a new material that is both a quasicrystal and antiferromagnetic — a combination never seen before.

Quasicrystals are a special kind of solid. Unlike normal crystals, whose atoms are arranged in repeating patterns, quasicrystals have patterns that never exactly repeat but which still have an overall order. While regular crystals have left-right symmetries, quasicrystals have unusual rotational ones.

For decades, scientists wondered if certain kinds of magnetism, but especially antiferromagnetism, could exist in these strange materials. In all materials the electrons have a property called spin. It’s as if a small magnet is embedded inside each electron. The spin denotes the direction of this magnet’s magnetic field. In ferromagnets, the spins are aligned in a common direction, so the materials are attracted to magnets. In antiferromagnetic materials, the electron spins line up in alternating directions, so their effects cancel out.

While antiferromagnetism is common in regular crystals, it’s thus far never been observed in a true quasicrystal.

The new study is the first to show clear evidence of antiferromagnetic order in a real, three-dimensional quasicrystal — one made of gold, indium, and europium. The findings were published in Nature Physics on April 14.

The team confirmed such a material is real by carefully measuring how its atoms and spins are arranged and by observing how it behaves at low temperatures. Their work shows that even in the weird world of quasicrystals, complex magnetic order is possible, opening the door to new discoveries and technologies.

The scientists created a new alloy with the formula Au56In28.5Eu15.5. This means in 1,000 atoms’ worth of the material, 560 will be gold, 285 will be indium, and 155 will be europium. The composition tells us that the scientists were going for a particularly precise combination of these elements — which they could have known in one of two ways. It might have been trial-and-error*, but that makes research very expensive, or the scientists had reasons to expect antiferromagnetic order would appear in this material.

They did. Specifically, the team focused on Au56In28.5Eu15.5 because of its (i) unique positive Curie-Weiss temperature and (ii) rare-earth content, and (iii) because its structural features matched the theoretical criteria for stable antiferromagnetic order. Previous studies focused on quasicrystals containing rare-earth elements because they often have strong magnetic interactions. However, these compounds typically displayed a negative Curie-Weiss temperature, indicating dominant antiferromagnetic interactions but resulting only in disordered magnetic states.

A positive Curie-Weiss temperature indicates dominant ferromagnetic interactions. In this case, however, it also suggested a unique balance of magnetic forces that could potentially stabilise antiferromagnetic order rather than spin-glass behaviour. Studies on approximant crystals — periodic structures closely related to quasicrystals — had also shown that both ferromagnetic and antiferromagnetic orders are stabilised only when the Curie-Weiss temperature is positive. In contrast, a negative temperature led to spin-glass states.

The scientists of the new study noticed that the Au-In-Eu quasicrystal fit into the positive Curie-Weiss temperature category, making it a promising candidate to have antiferromagnetic order.

For added measure, by slightly altering the composition, e.g. adding an impurity to increase the electron-per-atom ratio, the scientists could make the antiferromagnetic phase disappear, to be replaced by spin-glass behaviour. This sensitivity to electron concentration further hinted that the composition of the alloy was at a sweet spot for stabilising antiferromagnetism.

Finally, the team had also recently discovered ferromagnetic order in some similar gold-based quasicrystals with rare-earth elements. The success encouraged them to explore the magnetic properties of new compositions, especially those with unusual Curie-Weiss temperatures.

The Au-In-Eu quasicrystal is also a Tsai-type icosahedral quasicrystal, meaning it features a highly symmetric atomic arrangement. Theoretical work has suggested that such structures could support antiferromagnetic order in the right conditions, especially if the atoms occupied specific sites in the lattice.

To make the alloy, the scientists used a technique called arc-melting, where highly pure metals are melted together using an electric arc, then quickly cooled to form the solid quasicrystal. To ensure the mixture was even, the team melted and flipped the sample several times.

Then they used X-ray and electron diffraction to check the atomic arrangement. These techniques passed X-rays and electrons through the material. A detector on the other side picked up the radiation scattered by the material’s atoms and used it to recreate their arrangement. The patterns showed the material was a primitive icosahedral quasicrystal, a structure with 20-sided symmetry and no repeating units.

The team also confirmed this special arrangement of atoms by checking that the diffraction patterns followed mathematical rules specific to quasicrystals. Team members also used a magnetometer to track how much the material was magnetised when exposed to a magnetic field, at temperatures from 0.4 K up to 300 K. Finally, they measured the material’s specific heat, i.e. the amount of heat energy it took to raise its temperature by 1º C. This reading can show signs of magnetic transitions.

Left: The arrangement of atoms in the quasicrystal alloy. The atoms are arranged in a combination of two patterns, shown on the right. The colouring denotes their place in either pattern rather than different elements. Credit: Nature Physics volume 21, pages 974–979 (2025)

To confirm how the spins inside the material were arranged, the team used neutron diffraction. Neutrons are adept at passing through materials and are sensitive to both atoms’ positions and magnetic order. By comparing patterns at temperatures above and below the suspected transition point, they could spot the appearance of new peaks that signal magnetic order.

This way, the team reported that at 6.5 K, the magnetisation curve showed a sharp change, known as a cusp. This is a classic sign of an antiferromagnetic transition, where the material suddenly changes from being unordered to having a regular up-and-down pattern of spins. The specific heat also showed a sharp peak at this temperature, confirming something dramatic was happening inside the material.

The scientists also reported that there was no sign of spin-glass behaviour — where the spins are pointing in random directions but unchanging — which is common in other magnetic quasicrystals.

Below 6.5 K, new peaks appeared in the neutron diffraction data, evidence that the spins inside the material were lining up in the regular but alternating pattern characteristic of antiferromagnetic order. The peaks were also sharp and well-defined, showing the order was long-range, meaning they were there throughout the material and not confined to small patches.

The team also experimented by adding a small amount of tin to the alloy, which changed the balance of electrons. This little change caused the material to lose its antiferromagnetic order and become a spin glass instead, showing how delicate the balance is between different magnetic states in quasicrystals.

The findings are important because this is the first time scientists have observed antiferromagnetic order in a real, three-dimensional quasicrystal, settling a long-standing debate. They also open up a new field of study, of quasiperiodic antiferromagnets, and suggest that by carefully tuning the composition, scientists may be able to find yet other types of magnetic order in quasicrystals.

“The present discovery will stimulate both experimental and theoretical efforts to elucidate not only its unique magnetic structure but also the intrinsic properties of the quasiperiodic order parameter,” the scientists wrote in their paper. “Another exciting aspect of magnetically ordered quasicrystals is their potential for new applications such as functional materials in spintronics” — which use electron spins to store and process information in ultra-fast computers of the future.


* Which is not the same as serendipity.

Featured image credit: Nature Physics volume 21, pages 974–979 (2025).

Tracking the Meissner effect under pressure

By: VM
5 July 2025 at 11:32

In the last two or three years, groups of scientists from around the world have made several claims that they had discovered a room-temperature superconductor. Many of these claims concerned high-pressure superconductors — materials that superconduct electricity at room temperature but only if they are placed under extreme pressure (a million atmospheres’ worth). Yet other scientists had challenged these claims on many grounds, but one in particular was whether these materials really exhibited the Meissner effect.

Room-temperature superconductors are often called the ‘holy grail’ of materials science. I abhor clichés but in this case the idiom fits perfectly. If such a material is invented or discovered, it could revolutionise many industries. To quote at length from an article by electrical engineer Massoud Pedram in The Conversation:

Room-temperature superconductors would enable ultra high-speed digital interconnects for next-generation computers and low-latency broadband wireless communications. They would also enable high-resolution imaging techniques and emerging sensors for biomedical and security applications, materials and structure analyses, and deep-space radio astrophysics.

Room-temperature superconductors would mean MRIs could become much less expensive to operate because they would not require liquid helium coolant, which is expensive and in short supply. Electrical power grids would be at least 20% more power efficient than today’s grids, resulting in billions of dollars saved per year, according to my estimates. Maglev trains could operate over longer distances at lower costs. Computers would run faster with orders of magnitude lower power consumption. And quantum computers could be built with many more qubits, enabling them to solve problems that are far beyond the reach of today’s most powerful supercomputers.

However, this surfeit of economic opportunities could also lure scientists into not thoroughly double-checking their results, cherry-picking from their data or jumping to conclusions if they believe they have found a room-temperature superconductor. Many papers written by scientists claiming they had found a room-temperature superconductor have in fact been published in and subsequently retracted from peer-reviewed journals with prestigious reputations, including Nature and Science, after independent experts found the papers to contain flawed data. Whatever the reasons for these mistakes, independent scrutiny of such reports has become very important.

If a material is a superconductor, it needs to meet two conditions*. The first of course is that it needs to conduct a direct electric current with zero resistance. Second, the material should display the Meissner effect. Place a magnet over a superconducting material. Then, gradually cool the material to lower and lower temperatures, until you cross the critical temperature. Just as you cross this threshold, the magnet will start to float above the material. You’ve just physically observed the Meissner effect. It happens because when the material transitions to its superconducting state, it will expel all magnetic fields within its bulk to its surface. This results in any magnets already sitting nearby being pushed away. In fact, the Meissner effect is considered to be the hallmark sign of a superconductor because it’s difficult to fake.

An illustration of the Meissner effect. B denotes the magnetic field, T is the temperature, and Tc is the critical temperature. Credit: Piotr Jaworski

The problem with acquiring evidence of the Meissner effect is the setup in which many of these materials become superconductors. In order to apply the tens to hundreds of gigapascals (GPa) of pressure, a small sample of the material — a few grams or less — is placed between a pair of high-quality diamond crystals and squeezed. This diamond anvil cell apparatus leaves no room for a conventional magnetic field sensor to be placed inside the cell. Measuring the magnetic properties of the sample is also complicated because of the fields from other sources in the apparatus, which will have to be accurately measured and then subtracted from the final data.

To tackle this problem, some scientists have of late suggested measuring the sample’s magnetic properties using the only entity that can still enter and leave the diamond anvil cell: light.

In technical terms, such a technique is called optical magnetometry. Magnetometry in general is any technique that converts some physical signal into data about a magnetic field. In this case the signal is in the form of light, thus the ‘optical’ prefix. To deploy optical magnetometry in the context of verifying whether a material is a high-pressure superconductor, scientists have suggested using nitrogen vacancy (NV) centres.

Say you have a good crystal of diamond with you. The crystal consists of carbon atoms bound to each other in sets of four in the shape of a pyramid. Millions of copies of such pyramids together make up the diamond. Now, say you substitute one of the carbon atoms in the gem with a nitrogen atom and also knock out an adjacent carbon atom. Physicists have found that this vacancy in the lattice, called an NV centre, has interesting, useful properties. For example, an NV centre can fluoresce, i.e. absorb light of a higher frequency and emit light of a lower frequency.

An illustration of a nitrogen vacancy centre in diamond. Carbon atoms are shown in green. Credit: Public domain

Because each NV centre is surrounded by three carbon atoms and one nitrogen atom, the vacancy hosts six electrons, two of which are unpaired. All electrons have a property called quantum spin. The quantum spin is the constitutive entity of magnetism the same way the electric charge is the constitutive entity of electricity. For example, if a block of iron is to be turned into a magnet, the spins of all the electrons inside have to be made to point in the same direction. Each spin can point in one of two directions, which for a magnet are called ‘north’ and ‘south’. Planet earth has a magnetic north and a magnetic south because the spins of the trillions upon trillions of electrons in its core have come to point in roughly the same direction.

The alignment of the spins of different electrons also affects what energy they have. For example, in the right conditions, an atom with two electrons will have more energy if the electrons’ spins are aligned (↑↑) than when the electrons’ spins are anti-aligned (↑↓). This fundamental attribute of the electrons in the NV centres allows the centres to operate as a super-sensitive detector of magnetic fields — which is exactly what scientists from institutions across France have reported doing in a June 30 paper in Physical Review Applied.

The scientists implanted a layer of 10,000 to 100,000 NV centres a few nanometres under the surface of one of the diamond anvils. These centres had electron energy levels spaced precisely 2.87 GHz apart.** Each NV centre could absorb green laser light and re-emit red light, and exposing the centres to microwaves near this 2.87 GHz resonance changed how much red light they re-emitted.

The experimental setup. DAC stands for ‘diamond anvil cell’. PL stands for ‘photoluminescence’, i.e. the red light emission. Credit: arXiv:2501.14504v1

As the diamond anvils squeezed the sample past 4 GPa, the pressure at which it would have become a superconductor, the sample displayed the Meissner effect, expelling magnetic fields from within its bulk to the surface. As a result, the NV centres were exposed to a magnetic field in their midst that wasn’t there before. This field affected the electrons’ collective spin and thus their energy levels, which in turn caused the red light being emitted from the centres to dim.

The researchers could easily track the levels and patterns of dimming in the NV centres with a microscope, and based on that were able to ascertain whether the sample had displayed the Meissner effect. As Physical Review Letters associate editor Martin Rodriguez-Vega wrote in Physics magazine: “A statistical analysis of the [optical] dataset revealed information about the magnetic-field strength and orientation across the sample. Mapping these quantities produced a visualisation of the Meissner effect and revealed the existence of defects in the superconductor.”

In (a), the dotted lines show the parts of the sample that the diamond anvils were in contact with. (b) shows the parts of the sample associated with the red-light emissions from the NV centres, meaning these parts of the sample exhibited the Meissner effect in the experiment. (c) shows the normalised red-light emission along the y-axis and the frequency of microwave light shined along the x-axis. Red lines show the emission in normal conditions and blue lines show the emissions in the presence of the Meissner effect. Credit: arXiv:2501.14504v1

Because the NV centres were less than 1 micrometre away from the sample, they were extremely sensitive to changes in the magnetic field. In fact the researchers reported that the various centres were able to reveal the critical temperature for different parts of the sample separately, rather than only for the sample as a whole — a resolution not possible with conventional techniques. The pristine diamond matrix also conferred a long lifetime on the electrons’ spins inside the NV centres. And because there were so many NV centres, the researchers were able to ‘scan’ them with the microwaves en masse instead of having to maintain focus on a single point on the diamond anvil, when looking for evidence of changes in the sample’s magnetic field. Finally, while the sample in the study became superconducting at a critical temperature of around 140 K, the centres remained stable down to below 4 K.

Another major advantage of the technique is that it can be used with type II superconductors as well. Type I superconductors are materials that transition to their superconducting state in a single step, under the critical temperature. Type II superconductors transition to their superconducting states in more than one step and display a combination of flux-pinning and the Meissner effect. From my piece in The Hindu in August 2023: “When a flux-pinned superconductor is taken away from a particular part of the magnetic field and put back in, it will snap back to its original relative position.” This happens because type II materials, while they don’t expel magnetic fields from within their bulk, also prevent the fields from moving around inside. Thus the magnetic field lines are pinned in place.

Because of the spatial distribution of the NV centres and their sensitivity, they can reveal flux-pinning in the sample by ‘sensing’ the magnetic fields at different distances.


* The material can make a stronger case for itself if it displays two more properties. (i) The heat energy required to raise the temperature of the material’s electrons by 1º C has to change drastically at the critical temperature, which is the temperature below which the material becomes a superconductor. (ii) The material’s electrons shouldn’t be able to take on certain energy values. (That is, a map of the energies of all the electrons should show some gaps.) These properties are, however, considered optional.

** While 2.87 GHz is a frequency figure, recall Planck’s equation from high school: E = hν. Energy is equal to frequency times Planck’s constant, h. Since h is a constant (6.626 × 10⁻³⁴ m² kg/s), energy figures are frequently denoted in terms of frequency in physics. An interested party can calculate the energy by themselves.
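
For anyone who wants to skip the by-hand arithmetic, here is that calculation for the NV centres’ 2.87 GHz splitting, using the standard SI values of the constants:

# E = hν for the NV centres' 2.87 GHz energy splitting.
h = 6.62607015e-34        # Planck's constant, in J·s
nu = 2.87e9               # frequency, in Hz
E_joule = h * nu
E_eV = E_joule / 1.602176634e-19    # convert joules to electron-volts
print(f"{E_joule:.2e} J  =  {E_eV:.2e} eV")   # roughly 1.9e-24 J, or about 12 micro-eV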

Why do quasicrystals exist?

By: VM
26 June 2025 at 07:04

Featured image: An example of zellij tilework in the Al Attarine Madrasa in Fes, Morocco (2012), with complex geometric patterns on the lower walls and a band of calligraphy above. Caption and credit: just_a_cheeseburger (CC BY)


‘Quasi’ means almost. It’s an unfair name for quasicrystals. These crystals exist in their own right. Their name comes from the internal arrangement of their atoms. A crystal is made up of a repeating group of some atoms arranged in a fixed way. The smallest arrangement that repeats to make up the whole crystal is called the unit cell. In diamond, a convenient unit cell is four carbon atoms bonded to each other in a tetrahedral (pyramid-like) arrangement. Millions of copies of this unit cell together make up a diamond crystal. The unit cell of sodium chloride has a cubical shape: the chloride ions (Cl⁻) occupy the corners and face centres while the sodium ions (Na⁺) occupy the middle of the edges and centre of the cube. As this cube repeats itself, you get table salt.

The structure of all crystals thus follows two simple rules: have a unit cell and repeat it. Thus the internal structure of crystals is periodic. For example if a unit cell is 5 nanometres wide, it stands to reason you’ll see the same arrangement of atoms after every 5 nm. And because it’s the same unit cell in all directions and they don’t have any gaps between them, the unit cells fill the space available. It’s thus an exercise in tiling. For example, you can cover a floor of any shape completely with square or triangular tiles (you’ll just need to trim those at the edges). But you can’t do this with pentagonal tiles. If you do, the tiles will have gaps between them that other pentagonal tiles can’t fill.

Quasicrystals buck this pattern in a simple way: their unit cells are like pentagonal tiles. They repeat themselves but the resulting tiling isn’t periodic. There are no gaps in the crystal either because instead of each unit cell sitting just like the one to its left or right, the tiles sometimes slot themselves in by rotating by an angle. Thus rather than the crystal structure following a grid-like pattern, the unit cells seem to be ordered along curves. As a result, even though the structure may have an ordered set of atoms, it’s impossible to find a unit cell that by repeating itself in a straight line gives rise to the overall crystal. In technical parlance, the crystal is said to lack translational symmetry.

Such structures are called quasicrystals. They’re obviously not crystalline, because they lack a periodic arrangement of atoms. They aren’t amorphous either, like the haphazardly arranged atoms of glass. Quasicrystals are somewhere in between: their atoms are arranged in a fixed way, with different combinations of pentagonal, octagonal, and other tile shapes that are disallowed in regular crystals, and with the substance lacking a unit cell. Instead the tiles twist and turn within the structure to form mosaic patterns like the ones featured in Islamic architecture (see image at the top).

In the 1970s, Roger Penrose discovered a particularly striking quasicrystal pattern, since called the Penrose Tiling, composed of two ‘thin’ and ‘thick’ rhombi (depicted here in green and blue, respectively). Credit: Public domain

The discovery of quasicrystals in the early 1980s was a revolutionary moment in the history of science. It shook up what chemists believed a crystal should look like and what rules the unit cell ought to follow. The first quasicrystals that scientists studied were made in the lab, in particular aluminium-manganese alloys, and there was a sense that these unusual crystals didn’t occur in nature. That changed in the 1990s and 2000s when expeditions to Siberia uncovered natural quasicrystals in meteorites that had smashed into the earth millions of years ago. But even this discovery kept one particular question about quasicrystals alive: why do they exist? Both Al-Mn alloys and the minerals in meteorites form in high temperatures and extreme pressures. The question of their existence, more than just because they can, is a question about whether the atoms involved are forced to adopt a quasicrystal rather than a crystal structure. In other words, it asks if the atoms would rather adopt a crystal structure but don’t because their external conditions force them not to.


This post benefited from feedback from Adhip Agarwala.


Often a good way to understand the effects of extreme conditions on a substance is to use the tools of thermodynamics — the science of the conditions in which heat moves from one place to another. And in thermodynamics, the existential question can be framed like this, to quote from a June paper in Nature Physics: “Are quasicrystals enthalpy-stabilised or entropy-stabilised?”

Enthalpy-stabilised means the atoms of a quasicrystal are arranged in a way where they collectively have the lowest energy possible for that group. It means the atoms aren’t arranged in a less-than-ideal way forced by their external conditions but because the quasicrystal structure in fact is better than a crystal structure. It answers “why do quasicrystals exist?” with “because they want to, not just because they can”.

Entropy-stabilised goes the other way. That is: at 0 K (-273.15º C), the atoms would rather come together as a crystal because a crystal structure has lower energy at absolute zero. But as the temperature increases, the energy in the crystal builds up and forces the atoms to adjust where they’re sitting so that they can accommodate new forces. At some higher temperature, the structure becomes entropy-stabilised. That is, there’s enough disorder in the structure — like sound passing through the grid of atoms and atoms momentarily shifting their positions — that allows it to hold the ‘excess’ energy but at the same time deviate from the orderliness of a crystal structure. Entropy stabilisation answers “why do quasicrystals exist?” with “because they’re forced to, not because they want to”.

In materials science, the go-to tool to judge whether a crystal structure is energetically favourable is density functional theory (DFT). It estimates the total energy of a solid and from there scientists can compare competing phases and decide which one is most stable. If four atoms will have less energy arranged as a cuboid than as a pyramid at a certain temperature and pressure, then the cuboidal phase is said to be more favoured. The problem is DFT can’t be directly applied to quasicrystals because the technique assumes that a given mineral has a periodic internal structure. Quasicrystals are aperiodic. But because scientists are already comfortable with using DFT, they have tried to surmount this problem by considering a superunit cell that’s made up of a large number of atoms or by assuming that a quasicrystal’s structure, while being aperiodic in three dimensions, could be periodic in say four dimensions. But the resulting estimates of the solid’s energy have not been very good.

In the new Nature Physics paper, scientists from the University of Michigan, Ann Arbor, have reported a way around the no-unit-cell problem to apply DFT to estimate the energy of two quasicrystals. And they found that these quasicrystals are enthalpy-stabilised. The finding is a chemistry breakthrough because it raises the possibility of performing DFT on crystals without translational symmetry. Further, by showing that two real quasicrystals are enthalpy-stabilised, chemists may be forced to rethink why almost every other inorganic material does adopt a repeating structure. Crystals are no longer at the centre of the orderliness universe.

An electron diffraction pattern of an icosahedral holmium-magnesium-zinc quasicrystal reveals the arrangement of its atoms. Credit: Jgmoxness (CC BY-SA)

The team started by studying the internal structure of two quasicrystals using X-rays, then ‘scooped’ out five random parts for further analysis. Each of these scoops had 24 to 740 atoms. Second, the team used a modified version of DFT called DFT-FE. The computational cost of running DFT increases according to the cube of the number of atoms being studied. If studying four atoms with DFT requires X amount of computing power, 24 atoms would require about 216 times X and 740 atoms roughly 6.3 million times X. Instead the computational cost of DFT-FE scales as the square of the number of atoms, which makes a big difference. Continuing from the previous example, 24 atoms would require about 36 times X and 740 atoms roughly 34,000 times X. But even the lower computational cost of DFT-FE is still considerable. The researchers’ solution was to use GPUs — the processors originally developed to run complicated video games and today used to run artificial intelligence (AI) apps like ChatGPT.
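
Spelled out relative to the four-atom baseline (purely illustrative arithmetic, not numbers from the paper):

# Cost of DFT (~n^3) vs DFT-FE (~n^2), relative to a 4-atom calculation.
baseline = 4
for n in (24, 740):
    ratio = n / baseline
    print(f"{n:>4} atoms: DFT ~ {ratio**3:,.0f}x, DFT-FE ~ {ratio**2:,.0f}x")
# prints roughly: 24 atoms: DFT ~ 216x, DFT-FE ~ 36x
#                740 atoms: DFT ~ 6,331,625x, DFT-FE ~ 34,225x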

The team was able to calculate that the resulting energy estimates for a quasicrystal were off by no more than 0.3 milli-electron-volts (meV) per atom, considered acceptable. They also applied their technique to a known crystal, ScZn6, and confirmed that their estimate of its energy matched the known value (5-9 meV per atom). They were ready to go now.

When they applied DFT-FE to scandium-zinc and ytterbium-cadmium quasicrystals, they found clear evidence that they were enthalpy-stabilised. Each atom in the scandium-zinc quasicrystal had 23 meV less energy than if it had been part of a crystal structure. Similarly atoms in the ytterbium-cadmium quasicrystal had roughly 7 meV less each. The verdict was obvious: translational symmetry is not required for the most stable form of an inorganic solid.

A single grain of a scandium-zinc quasicrystal has 12 pentagonal faces. Credit: Yamada et al. (2016). IUCrJ

The researchers also explored why the ytterbium-cadmium quasicrystal is so much easier to make than the scandium-zinc quasicrystal. In fact the former was the world’s first two-element quasicrystal to be discovered, 25 years ago this year. The team broke down the total energy as the energy in the bulk plus energy on the surface, and found that the scandium-zinc quasicrystal has high surface energy.

This is important because in thermodynamics, energy is like cost. If you’re hungry and go to a department store, you buy the pack of biscuits that you can afford rather than wait until you have enough money to buy the most expensive one. Similarly, when there’s a hot mass of scandium-zinc as a liquid and scientists are slowly cooling it, the atoms will form the first solid phase they can access rather than wait until they have accumulated enough surface energy to access the quasicrystal phase. And the first phase they can access will be crystalline. On the other hand scientists discovered the ytterbium-cadmium quasicrystal so quickly because it has a modest amount of energy across its surface and thus when cooled from liquid to solid, the first solid phase the atoms can access is also the quasicrystal phase.

This is an important discovery: the researchers found that a phase diagram alone can’t be used to say which phase will actually form. Understanding the surface-energy barrier is also important, and could pave the way to a practical roadmap for scientists trying to grow crystals for specific applications.

The big question now is: what special bonding or electronic effects allow atoms to have order without periodicity? After Israeli scientist Dan Shechtman discovered quasicrystals in 1982, he didn’t publish his findings until two years later, after including some authors on his submission to improve its chances with a journal, because he thought he wouldn’t be taken seriously. This wasn’t a silly concern: Linus Pauling, one of the greatest chemists in the history of the subject, dismissed Shechtman’s work and called him a “quasi-scientist”. The blowback was so sharp and swift because chemists like Pauling, who had helped establish the science of crystal structures, were certain there was a way crystals could look and a way they couldn’t — and quasicrystals didn’t have the right look. But now, the new study has found that quasicrystals look perfect. Perhaps it’s crystals that need to explain themselves…

A not-so-random walk through random walks

By: VM
23 November 2024 at 06:54

Though I’ve been interested of late in the idea of random walks, I was introduced to the concept when, more than two decades ago, I stumbled across Conway’s Game of Life, the cellular automaton built by John Conway in 1970. A cellular automaton is a grid of cells in which each cell has a value depending on the values of its neighbours. The automaton simulates the evolution of the grid as the cells’ values change dynamically.

Langton’s ant was another popular cellular automaton and one of my favourites, too. One 2001 conference paper described it as “a simple … system with a surprisingly complex behaviour.” Here, a (virtual) ant starts off at some cell on the grid and moves into one of the four neighbouring squares (diagonal squares aren’t accessible), its turns determined by three rules:

(i) A cell can be either black or white in colour;

(ii) If the square is white when the ant moves into it, the colour is flipped, and the ant turns 90º clockwise and moves forward;

(iii) If the square is black, the colour is flipped, and the ant turns 90º counter-clockwise and moves forward.

As the ant moves across the grid in this way, the first hundred or so steps produce a symmetric pattern before chaos ensues. For the next 9,900 or so steps, an image devoid of any patterns comes into view. But after around 10,000 steps, there’s magic: the ant suddenly enters into a repetitive 104-step pattern that it continues until the end of time. You can run your own simulation and check.
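If you’d rather not take my word for it, here’s a minimal simulation of the three rules in Python (a sketch; the step count is an arbitrary choice):

```python
# Langton's ant on an unbounded grid. White cells are 0, black cells are 1.
# Headings cycle clockwise: up, right, down, left.
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def langtons_ant(steps: int = 11000):
    grid = {}          # sparse grid: cells not in the dict are white
    x = y = 0          # the ant's position
    heading = 0        # index into DIRS
    for _ in range(steps):
        if grid.get((x, y), 0) == 0:       # white: flip it, turn clockwise
            grid[(x, y)] = 1
            heading = (heading + 1) % 4
        else:                              # black: flip it, turn counter-clockwise
            grid[(x, y)] = 0
            heading = (heading - 1) % 4
        dx, dy = DIRS[heading]
        x, y = x + dx, y + dy
    return grid

cells = langtons_ant()
print(sum(cells.values()), "black cells after 11,000 steps")
```

Plot the black cells after ~10,500 steps and the ‘highway’ should be plainly visible.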

The path of a Langton’s ant. The repetitive pattern after ~10,000 steps is the ‘highway’ growing at the bottom. The location of the ant is shown in red. Credit: Krwarobrody and Ferkel/Wikimedia Commons

The march of the Langton’s ant before the repetitive portion has been described as a pseudorandom walk — a walk whose pattern appears random but whose next step is not quite random (because of the rules). In a truly random walk, the length of each step is fixed and the direction of each step is chosen at random from a fixed number of options.

If it sounds simple, it is, but you might be forgiven for thinking it’s only a mathematical flight of fancy. Random walks have applications in numerous areas, including econometrics, finance, biology, chemistry, and quantum physics.

The trajectory of a random walk after 25,000 steps. Credit: László Németh/Wikimedia Commons

Specific variants of the random walk behave in ways that closely match the properties of some complex system evolving in time. For example, in a Gaussian random walk, the direction of each step is random and the length of each step is sampled randomly from a Gaussian distribution (the classic example of a bell curve). Experts use the evolution of this walk to evaluate the risk exposure of investment portfolios.
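Here’s a minimal two-dimensional sketch of such a walk in Python (the step count and the Gaussian parameters are arbitrary choices):

```python
import math
import random

def gaussian_random_walk(n_steps: int = 25000, mu: float = 1.0, sigma: float = 0.25):
    """Each step: a uniformly random direction, a Gaussian-distributed step length."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        angle = random.uniform(0.0, 2.0 * math.pi)   # random direction
        length = random.gauss(mu, sigma)             # Gaussian step length
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

walk = gaussian_random_walk()
print("final displacement from the origin:", round(math.hypot(*walk[-1]), 1))
```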

The Lévy flight is a random walk with a small change: instead of the step length being determined by a random pick from the Gaussian distribution, it comes from any distribution with a heavy tail. One common example is the gamma distribution. Each such distribution can be tweaked with two parameters called κ (kappa) and θ (theta) to produce different plots on a graph, all with the same general properties. In the examples shown below, focus on the orange line (κ = 2, θ = 2): it shows a gamma distribution with a heavy tail.

Various gamma distributions for different values of κ and θ. Credit: MarkSweep and Cburnett/Wikimedia Commons, CC BY-SA 3.0

Put another way, the distribution has some large values but mostly small values. A Lévy flight is a random walk where the step length is sampled randomly from this distribution, and as a result has a few large steps and many small steps. Research has shown that the foraging path of animals looking for food that is scarce can be modelled as a Lévy flight: the large steps correspond to the long distances towards food sources that are located far apart and the short steps to finding food spread in a small area at each source.
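Switching to a Lévy-flight-style walk changes only how the step length is drawn. Here’s a sketch that uses the gamma (κ = 2, θ = 2) example above for the step lengths (every other choice is arbitrary):

```python
import math
import random

def levy_like_flight(n_steps: int = 1000, kappa: float = 2.0, theta: float = 2.0):
    """Uniformly random direction each step; step length drawn from a
    gamma(kappa, theta) distribution, so most steps are short and a few are long."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        angle = random.uniform(0.0, 2.0 * math.pi)
        length = random.gammavariate(kappa, theta)   # long-tailed step lengths
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = levy_like_flight()
print("longest single step:",
      round(max(math.dist(a, b) for a, b in zip(path, path[1:])), 1))
```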

A Lévy flight simulated for 1,000 steps. Credit: PAR/Wikimedia Commons

Perhaps the most famous ‘example’ of a random walk is Brownian motion; it isn’t a perfect example, however. Brownian motion — say, the path of a single atom over time in a gas of billions of atoms — can be described using a Lévy process. Whereas a random walk proceeds in discrete steps, a Lévy process is continuous; in other respects they are similar. The motion itself refers to the atom’s journey in some time period, frequently bumping into other atoms (depending on the gas’s density) and shifting its path in random ways.

The yellow circle depicts the motion of a larger particle in a container filled with smaller particles moving in random directions at different speeds. Credit: Francisco Esquembre, Fu-Kwun and lookang/Wikimedia Commons, CC BY-SA 3.0

Brownian motion in particular uses a type of Lévy process called the Wiener process, where the path evolves according to the following rules:

(i) Each increment of the process is independent of other (non-overlapping) increments;

(ii) How much the process changes over a period of time depends only on the duration of the period;

(iii) Increments in the process are randomly sampled from a Gaussian distribution;

(iv) The process has a statistical mean equal to zero;

(v) The process’s covariance between any two time points is equal to the lower variance at those two points (variance denotes how quickly the value of a variable is spreading out over time).

The path of the atom in the gas follows a Wiener process and is thus Brownian motion. The Wiener process has a wealth of applications across both the pure and the applied sciences. Just to name one: say there is a small particle — e.g. an ion — trapped in a cell. It can’t escape the cell except through a small opening. The Wiener process, which models the Brownian motion of the ion through the cell, can be used to estimate the average amount of time the ion will need to reach the opening and escape.
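Here’s a minimal sketch of that escape-time idea, stripped down to one dimension: a Wiener path starts at x₀ inside the interval (0, L) and we time how long it takes, on average, to hit either end. For a standard Wiener process the exact mean is x₀(L − x₀); the discretisation and trial counts below are arbitrary choices, and the Monte Carlo estimate should land close to that value.

```python
import random

def mean_escape_time(x0: float = 0.3, L: float = 1.0,
                     dt: float = 1e-4, trials: int = 2000) -> float:
    """Monte Carlo estimate of the mean first-passage time of a standard
    Wiener process started at x0 to leave the interval (0, L)."""
    total = 0.0
    sigma = dt ** 0.5                       # std of a Wiener increment over dt
    for _ in range(trials):
        x, t = x0, 0.0
        while 0.0 < x < L:
            x += random.gauss(0.0, sigma)
            t += dt
        total += t
    return total / trials

print("simulated mean escape time:", round(mean_escape_time(), 3))
print("exact value x0 * (L - x0) :", 0.3 * (1.0 - 0.3))
```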

Like random walks, Wiener processes can also be tweaked to produce models for different conditions. One example is the Brownian bridge, which arises when a Wiener process is limited to appear at the start of an interval and disappear at the end, with the start and end points fixed. A different, more visual way to put this is in terms of a graph with two y-axes and one x-axis. Say the point 0 is the start of the interval on the left y-axis and 1 is the end of the interval on the right y-axis. A Wiener process in the interval [0, 1] will be a ‘bridge’ that connects 0 and 1 in a path that follows Brownian motion.

A Brownian bridge pinned at the two endpoints of an interval. Credit: Zemyla/Wikimedia Commons, CC BY-SA 3.0

By analogy, a random bridge in the interval [0, 1] will be a random walk based on the Gaussian distribution between 0 and 1; a gamma random bridge in the interval [0, 1] will be a random walk based on the gamma distribution between 0 and 1; and so on. (This said, a Wiener process and a random walk are distinct: a Wiener process will play out the same way if the underlying grid is rotated by an arbitrary angle but a random walk won’t.)
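A standard way to build a Brownian bridge is to take a Wiener path W(t) on [0, 1], subtract t·W(1), and add the straight line between the two pinned values — a sketch (the endpoint values and the resolution are arbitrary choices):

```python
import random

def brownian_bridge(a: float = 0.0, b: float = 1.0, n: int = 1000):
    """Brownian bridge on [0, 1] pinned at a (start) and b (end):
    X(t) = a*(1 - t) + b*t + (W(t) - t*W(1))."""
    dt = 1.0 / n
    w, path_w = 0.0, [0.0]
    for _ in range(n):
        w += random.gauss(0.0, dt ** 0.5)   # Wiener increments
        path_w.append(w)
    w1 = path_w[-1]
    bridge = []
    for i, wt in enumerate(path_w):
        t = i / n                            # runs from 0.0 to exactly 1.0
        bridge.append(a * (1 - t) + b * t + (wt - t * w1))
    return bridge

bridge = brownian_bridge()
print(bridge[0], bridge[-1])   # the endpoints come out pinned at 0.0 and 1.0
```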

It’s a wonder of mathematics that it can discern recurring behaviours in such highly noisy systems and with its finite tools distil from them glimpses into their future. According to a 2020 preprint paper on arXiv, “Various random-walk-related models can be applied in different fields, which is of great significance to downstream tasks such as link prediction, recommendation, computer vision, semi-supervised learning, and network embedding.”

If some basic conditions are met, there are random walks out in the universe as well. In 2004, researchers estimated the Brownian velocity of the black hole at the Milky Way’s centre to be less than 1 km/s.

For a more mathematical example, in a ‘conventional’ random walk, after N steps the walker’s distance from the origin will be comparable to the square root of N. Further, it takes on average S² steps to travel a distance of S from the origin. For a long time, researchers believed this so-called S → S² scaling law could model almost any process in which a physical entity was moving from one location to another. The law captured the notion of how much a given distribution would spread out over time.
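The square-root law is easy to check empirically. Here’s a sketch that measures the root-mean-square distance from the origin for many independent two-dimensional walks (unit step length, uniformly random direction; the step and walker counts are arbitrary):

```python
import math
import random

def rms_distance(n_steps: int, walkers: int = 2000) -> float:
    """Root-mean-square distance from the origin after n_steps unit-length steps
    in random directions, averaged over many independent walkers."""
    total_sq = 0.0
    for _ in range(walkers):
        x = y = 0.0
        for _ in range(n_steps):
            angle = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(angle)
            y += math.sin(angle)
        total_sq += x * x + y * y
    return (total_sq / walkers) ** 0.5

for n in (100, 400, 1600):
    print(n, "steps:", round(rms_distance(n), 1), "vs sqrt(N) =", round(math.sqrt(n), 1))
```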

One of the earliest deviations from this law was fractals, where there is an S → S^β law but such that β is always greater than 2, implying that the walk spreads out more slowly, step for step, than an ordinary random walk does. (Factoid: a random walk on a fractal also gives rise to a fractal.)

A Sierpinski triangle fractal. Credit: Beojan Stanislaus/Wikimedia Commons, CC BY-SA 3.0

For yet another example, random walks have a famously deep connection to resistor networks: electric circuits where a bunch of resistors are connected in some configuration, plus a power source and a grounding. Researchers have found that the effective resistance between any two points in such a network is proportional to the expected time a random walker would take to travel from one point to the other and back again for the first time — the so-called commute time.
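Here’s a sketch of that correspondence on the simplest possible network: a chain of four nodes joined by three unit resistors (all choices here are illustrative). The effective resistance between the two ends is 3 ohms, the network has m = 3 edges, and the commute-time identity says a random walker’s average round trip between the ends should take 2·m·R = 18 steps.

```python
import random

def commute_time(n_nodes: int = 4, trials: int = 20000) -> float:
    """Average number of steps for a random walk on the path graph
    0 - 1 - ... - (n-1) to go from node 0 to node n-1 and back to 0."""
    total = 0
    for _ in range(trials):
        pos, steps, far_end = 0, 0, n_nodes - 1
        for goal in (far_end, 0):            # out to the far end, then back
            while pos != goal:
                if pos == 0:
                    pos = 1                  # endpoints have a single neighbour
                elif pos == far_end:
                    pos = far_end - 1
                else:
                    pos += random.choice((-1, 1))
                steps += 1
        total += steps
    return total / trials

print("simulated commute time     :", round(commute_time(), 1))
print("2 * edges * eff. resistance:", 2 * 3 * 3)
```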

A schematic diagram of an electrical circuit where the challenge is to determine the resistance to the flow of an electric current at each of the in-between nodes. Source: math.ucla.edu

The resistor model speaks to a beautiful thing random walks bring to light: the influence an underlying structure exerts on a stochastic process — one governed entirely by chance — playing out on that structure, its inherent geometry imposing unexpected limits on the randomness and keeping it from wheeling away into chaos. At each step the random walk makes an unpredictable choice but the big picture in which these steps are individual strokes is a picture of predictability, to some degree at least.

Flip this conclusion on its head and an even more captivating notion emerges: that though two random walks may resemble each other in their statistical properties, they can still be very different journeys.

PSA about Business Today

26 August 2024 at 03:46

If you get your space news from the website businesstoday.in, this post is for you. Business Today has published several articles over the last few weeks about the Starliner saga with misleading headlines and claims blown far out of proportion. I’d been putting off writing about them but this morning, I spotted the following piece:

[Screenshot of the Business Today piece in question]

Business Today has produced all these misleading articles in this format, resembling Instagram reels. This is more troubling because we know tidbits like this are more easily consumed and more likely to go viral by virtue of their uncomplicated content and simplistic message. Business Today has also been focusing its articles about the saga on Sunita Williams alone, as if the other astronauts don’t exist. This choice is obviously of a piece with Williams’s Indian heritage and Business Today’s intention to maximise traffic to its pages by publishing sensational claims about her experience in space. As I wrote before:

… in the eyes of those penning articles and headlines, “Indian-American” she is. They’re using this language to get people interested in these articles, and if they succeed, they’re effectively selling the idea that it’s not possible for Indians to care about the accomplishments of non-Indians, that only Indians’, and by extension India’s, accomplishments matter. … Calling Williams “Indian-American” is to retrench her patriarchal identity as being part of her primary identity — just as referring to her as “Indian origin” is to evoke her genetic identity…

But something more important than the cynical India connection is at work here: in these pieces, Business Today has been toasting it. This is my term for a shady media practice reminiscent of a scene in an episode of the TV show Mad Men, where Don Draper suggests Lucky Strike should advertise its cigarettes as being “toasted”. When someone objects that all cigarettes are toasted, Draper says they may well be, but by saying publicly that its cigarettes are toasted, Lucky Strike will set itself apart without doing anything new, without lying, without breaking any rules. It’s just a bit of psychological manipulation.

Similarly, Business Today has been writing about Williams as if she’s the only astronaut facing an extended stay in space (and suggesting in more subtle ways that this fate hasn’t befallen anyone before — whereas it has dozens of times), as if NASA statements concern only her health and not the health of the other astronauts she’s with, and as if what we’re learning about her difficulties in space constitutes new information.

None of this is false but it’s not true either. It’s toasted. Consider the first claim: “NASA has revealed that Williams is facing a critical health issue”:

* “NASA has revealed” — there’s nothing to reveal here. We already know microgravity affects various biochemical processes in the body, including the accelerated destruction of red blood cells.

* “Williams is facing” — No. Everyone in microgravity faces this. That’s why astronauts need to be very fit people, so their bodies can weather unanticipated changes for longer without suffering critical damage.

* “critical health issue” — Err, no. See above. Also, perhaps in a bid to emphasise this (faux) criticality, Business Today’s headline begins “3 million per second” and ends calling the number “disturbing”. You read it, this alarmingly big number is in your face, and you’re asked to believe it’s “disturbing”. But it’s not really a big number in context and certainly not worth any disturbance.

For another example, consider: “Given Williams’ extended mission duration, this accelerated red blood cell destruction poses a heightened risk, potentially leading to severe health issues”. Notice how Business Today doesn’t include three important details: how much of an extension amounts to a ‘bad’ level of extension, what the odds are of Williams (or her fellow Starliner test pilot Barry Wilmore) developing “health issues”, and whether these consequences are reversible. Including these details would deflate Business Today’s ‘story’, of course.

If Business Today is your, a friend’s, and/or a relative’s source of space news, please ask them to switch to any of the following instead for coverage and commentary that’s interesting without insulting your intelligence:

* SpaceNews

* Jeff Foust

* Marcia Smith

* Aviation Week

* Victoria Samson

* Jatan Mehta

* The Hindu Science

A spaceflight narrative unstuck

11 August 2024 at 03:43
“First, a clarification: Unlike in Gravity, the 2013 film about two astronauts left adrift after space debris damages their shuttle, Sunita Williams and Butch Wilmore are not stuck in space.”

This is the first line of an Indian Express editorial today, and frankly, it’s enough said. The idea that Williams and Wilmore are “stuck” or “stranded” in space just won’t die down because reports in the media — from The Guardian to New Scientist, from Mint to Business Today — repeatedly prop it up.

Why are they not “stuck”?

First: because “stuck” implies both that Boeing/NASA are denying the astronauts an opportunity to return and that the astronauts wish to return — neither of which is true. What was to be a shorter visit has become a longer sojourn.

This leads to the second answer: Williams and Wilmore are spaceflight veterans who were picked specifically to deal with unexpected outcomes, like what’s going on right now. If amateurs or space tourists had been picked for the flight and their stay at the ISS had been extended in an unplanned way, then the question of their wanting to return would arise. But even then we’d have to check if they’re okay with their longer stay instead of jumping to conclusions. If we didn’t, we’d be trivialising their intention and willingness to brave their conditions as a form of public service to their country and its needs. We should think about extending the same courtesy to Williams and Wilmore.

And this brings us to the third answer: The history of spaceflight — human or robotic — is the history of people trying to expect the unexpected and to survive the unexpectable. That’s why we have test flights and then we have redundancies. For example, after the Columbia disaster in 2003, part of NASA’s response was a new protocol: that astronauts flying in faulty space capsules could dock at the ISS until the capsule was repaired or a space agency could launch a new capsule to bring them back. So Williams and Wilmore aren’t “stuck” there: they’re practically following protocol.

For its upcoming Gaganyaan mission, ISRO has planned multiple test flights leading up to the human version. It’s possible this flight or subsequent ones could throw up a problem, causing the astronauts within to take shelter at the ISS. Would we accuse ISRO of keeping them “stuck” there or would we laud the astronauts’ commitment to the mission and support ISRO’s efforts to retrieve them safely?

Fourth: “stuck” or “stranded” implies a crisis, an outcome that no party involved in the mission planned for. It creates the impression human spaceflight (in this particular mission) is riskier than it actually is and produces false signals about the competencies of the people who planned the mission. It also erects unreasonable expectations about the sort of outcomes test flights can and can’t have.

In fact, the very reason the world has the ISS, and the reason NASA (and other agencies capable of human spaceflight) has its protocols, is so that this particular outcome — a crew capsule malfunctioning during a flight — needn’t be a crisis. Let’s respect that.

Finally: “Stuck” is an innocuous term, you say, something that doesn’t have to mean all that you’re making it out to be. Everyone knows the astronauts are going to return. Let it go.

Spaceflight is an exercise in control — about achieving it to the extent possible without also getting in the way of a mission and in the way of the people executing it. I don’t see why this control has to slip in the language around spaceflight.

The pitfalls of Somanath calling Aditya L1 a “protector”

By: VM
11 June 2024 at 04:11

In a WhatsApp group of which I’m a part, there’s a heated discussion going on around an article published by NDTV on June 10, entitled ‘Sun’s Fury May Fry Satellites, But India Has A Watchful Space Protector’. The article appeared after the Indian Space Research Organisation (ISRO) published images of the Sun that the Aditya L1 spacecraft’s instruments (including its coronagraph) captured during the May solar storm. The article also features quotes by ISRO chairman S. Somanath — and some of them in particular prompted the discussion. For example, he says:

“Aditya L1 captured when the Sun got angry this May. If it gets furious in the near future, as scientists suggest, India’s 24x7X365 days’ eye on the Sun is going to provide a forewarning. After all, we have to protect the 50-plus Indian satellites in space that have cost the country an estimated more than ₹ 50,000 crore. Aditya L1 is a celestial protector for our space assets.”

A space scientist on the group pointed out that any solar event that could fry satellites in Earth orbit would also fry Aditya L1, which is stationed at the first Earth-Sun Lagrange point (1.5 million km from Earth in the direction of the Sun), so it doesn’t make sense to describe this spacecraft as a “protector” of India’s “space assets”. Instead, the scientist said, we’re better off describing Aditya L1 as a science mission, which is what it’d been billed as.

Another space scientist in the same group contended that the coronagraph onboard Aditya L1, plus its other instruments, still give the spacecraft a not insignificant early-warning ability, using which ISRO could consider protective measures. He also said not all solar storms are likely to fry all satellites around Earth, only the very powerful ones; likewise, not all satellites around Earth are equally engineered to withstand solar radiation that is more intense than usual, to varying extents. With these variables in mind, he added, Aditya L1 — which is protected to a greater degree — could give ISRO folks enough head start to manoeuvre ‘weaker’ satellites out of harm’s way or prevent catastrophic failures. By virtue of being ISRO’s eyes on the Sun, then, he suggested Aditya L1 was a scientific mission that could also perform some, but not all, of the functions expected of a full-blown early warning system.

(For such a system vis-a-vis solar weather, he said the fourth or the fifth Earth-Sun Lagrange points would have been better stations.)

I’m putting this down here as a public service message. Characterising a scientific mission — which is driven by scientists’ questions, rather than ISRO’s perception of threats or as part of any overarching strategy of the Indian government — as something else is not harmless because it downplays the fact that we have open questions and that we need to spend time and money answering them. It also creates a false narrative about the mission’s purpose that the people who have spent years designing and building the instruments onboard Aditya L1 don’t deserve, and a false impression of how much room the Indian space programme currently has to launch and operate spacecraft that are dedicated to providing early warnings of bad solar weather.

To be fair, the NDTV article says in a few places that Aditya L1 is a scientific mission, as does astrophysicist Somak Raychaudhury in the last paragraph. It’s just not clear why Somanath characterised it as a “protector” and as a “space-based insurance policy”. NDTV also erred by putting “protector” in the headline (based on my experiences at The Wire and The Hindu, most readers of online articles read and share nothing more than the headline). That it was the ISRO chairman who said these things is more harmful: as the person heading India’s nodal space research body, he has a protagonist’s role in making room in the public imagination for the importance and wonders of scientific missions.

The BHU Covaxin study and ICMR bait

By: VM
28 May 2024 at 04:51

Earlier this month, a study by a team at Banaras Hindu University (BHU) in Varanasi concluded that fully 1% of Covaxin recipients may suffer severe adverse events. One percent is a large number because the multiplier (x in 1/100 * x) is very large — several million people. The study first hit the headlines for claiming it had the support of the Indian Council of Medical Research (ICMR) and reporting that both Bharat Biotech and the ICMR are yet to publish long-term safety data for Covaxin. The latter is probably moot now, with the COVID-19 pandemic well behind us, but it’s the principle that matters. Let it go this time and who knows what else we’ll be prepared to let go.

But more importantly, as The Hindu reported on May 25, the BHU study is too flawed to claim Covaxin is harmful, or claim anything for that matter. Here’s why (excerpt):

Though the researchers acknowledge all the limitations of the study, which is published in the journal Drug Safety, many of the limitations are so critical that they defeat the very purpose of the study. “Ideally, this paper should have been rejected at the peer-review stage. Simply mentioning the limitations, some of them critical to arrive at any useful conclusion, defeats the whole purpose of undertaking the study,” Dr. Vipin M. Vashishtha, director and pediatrician, Mangla Hospital and Research Center, Bijnor, says in an email to The Hindu. Dr. Gautam Menon, Dean (Research) & Professor, Departments of Physics and Biology, Ashoka University shares the same view. Given the limitations of the study one can “certainly say that the study can’t be used to draw the conclusions it does,” Dr. Menon says in an email.

Just because you’ve admitted your study has limitations doesn’t absolve you of the responsibility to interpret your research data with integrity. In fact, the journal needs to speak up here: why did Drug Safety publish the study manuscript? Too often when news of a controversial or bad study is published, the journal that published it stays out of the limelight. While the proximal cause is likely that journalists don’t think to ask journal editors and/or publishers tough questions about their publishing process, there is also a cultural problem here: when shit hits the fan, only the study’s authors are pulled up, but when things are rosy, the journals are out to take credit for the quality of the papers they publish. In either case, we must ask what they actually bring to the table other than capitalising on other scientists’ tendency to judge papers based on the journals they’re published in instead of their contents.

Of course, it’s also possible to argue that unlike, say, journalistic material, research papers aren’t required to be in the public interest at the time of publication. Yet the BHU paper threatens to undermine public confidence in observational studies, and that can’t be in anyone’s interest. Even at the outset, experts and many health journalists knew observational studies don’t carry the same weight as randomised controlled trials as well as that such studies still serve a legitimate purpose, just not the one to which its conclusions were pressed in the BHU study.

After the paper’s contents hit the headlines, the ICMR shot off a letter to the BHU research team saying it hasn’t “provided any financial or technical support” to the study and that the study is “poorly designed”. Curiously, the BHU team’s repartee to the ICMR’s letter makes repeated reference to Vivek Agnihotri’s film The Vaccine War. In the same point in which two of these references appear (no. 2), the team writes: “While a study with a control group would certainly be of higher quality, this immediately points to the fact that it is researchers from ICMR who have access to the data with the control group, i.e. the original phase-3 trials of Covaxin – as well publicized in ‘The Vaccine War’ movie. ICMR thus owes it to the people of India, that it publishes the long-term follow-up of phase-3 trials.”

I’m not clear why the team saw fit to appeal to statements made in this of all films. As I’ve written earlier, The Vaccine War — which I haven’t watched but which directly references journalistic work by The Wire during and of the pandemic — is most likely a mix of truths and fictionalisation (and not in the clever, good-faith ways in which screenwriters adapt textual biographies for the big screen), with the fiction designed to serve the BJP’s nationalist political narratives. So when the letter says in its point no. 5 that the ICMR should apologise to a female member of the BHU team for allegedly “spreading a falsehood” about her and offers The Vaccine War as a counterexample (“While ‘The Vaccine War’ movie is celebrating women scientists…”), I can’t but retch.

Together with another odd line in the letter — that the “ICMR owes it to the people of India” — the appeals read less like a debate between scientists on the merits and the demerits of the study and more like they’re trying to bait the ICMR into doing better. I’m not denying the ICMR started it, as a child might say, but saying that this shouldn’t have prevented the BHU team from keeping it dignified. For example, the BHU letter reads: “It is to be noted that interim results of the phase-3 trial, also cited by Dr. Priya Abraham in ‘The Vaccine War’ movie, had a mere 56 days of safety follow-up, much shorter than the one-year follow-up in the IMS-BHU study.” Surely the 56-day period finds mention in a more respectable and reliable medium than a film that confuses you about what’s real and what’s not?

In all, the BHU study seems to have been designed to draw attention to gaps in the safety data for Covaxin — but by adopting such a provocative route, all that took centerstage was its spat with the ICMR plus its own flaws.

The billionaire’s solution to climate change

By: VM
10 May 2024 at 14:14

On May 3, Bloomberg published a profile of Salesforce CEO Marc Benioff’s 1t.org project to plant or conserve one trillion trees around the world in order to sequester 200 gigatonnes of carbon. The idea reportedly came to Benioff from Thomas Crowther’s infamous September 2015 paper in Nature that claimed restoring trees was the world’s best way to ‘solve’ climate change.

Following pointed criticism of the paper’s attitude and conclusions, it was revised to a significant extent in October 2019 to temper its predictions about the carbon sequestration potential of the world’s trees and to withdraw its assertion that no other solution could work better than planting and/or restoring trees.

According to Bloomberg’s profile, Benioff’s 1t.org initiative seems to be faltering as well, with unreliable accounting of the pledges companies submitted to 1t.org and, unsurprisingly, many of these companies engaging in shady carbon-credit transactions. This is also why Jane Goodall’s comment in the article is disagreeable: it isn’t better for these companies to do something vis-à-vis trees than nothing at all because the companies are only furthering an illusion of climate action — claiming to do something while doing nothing at all — and perpetuating the currency of counterproductive ideas like carbon-trading.

A smattering of Benioff’s comments to Bloomberg are presented throughout the profile, as a result of which he might come across like a sage figure — but take them together, in one go, and he actually sounds like a child.

“I think that there’s a lot of people who are attacking nature and hate nature. I’m somebody who loves nature and supports nature.”

This comment follows one by “the climate and energy policy director at the Union of Concerned Scientists”, Rachel Cleetus, that trees “should not be seen as a substitute for the core task at hand here, which is getting off fossil fuels.” But in Bloomberg’s telling, Cleetus is a [checks notes] ‘nature hater’. Similarly, the following thoughtful comment is Benioff’s view of other scientists who criticised the Crowther et al. paper:

“I view it as nonsense.”

Moving on…

“I was in third grade. I learned about photosynthesis and I got it right away.”

This amazing quote appears as the last line of a paragraph; the rest of it goes thus: “Slashing fossil fuel consumption is critical to slowing warming, but scientists say we also need to pull carbon that’s already in the air back out of it. Trees are really good at that, drawing in CO2 and then releasing oxygen.” Then Benioff’s third-grade quote appears. It’s just comedy.

His other statements make for an important reminder of the oft-understated purpose of scientific communication. Aside from being published by a ‘prestige’ journal — Nature — the Crowther et al. paper presented an easy and straightforward solution to reducing the concentration of atmospheric carbon: to fix lots and lots of trees. Even without knowing the specific details of the study’s merits, any environmental scientist in South and Southeast Asia, Africa, and South America, i.e. the “Global South”, would have said this is a terrible idea.

“I said, ‘What? One trillion trees will sequester more than 200 gigatons of carbon? We have to get on this right now. Who’s working on this?’”

“Everybody agreed on tree diplomacy. I was in shock.”

“The greatest, most scalable technology we have today to sequester carbon is the tree.”

The countries in these regions have become sites of aggressive afforestation that provide carbon credits for the “Global North” to encash as licenses to keep emitting carbon. But the flip sides of these exercises are: (i) only some areas are naturally amenable to hosting trees, and it’s not feasible to plant them willy-nilly through ecosystems that don’t naturally support them; (ii) unless those in charge plant native species, afforestation will only precipitate local ecosystem decline, which will further lower the sequestration potential; (iii) unafforested land runs the risk of being perceived as ‘waste land’, sidelining the ecosystem services provided by wetlands, deserts, grasslands, etc.; and (iv) many of these countries need to be able to emit more carbon before being expected to reach net-zero, in order to pull their populations out of poverty and become economically developed — the same right the “Global North” countries had in the 19th and 20th centuries.

Scientists have known all this from well before the Crowther et al. paper turned up. Yet Benioff leapt for it the moment it appeared, and was keen on seeing it to its not-so-logical end. It’s impossible to miss the fact that his being worth $10 billion didn’t encourage him to use all that wealth and his clout to tackle the more complex actions in the soup of all actions that make up humankind’s response to climate change. Instead, he used his wealth to go for an easy way out, while dismissing informed criticism of it as “nonsense”.

In fact, a similar sort of ‘ease-seeking’ is visible in the Crowther et al. paper as well, as brought out in a comment published by Veldman et al. In response to this, Crowther et al. wrote in October 2019 that their first paper simply presented value-neutral knowledge and that it shouldn’t be blamed for how it’s been construed:

Veldman et al. (4) criticize our results in dryland biomes, stating that many of these areas simply should not be considered suitable for tree restoration. Generally, we must highlight that our analysis does not ever address whether any actions “should” or “should not” take place. Our analysis simply estimated the biophysical limits of global forest growth by highlighting where trees “can” exist.

In fact, the October 2019 correction to Crowther et al., in which the authors walked back on the “trees are the best way” claim, was particularly important because it has come to mirror the challenges Benioff has found himself facing through 1t.org: it isn’t just that there are other ways to improve climate mitigation and adaptation, it’s that those ways are required, and giving up on them for any reason could never be short of a moral hazard, if not an existential one.

Featured image credit: Dawid Zawiła/Unsplash.

The “coherent water” scam is back

By: VM
10 May 2024 at 14:05

On May 7, I received a press release touting a product called “coherent water” made by a company named Analemma Water India. According to the document, “coherent water” is based on more than “15 years of rigorous research and development” and confers “a myriad … health benefits”. This “rigorous research” is flawed research. There’s definitely such a thing as “coherent water” and it’s indistinguishable from regular water at all scales. The “coherent water” scam has reared its serpentine head before with the names “hexagonal water”, “structured water”, “polywater”, “exclusion zone water”, and water with one additional hydrogen and oxygen atom each, i.e. “H3O2”. Analemma’s “Mother Water”, which is its brand name for “coherent water”, itself is a rebranding of a product called “Somarka” that hit the Indian market in 2021.

The scam here is that the constituent molecules of “coherent water” get together to form hexagonal structures that persist indefinitely. And these structures distinguish “coherent water”, giving it wonderful abilities like possessing a greater energy content than regular water, boosting one’s “life force”, and — this one I love — being able to “encourage” other water molecules around it to form similar hexagonal assemblages.

I hope people won’t fall for this hoax but I know some will. But thanks to the price of the cheapest thing Analemma is offering — a vial of “Mother Water” that it claims is worth $180 (Rs 15,000) — it’ll be some rich buggers who do, and I think that’s okay. Fools, their wealth, and all that. Then again, it’s somewhat saddening that while (some) people are fighting to keep junk foods and bad medicines out of the market, we have “coherent water” companies and their PR outfits bravely broadcasting their press releases to news publications (and at least one publishing it) at around the same time.

If you’re curious about the issue with “coherent water”: At room temperature and pressure, the hydrogen atoms of water keep forming and breaking weak bonds with other hydrogen atoms. These bonds last for a very small duration and give water its high boiling point and ice crystals their characteristic hexagonal structure.

Sometimes water molecules organise themselves using these bonds into a hexagonal structure as well. But these formations are very short-lived because the hydrogen bonds last only around 200 quadrillionths of a second at a time, if not lower. According to the hoax, however, in “coherent water”, the hydrogen bonds continue to hold such that its water molecules persist in long-lived hexagonal clusters. But this conclusion is not supported by research — nor is the claim that, “When swirled in normal water, the [magic water] encourages chaotic and irregular H2O molecules to rearrange into the same liquid crystalline structure as the [magic water]. What’s more, the coherent structure is retained over time – this stability is unique to Analemma.”

I don’t think this ability is unique to the “Mother Water”. In 1963, a scientist named Felix Hoenikker invented a variant of ice that, when it came in contact with water cooler than 45.8º C, quickly converted it to ice-nine as well. Sadly Hoenikker had to abandon the project after he realised the continued use of ice-nine would simply destroy all life on Earth.

Anyway, water that’s neither acidic nor basic also has a few rare hydronium (H3O+) and hydroxide (OH-) ions floating around. The additional hydrogen ion — basically a proton — from the hydronium ion is engaged in a game of musical chairs with the protons in the same volume of water, each one jumping to a molecule, dislodging a proton there, which jumps to another molecule, and so on. This is happening so rapidly that the hydrogen atoms in every water molecule are practically being changed several thousand times every minute.

In this milieu, it’s impossible for a fixed group of water molecules to be hanging around. In addition, the ultra-short lifetime of the hydrogen bonds are what makes water a liquid: a thing that flows, fills containers, squeezes between gaps, collects into droplets, etc. Take this ability and the fast-switching hydrogen bonds away, as “coherent water” claims to do by imposing a fixed structure, and it’s no longer water — any kind of water.

Analemma has links to some reports on its website; if you’re up to it, I suggest going through them with a simple checklist of the signs of bad research side by side. You should be able to spot most of the gunk.
