
Lighting the way with Parrondo’s paradox

By: VM

In science, paradoxes often appear when familiar rules are pushed into unfamiliar territory. One of them is Parrondo’s paradox, a curious mathematical result showing that when two losing strategies are combined, they can produce a winning outcome. This might sound like trickery, but the paradox has deep connections to how randomness and asymmetry interact in the physical world. In fact, its roots can be traced back to a famous thought experiment explored by the US physicist Richard Feynman, who analysed whether one could extract useful work from random thermal motion. The link between Feynman’s thought experiment and Parrondo’s paradox demonstrates how chance can be turned into order when the conditions are right.

Imagine two games. Each game, when played on its own, is stacked against you. In one, the odds are slightly less than fair, e.g. you win 49% of the time and lose 51%. In another, the rules are even more complex, with the chances of winning and losing depending on your current position or capital. If you keep playing either game alone, the statistics say you will eventually go broke.

But then there’s a twist. If you alternate the games — sometimes playing one, sometimes the other — your fortune can actually grow. This is Parrondo’s paradox, proposed in 1996 by the Spanish physicist Juan Parrondo.
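The claim is easy to check with a quick simulation. The sketch below uses the commonly quoted version of Parrondo’s games (with a bias ε = 0.005): game A wins with probability 1/2 − ε, while game B’s winning probability depends on whether the current capital is a multiple of 3. Played alone, each game loses; randomly alternating between them wins.

```python
import random

def play(strategy, rounds=500_000, eps=0.005, seed=42):
    """Simulate Parrondo's games. Game A wins with probability 1/2 - eps.
    Game B wins with probability 1/10 - eps when capital is a multiple
    of 3 and 3/4 - eps otherwise. Each win pays +1, each loss costs 1.
    `strategy` is a function of the round number returning 'A' or 'B'."""
    rng = random.Random(seed)
    capital = 0
    for t in range(rounds):
        if strategy(t) == "A":
            p = 0.5 - eps
        elif capital % 3 == 0:
            p = 0.1 - eps
        else:
            p = 0.75 - eps
        capital += 1 if rng.random() < p else -1
    return capital

mix = random.Random(7)
only_a = play(lambda t: "A")              # drifts downward
only_b = play(lambda t: "B")              # also drifts downward
mixed = play(lambda t: mix.choice("AB"))  # the combination comes out ahead
```

The effect isn’t magic: alternating with game A changes how often game B is played from a capital that is a multiple of 3, which is enough to flip the overall drift from negative to positive.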

The answer to how combining losing games can result in a winning streak lies in how randomness interacts with structure. In Parrondo’s games, the rules are not simply fair or unfair in isolation; they have hidden patterns. When the games are alternated, these patterns line up in such a way that random losses become rectified into net gains.

Say there’s a perfectly flat surface in front of you. You place a small bead on it and then you constantly jiggle the surface. The bead jitters back and forth. Because the noise you’re applying to the bead’s position is unbiased, the bead simply wanders around in different directions on the surface. Now, say you introduce a switch that alternates the surface between two states. When the switch is ON, an ice-tray shape appears on the surface. When the switch is OFF, it becomes flat again. This ice-tray shape is special: the cups are slightly lopsided because there’s a gentle downward slope from left to right in each cup. At the right end, there’s a steep wall. If you’re jiggling the surface when the switch is OFF, the bead diffuses a little towards the left, a little towards the right, and so on. When you throw the switch to ON, the bead falls into the nearest cup. Because each cup is slightly tilted towards the right, the bead eventually settles near the steep wall there. Then you move the switch to OFF again.

As you repeat these steps with more and more beads over time, you’ll see they end up a little to the right of where they started. This is Parrondo’s paradox. The jittering motion you applied to the surface caused each bead to move randomly. The switch you used to alter the shape of the surface allowed you to expend some energy in order to rectify the beads’ randomness.
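This flashing-ratchet picture can be caricatured in a few lines of code. The numbers below (cup width, kick size, resting point) are arbitrary choices for illustration, not a physical model: while the potential is OFF the bead takes unbiased Gaussian kicks, and when it flashes ON the bead settles near the steep right wall of whichever unit-wide cup it landed in.

```python
import math
import random

def flashing_ratchet(n_beads=500, cycles=100, seed=1):
    """Toy flashing ratchet. OFF phase: each bead diffuses with an
    unbiased Gaussian kick. ON phase: each bead falls into the nearest
    unit-wide 'cup' [k, k+1) and settles by the steep right wall, at
    k + 0.9. Returns the beads' mean displacement after all cycles."""
    rng = random.Random(seed)
    positions = [0.9] * n_beads              # start at a cup's right wall
    for _ in range(cycles):
        for i in range(n_beads):
            x = positions[i] + rng.gauss(0.0, 0.4)   # OFF: free diffusion
            positions[i] = math.floor(x) + 0.9       # ON: settle rightward
    return sum(positions) / n_beads - 0.9
```

Although the kicks are symmetric, a bead near the right wall is far more likely to diffuse past the cup boundary on its right than to clear the full cup width on its left, so the mean displacement grows steadily positive.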

The reason why Parrondo’s paradox isn’t just a mathematical trick lies in physics. At the microscopic scale, particles of matter are in constant, jittery motion because of heat. This restless behaviour is known as Brownian motion, named after the botanist Robert Brown, who observed pollen grains dancing erratically in water under a microscope in 1827. At this scale, randomness is unavoidable: molecules collide, rebound, and scatter endlessly.

Scientists have long wondered whether such random motion could be tapped to extract useful work, perhaps to drive a microscopic machine. This was Feynman’s thought experiment as well, involving a device called the Brownian ratchet, a.k.a. the Feynman-Smoluchowski ratchet. The Polish physicist Marian Smoluchowski dreamt up the idea in 1912, and Feynman popularised it in a lecture 50 years later, in 1962.

Picture a set of paddles immersed in a fluid, constantly jolted by Brownian motion. A ratchet and pawl mechanism is attached to the paddles (see video below). The ratchet allows the paddles to rotate in one direction but not the other. It seems plausible that the random kicks from molecules would turn the paddles, which the ratchet would then lock into forward motion. Over time, this could spin a wheel or lift a weight.

In one of his famous physics lectures in 1962, Feynman analysed the ratchet. He showed that the pawl itself would also be subject to Brownian motion. It would jiggle, slip, and release under the same thermal agitation as the paddles. When everything is at the same temperature, the forward and backward slips would cancel out and no net motion would occur.

This insight was crucial: it preserved the rule that free energy can’t be extracted from randomness at equilibrium. If motion is to be biased in only one direction, there needs to be a temperature difference between different parts of the ratchet. In other words, random noise alone isn’t enough: you also need an asymmetry, or what physicists call nonequilibrium conditions, to turn randomness into work.

Let’s return to Parrondo’s paradox now. The paradoxical games are essentially a discrete-time abstraction of Feynman’s ratchet. The losing games are like unbiased random motion: fluctuations that on their own can’t produce net gain because the gains become cancelled out. But when they’re alternated cleverly, they mimic the effect of adding asymmetry. The combination rectifies the randomness, just as a physical ratchet can rectify the molecular jostling when a gradient is present.

This is why Parrondo explicitly acknowledged his inspiration from Feynman’s analysis of the Brownian ratchet. Where Feynman used a wheel and pawl to show how equilibrium noise can’t be exploited without a bias, Parrondo created games whose hidden rules provided the bias when they were combined. Both cases highlight a universal theme: randomness can be guided to produce order.

The implications of these ideas extend well beyond thought experiments. Inside living cells, molecular motors like kinesin and myosin actually function like Brownian ratchets. These proteins move along cellular tracks by drawing energy from random thermal kicks with the aid of a chemical energy gradient. They demonstrate that life itself has evolved ways to turn thermal noise into directed motion by operating out of equilibrium.

Parrondo’s paradox also has applications in economics, evolutionary biology, and computer algorithms. For example, alternating between two investment strategies, each of which is poor on its own, may yield better long-term outcomes if the fluctuations in markets interact in the right way. Similarly, in genetics, when harmful mutations alternate in certain conditions, they can produce beneficial effects for populations. The paradox provides a framework to describe how losing at one level can add up to winning at another.

Feynman’s role in this story is historical as well as philosophical. By dissecting the Brownian ratchet, he demonstrated how deeply the laws of thermodynamics constrain what’s possible. His analysis reminded physicists that intuition about randomness can be misleading and that only careful reasoning could reveal the real rules.

In 2021, a group of scientists from Australia, Canada, France, and Germany wrote in Cancers that the mathematics of Parrondo’s paradox could also illuminate the biology of cancerous tumours. Their starting point was the observation that cancer cells behave in ways that often seem self-defeating: they accumulate genetic and epigenetic instability, devolve into abnormal states, sometimes stop dividing altogether, and often migrate away from their original location and perish. Each of these traits looks like a “losing strategy” — yet cancers that use these ‘strategies’ together are often persistent.

The group suggested that the paradox arises because cancers grow in unstable, hostile environments. Tumour cells deal with low oxygen, intermittent blood supply, attacks by the immune system, and toxic drugs. In these circumstances, no single survival strategy is reliable. A population of only stable tumour cells would be wiped out when the conditions change. Likewise a population of only unstable cells would collapse under its own chaos. But by maintaining a mix, the group contended, cancers achieve resilience. Stable, specialised cells can exploit resources efficiently while unstable cells with high plasticity constantly generate new variations, some of which could respond better to future challenges. Together, the team continued, the cancer can alternate between the two sets of cells so that it can win.

The scientists also interpreted dormancy and metastasis of cancers through this lens. Dormant cells are inactive and can lie hidden for years, escaping chemotherapy drugs that are aimed at dividing cells. Once the drugs have faded, they restart growth. While a migrating cancer cell has a high chance of dying off, even one success can seed a tumour in a new tissue.

On the flip side, the scientists argued that cancer therapy can also be improved by embracing Parrondo’s paradox. In conventional chemotherapy, doctors repeatedly administer strong drugs, creating a strategy that often backfires: the therapy kills off the weak, leaving the strong behind — but in this case the strong are the very cells you least want to survive. By contrast, adaptive approaches that alternate periods of treatment with rest or that mix real drugs with harmless lookalikes could harness evolutionary trade-offs inside the tumour and keep it in check. Just as cancer may use Parrondo’s paradox to outwit the body, doctors may one day use the same paradox to outwit cancer.

On August 6, physicists from Lanzhou University in China published a paper in Physical Review E discussing just such a possibility. They focused on chemotherapy, which is usually delivered in one of two main ways. The first, called the maximum tolerated dose (MTD), uses strong doses given at intervals. The second, called low-dose metronomic (LDM), uses weaker doses applied continuously over time. Each method has been widely tested in clinics and each one has drawbacks.

MTD often succeeds at first by rapidly killing off drug-sensitive cancer cells. In the process, however, it also paves the way for the most resistant cancer cells to expand, leading to relapse. LDM on the other hand keeps steady pressure on a tumour but can end up either failing to control sensitive cells if the dose is too low or clearing them so thoroughly that resistant cells again dominate if the dose is too strong. In other words, both strategies can be losing games in the long run.

The question the study’s authors asked was whether combining these two flawed strategies in a specific sequence could achieve better results than deploying either strategy on its own. This is the sort of situation Parrondo’s paradox describes, even if not exactly: while the paradox is concerned with combining outright losing strategies, the study discussed combining two ineffective strategies.

To investigate, the researchers used mathematical models that treated tumors as ecosystems containing three interacting populations: healthy cells, drug-sensitive cancer cells, and drug-resistant cancer cells. They applied equations from evolutionary game theory that tracked how the fractions of these groups shifted in different conditions.

The models showed that in a purely MTD strategy, the resistant cells soon took over, and in a purely LDM strategy, the outcomes depended strongly on drug strength but still ended badly. But when the two schedules were alternated, the tumour behaved differently. The sensitive cells were suppressed but not eliminated, and their persistence prevented the resistant cells from proliferating quickly. The team also found that the healthy cells survived longer.

Of course, tumours are not well-mixed soups of cells; in reality they have spatial structure. To account for this, the team put together computer simulations where individual cells occupied positions on a grid; grew, divided or died according to fixed rules; and interacted with their neighbours. This agent-based approach allowed the team to examine how pockets of sensitive and resistant cells might compete in more realistic tissue settings.

Their simulations confirmed the earlier results. A therapeutic strategy that alternated between MTD and LDM schedules extended both the time before the resistant cells took over and the period during which the healthy cells dominated. When the model started with the LDM phase in particular, the sensitive cancer cells competed with the resistant cancer cells, and the MTD phase that followed applied even more pressure on the latter.

This is an interesting finding because it suggests that the goal of therapy may not always be to eliminate every sensitive cancer cell as quickly as possible but, paradoxically, that sometimes it may be wiser to preserve some sensitive cells so that they can compete directly with resistant cells and prevent them from monopolising the tumour. In clinical terms, alternating between high- and low-dose regimens may delay resistance and keep tumours tractable for longer periods.

Then again, this is cancer — the “emperor of all maladies” — and in silico evidence from a physics-based model is only the start. Researchers will have to test it in real, live tissue in animal models (or organoids) and subsequently in human trials. They will also have to assess which cancers, and which combinations of drugs for those cancers, stand to benefit more (or less) from the Parrondo’s paradox approach.

As Physics reported on August 6:

[University of London mathematical oncologist Robert] Noble … says that the method outlined in the new study may not be ripe for a real-world clinical setting. “The alternating strategy fails much faster, and the tumor bounces back, if you slightly change the initial conditions,” adds Noble. Liu and colleagues, however, plan to conduct in vitro experiments to test their mathematical model and to select regimen parameters that would make their strategy more robust in a realistic setting.

Sharks don’t do math

By: VM

From ’Sharks hunt via Lévy flights’, Physics World, June 11, 2010:

They were menacing enough before, but how would you feel if you knew sharks were employing advanced mathematical concepts in their hunt for the kill? Well, this is the case, according to new research, which has tracked the movement of these marine predators along with a number of other species as they foraged for prey in the Pacific and Atlantic oceans. The results showed that these animals hunt for food by alternating between Brownian motion and Lévy flights, depending on the scarcity of prey.

Animals don’t use advanced mathematical concepts. This statement encompasses many humans as well because it’s not a statement about intelligence but one about language and reality. You see a shark foraging in a particular pattern. You invent a language to efficiently describe such patterns. And in that language your name for the shark’s pattern is a Lévy flight. This doesn’t mean the shark is using a Lévy flight. The shark is simply doing what makes sense to it, but which we — in our own description of the world — call a Lévy flight.

The Lévy flight isn’t an advanced concept either. It’s a subset of a broader concept called the random walk. Say you’re on a square grid, like a chessboard. You’re standing on one square. You can move only one step at a time. You roll a four-sided die. Depending on the side it lands on, you step one square forwards, backwards, to the right or to the left. The path you trace over time is called a random walk because its shape is determined by the die roll, which is random.
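The die-roll walk described above takes only a few lines to express. A minimal sketch:

```python
import random

def grid_walk(steps, seed=0):
    """Simple random walk on a square grid: each 'die roll' picks one
    of four directions, and the walker moves one square that way."""
    rng = random.Random(seed)
    x = y = 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

Every step lands on a square adjacent to the previous one; only the sequence of die rolls makes the overall path unpredictable.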

The path traced by a random walk of 2,500 steps.

There are different kinds of walks depending on the rule that determines the choice of your next step. A Lévy flight is a random walk that varies both the direction of the next step and the length of the step. In the random walk on the chessboard, you took steps of fixed lengths: to the adjacent squares. In a Lévy flight, the direction of the next step is random and the length is picked at random from a Lévy distribution. This is what the distribution looks like:

The Lévy distribution’s probability density function for different values of the parameter c.

Notice how a small part of each curve (for different values of c in the distribution’s function) has high values and the majority has smaller values. When you pick your step length at random from, say, the red curve, you have higher odds of picking a smaller step length than a longer one. This means in a Lévy flight, most of the step lengths will be short but a small number of steps will be long. Thus the ‘flight’ looks like this:
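Sampling steps this way takes only a few lines. In the sketch below, a Pareto distribution (another heavy-tailed distribution, easier to sample from Python’s standard library) stands in for the Lévy distribution; the qualitative behaviour is the same: mostly short steps, occasionally a very long one.

```python
import math
import random

def levy_flight(steps, alpha=1.5, seed=0):
    """2-D Lévy-like flight: the direction of each step is uniformly
    random, and the step length is drawn from a heavy-tailed Pareto
    distribution (a stand-in for the Lévy distribution)."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        r = rng.paretovariate(alpha)          # mostly ~1, occasionally huge
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = x + r * math.cos(theta), y + r * math.sin(theta)
        path.append((x, y))
    return path
```

Plotting the path shows tight clusters of short steps connected by rare long jumps, the signature shape of a Lévy flight.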

Sharks and many other animals have been known to follow a Lévy flight when foraging. To quote from an older post:

Research has shown that the foraging path of animals looking for food that is scarce can be modelled as a Lévy flight: the large steps correspond to the long distances towards food sources that are located far apart and the short steps to finding food spread in a small area at each source.

Brownian motion is a more famous kind of random walk. It’s the name for the movement of an object that’s following the Wiener process. This means the object’s path needs to obey the following five rules (from the same post):

(i) Each increment of the process is independent of other (non-overlapping) increments;

(ii) How much the process changes over a period of time depends only on the duration of the period;

(iii) Increments in the process are randomly sampled from a Gaussian distribution;

(iv) The process has a statistical mean equal to zero;

(v) The process’s covariance between any two time points is equal to the lower variance at those two points (variance denotes how quickly the value of a variable is spreading out over time).

Thus Brownian motion models the movement of pollen grains in water, dust particles in the air, electrons in a conductor, and colloidal particles in a fluid, as well as the fluctuation of stock prices, the diffusion of molecules in liquids, and population dynamics in biology. That is, all these processes in disparate domains evolve at least in part according to the rules of the Wiener process.

Still doesn’t mean a shark understands what a Lévy flight is. By saying “sharks use a Lévy flight”, we also discard in the process how the shark makes its decisions — something worth learning about in order to make more complete sense of the world around us rather than force the world to make sense only in those ways we’ve already dreamt up. (This is all the more relevant now with #sharkweek just a week away.)

I care so much because metaphors are bridges between language and reality. Even if the statement “sharks employ advanced mathematical concepts” doesn’t feature a metaphor, the risk it represents hews close to one that stalks the use of metaphors in science journalism: the creation of false knowledge.

Depending on the topic, it’s not uncommon for science journalists to use metaphors liberally, yet scientists have not infrequently upbraided them for using the wrong metaphors in some narratives or for not alerting readers to the metaphors’ limits. This is not unfair: while I disagree with some critiques along these lines for being too pedantic, in most cases it’s warranted. As science philosopher Daniel Sarewitz put it in a 2012 article:

Most people, including most scientists, can acquire knowledge of the Higgs only through the metaphors and analogies that physicists and science writers use to try to explain phenomena that can only truly be characterized mathematically.

Here’s The New York Times: “The Higgs boson is the only manifestation of an invisible force field, a cosmic molasses that permeates space and imbues elementary particles with mass … Without the Higgs field, as it is known, or something like it, all elementary forms of matter would zoom around at the speed of light, flowing through our hands like moonlight.” Fair enough. But why “a cosmic molasses” and not, say, a “sea of milk”? The latter is the common translation of an episode in Hindu cosmology, represented on a spectacular bas-relief panel at Angkor Wat showing armies of gods and demons churning the “sea of milk” to produce an elixir of immortality.

For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

A metaphor is not the thing itself and shouldn’t be allowed to masquerade as such.

Just as well, there are important differences between becoming aware of something and learning it, and a journalist may require metaphors only to facilitate the former. Toeing this line also helps journalists tame the publics’ expectations of them.

Featured image credit: David Clode/Unsplash.

A not-so-random walk through random walks

By: VM

Though I’ve been interested of late in the idea of random walks, I was introduced to the concept when, more than two decades ago, I stumbled across Conway’s Game of Life, the cellular automaton built by John Conway in 1970. A cellular automaton is a grid of cells in which each cell has a value depending on the values of its neighbours. The automaton simulates the evolution of the grid as the cells’ values change dynamically.

Langton’s ant was a popular instance of the simulator and one of my favourites, too. One 2001 conference paper described it as “a simple … system with a surprisingly complex behaviour.” Here, a (virtual) ant starts off at a cell on the grid and moves step by step into one of the four neighbouring squares (diagonal squares aren’t accessible), its motion governed by three rules:

(i) A cell can be either black or white in colour;

(ii) If the square is white when the ant moves into it, the colour is flipped, and the ant turns 90º clockwise and moves forward;

(iii) If the square is black, the colour is flipped, and the ant turns 90º counter-clockwise and moves forward.

As the ant moves across the grid in this way, the first hundred or so steps produce a symmetric pattern before chaos ensues. For the next 9,900 or so steps, an image devoid of any patterns comes into view. But after around 10,000 steps, there’s magic: the ant suddenly enters into a repetitive 104-step pattern that it continues until the end of time. You can run your own simulation and check.
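The three rules translate directly into code. A minimal sketch (the grid is unbounded; only the set of black cells is stored):

```python
def langtons_ant(steps):
    """Langton's ant on an unbounded grid. Only black cells are stored.
    White square: flip it to black, turn 90° clockwise, step forward.
    Black square: flip it to white, turn 90° counter-clockwise, step forward."""
    black = set()
    x, y = 0, 0
    dx, dy = 0, 1                      # initially facing 'up'
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))
            dx, dy = -dy, dx           # turn counter-clockwise
        else:
            black.add((x, y))
            dx, dy = dy, -dx           # turn clockwise
        x, y = x + dx, y + dy
    return black, (x, y)
```

Running it for around 11,000 steps and plotting the black cells reveals the chaos-then-highway behaviour described above.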

The path of a Langton’s ant. The repetitive pattern after ~10,000 steps is the ‘highway’ growing at the bottom. The location of the ant is shown in red. Credit: Krwarobrody and Ferkel/Wikimedia Commons

The march of Langton’s ant before the repetitive portion has been described as a pseudorandom walk — a walk whose pattern appears random but whose next step is not quite random (because of the rules). In a truly random walk, the length of each step is fixed and the direction of each step is chosen at random from a fixed number of options.

If it sounds simple, it is, but you might be forgiven for thinking it’s only a mathematical flight of fancy. Random walks have applications in numerous areas, including econometrics, finance, biology, chemistry, and quantum physics.

The trajectory of a random walk after 25,000 steps. Credit: László Németh/Wikimedia Commons

Specific variants of the random walk behave in ways that closely match the properties of some complex system evolving in time. For example, in a Gaussian random walk, the direction of each step is random and the length of each step is sampled randomly from a Gaussian distribution (the classic example of a bell curve). Experts use the evolution of this walk to evaluate the risk exposure of investment portfolios.

The Lévy flight is a random walk with a small change: instead of the step length being determined by a random pick from the Gaussian distribution, it comes from any distribution with a heavy tail. One common example is the gamma distribution. Each such distribution can be tweaked with two parameters called κ (kappa) and θ (theta) to produce different plots on a graph, all with the same general properties. In the examples shown below, focus on the orange line (κ = 2, θ = 2): it shows a gamma distribution with a heavy tail.

Various gamma distributions for different values of κ and θ. Credit: MarkSweep and Cburnett/Wikimedia Commons, CC BY-SA 3.0

Put another way, the distribution has some large values but mostly small values. A Lévy flight is a random walk where the step length is sampled randomly from this distribution, and as a result has a few large steps and many small steps. Research has shown that the foraging path of animals looking for food that is scarce can be modelled as a Lévy flight: the large steps correspond to the long distances towards food sources that are located far apart and the short steps to finding food spread in a small area at each source.

A Lévy flight simulated for 1,000 steps. Credit: PAR/Wikimedia Commons

Perhaps the most famous ‘example’ of a random walk is Brownian motion; it isn’t a perfect example, however. Brownian motion can describe, say, the path of a single atom over time in a gas of billions of atoms by using a Lévy process. Whereas a random walk proceeds in discrete steps, a Lévy process is continuous; in other respects they are the same. The motion itself refers to the atom’s journey in some time period, frequently bumping into other atoms (depending on the gas’s density) and shifting its path in random ways.

The yellow circle depicts the motion of a larger particle in a container filled with smaller particles moving in random directions at different speeds. Credit: Francisco Esquembre, Fu-Kwun and lookang/Wikimedia Commons, CC BY-SA 3.0

Brownian motion in particular uses a type of Lévy process called the Wiener process, where the path evolves according to the following rules:

(i) Each increment of the process is independent of other (non-overlapping) increments;

(ii) How much the process changes over a period of time depends only on the duration of the period;

(iii) Increments in the process are randomly sampled from a Gaussian distribution;

(iv) The process has a statistical mean equal to zero;

(v) The process’s covariance between any two time points is equal to the lower variance at those two points (variance denotes how quickly the value of a variable is spreading out over time).

The path of the atom in the gas follows a Wiener process and is thus Brownian motion. The Wiener process has a wealth of applications across both the pure and the applied sciences. Just to name one: say there is a small particle — e.g. an ion — trapped in a cell. It can’t escape the cell except through a small opening. The Wiener process, which models the Brownian motion of the ion through the cell, can be used to estimate the average amount of time the ion will need to reach the opening and escape.
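The escape-time estimate can be sketched with a crude one-dimensional stand-in (the geometry and numbers here are invented for illustration): a particle does a discretised Wiener-process walk inside a cell with a reflecting wall at one end and an opening near the other, and we average the first-passage times over many trials.

```python
import random

def mean_escape_time(length=1.0, opening=0.2, dt=1e-3, trials=200, seed=3):
    """Average first-passage time for a Brownian particle started at 0
    inside [0, length], reflected at 0, escaping once it enters the
    region within `opening` of the far end. Increments are Gaussian
    with variance dt, as the Wiener process prescribes."""
    rng = random.Random(seed)
    exit_at = length - opening
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        while x < exit_at:
            x += rng.gauss(0.0, dt ** 0.5)
            x = abs(x)                 # reflecting wall at 0
            t += dt
        total += t
    return total / trials
```

For a unit diffusion coefficient, the theoretical mean first-passage time from 0 to a level a under reflection is a², so with these numbers the simulation should land near 0.8² = 0.64.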

Like random walks, Wiener processes can also be tweaked to produce models for different conditions. One example is the Brownian bridge, which arises when a Wiener process is limited to appear at the start of an interval and disappear at the end, with the start and end points fixed. A different, more visual way to put this is in terms of a graph with two y-axes and one x-axis. Say the point 0 is the start of the interval on the left y-axis and 1 is the end of the interval on the right y-axis. A Wiener process in the interval [0, 1] will be a ‘bridge’ that connects 0 and 1 in a path that follows Brownian motion.

A Brownian bridge pinned at the two endpoints of an interval. Credit: Zemyla/Wikimedia Commons, CC BY-SA 3.0

By analogy, a random bridge in the interval [0, 1] will be a random walk based on the Gaussian distribution between 0 and 1; a gamma random bridge in the interval [0, 1] will be a random walk based on the gamma distribution between 0 and 1; and so on. (This said, a Wiener process and a random walk are distinct: a Wiener process will play out the same way if the underlying grid is rotated by an arbitrary angle but a random walk won’t.)
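One standard construction of the Brownian bridge simply pins an ordinary Wiener path at both ends: generate W(t), then subtract the straight line that carries W(1) to the desired endpoint.

```python
import random

def brownian_bridge(n=1000, end=0.0, seed=5):
    """Sample a Brownian bridge on [0, 1] by generating a Wiener path W
    in n increments and pinning it: B(t) = W(t) - t*(W(1) - end), so
    that B(0) = 0 and B(1) = end exactly."""
    rng = random.Random(seed)
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += rng.gauss(0.0, (1.0 / n) ** 0.5)   # Wiener increment
        path.append(w)
    w1 = path[-1]
    return [wt - (i / n) * (w1 - end) for i, wt in enumerate(path)]
```

Between the pinned endpoints, the path still wiggles like Brownian motion; only the endpoint constraint has been imposed.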

It’s a wonder of mathematics that it can discern recurring behaviours in such highly noisy systems and with its finite tools distil from them glimpses into their future. According to a 2020 preprint paper on arXiv, “Various random-walk-related models can be applied in different fields, which is of great significance to downstream tasks such as link prediction, recommendation, computer vision, semi-supervised learning, and network embedding.”

If some basic conditions are met, there are random walks out in the universe as well. In 2004, researchers estimated the Brownian velocity of the black hole at the Milky Way’s centre to be less than 1 km/s.

For a more mathematical example, in a ‘conventional’ random walk, after N steps the walker’s distance from the origin will be comparable to the square root of N. Further, it takes on average S^2 steps to travel a distance of S from the origin. For a long time, researchers believed this so-called S → S^2 scaling law could model almost any process in which a physical entity was moving from one location to another. The law captured the notion of how much a given distribution would spread out over time.
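The square-root law is easy to check numerically: average the squared end-to-end distance over many independent lattice walks and compare against √N.

```python
import random

def rms_distance(n_steps, walkers=2000, seed=9):
    """Root-mean-square distance from the origin after n_steps of a
    2-D lattice random walk, averaged over many independent walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            x, y = x + dx, y + dy
        total += x * x + y * y
    return (total / walkers) ** 0.5
```

For this walk the mean squared displacement is exactly N, so rms_distance(100) should come out close to 10.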

One of the earliest deviations from this law came from fractals, where the law takes the form S → S^β with β always greater than 2, implying that a walker needs more steps to cover a given distance, i.e. it spreads more slowly than an ordinary random walk does. (Factoid: a random walk on a fractal also gives rise to a fractal.)

A Sierpinski triangle fractal. Credit: Beojan Stanislaus/Wikimedia Commons, CC BY-SA 3.0

For yet another example, random walks have a famously deep connection to resistor networks: electric circuits where a bunch of resistors are connected in some configuration, plus a power source and a grounding. Researchers have found that the effective voltage between any two points in the circuit is proportional to the time a random-walker would take to travel between those two points for the first time.
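A small experiment along these lines uses the simplest resistor network: a chain of unit resistors, which corresponds to a random walk on the integers. The classic result (consistent with the commute-time connection between walks and effective resistance) is that the mean time for a walker to first reach node k from node 0, with a reflecting barrier at 0, grows as k². A hypothetical sketch:

```python
import random

def mean_hitting_time(target, trials=400, seed=11):
    """Mean first-passage time of a random walk on the integer chain
    0-1-2-..., reflected at 0, from node 0 to node `target`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while pos != target:
            pos = 1 if pos == 0 else pos + rng.choice((-1, 1))
            steps += 1
        total += steps
    return total / trials
```

With target = 10, the estimate should hover around 10² = 100 steps, mirroring the way effective resistance along the chain grows linearly with distance.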

A schematic diagram of an electrical circuit where the challenge is to determine the resistance to the flow of an electric current at each of the in-between nodes. Source: math.ucla.edu

The resistor model speaks to a beautiful thing random walks bring to light: the influence an underlying structure exerts on a stochastic process — one governed entirely by chance — playing out on that structure, its inherent geometry imposing unexpected limits on the randomness and keeping it from wheeling away into chaos. At each step the random walk makes an unpredictable choice but the big picture in which these steps are individual strokes is a picture of predictability, to some degree at least.

Flip this conclusion on its head and an even more captivating notion emerges: that though two random walks may resemble each other in their statistical properties, they can still be very different journeys.
