
Why do quasicrystals exist?

By: VM
26 June 2025 at 07:04

Featured image: An example of zellij tilework in the Al Attarine Madrasa in Fes, Morocco (2012), with complex geometric patterns on the lower walls and a band of calligraphy above. Caption and credit: just_a_cheeseburger (CC BY)


‘Quasi’ means almost. It’s an unfair name for quasicrystals. These crystals exist in their own right. Their name comes from the internal arrangement of their atoms. A crystal is made up of a repeating group of some atoms arranged in a fixed way. The smallest arrangement that repeats to make up the whole crystal is called the unit cell. In diamond, a convenient unit cell is four carbon atoms bonded to each other in a tetrahedral (pyramid-like) arrangement. Millions of copies of this unit cell together make up a diamond crystal. The unit cell of sodium chloride has a cubical shape: the chloride ions (Cl-) occupy the corners and face centres while the sodium ions (Na+) occupy the middle of the edges and centre of the cube. As this cube repeats itself, you get table salt.

The structure of all crystals thus follows two simple rules: have a unit cell and repeat it. The internal structure of crystals is therefore periodic. For example, if a unit cell is 5 nanometres wide, it stands to reason you’ll see the same arrangement of atoms after every 5 nm. And because it’s the same unit cell in all directions, with no gaps between copies, the unit cells fill the space available. It’s thus an exercise in tiling. For example, you can cover a floor of any shape completely with square or triangular tiles (you’ll just need to trim those at the edges). But you can’t do this with pentagonal tiles: lay them flat and gaps open up between them that other pentagonal tiles can’t fill.
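To make “repeat it” concrete, here’s a minimal sketch of a one-dimensional ‘crystal’: a handful of atom positions inside a unit cell, copied over and over by whole-number translations. The numbers are made up for illustration.

```python
# A minimal sketch of translational symmetry in one dimension.
# The numbers are illustrative, not real crystal data.

cell_width = 5.0  # nm, as in the example above
atoms_in_cell = [0.0, 1.2, 3.7]  # atom positions within one unit cell (nm)

# Repeating the cell: every atom in the crystal is an in-cell position
# plus an integer number of cell widths.
crystal = [x + n * cell_width for n in range(4) for x in atoms_in_cell]
print(crystal)
# The pattern at position x repeats exactly at x + 5.0, x + 10.0, ...
```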

Quasicrystals buck this pattern in a simple way: their unit cells are like pentagonal tiles. They repeat themselves but the resulting tiling isn’t periodic. There are no gaps in the crystal either because, instead of each unit cell being just like the one on its left or right, the tiles sometimes slot themselves in by rotating by an angle. Thus, rather than the crystal structure following a grid-like pattern, the unit cells seem to be ordered along curves. As a result, even though the structure may have an ordered set of atoms, it’s impossible to find a unit cell that, by repeating itself in a straight line, gives rise to the overall crystal. In technical parlance, the crystal is said to lack translational symmetry.

Such structures are called quasicrystals. They’re obviously not crystalline, because they lack a periodic arrangement of atoms. They aren’t amorphous either, like the haphazardly arranged atoms of glass. Quasicrystals are somewhere in between: their atoms are arranged in a fixed way, with different combinations of pentagonal, octagonal, and other tile shapes that are disallowed in regular crystals, and with the substance lacking a unit cell. Instead the tiles twist and turn within the structure to form mosaic patterns like the ones featured in Islamic architecture (see image at the top).

In the 1970s, Roger Penrose discovered a particularly striking aperiodic pattern, since called the Penrose tiling, composed of two rhombi, a ‘thin’ one and a ‘thick’ one (depicted here in green and blue, respectively). Credit: Public domain

The discovery of quasicrystals in the early 1980s was a revolutionary moment in the history of science. It shook up what chemists believed a crystal should look like and what rules the unit cell ought to follow. The first quasicrystals that scientists studied were made in the lab, in particular aluminium-manganese alloys, and there was a sense that these unusual crystals didn’t occur in nature. That changed in the 1990s and 2000s, when expeditions to Siberia uncovered natural quasicrystals in meteorites that had smashed into the earth millions of years ago. But even this discovery kept one particular question about quasicrystals alive: why do they exist? Both Al-Mn alloys and the minerals in meteorites form at high temperatures and extreme pressures. So the question of their existence is about more than whether they can exist: it asks whether the atoms involved are forced to adopt a quasicrystal rather than a crystal structure. In other words, would the atoms rather adopt a crystal structure but don’t because their external conditions don’t let them?


This post benefited from feedback from Adhip Agarwala.


Often a good way to understand the effects of extreme conditions on a substance is to use the tools of thermodynamics — the science of the conditions in which heat moves from one place to another. In thermodynamics, the existential question can be framed like this, to quote from a June paper in Nature Physics: “Are quasicrystals enthalpy-stabilised or entropy-stabilised?” Enthalpy-stabilised means the atoms of a quasicrystal are arranged in a way where they collectively have the lowest energy possible for that group. The atoms aren’t arranged in a less-than-ideal way forced on them by their external conditions; the quasicrystal structure is in fact better than a crystal structure. It answers “why do quasicrystals exist?” with “because they want to, not just because they can”.

Entropy-stabilised goes the other way. That is: at 0 K (-273.15º C), the atoms would rather come together as a crystal because a crystal structure has lower energy at absolute zero. But as the temperature increases, the energy in the crystal builds up and forces the atoms to adjust where they’re sitting so that they can accommodate new forces. At some higher temperature, the structure becomes entropy-stabilised. That is, there’s enough disorder in the structure — like sound waves passing through the grid of atoms and atoms momentarily shifting their positions — for it to hold the ‘excess’ energy while deviating from the orderliness of a crystal structure. Entropy stabilisation answers “why do quasicrystals exist?” with “because they’re forced to, not because they want to”.
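One way to picture the two options is with the standard free-energy bookkeeping F = U − TS: the phase with the lower free energy is the stable one, and the entropy term S only starts to matter as the temperature T rises. Below is a toy comparison with invented numbers; it is not the paper’s calculation, just the logic of the two answers.

```python
# Toy illustration of enthalpy- vs entropy-stabilisation.
# F = U - T*S: the phase with the lower free energy F is the stable one.
# All numbers below are made up for illustration.

def free_energy(U, S, T):
    """Helmholtz free energy per atom (meV), with entropy in meV/K."""
    return U - T * S

crystal = dict(U=0.0, S=0.10)        # lower energy, lower entropy
quasicrystal = dict(U=20.0, S=0.25)  # higher energy, higher entropy

for T in (0, 50, 100, 150, 200):
    Fc = free_energy(crystal["U"], crystal["S"], T)
    Fq = free_energy(quasicrystal["U"], quasicrystal["S"], T)
    stable = "crystal" if Fc < Fq else "quasicrystal"
    print(f"T = {T:3d} K: F_crystal = {Fc:6.1f}, F_quasicrystal = {Fq:6.1f} -> {stable}")

# An entropy-stabilised quasicrystal wins only above some crossover temperature;
# an enthalpy-stabilised one (with U_quasicrystal < U_crystal) wins even at T = 0.
```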

In materials science, the go-to tool to judge whether a crystal structure is energetically favourable is density functional theory (DFT). It estimates the total energy of a solid, and from there scientists can compare competing phases and decide which one is most stable. If four atoms will have less energy arranged as a cuboid than as a pyramid at a certain temperature and pressure, then the cuboidal phase is said to be more favoured. The problem is that DFT can’t be directly applied to quasicrystals because the technique assumes that a given mineral has a periodic internal structure. Quasicrystals are aperiodic. But because scientists are already comfortable with using DFT, they have tried to surmount this problem by considering a superunit cell made up of a large number of atoms, or by assuming that a quasicrystal’s structure, while being aperiodic in three dimensions, could be periodic in, say, four dimensions. But the resulting estimates of the solid’s energy have not been very good.
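The ‘periodic in higher dimensions’ trick is easier to see in miniature. The sketch below uses the standard cut-and-project construction, not anything from the paper: it generates the Fibonacci chain, a one-dimensional sequence of long and short intervals obtained by slicing a perfectly periodic two-dimensional lattice at an irrational angle. The result is ordered but never repeats.

```python
# Cut-and-project sketch: a periodic 2D square lattice, sliced along a line
# of irrational slope (set by the golden ratio), projects to an aperiodic
# 1D chain (the Fibonacci chain). This is the idea behind treating a 3D
# aperiodic structure as a slice of a higher-dimensional periodic one.
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio; its irrationality is the key

# Letter n is "L" (long) or "S" (short) depending on a floor-function test,
# equivalent to checking which lattice points fall inside the sloped strip.
word = "".join(
    "L" if math.floor((n + 1) / phi) - math.floor(n / phi) else "S"
    for n in range(1, 34)
)
print(word)  # LSLLSLSLLSLLS... : ordered, but never periodic
```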

In the new Nature Physics paper, scientists from the University of Michigan, Ann Arbor, have reported a way around the no-unit-cell problem to apply DFT to estimate the energy of two quasicrystals. And they found that these quasicrystals are enthalpy-stabilised. The finding is a chemistry breakthrough because it raises the possibility of performing DFT on solids without translational symmetry. Further, by showing that two real quasicrystals are enthalpy-stabilised, it may force chemists to rethink why almost every other inorganic material does adopt a repeating structure. Crystals are no longer at the centre of the orderliness universe.

An electron diffraction pattern of an icosahedral holmium-magnesium-zinc quasicrystal reveals the arrangement of its atoms. Credit: Jgmoxness (CC BY-SA)

The team started by studying the internal structure of two quasicrystals using X-rays, then ‘scooped’ out five random parts for further analysis. Each of these scoops had 24 to 740 atoms. Second, the team used a modified version of DFT called DFT-FE. The computational cost of running DFT scales as the cube of the number of atoms being studied: if studying one atom with DFT requires X amount of computing power, 24 atoms would require roughly 14,000 times X and 740 atoms roughly 400 million times X. The computational cost of DFT-FE instead scales as the square of the number of atoms, which makes a big difference: 24 atoms would require about 580 times X and 740 atoms about half a million times X. But even the lower computational cost of DFT-FE is considerable. The researchers’ solution was to use GPUs — the processors originally developed to run complicated video games and today used to run artificial intelligence (AI) apps like ChatGPT.
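The arithmetic above is easy to reproduce. Assuming a nominal per-atom baseline cost X, the cubic-versus-quadratic gap looks like this:

```python
# Back-of-the-envelope scaling from the paragraph above: if simulating one
# atom costs X, compare a method that scales as N**3 with one that scales
# as N**2 for the smallest and largest 'scoops'.
for n_atoms in (24, 740):
    print(f"{n_atoms:4d} atoms: "
          f"cubic ~ {n_atoms**3:>12,} X, "
          f"quadratic ~ {n_atoms**2:>8,} X")
#   24 atoms: cubic ~       13,824 X, quadratic ~      576 X
#  740 atoms: cubic ~  405,224,000 X, quadratic ~  547,600 X
# i.e. roughly 400 million X versus half a million X for 740 atoms.
```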

The team calculated that the resulting energy estimates for a quasicrystal were off by no more than 0.3 milli-electron-volt (meV) per atom, considered acceptable. They also applied their technique to a known crystal, ScZn6, and confirmed that its estimate of the energy matched the known value (to within 5-9 meV per atom). They were ready to go now.

When they applied DFT-FE to scandium-zinc and ytterbium-cadmium quasicrystals, they found clear evidence that both were enthalpy-stabilised. Each atom in the scandium-zinc quasicrystal had 23 meV less energy than if it had been part of a crystal structure. Similarly, atoms in the ytterbium-cadmium quasicrystal had roughly 7 meV less each. The verdict was obvious: translational symmetry is not required for the most stable form of an inorganic solid.

A single grain of a scandium-zinc quasicrystal has 12 pentagonal faces. Credit: Yamada et al. (2016). IUCrJ

The researchers also explored why the ytterbium-cadmium quasicrystal is so much easier to make than the scandium-zinc quasicrystal. In fact, the former was the world’s first two-element quasicrystal to be discovered, 25 years ago this year. The team broke down the total energy into the energy in the bulk plus the energy on the surface, and found that the scandium-zinc quasicrystal has a high surface energy.

This is important because in thermodynamics, energy is like cost. If you’re hungry and go to a department store, you buy the pack of biscuits you can afford rather than wait until you have enough money to buy the most expensive one. Similarly, when scientists slowly cool a hot liquid mass of scandium-zinc, the atoms will form the first solid phase they can access rather than wait until they can surmount the surface-energy barrier to the quasicrystal phase. And the first phase they can access will be crystalline. The ytterbium-cadmium quasicrystal, on the other hand, was discovered so quickly because its surface energy is modest: when the liquid is cooled, the first solid phase the atoms can access is also the quasicrystal phase.
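A textbook way to make the biscuit analogy quantitative is classical nucleation theory, which is a standard model and not the paper’s own calculation: the free-energy barrier to forming a droplet of a new phase grows as the cube of its surface energy, so a high-surface-energy phase appears far more slowly even when its bulk is favourable.

```python
# Classical nucleation theory sketch (a textbook model, not the paper's
# calculation): the barrier to nucleating a droplet of a new phase is
# dG(r) = 4*pi*r**2*gamma - (4/3)*pi*r**3*dg, whose peak grows as gamma**3.
import math

def barrier(gamma, dg):
    """Peak of dG(r), i.e. the nucleation barrier, in toy units."""
    return 16 * math.pi * gamma**3 / (3 * dg**2)

dg = 1.0  # bulk free-energy gain per unit volume (illustrative)
for name, gamma in [("low-surface-energy phase", 0.5),
                    ("high-surface-energy phase", 1.0)]:
    print(f"{name}: nucleation barrier ~ {barrier(gamma, dg):.2f}")
# Doubling the surface energy multiplies the barrier by 8: the melt
# solidifies into whichever phase it can nucleate first.
```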

This is an important discovery: the researchers found that a phase diagram alone can’t be used to say which phase will actually form. Understanding the surface-energy barrier is just as important, and could give scientists trying to grow crystals for specific applications a practical roadmap.

The big question now is: what special bonding or electronic effects allow atoms to have order without periodicity? After Israeli scientist Dan Shechtman discovered quasicrystals in 1982, he sat on his findings for two years because he thought he wouldn’t be taken seriously, and added some co-authors to his submission to improve its chances with a journal. This wasn’t a silly concern: Linus Pauling, one of the greatest chemists in the history of the subject, dismissed Shechtman’s work and called him a “quasi-scientist”. The blowback was so sharp and swift because chemists like Pauling, who had helped establish the science of crystal structures, were certain there was a way crystals could look and a way they couldn’t — and quasicrystals didn’t have the right look. But now the new study has found that quasicrystals look perfect. Perhaps it’s crystals that need to explain themselves…

Neural network supercharges model’s ability to predict phase transitions

By: VM
26 January 2025 at 15:37

Place a pot of water on the stove and light the fire. Once the temperature in the pot reaches 100º C or so, the water will boil to vapour. This is an example of a phase transition that occurs every day in our houses. Yet scientists have difficulty predicting whether a bunch of water molecules, like in the pot, will be liquid or gaseous in a given set of conditions.

This difficulty is different from your everyday experience with the pot on the stove: it has to do with the model a computer can simulate to predict the phase of a group of interacting particles. Models that can make these predictions efficiently are prized in the study of wet surfaces, porous materials, microfluidics, and biological cells. They can also reveal ‘hidden’ phenomena we may not notice at the macroscopic level, i.e. just by looking at the water boil, and which scientists can use to make sense of other things and/or come up with new applications.

Remember your high school practicals notebook? For each experiment, you had to spell out sections called “given”, “to find”, “apparatus”, “methods”, and “results”. A model is an “apparatus” — a computer program — that uses the “given” (some input data) and certain “methods” (model parameters) to generate “results”. For example, a model could show how a fluid with certain properties, like air, flowing around a spherical obstacle in its path, like a big rock, leads to the formation of vortices.

A popular “method” that models use to predict a phase transition is called classical density functional theory (DFT). Say there are a bunch of particles in a container. These particles can be the atoms of air, molecules of water, whatever the smallest unit of the substance is that you’re studying. Every three-dimensional distribution of these particles has a quantity called the free energy associated with it, and the free-energy functional is what calculates it. (Functionals and functions are the same thing except functionals can also accept functions as inputs.) The free-energy functional takes as input how the density of the particles is distributed in three dimensions and returns the system’s total free energy.

Classical DFT is a way to find the equilibrium state of a system — when it’s reached a stable state where its macroscopic properties don’t change and it doesn’t exchange energy with its surroundings — by minimising the system’s free energy.

A model can thus simulate a group of particles in a container, varying their distribution until it finds the one for which the free-energy functional returns the lowest value — that is, the distribution in which the system is at its lowest energy. “Once [the free-energy functional] is specified, consistent and complete investigation of a wide variety of properties can be made,” the authors of a paper published in the journal Physical Review X on January 24 wrote.
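As a rough illustration of that loop, here is a toy classical-DFT calculation. The functional used (an ideal-gas term plus a crude mean-field attraction) and all parameters are stand-ins, not the paper’s model; the point is the shape of the procedure: guess a density, evaluate the functional, and update towards lower free energy.

```python
# Minimal classical-DFT sketch: find the equilibrium density of particles
# in a container by iterating towards the distribution that minimises a
# model free-energy functional. The functional here is a toy stand-in.
import numpy as np

kT, N = 1.0, 50.0
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
V_ext = 0.5 * x**2   # external potential: a harmonic 'container' (arbitrary)
a = -0.02            # mean-field attraction strength (toy value)

rho = np.full_like(x, N / (x[-1] - x[0]))  # initial guess: uniform density
for _ in range(200):
    # Mean-field potential felt by each particle due to all the others.
    V_mf = a * np.convolve(rho, np.exp(-x**2), mode="same") * dx
    trial = np.exp(-(V_ext + V_mf) / kT)
    trial *= N / (trial.sum() * dx)        # keep the particle number fixed
    rho = 0.9 * rho + 0.1 * trial          # damped Picard iteration

# Evaluate the toy functional at the converged density.
F = (kT * rho * (np.log(rho) - 1) + rho * V_ext + 0.5 * rho * V_mf).sum() * dx
print(f"equilibrium free energy of the toy functional: {F:.3f}")
```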

While this sounds simple, the problem is that determining the free-energy functional becomes more difficult the more particles there are. And only once the functional has been determined can the model check when its value is lowest. This is why a model using classical DFT to determine the properties of a liquid at a specific temperature and pressure, say, will struggle.

In the January 24 study in Physical Review X, scientists from the University of Bayreuth and the University of Bristol made an advance in this process when they replaced the free-energy functional with a neural network that had been trained on simulations of particles-in-a-container in a variety of conditions (e.g. changing the pressure and temperature across a range of values), then used it to model a realistic fluid.

From the abstract of the paper:

Local learning of the one-body direct correlation functional is based on Monte Carlo simulations of inhomogeneous systems with randomized thermodynamic conditions, randomized planar shapes of the external potential, and randomized box sizes.

Monte Carlo simulations are quite cool. You set up a computer to simulate, say, a poker game with five players. As the game progresses, you ask the computer to take a snapshot of the game at some point and save it. This snapshot has information about each player’s cards, the decisions they made in the previous round (fold, call, or raise), the stakes, and the cards on the table. Once the game ends, you rerun the simulation, each time freshly randomising the cards handed out to the players. Then again, at some point during the game, the computer takes a snapshot and saves it.

Once the computer has done this a few thousand times, you collect all the snapshots and share them with someone who doesn’t know poker. Based on the snapshots alone, they can learn how the game works. The more snapshots there are, the finer their understanding will be. Very simply speaking, this is how a Monte Carlo simulation operates.

The researchers generated data for the neural network to train on by running around 900 Monte Carlo simulations of “inhomogeneous systems with randomized thermodynamic conditions [including temperature], randomized planar shapes of the external potential, and randomized box sizes”. (The external potential refers to some energy field applied across the system, giving each of the particles inside some potential energy.) Then they used their classical DFT model with the “neural functional” to study a truncated Lennard-Jones system.
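For a sense of what that data-generation loop might look like, here is a sketch shrunk down to one dimension: Metropolis Monte Carlo of particles with a truncated Lennard-Jones interaction, with the temperature, box size, and external potential randomised per run, and a density-profile snapshot saved from each. Every parameter here is illustrative; the paper’s simulations are far more thorough.

```python
# Sketch of the training-data loop the abstract describes, shrunk to 1D.
# All parameters are illustrative, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

def lj_truncated(r, r_cut=2.5):
    """Truncated-and-shifted Lennard-Jones pair potential (epsilon = sigma = 1)."""
    r = np.clip(r, 0.8, None)  # crude guard against the r -> 0 singularity
    u = 4 * (r**-12 - r**-6)
    return np.where(r < r_cut, u - 4 * (r_cut**-12 - r_cut**-6), 0.0)

def total_energy(xs, box, v_ext):
    dr = np.abs(xs[:, None] - xs[None, :])
    dr = np.minimum(dr, box - dr)  # periodic boundary conditions
    pair = lj_truncated(dr)[np.triu_indices(len(xs), k=1)].sum()
    return pair + v_ext(xs).sum()

snapshots = []
for _ in range(10):                      # the paper used ~900 such runs
    kT = rng.uniform(0.7, 1.5)           # randomised thermodynamic condition
    box = rng.uniform(8.0, 16.0)         # randomised box size
    amp, k = rng.uniform(0, 1), rng.integers(1, 4)
    V_ext = lambda x: amp * np.sin(2 * np.pi * k * x / box)  # random planar potential
    xs = rng.uniform(0, box, size=20)
    E = total_energy(xs, box, V_ext)
    for step in range(5000):             # Metropolis sampling
        i = rng.integers(len(xs))
        old = xs[i]
        xs[i] = (old + rng.normal(0, 0.3)) % box
        E_new = total_energy(xs, box, V_ext)
        if rng.random() > np.exp(-(E_new - E) / kT):
            xs[i] = old                  # reject the trial move
        else:
            E = E_new
    hist, _ = np.histogram(xs, bins=50, range=(0, box))
    snapshots.append((kT, box, hist))    # one training 'snapshot' per run
```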

Scientists have previously combined machine learning with classical DFT models to study particles moving randomly, interacting with each other only when they collide. Real fluids aren’t so simple, however. Their behaviour is more closely modelled as a Lennard-Jones system: the particles in a container repel each other at very short distances, attract each other across intermediate distances, and at larger distances have no effect on each other. As the researchers wrote in their paper:

… understanding the physics in such a simple model, which encompasses both repulsive and attractive interparticle interactions, provides a basis for understanding the occurrence of the same phenomena that arise in more complex fluids.

They also added that:

… recent investigations did not address the fundamental issue of how the presence of a phase transition might be accounted for within the framework of a neural density functional.

So they set about studying a truncated Lennard-Jones system with a phase transition. Their model started by predicting how the particles are distributed, the overall system’s thermodynamic properties, the conditions in which liquid and gaseous phases coexist in the container, and the particles’ behaviour at interfaces, like evaporating from the surface of a hard wall. Then, the researchers wrote:

… we focus on the liquid-gas transition which is a basic manifestation of the presence of interparticle attraction and seek to assess whether the neural functional can describe (i) phase coexistence and the approach to the associated critical point, (ii) surface tension and density profiles of the liquid-gas interface, (iii) drying and capillary evaporation transitions that occur at subcritical temperatures, and (iv) how accurately the approach performs for both bulk and interfacial properties.

(Emphasis in the original.)

So could the neural functional describe i-iv?

The answer is emphatically yes.

In fact, the model was able to accurately predict phase transitions even when it was trained only on supercritical states — i.e. states above the critical temperature, where the liquid and gas phases are no longer distinct. The researchers singled this ability of the model out for special praise, calling it “one of the most striking results”.

Source: Phys. Rev. X 15, 011013 (2025)

This plot, generated by the model, shows the states of a truncated Lennard-Jones fluid with density on the x-axis and temperature on the y-axis. In the red areas, the substance — the collection of particles in the box — is either liquid or gaseous. In the blue areas, the liquid and gaseous phases become separated. The intensity of the colour denotes the substance’s bulk modulus, i.e. how much it resists being compressed at a fixed temperature, from dark blue at the lower end to dark red at the upper.

Overall, the researchers wrote that their “neural functional approach” is distinguished by the fact that “the range of phenomena and results it can describe … far exceed the information provided during training.” They attribute this ability to the information contained in a “single numerical object” that the neural network was tuned to track: c1(r; [ρ], T), a.k.a. the one-body direct correlation functional. This functional describes how the density of particles inside the container varies in response to the external potential. As they put it:

Inputting only Monte Carlo training data of one-body profiles in planar geometry and then examining c1(r; [ρ], T) through the functional lens provides access to quantities which could not be obtained directly from the input data. Indeed, determining these usually requires advanced simulation techniques.
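In standard classical DFT, c1 enters the equilibrium condition ρ(r) = exp(βμ − βV_ext(r) + c1(r; [ρ], T)), where β = 1/kT and μ is the chemical potential, and this equation can be solved for the density by fixed-point iteration once c1 is known. The sketch below shows that loop with a fake, hand-written c1_net standing in for the trained network; it is not the paper’s code or API.

```python
# Sketch of how a trained network for c1(r; [rho], T) would be used:
# the equilibrium density obeys rho = exp(beta*mu - beta*V_ext + c1),
# solved here by damped fixed-point iteration. `c1_net` is a placeholder
# for the paper's neural functional, not its real interface.
import numpy as np

def c1_net(rho, kT):
    # Placeholder: a real neural functional maps the whole density profile
    # (and T) to a c1 profile. Here we fake a weak, smooth response.
    return -0.1 * np.convolve(rho, np.ones(5) / 5, mode="same") / kT

def equilibrium_density(V_ext, mu, kT, n_iter=300, alpha=0.05):
    rho = np.full_like(V_ext, 0.5)                 # initial guess
    for _ in range(n_iter):
        rho_new = np.exp((mu - V_ext) / kT + c1_net(rho, kT))
        rho = (1 - alpha) * rho + alpha * rho_new  # damped update
    return rho

x = np.linspace(0, 10, 200)
rho = equilibrium_density(V_ext=0.3 * np.sin(x), mu=0.0, kT=1.2)
print(rho.round(2)[:10])
```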

They added that their method also required fewer computational resources than a classical DFT setup operating without a neural functional to achieve “comparable” accuracy. On the back of this resounding success, the researchers plan to use their model to study interactions in water and colloidal gels. They also wrote that they expect their findings will help solve problems in computational chemistry and condensed matter physics.
