
The guiding light of KD45

By: VM
24 August 2025 at 03:00

On the subject of belief, I’m instinctively drawn to logical systems that demand consistency, closure, and introspection. Among them, the KD45 system exerts a special pull. It consists of the following axioms:

  • K (closure): If you believe an implication and you believe the antecedent, then you believe the consequent. E.g. if you believe “if X then Y” and you believe X, then you also believe Y.
  • D (consistency): If you believe X, you don’t also believe not-X (i.e. X’s negation).
  • 4 (positive introspection): If you believe X, then you also believe that you believe X, i.e. you’re aware of your own beliefs.
  • 5 (negative introspection): If you don’t believe X, then you believe that you don’t believe X, i.e. you know what you don’t believe.

Thus, KD45 pictures a believer who never embraces contradictions, who always sees the consequences of what they believe, and who is perfectly aware of their own commitments. It’s the portrait of a mind that’s transparent to itself, free from error in structure, and entirely coherent. There’s something admirable in this picture. In moments of near-perfect clarity, it seems to me to describe the kind of believer I’d like to be.
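
The first two axioms can be sketched in code. Below is a minimal, purely illustrative Python model of K and D over a finite set of propositional atoms; the function names and the "~"-for-negation convention are my own shorthand, not part of KD45 itself.

```python
# A toy model of the K and D axioms over a finite belief set.
# Beliefs are bare atoms or "~"-prefixed negations; implications are
# (antecedent, consequent) pairs. All names are illustrative.

def close_under_k(beliefs, implications):
    """K (closure): repeatedly apply modus ponens until nothing new follows."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in beliefs and consequent not in beliefs:
                beliefs.add(consequent)
                changed = True
    return beliefs

def satisfies_d(beliefs):
    """D (consistency): no proposition is believed alongside its negation."""
    return all((b[1:] if b.startswith("~") else "~" + b) not in beliefs
               for b in beliefs)

beliefs = close_under_k({"X"}, [("X", "Y"), ("Y", "Z")])
print(beliefs)            # believing X commits you to Y and Z as well
print(satisfies_d(beliefs))
```

Introspection (axioms 4 and 5) would require representing beliefs *about* beliefs, which this flat sketch deliberately omits.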

Yet the attraction itself throws up a paradox. KD45 is appealing precisely because it abstracts away from the conditions in which real human beings actually think. In other words, its consistency is pristine because it’s idealised. It eliminates the compromises, distractions, and biases that animate everyday life. To aspire to KD45 is therefore to aspire to something constantly unattainable: a mind that’s rational at every step, free of contradiction, and immune to the fog of human psychology.

My attraction to KD45 is tempered by an equal admiration for Bayesian belief systems. The Bayesian approach allows for degrees of confidence and recognises that belief is often graded rather than binary. To me, this reflects the world as we encounter it — a realm of incomplete evidence, partial understanding, and evolving perspectives.

I admire Bayesianism because it doesn’t demand that we ignore uncertainty. It compels us to face it directly. Where KD45 insists on consistency, Bayesian thinking insists on responsiveness. I update beliefs not because they were previously incoherent but because new evidence has altered the balance of probabilities. This system thus embodies humility, my admission that no matter how strongly I believe today, tomorrow may bring evidence that forces me to change my mind.
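
As a minimal sketch of how such an update works (the numbers below are illustrative), Bayes’ rule takes a prior degree of belief in a hypothesis and the likelihood of the evidence under each alternative:

```python
# A minimal sketch of a single Bayesian update.

def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Start mildly sceptical of H, then observe evidence four times
# likelier if H is true than if it is false.
posterior = bayes_update(prior=0.30, likelihood=0.80, likelihood_if_false=0.20)
print(round(posterior, 3))  # 0.632: belief in H rises, but stays revisable
```

The posterior then serves as the prior for the next piece of evidence, which is the "responsiveness" the essay describes.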

The world, however, isn’t simply uncertain: it’s often contradictory. People hold opposing views, traditions preserve inconsistencies, and institutions are riddled with tensions. This is why I’m also drawn to paraconsistent logics, which allow contradictions to exist without collapsing. In classical logic, accepting a single contradiction commits me to accepting everything: this is the principle of explosion, by which one inconsistency blows up the entire system. Paraconsistent logics reject that principle and instead allow me to live with contradictions without being consumed by them.

This isn’t an endorsement of confusion for its own sake but a recognition that practical thought must often proceed even when the data is messy. I can accept, provisionally, both “this practice is harmful” and “this practice is necessary”, and work through the tension without pretending I can neatly resolve the contradiction in advance. To deny myself this capacity is not to be rational — it’s to risk paralysis.
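
One standard way of making this concrete is Priest’s Logic of Paradox (LP), a paraconsistent logic with a third truth value for "gluts"; the toy evaluator below is my sketch, not drawn from the essay:

```python
# A minimal sketch of Priest's Logic of Paradox (LP).
# Truth values: T (true), B (both true and false), F (false).
# T and B are "designated" (acceptable); conjunction is the minimum
# under the ordering F < B < T, and negation fixes B.

ORDER = {"F": 0, "B": 1, "T": 2}
NEG = {"T": "F", "B": "B", "F": "T"}
DESIGNATED = {"T", "B"}

def conj(a, b):
    return min(a, b, key=ORDER.get)

def neg(a):
    return NEG[a]

# Explosion blocked: let A be a glut (value B). The contradiction
# "A and not-A" is acceptable, yet an unrelated false claim C is not.
a = "B"
print(conj(a, neg(a)) in DESIGNATED)  # True: the contradiction is tolerated
print("F" in DESIGNATED)              # False: it doesn't license everything
```

In classical logic the same contradiction would make every formula derivable; in LP it stays quarantined.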

Finally, if Bayesianism teaches humility and paraconsistency teaches tolerance, the AGM theory of belief revision teaches discipline. Its core idea is that beliefs must be revised when confronted by new evidence, and that there are rational ways of choosing what to retract, what to retain, and what to alter. AGM speaks to me because it bridges the gap between the ideal and the real. It allows me to acknowledge that belief systems can be disrupted by facts while also maintaining that I can manage disruptions in a principled way.

That is to say, I don’t aspire to avoid the shock of revision but to absorb it intelligently.
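
The AGM idea can be sketched via the Levi identity, which defines revision as contraction followed by expansion. The toy code below uses bare atoms and drops the real AGM machinery of choosing *what else* to retract, which is where the discipline actually lies:

```python
# A toy sketch of AGM-style revision via the Levi identity:
# revising by p = first contract the negation of p, then expand by p.
# Beliefs are bare atoms or "~"-prefixed negations; all names illustrative.

def negate(p):
    return p[1:] if p.startswith("~") else "~" + p

def contract(beliefs, p):
    """Retract p. (Real AGM contraction must also decide which
    supporting beliefs to give up; this sketch just drops p itself.)"""
    return beliefs - {p}

def expand(beliefs, p):
    return beliefs | {p}

def revise(beliefs, p):
    """Levi identity: K * p = (K - ~p) + p."""
    return expand(contract(beliefs, negate(p)), p)

k = {"rainy", "~windy"}
print(revise(k, "windy"))  # {'rainy', 'windy'}: ~windy retracted, windy added
```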

Taken together, my position isn’t a choice of one system over another. It’s an attempt to weave their virtues together while recognising their limits. KD45 represents the ideal that belief should be consistent, closed under reasoning, and introspectively clear. Bayesianism represents the reality that belief is probabilistic and always open to revision. Paraconsistent logic represents the need to live with contradictions without succumbing to incoherence. AGM represents the discipline of revising beliefs rationally when evidence compels change.

A final point about aspiration itself. To aspire to KD45 isn’t to believe I will ever achieve it. In fact, I acknowledge I’m unlikely to desire complete consistency at every turn. There are cases where contradictions are useful, where I’ll need to tolerate ambiguity, and where the cost of absolute closure is too high. If I deny this, I’ll only end up misrepresenting myself.

However, I’m not going to be complacent either. I believe it’s important to aspire even if what I’m trying to achieve is going to be perpetually out of reach. By holding KD45 as a guiding ideal, I hope to give shape to my desire for rationality even as I expect to deviate from it. The value lies in the direction, not the destination.

Therefore, I state plainly (he said pompously):

  • I admire the clarity of KD45 and treat it as the horizon of rational belief
  • I embrace the flexibility of Bayesianism as the method of navigating uncertainty
  • I acknowledge the need for paraconsistency as the condition of living in a world of contradictions
  • I uphold the discipline of AGM belief revision as the art of managing disruption
  • I aspire to coherence but accept that my path will involve noise, contradiction, and compromise

In the end, the point isn’t to model myself after one system but to recognise the world demands several. KD45 will always represent the perfection of rational belief but I doubt I’ll ever get there in practice — not because I think I can’t but because I know I will choose not to in many matters. To be rational is not to be pure. It is to balance ideals with realities, to aspire without illusion, and to reason without denying the contradictions of life.

Neural network supercharges model’s ability to predict phase transitions

By: VM
26 January 2025 at 15:37

Place a pot of water on the stove and light the fire. Once the temperature in the pot reaches 100° C or so, the water will boil to vapour. This is an example of a phase transition that occurs every day in our houses. Yet scientists have difficulty predicting whether a bunch of water molecules, like in the pot, will be liquid or gaseous in a given set of conditions.

This is different from your everyday experience with the pot on the stove: it has to do with the models computers simulate to predict the phase of a group of interacting particles. Models that can make these predictions efficiently are prized in the study of wet surfaces, porous materials, microfluidics, and biological cells. They can also reveal ‘hidden’ phenomena we may not notice at the macroscopic level, i.e. just by looking at the water boil, and which scientists can use to make sense of other things and/or come up with new applications.

Remember your high school practicals notebook? For each experiment, you had to spell out sections called “given”, “to find”, “apparatus”, “methods”, and “results”. A model is an “apparatus” — a computer program — that uses the “given” (some input data) and certain “methods” (model parameters) to generate “results”. For example, such a model can show how a fluid with certain properties, like air, flowing around a spherical obstacle in its path, like a big rock, leads to the formation of vortices.

A popular “method” that models use to predict a phase transition is called classical density functional theory (DFT). Say there are a bunch of particles in a container. These particles can be the atoms of air, molecules of water, whatever the smallest unit of the substance is that you’re studying. Every three-dimensional distribution of these particles has a quantity called the free-energy functional associated with it. (A functional is like a function, except that it can accept an entire function as its input and return a single number.) The free-energy functional calculates the total free energy of a system based on how the density of its particles is distributed in three dimensions.
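
As a sketch of what "a functional of the density" means in practice, here is the textbook ideal-gas contribution to the free energy, evaluated on a discretised 1D density profile. This is only the simplest piece of such a functional, not the one used in the paper, and all parameter values are illustrative:

```python
# A minimal sketch: the ideal-gas free-energy functional
#   F[rho] = kT * integral of rho(x) * (ln(rho(x) * L^3) - 1) dx,
# where L is the thermal wavelength, evaluated on a 1D grid.
# This is the textbook ideal contribution only, with illustrative units.
import math

def ideal_free_energy(density, dx, kT=1.0, thermal_wavelength=1.0):
    """Take a whole density profile (a function sampled on a grid)
    and return one number: that is what makes it a functional."""
    L3 = thermal_wavelength ** 3
    return kT * dx * sum(rho * (math.log(rho * L3) - 1.0)
                         for rho in density if rho > 0)

uniform = [0.5] * 100   # a flat profile on a grid with spacing dx = 0.1
print(ideal_free_energy(uniform, dx=0.1))
```

A real free-energy functional adds interaction ("excess") terms on top of this, and those are exactly the hard part the neural network is brought in to learn.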

Classical DFT is a way to find the equilibrium state of a system — when it’s reached a stable state where its macroscopic properties don’t change and it doesn’t exchange energy with its surroundings — by minimising the system’s free energy.

A model can thus simulate a group of particles in a container, varying their distribution until it finds the one with the lowest free-energy functional, and thus the conditions in which the system is at its lowest energy. “Once [the free-energy functional] is specified, consistent and complete investigation of a wide variety of properties can be made,” the authors of a paper published in the journal Physical Review X on January 24 wrote.
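
A toy version of this minimisation, assuming an ideal gas in an external potential (a case where the exact answer is known analytically), might look like the sketch below. It illustrates the general idea of varying the density until the free energy bottoms out; it is not the paper's method:

```python
# A toy sketch of the minimisation step in classical DFT. For an ideal gas
# in an external potential V(x), the grand potential is minimised by
# rho(x) = exp((mu - V(x)) / kT). Here we recover that profile by plain
# gradient descent on a grid instead of solving analytically.
import math

def minimise_density(V, mu=0.0, kT=1.0, steps=20000, lr=0.01):
    rho = [0.5] * len(V)  # initial guess for the density profile
    for _ in range(steps):
        # functional derivative at each grid point: kT*ln(rho) + V - mu
        rho = [max(r - lr * (kT * math.log(r) + v - mu), 1e-9)
               for r, v in zip(rho, V)]
    return rho

V = [0.0, 0.5, 1.0, 2.0]              # a simple external potential
rho_eq = minimise_density(V)
exact = [math.exp(-v) for v in V]     # the known equilibrium profile
print([round(r, 3) for r in rho_eq])
print([round(e, 3) for e in exact])
```

For interacting particles the functional derivative has no such closed form, which is why determining the functional itself becomes the bottleneck the article describes next.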

While this sounds simple, the problem is that determining the free-energy functional becomes more difficult the more particles there are. And only once the functional has been determined can the model check when its value is lowest. This is why a model using classical DFT to determine the properties of a liquid at specific temperature and pressure, say, will struggle.

In the January 24 study in Physical Review X, scientists from the University of Bayreuth and the University of Bristol made an advance in this process when they replaced the free-energy functional with a neural network that had been trained on simulations of particles-in-a-container in a variety of conditions (e.g. changing the pressure and temperature across a range of values), then used it to model a realistic fluid.

From the abstract of the paper:

Local learning of the one-body direct correlation functional is based on Monte Carlo simulations of inhomogeneous systems with randomized thermodynamic conditions, randomized planar shapes of the external potential, and randomized box sizes.

Monte Carlo simulations are quite cool. You set up a computer to simulate, say, a poker game with five players. As the game progresses, at some point in the game you ask the computer to take a snapshot of the game and save it. This snapshot has information about each player’s cards, what decisions they made in the previous round (fold, call or raise), the stakes, and the cards on the table. Once the game ends, you rerun the simulation, each time freshly randomising the cards handed out to the players. Then again at some point during the game, the computer takes a snapshot and saves it.

Once the computer has done this a few thousand times, you collect all the snapshots and share them with someone who doesn’t know poker. Based on understanding just the snapshots, they can learn how the game works. The more snapshots there are, the finer their understanding will be. Very simply speaking, this is how a Monte Carlo simulation operates.
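
The same logic, stripped to its bones, is the classic Monte Carlo estimate of pi: take random "snapshots" (points in a square), ask one question of each, and watch the answer sharpen as the snapshots pile up. A minimal sketch:

```python
# A minimal Monte Carlo sketch, unrelated to poker or DFT: estimate pi
# from the fraction of random points in the unit square that land inside
# the quarter circle of radius 1. More snapshots give a finer estimate.
import random

def estimate_pi(n_snapshots, seed=0):
    rng = random.Random(seed)   # fixed seed for reproducibility
    inside = sum(1 for _ in range(n_snapshots)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_snapshots

print(estimate_pi(100_000))     # approaches 3.14159... as n grows
```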

The researchers generated data for the neural network to train on by running around 900 Monte Carlo simulations of “inhomogeneous systems with randomized thermodynamic conditions [including temperature], randomized planar shapes of the external potential, and randomized box sizes”. (The external potential refers to some energy field applied across the system, giving each of the particles inside some potential energy.) Then they used their classical DFT model with the “neural functional” to study a truncated Lennard-Jones system.

Scientists have previously combined machine-learning with classical DFT models to study particles moving randomly, interacting with each other only when they collide. Actual, real fluids aren’t so simple, however. Instead, their behaviour is more closely modelled as a Lennard-Jones system: the particles in a container repel each other at very short distances, are attracted to each other across intermediate distances, and at larger distances don’t have an effect on each other. As the researchers wrote in their paper:

… understanding the physics in such a simple model, which encompasses both repulsive and attractive interparticle interactions, provides a basis for understanding the occurrence of the same phenomena that arise in more complex fluids.
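
The truncated Lennard-Jones pair potential described above is simple to write down. The sketch below uses illustrative values for sigma (particle size), epsilon (well depth), and the cutoff; the paper's exact truncation scheme may differ:

```python
# A minimal sketch of a truncated, shifted Lennard-Jones pair potential:
# strong repulsion at short range, attraction at intermediate range,
# and exactly zero beyond the cutoff r_cut (shifted so there is no jump).

def lj(r, sigma=1.0, epsilon=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def truncated_lj(r, sigma=1.0, epsilon=1.0, r_cut=2.5):
    if r >= r_cut:
        return 0.0                      # no effect at large distances
    return lj(r, sigma, epsilon) - lj(r_cut, sigma, epsilon)

print(truncated_lj(0.95) > 0)           # short range: repulsive
print(truncated_lj(1.5) < 0)            # intermediate range: attractive
print(truncated_lj(3.0) == 0.0)         # beyond the cutoff: zero
```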

They also added that:

… recent investigations did not address the fundamental issue of how the presence of a phase transition might be accounted for within the framework of a neural density functional.

So they set about studying a truncated Lennard-Jones system with a phase transition. Their model started with predicting how the particles are distributed, the overall system’s thermodynamic properties, the conditions in which liquid and gaseous phases coexist in the container, and the particles’ behaviour at interfaces, like evaporating from the surface of a hard wall. Then, the researchers wrote:

… we focus on the liquid-gas transition which is a basic manifestation of the presence of interparticle attraction and seek to assess whether the neural functional can describe (i) phase coexistence and the approach to the associated critical point, (ii) surface tension and density profiles of the liquid-gas interface, (iii) drying and capillary evaporation transitions that occur at subcritical temperatures, and (iv) how accurately the approach performs for both bulk and interfacial properties.

(Emphasis in the original.)

So could the neural functional describe i-iv?

The answer is emphatically yes.

In fact, the model was able to accurately predict phase transitions even when it was trained only on supercritical states, i.e. at temperatures above the critical point, where liquid and gas no longer exist as distinct phases. The researchers singled this ability of the model out for special praise, calling it “one of the most striking results”.

Source: Phys. Rev. X 15, 011013 (2025)

This plot, generated by the model, shows the states of a truncated Lennard-Jones fluid with density on the x-axis and temperature on the y-axis. In the red areas, the substance — the collection of particles in the box — is either liquid or gaseous. In the blue areas, the liquid and gaseous phases become separated. The intensity of the colour denotes the substance’s bulk modulus, i.e. how much it resists being compressed at a fixed temperature, from dark blue at the lower end to dark red at the upper.

Overall, the researchers wrote their “neural functional approach” is distinguished by the fact that “the range of phenomena and results it can describe … far exceed the information provided during training.” They attribute this ability to the information contained in a “single numerical object” that the neural network was tuned to track: c1(r; [ρ], T), a.k.a. the one-body direct correlation functional. It’s a functional that describes the variation of the density of particles inside the container in response to the external potential. As they put it:

Inputting only Monte Carlo training data of one-body profiles in planar geometry and then examining c1(r; [ρ], T) through the functional lens provides access to quantities which could not be obtained directly from the input data. Indeed, determining these usually requires advanced simulation techniques.

They added their method also required fewer computational resources than a classical DFT setup operating without a neural functional in order to achieve “comparable” accuracy. On the back of this resounding success, the researchers plan to use their model to study interactions in water and colloidal gels. They also wrote that they expect their findings will help solve problems in computational chemistry and condensed matter physics.

HackathON: With a supercomputer and open data toward victory

10 May 2024 at 08:09

Arnes, in collaboration with the Slovenian Society Informatika, the Odprti podatki Slovenije (Open Data Slovenia) portal, the Slovenian Open Science Community, the National Competence Centre for Supercomputing, and the Faculty of Computer and Information Science at the University of Ljubljana, and under the honorary patronage of the Slovenian National Commission for UNESCO, presents the first Arnes all-Slovenian HackathON.

The common thread of the competition is the application of open-science principles, with an emphasis on open data and on exploring their potential, processed on a supercomputer, to solve challenges that address the UNESCO sustainable development goals. Accordingly, we prepared suggested challenges for the competitors, though they were also free to devise their own. All participants can also take part in an additional challenge set by the company Sandoz – Lek.

HackathON takes place in two rounds. It began on 4 April with an introductory event at the Faculty of Computer and Information Science at the University of Ljubljana. Before that, we also organised three training sessions for participants: Introduction to Supercomputing, Get to Know Open Science, and Science, Communication and the Media.

You can also view some highlights in the HackathON gallery.

10 teams and 40 innovative competitors

The second round, which is more practical in nature, will take place on 13 and 14 May in Portorož as part of the »Dnevi slovenske informatike« (Days of Slovenian Informatics) conference.

The competitive part will last 24 hours, from 12:00 on 13 May to 12:00 on 14 May. Ten teams, or nearly 40 competitors, whose innovative solutions are presented at https://hackathon.si/uvrsceni, will have access to EuroHPC Vega, the most powerful Slovenian supercomputer, for the duration of the competition. The closing event, with project presentations and networking with students, will take place at 14:30, and the winners will be announced during the evening programme, which begins at 19:00.

More information about the second round is available at https://hackathon.si/2-krog.

We organise HackathON with the aim of bringing together different disciplines, not only within STEM but also from the social sciences and humanities, and of connecting students from different universities. Diversity and collaboration are key to innovative solutions that bring knowledge together.

Soon you will be able to discover the innovative solutions of young talents who combine open data, supercomputing, and multidisciplinary approaches to tackle challenges aimed at the UNESCO sustainable development goals.

For more information, we are available at hackathon@arnes.si.
