
Externalised costs and the human on the bicycle

By: VM
26 November 2024 at 05:24

Remember the most common question the protagonists of the British sitcom The IT Crowd asked a caller reporting that a computer wasn’t working? “Have you tried turning it off and on again?” Nine times out of 10, this fixed the problem, whatever it was, and the IT team could get on with its life.

Around COP26 or so, I acquired a similar habit: every time someone presented something as a model of energy and/or cost efficiency, my first thought was whether they’d included the externalised costs. This is clearly a global problem today yet many people continue to overlook it in contexts big and small. So when I came across a neat graph on Bluesky (shown below), drawn from an old article in Scientific American, I began to wonder if the awesome transportation efficiency of the human on the bicycle (HotB) included the energy costs of making the bicycle as well.

According to the article, written by one S.S. Wilson and published in 1973, the HotB required only about 0.15 calories per gram per km to move around. The next most efficient mover was the salmon, at roughly 0.4 cal/g/km. If the energy costs of making the bicycle are included, the energy cost per g/km would shoot up and, depending on the distance the HotB travels, the total cost may never become fully amortised. (It also matters that the math works out this way only at the scale of the human: anything smaller or bigger and the energy cost per unit weight per unit distance increases.)
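
The amortisation worry can be sketched in a back-of-the-envelope Python calculation. Only the 0.15 cal/g/km riding cost comes from the article; the rider’s mass and the bicycle’s embodied manufacturing energy are made-up placeholder figures, so treat the outputs as illustrative only.

```python
# Back-of-the-envelope sketch (illustrative figures only): how the
# effective energy cost of cycling changes once a one-time manufacturing
# energy cost is amortised over the distance ridden.

RIDER_MASS_G = 70_000                   # assumed 70 kg rider
MOVING_COST = 0.15                      # cal/g/km while riding (from the article)
MANUFACTURING_ENERGY_CAL = 50_000_000   # assumed embodied energy of one bicycle

def effective_cost(distance_km: float) -> float:
    """Effective cal/g/km after spreading manufacturing energy over distance_km."""
    amortised = MANUFACTURING_ENERGY_CAL / (RIDER_MASS_G * distance_km)
    return MOVING_COST + amortised

for d in (100, 1_000, 10_000):
    print(d, round(effective_cost(d), 3))   # 7.293, 0.864, 0.221 respectively
```

The effective cost approaches, but never quite reaches, the bare riding cost: with these assumed numbers the cyclist beats the walking man’s 0.75 cal/g/km only after a little over a thousand kilometres.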

But there’s a problem with this line of thinking. On a more basic level, neither Wilson nor Scientific American intended the graph to be completely accurate, or claimed it was backed by any research beyond what was needed to estimate the energy costs of moving different kinds of things over some distance. It was a graph made to make one limited point. More importantly, it illustrates how accounting for externalised costs can become counterproductive if attempts to factor them in are not guided by subjective, qualitative assessments of what we’re arguing for or against.

Of course the question of external costs is an important one to ask — more so today, when climate commitments and actions are being reinterpreted in dollar figures and quantitative assessments are gaining in prominence as the carbon budget may well have to be strictly rationed among the world’s countries. But whether or not some activity is rendered more or less efficient by factoring in its externalised costs, any human industrial activities — including those to manufacture bicycles — are polluting. There’s no escaping that. And the struggle to mitigate climate change is a struggle to mitigate climate change while ensuring we don’t undermine or compromise the developmental imperative. Otherwise the struggle isn’t one at all.

Even more importantly, this balancing act isn’t a strategy and isn’t the product of consensus: it’s an implicit and morally and ethically correct assumption, an implicit and inviolable component of global climate mitigation efforts. Put another way, this is how it needs to be. In this milieu, and at a time it’s becoming clear the world’s richer countries have a limit to how much they’re prepared to spend to help poorer countries deal with climate change, the impulse to consider externalised costs can mislead decision-making by making some choices seem more undesirable than they really are.

Externalised costs are, or ought to be, important when the emissions from some activity don’t stack up commensurately with the social, cultural, and/or political advantages it confers. These costs are neither always avoidable nor always undesirable, and we need to keep an eye on where we draw the line between acceptable and unacceptable costs. The danger is that as richer countries both expect and force poorer ones to make more emissions cuts, the latter may have to adopt more robust quantitative rationales to determine what emissions to cut from which sources and when. Should they include externalised costs, many enterprises that deserve to live on may face the axe instead.

For one, the HotB should be able to continue to ride on.


Addendum: Here’s an (extended) excerpt from the Scientific American article on where the HotB scores their efficiency gains.

Before considering these developments in detail it is worth asking why such an apparently simple device as the bicycle should have had such a major effect on the acceleration of technology. The answer surely lies in the sheer humanity of the machine. Its purpose is to make it easier for an individual to move about, and this the bicycle achieves in a way that quite outdoes natural evolution. When one compares the energy consumed in moving a certain distance as a function of body weight for a variety of animals and machines, one finds that an unaided walking man does fairly well (consuming about .75 calorie per gram per kilometer), but he is not as efficient as a horse, a salmon or a jet transport. With the aid of a bicycle, however, the man’s energy consumption for a given distance is reduced to about a fifth (roughly .15 calorie per gram per kilometer). Therefore, apart from increasing his unaided speed by a factor of three or four, the cyclist improves his efficiency rating to No. 1 among moving creatures and machines.

… The reason for the high energy efficiency of cycling compared with walking appears to lie mainly in the mode of action of the muscles. … the cyclist … saves energy by sitting, thus relieving his leg muscles of their supporting function and accompanying energy consumption. The only reciprocating parts of his body are his knees and thighs; his feet rotate smoothly at a constant speed and the rest of his body is still. Even the acceleration and deceleration of his legs are achieved efficiently, since the strongest muscles are used almost exclusively; the rising leg does not have to be lifted but is raised by the downward thrust of the other leg. The back muscles must be used to support the trunk, but the arms can also help to do this, resulting (in the normal cycling attitude) in a little residual strain on the hands and arms.

Featured image credit: Luca Zanon/Unsplash.

Tamil Nadu’s lukewarm heatwave policy

By: VM
21 November 2024 at 05:56

From ‘Tamil Nadu heatwave policy is only a start’, The Hindu, November 21, 2024:

Estimates of a heatwave’s deadliness are typically based on the extent to which the ambient temperature deviates from the historical average at a specific location and the number of lives lost during and because of the heatwave. This is a tricky, even devious, combination as illustrated by the accompanying rider: “to the reasonable exclusion of other causes of hyperthermia”.

A heatwave injures and/or kills by first pushing more vulnerable people over the edge; the less vulnerable are further down the line. The new policy is presumably designed to help the State catch those whose risk exposure the State has not been able to mitigate in time. However, the goal should be to altogether reduce the number of people requiring such catching. The policy lacks the instruments to guide the State toward this outcome.

The farm fires paradox

By: VM
20 November 2024 at 05:56

From The Times of India on November 18, 2024:

A curious claim by all means. The scientist, one Hiren Jethva at NASA Goddard, compared data from the Aqua, Suomi-NPP, and GEO-KOMPSAT 2A satellites and reported that the number of farm fires detected over North India and Pakistan had dropped whereas the aerosol optical depth — a proxy for the aerosol load in the atmosphere — had remained roughly constant over the last half decade or so. He interpreted this to suggest farmers could be burning paddy stubble after the Aqua and Suomi-NPP satellites had completed their overpass. GEO-KOMPSAT 2A is in a geostationary orbit, so there’s no evading its gaze.

The idea that farmers across the many paddy-growing states in North India collectively decided to postpone their fires to keep them out of the satellites’ sight seems preposterous. The Times of India article has some experts towards the end saying this…

… and I sort of agree because it’s in farmers’ interests for the satellites to see more of their fires so the national and state governments can give them better alternatives with better incentives.

The farmers aren’t particularly keen on burning the stubble — they’re doing it because it’s the cheapest and quickest option. It also matters that there is no surer path to national headlines than being one of the causes of air pollution in New Delhi, far surer than dirtying the air in any other city in the country, and that both the national and state governments have thus far failed to institute sustainable alternatives to burning the stubble. Taken together, if any farmers are looking for better alternatives, more farm fires seem to be the best way to pressure governments to do better.

All this said, there may be a fallacy lurking in Jethva’s decision to interpret the timing change solely with respect to the overpass times of the two US satellites and not any other factor. It’s amusing, if tinged with disappointment, that the possibility of someone somewhere “educating” farmers to change their behaviour — and of them then following suit en masse — seemed more within reach than the possibility of the satellite data being flawed. If a fire burns in a farm and no satellite is around to see it, does it still produce smoke?

As The Hindu reported:

The data on fire counts are from a heat-sensing instrument on two American satellites — Suomi-NPP and NOAA-20 polar-orbiting satellites. Instruments on polar-orbiting satellites typically observe a wildfire at a given location a few times a day as they orbit the Earth, pole to pole. They pass over India from 1 p.m. to 2 p.m. …

Other researchers also suggest that merely relying on fire counts from the polar satellites may be inadequate and newer satellite data parameters, such as estimating the actual extent of fields burned, may be a more accurate indicator of the true measure of stubble burning.

An infuriating editorial in Science

By: VM
17 November 2024 at 05:56

I’m not just disappointed with an editorial published by the journal Science on November 14, I’m angry.

Irrespective of whether the Republican Party in the US has shifted more or less rightward on specific issues, it has certainly shifted towards falsehoods on many of them. Party leaders, including Donald Trump, have been using everything from lazily inaccurate information to deliberately misleading messages to preserve conservative attitudes wherever that’s been the status quo and to stoke fear, confusion, uncertainty, and animosity where peace and good sense have thus far prevailed.

Against this backdrop, which the COVID-19 pandemic revealed in all its glory, Science’s editorial is headlined “Science is neither red nor blue”. (Whether this is a reference to the journal itself is immaterial.) Its author, Marcia McNutt, president of the US National Academy of Sciences (NAS), writes (emphasis added):

… scientists need to better explain the norms and values of science to reinforce the notion—with the public and their elected representatives—that science, at its most basic, is apolitical. Careers of scientists advance when they improve upon, or show the errors in, the work of others, not by simply agreeing with prior work. Whether conservative or liberal, citizens ignore the nature of reality at their peril. A recent example is the increased death rate from COVID-19 (as much as 26% higher) in US regions where political leaders dismissed the science on the effectiveness of vaccines. Scientists should better explain the scientific process and what makes it so trustworthy, while more candidly acknowledging that science can only provide the best available evidence and cannot dictate what people should value. Science cannot say whether society should prioritize allocating river water for sustaining fish or for irrigating farms, but it can predict immediate and long-term outcomes of any allocation scheme. Science can also find solutions that avoid the zero-sum dilemma by finding conservation approaches to water management that benefit both fish and farms.

Can anyone explain to me what the first portion in bold even means? Because I don’t want to assume a science administrator as accomplished as McNutt could be ignorant of the narratives and scholarship roiling around the sociology of science at large, or of the cruel and relentless vitiation of scientific knowledge the first Trump administration practised in particular. Even if the editorial’s purpose is to extend an olive branch to Trump et al., it’s bound to fail. If, say, a Republican leader makes a patently false claim in public, are we to believe an institution as influential as the NAS will not call it out for fear of being cast as “blue” in the public eye?

The second portion in bold is slightly less ridiculous: “science can only provide the best available evidence and cannot dictate what people should value.” McNutt creates a false impression here by failing to present the full picture. During a crisis, science has to be able to tell people what to value more or less, rather than what to value at all. Crises create uncertainty whereas science creates knowledge that is, or at least can be, free from bias. It offers a pillar to lean on while we figure out everything else. People should value these pillars.

When a national government — in this case the government of one of the world’s most powerful countries — gives conspiracies and lies free rein, crises will be everywhere. If McNutt means to suggest these crises are so only insofar as the liberal order is faced with changes inimical to its sustenance, she will be confusing what is today the evidence-conspiracy divide for what was once, but is no longer, the conservative-liberal divide.

As if to illustrate this point, she follows up with the third portion in bold: “Science cannot say whether society should prioritize allocating river water for sustaining fish or for irrigating farms, but it can predict immediate and long-term outcomes of any allocation scheme.” Her choice of example is clever because it’s also fallacious: it presents a difficult decision with two reasonable outcomes, ‘reasonable’ being the clincher. The political character of science-in-practice is rarely revealed in debates where reasonability is allowed through the front door and given the power to cast the decisive vote. This was almost never the case under the first Trump administration or in the parts of the Republican Party devoted to him (which I assume is the whole party now), where crazy* has had the final say.

The choice McNutt should really have deliberated is “promoting the use of scientifically tested vaccines during a pandemic versus urging people to be cautious about these vaccines” or “increasing the stockpile of evidence-backed drugs and building social resilience versus hawking speculative ideas and demoralising science administrators”. When the choice is between irrigation for farms and water for fisheries, science can present the evidence and then watch. When the choice is between reason and bullshit, still advocating present-and-watch would be bullshit, too — i.e. science would be “red”.

This is just my clumsy, anger-flecked take on what John Stuart Mill and many others recognised long ago: “Bad men need nothing more to compass their ends than that good men should look on and do nothing.” But if McNutt would still rather push the line that what seem like “bad men” to me might be good men to others, she and the policies she influences will have committed themselves to the sort of moral relativism that could never be relevant to politics in practice, which in turn would be a blow for us all.


(* My colloquialism for the policy of being in power for the sake of being in power, rather than to govern.)

Low Orbit Satellite Companies Respond to Scientists’ Concerns About Light and Environmental Pollution With Even Bigger, Brighter Satellites

By: Nick Heer
7 October 2024 at 23:53

Karl Bode, Techdirt:

Scientists say that low earth orbit (LEO) satellite constellations being built by Amazon, Starlink, and AT&T pose a dire threat to astronomy and scientific research, and that too little is being done to address the issue.

There are costs to suddenly widespread satellite connectivity. Apple’s partner in its offering, Globalstar, operates a constellation of satellites which would similarly be concerning to scientists.

It is a tricky balance. Adding redundant communications layers in our everyday devices can be useful and is, plausibly, of lifesaving consequence. Yet it also means the sky is littered with fields of objects which interfere with ground-based instruments. The needs of scientists might seem more abstract and less dire than, say, people seeking help in a natural disaster — I understand that. But I am not certain we will be proud of ourselves fifty years from now if we realize astronomical research has been severely curtailed because a bunch of private companies decided to compete in our shared sky. There is surely a balance to be struck.


What can science education do, and what can it not?

4 September 2024 at 03:21

On September 29, 2021, The Third Eye published an interview with Milind Sohoni, a teacher at the Centre for Technology Alternatives for Rural Areas and at IIT Bombay. (Thanks to @labhopping for bringing it into my feed.) I found it very thought-provoking. I’m pasting below some excerpts from the interview together with my notes. I think what Prof. Sohoni says doesn’t build up to a coherent whole. He is at times simplistic and self-contradictory, and what he says is often descriptive instead of offering a way out. Of course I don’t know whether what I say builds up to a coherent whole either but perhaps you’ll realise details here that I’ve missed.


… I wish the textbooks had exercises like let’s visit a bus depot, or let’s visit a good farmer and find out what the yields are, or let’s visit the PHC sub-centre, talk to the nurse, talk to the compounder, talk to the two doctors, just getting familiar with the PHC as something which provides a critical health service would have helped a lot. Or spend time with an ASHA worker. She has a notepad with names of people in a village and the diseases they have, which family has what medical emergency. How is it X village has so much diabetes and Y village has none?

I’m sure you’ll agree this would be an excellent way to teach science — together with its social dependencies instead of introducing the latter as an add-on at the level of higher, specialised education.

… science education is not just about big science, and should not be about big science. But if you look at the main central government departments populated by scientists, they are Space, Atomic Energy and Defence. Okay, so we have missile men and women, big people in science, but really, so much of science in most of the developed world is really sadak, bijli, pani.

I disagree on three counts. (i) Science education should include ‘big science’; if it doesn’t we lose access to a domain of knowledge and enterprise that plays an important role in future-proofing societies. We choose the materials with which we will build buildings, lay roads, and make cars and batteries and from which we will generate electric power based on ‘big science’. (ii) Then again, what is ‘big science’? I’m not clear what Sohoni means by that in this comment. But later in the interview he refers to Big Science as a source of “certainty” (vis-à-vis life today) delivered in the form of “scientific things … which we don’t understand”.

If by “Big Science” he means large scientific experiments that have received investments worth millions of dollars from multiple governments, and whose results don’t inform or enhance contemporary daily life, his statement seems all the more problematic. If a government invests some money in a Big Science project but then pulls out, it doesn’t necessarily or automatically redirect those funds to a project a critic has deemed more worthwhile, such as multiple smaller science projects. Government support for Big Science has never operated that way. Further, Big Science frequently, and almost by design, leads to a lot of derivative ‘Smaller Science’, spinoff technologies, and advances in allied industries. Irrespective of whether these characteristics — accidental or otherwise — suffice to justify supporting a Big Science project, wanting to expel such science from science education is still reckless.


(iii) Re: “… so much of science in most of the developed world is really streets, electricity, water” — Forget proving/disproving this and ask yourself: how do we separate research in space, atomic energy, and defence from knowledge that gave rise to better roads, cheaper electricity, and cleaner water? We can’t. There is also a specific history that explains why each of these departments Sohoni has singled out were set up the way they were. And just because they are staffed with scientists doesn’t mean they are any good or worth emulating. (I’m also setting aside what Sohoni means by “much”. Time consumed in research? Money spent? Public value generated? Number of lives improved/saved?).

Our science education should definitely include Big Science: following up from the previous quote, teachers can take students to a radio observatory nearby and speak to the scientists about how the project acquired so much land, how it secured its water and power requirements, how administrators negotiated with the locals, etc. Then perhaps we can think about avoiding cases like the INO.

The Prohibition of Employment as Manual Scavengers Act came long ago, and along with it came a list of 42 [pieces of] equipment, which every municipality should have: a mask, a jetting machine, pumps and so on. Now, even IIT campuses don’t have that equipment. Is there any lab that has a ‘test mask’ even? Our men are going into tanks and dying because of [lethal] fumes. A ‘test mask’ is an investment. You need a face-like structure and an artificial lung exposed to various environments to test its efficacy. And this mask needs to be standard equipment in every state. But these are things we never asked IITs to do, right?

This comment strikes a big nail on the head. It also brings to mind an incident on the Anna University campus eight years ago. To quote from Thomas Manuel’s report in The Wire on the incident: “On June 21, 2016, two young men died. Their bodies were found in a tank at the Anna University campus in Chennai. They were employees of a subcontractor who had been hired to seal the tank with rubber to prevent any leakage of air. The tank was being constructed as a part of a project by the Ministry of Renewable Energy to explore the possibilities of using compressed air to store energy. The two workers, Ramesh Shankar and Deepan, had arrived at the site at around 11.30 am and begun work. By 3.30 pm, when they were pulled out of the tank, Deepan was dead and Ramesh Shankar, while still breathing at the time, died a few minutes later.”

This incident seemed, and still seems, to say that even within a university — a place where scientists and students are keenly aware of the rigours of science and the value it brings to society — no one thinks to ensure the people hired for what is casually called “menial” labour are given masks or other safety equipment. The gaps in science education Sohoni is talking about are evident in the way scientists think about how they can ensure society is more rational. A society rife with preventable deaths is not rational.

I think what science does is that it claims to study reality. But most of reality is socially administered, and so we need to treat this kind of reality also as a part of science.

No, we don’t. We shouldn’t. Science offers a limited set of methods and analytical techniques with which people can probe and describe reality and organise the knowledge they generate. He’s right that most of reality is socially administered, but that shouldn’t be an invitation to forcibly bring what currently lies beyond science within its purview. The scientific method can’t deal with such realities — and, importantly, it shouldn’t be expected to. Science is incapable of handling multiple, equally valid truths pertaining to the same set of facts. In fact, a few paragraphs later Sohoni ironically acknowledges that there are truths beyond science and that their existence shouldn’t trouble scientists or science itself:

… scientists have to accept that there are many things that we don’t know, and they still hold true. Scientists work empirically and sometimes we say okay, let’s park it, carry on, and maybe later on we will find out the ‘why’. The ‘why’ or the explanation is very cultural…

… whereas science needs that ‘why’, and needs it to be singular and specific. If these explanations for aspects of reality don’t exist in a form science can accommodate, yet we also insist as Sohoni did when he said “we need to treat this kind of reality also as a part of science”, then we will be forced to junk these explanations for no fault except that they don’t meet science’s acceptability criteria.

Perhaps there is a tendency here to suggest we need a universal theory of everything, but do we? We can continue to use different human intellectual and social enterprises to understand and take advantage of different parts of the human experience. Science, and for that matter the social sciences, needn’t be, and aren’t, “everything”.

Science has convinced us, and is delivering on its promise of making us live longer. Whether those extra five years are of higher quality is not under discussion. You know, this is the same as people coming from really nice places in the Konkan to a slum in Mumbai and staying there because they want certainty. Life in rural Maharashtra is very hard. There’s more certainty if I’m a peon or a security guard in the city. I think that science is really offering some ‘certainty’. And that is what we seem to have accepted.

This seems to me to be too simplistic. Sohoni says this in reply to being asked whether science education today leans towards “technologies that are serving Big Business and corporate profits, rather than this developmental model of really looking critically at society”. And he would have been fairer to say we have many more technological devices and products around us today, founded on what were once scientific ideas, that serve corporate profits more than anything else. The French philosopher Jacques Ellul elucidated this idea brilliantly in his book The Technological Society (1964).

It’s just that Sohoni’s example of ageing is off the mark, and as a result it’s harder to know what he’s really getting at. Lifespan is calculated as the average number of years an individual in a particular population lives. It can be improved by promoting factors that help our bodies become more resilient and by discouraging factors that cause us to die sooner. If lifespan is increasing today, it’s because fewer babies are succumbing to vaccine-preventable diseases before they turn five, because there are fewer road accidents thanks to vehicle safety, and because novel treatments like immunotherapy are improving outcomes for various cancers. Any new scientific knowledge in the prevailing capitalist world-system is susceptible to being coopted by Big Business, but I’m also glad the knowledge exists at all.
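
The averaging point can be made concrete with a toy calculation (all the ages below are made up): life expectancy is just the mean age at death in a cohort, so preventing deaths in infancy raises it substantially even if no one at the upper end lives a day longer.

```python
# Toy illustration (made-up ages): life expectancy is the mean age at
# death in a cohort, so preventing early deaths raises it even when the
# oldest members live no longer than before.

def life_expectancy(ages_at_death):
    return sum(ages_at_death) / len(ages_at_death)

# A cohort of ten where two die in infancy from vaccine-preventable disease:
with_infant_deaths = [1, 2, 70, 72, 74, 75, 76, 78, 80, 82]
# The same cohort if those two infants instead survive to typical adult ages:
without_infant_deaths = [68, 71, 70, 72, 74, 75, 76, 78, 80, 82]

print(life_expectancy(with_infant_deaths))     # 61.0
print(life_expectancy(without_infant_deaths))  # 74.6
```

The maximum age in both cohorts is 82; only the early deaths changed, yet the average jumped by more than 13 years.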


Sure, we can all live for five more years on average, but if those five years will be spent in, say, the humiliating conditions of palliative care, let’s fix that problem. Sohoni says science has strayed from that path and I’m not so sure — but I’m convinced there’s enough science to go around (and enough money for it, just not the political will): scientists can work on both increasing lifespan and improving the conditions of palliative care. We shouldn’t vilify one kind of science in order to encourage the other. Yet Sohoni persists with this juxtaposition as he says later:

… we are living longer, we are still shitting on the road or, you know, letting our sewage be cleaned by fellow humans at the risk of death, but we are living longer. And that is, I think, a big problem.

We are still shitting on the road and we are letting our sewage be cleaned by fellow humans at the risk of death. These are big problems. Us living longer is not a big problem.

Big Technology has a knack of turning us all into consumers of science, by neutralising questions on ‘how’ and ‘why’ things work. We accept it and we enjoy the benefits. But see, if you know the benefits are divided very unevenly, why doesn’t it bother us? For example, if you buy an Apple iPhone for Rs. 75,000, how much do the actual makers of the phone (factory workers) get? I call it the Buddhufication Crisis: a lot of people are just hooked on to their smartphones, and live in a bubble of manufactured certainty; and the rest of society, which can’t access smartphones, is left to deal with real-world problems.

By pushing us to get up, get out, and engage with science where it is practised, a better science education can inculcate a more inquisitive, critical-thinking population that applies the good sense that comes of a good education to more, or all, aspects of society and social living. This is also why it isn’t Big Technology in particular that tempts us into becoming “consumers” of science rather than pickers-apart of its pieces: practically everything does. Similarly, Sohoni’s “Buddhufication” description is muddled. It’s patronising towards the people who create value — especially when it’s new and/or takes unexpected forms — out of smartphones and use it as a means of class mobility, and it seems to suggest a person striving for any knowledge other than the scientific variety is being a “buddhu”. And what such “buddhufication” has to do with the working conditions of Apple’s “factory workers” is unclear.

Speaking of relationships:

Through our Public Health edition, we also seem to sit with the feeling that science is not serving rural areas, not serving the poor. In turn, there is also a lower expectation of science from the rural communities. Do you feel this is true?
Yes, I think that is true to a large extent. But it’s not to do with rural. You see, for example, if you look at western Maharashtra — the Pune-Nashik belt — some of the cleverest people live there. They are basically producing vegetables for the big urban markets: in Satara, Sangli, that entire irrigated area. And in fact, you will see that they are very careful about their future, and understand their place in society and the role of the state. And they expect many things from the state or the government; they want things to work, hospitals to work, have oxygen, etc. And so, it is really about the basic understanding of cause and effect of citizenship. They understand what is needed to make buses work, or hospitals function; they understand how the state works. This is not very different from knowing how gadgets work.

While the distinction may seem trivial to many, “science” and “scientists” are not the same thing, and the two are conflated throughout the interview. At first I assumed the conflation was casual and harmless, but at this point, given the links between science, science education, technology, and public welfare that Sohoni has tried to draw, the distinction is crucial. Science is already serving rural areas; Sohoni says as much in the comment here and the one that follows. But many, or maybe most, scientists may not be serving rural areas, a framing that also lets us acknowledge that some scientists are. “Science is not serving rural areas” would mean no researcher in the country, or anywhere really, has brought the precepts of science to bear on the problems of rural India. This is just not true. On the other hand, saying “most scientists are not serving rural areas” tells us some useful scientific knowledge exists but (i) too few scientists are working on it mindful of the local context and (ii) there are problems with translating it from the lab bench to its application in the field, at ground zero.

This version of this post benefited from inputs from and feedback by Prathmesh Kher.

Neural Networks (MNIST inference) on the “3-cent” Microcontroller

By: cpldcpu
2 May 2024 at 23:59

Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network while still achieving good test accuracy on the MNIST dataset? When it comes to absolutely low-end microcontrollers, there is hardly a more compelling target than the Padauk 8-bit microcontrollers. These are microcontrollers optimized for the simplest and lowest-cost applications there are. The smallest device of the portfolio, the PMS150C, sports 1024 words of 13-bit one-time-programmable memory and 64 bytes of RAM, more than an order of magnitude smaller than the CH32V003. In addition, it has a proprietary accumulator-based 8-bit architecture, as opposed to the much more powerful RISC-V instruction set of the CH32V003.

Is it possible to implement an MNIST inference engine that can classify handwritten digits on a PMS150C as well?

On the CH32V003 I used MNIST samples that were downscaled from 28×28 to 16×16, so that every sample takes 256 bytes of storage. This is quite acceptable when 16 kB of flash are available, but with only 1 kword of ROM it is too much. Therefore I started by downscaling the dataset to 8×8 pixels.
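The post doesn’t show the downscaling step itself; a minimal sketch of one way to do it, 2×2 average pooling from 16×16 down to 8×8 (the function name and the choice of resampling method are my assumptions, not from the original code), could look like this:

```c
#include <stdint.h>

/* Downscale a 16x16 grayscale image to 8x8 by averaging each 2x2
   block of pixels. Illustrative sketch only; the post does not
   specify the exact resampling method it used. */
void downscale_16_to_8(const uint8_t in[16][16], uint8_t out[8][8]) {
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++) {
            /* sum of the four source pixels fits in 16 bits (max 1020) */
            uint16_t s = in[2*y][2*x]     + in[2*y][2*x + 1]
                       + in[2*y + 1][2*x] + in[2*y + 1][2*x + 1];
            out[y][x] = (uint8_t)(s / 4); /* mean of the 2x2 block */
        }
    }
}
```

Averaging (rather than subsampling) keeps some of the stroke information from the discarded pixels, which matters at such low resolutions.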

The image above shows a few samples from the dataset at both resolutions. At 16×16 it is still easy to discriminate different numbers. At 8×8 it is still possible to guess most numbers, but a lot of information is lost.

Surprisingly, it is still possible to train a machine learning model to recognize even these very low-resolution numbers with impressive accuracy. It’s important to remember that the test dataset contains 10,000 images that the model does not see during training. The only way for a very small model to recognize these images accurately is to identify common patterns; the model capacity is too limited to “remember” complete digits. I trained a number of different network combinations to understand the trade-off between network memory footprint and achievable accuracy.

Parameter Exploration

The plot above shows the result of my hyperparameter exploration experiments, comparing models with different configurations of weights and quantization levels from 1 to 4 bits for input images of 8×8 and 16×16 pixels. The smallest models had to be trained without data augmentation, as they would not converge otherwise.

Again, there is a clear relationship between test accuracy and the memory footprint of the network. Increasing the memory footprint improves accuracy up to a certain point. For 16×16, around 99% accuracy can be achieved at the upper end, while around 98.5% is achieved for 8×8 test samples. This is still quite impressive, considering the significant loss of information for 8×8.

For small models, 8×8 achieves better accuracy than 16×16. The reason for this is that the size of the first layer dominates in small models, and this size is reduced by a factor of 4 for 8×8 inputs.

Surprisingly, it is possible to achieve over 90% test accuracy even on models as small as half a kilobyte. This means that it would fit into the code memory of the microcontroller! Now that the general feasibility has been established, I needed to tweak things further to accommodate the limitations of the MCU.

Training the Target Model

Since the RAM is limited to 64 bytes, the model structure had to keep the number of latent activations during inference to a minimum. I found that it was possible to use layers as narrow as 16 neurons. This reduces the buffer size during inference to only 32 bytes: 16 bytes each for one input buffer and one output buffer, leaving 32 bytes for other variables. The 8×8 input pattern is read directly from the ROM.
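The two-buffer arrangement can be sketched as a ping-pong scheme, with the buffers swapping input/output roles after every layer. This is a hedged illustration with hypothetical names (`layer_forward` stands in for the real MAC-plus-ReLU routine, which on the actual device is the flattened C/assembly shown later in the post):

```c
#include <stdint.h>

#define LAYER_WIDTH 16

/* Two 16-byte activation buffers, alternated between layers so the
   whole inference working set stays at 32 bytes. */
static uint8_t buf_a[LAYER_WIDTH];
static uint8_t buf_b[LAYER_WIDTH];

/* Placeholder for the real fully connected layer (MAC + shift + ReLU);
   here it just copies, to keep the sketch self-contained. */
static void layer_forward(const uint8_t *in, uint8_t *out) {
    for (int i = 0; i < LAYER_WIDTH; i++)
        out[i] = in[i];
}

/* Run n_layers layers, ping-ponging between the two buffers; returns
   a pointer to the buffer holding the final activations. */
uint8_t *run_layers(int n_layers) {
    uint8_t *in = buf_a, *out = buf_b;
    for (int l = 0; l < n_layers; l++) {
        layer_forward(in, out);
        uint8_t *t = in; in = out; out = t; /* swap roles for next layer */
    }
    return in; /* after the final swap, 'in' holds the last output */
}
```

The point of the swap is that no layer ever needs a third buffer: the previous layer’s input becomes free scratch space the moment its output is written.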

Furthermore, I used 2-bit weights with irregular spacing of (-2, -1, 1, 2) to allow for a simplified implementation of the inference code. I also skipped layer normalization and instead used a constant shift to rescale activations. These changes slightly reduced accuracy. The resulting model structure is shown below.

All things considered, I ended up with a model with 90.07% accuracy and a total of 3392 bits (0.414 kilobytes) in 1696 weights, as shown in the log below. The panel on the right displays the first layer weights of the trained model, which directly mask features in the test images. In contrast to the higher accuracy models, each channel seems to combine many features at once, and no discernible patterns can be seen.
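The quoted numbers are easy to sanity-check. The post doesn’t spell out the layer dimensions, but a 64-16-16-16-10 fully connected stack (8×8 input, three hidden layers of width 16, 10 output classes) reproduces the 1696-weight figure exactly; at 2 bits per weight that is 3392 bits. The architecture here is therefore my inference, not a stated fact:

```c
/* Count the weights of a bias-free fully connected network given its
   layer widths. dims has n_dims entries, e.g. {64, 16, 16, 16, 10}. */
int count_weights(const int *dims, int n_dims) {
    int total = 0;
    for (int i = 0; i < n_dims - 1; i++)
        total += dims[i] * dims[i + 1]; /* one FC layer's weight matrix */
    return total;
}
```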

Implementation on the Microcontroller

In the first iteration, I used a slightly larger variant of the Padauk Microcontrollers, the PFS154. This device has twice the ROM and RAM and can be reflashed, which tremendously simplifies software development. The C versions of the inference code, including the debug output, worked almost out of the box. Below, you can see the predictions and labels, including the last layer output.

Squeezing everything down to fit into the smaller PMS150C was a different matter. One major issue when programming these devices in C is that every function call consumes RAM for the return stack and function parameters. This is unavoidable because the architecture has only a single register (the accumulator), so all other operations must occur in RAM.

To solve this, I flattened the inference code and implemented the inner loop in assembly to optimize variable usage. The inner loop for memory-to-memory inference of one layer is shown below. The two-bit weight is multiplied with a four-bit activation in the accumulator and then added to a 16-bit register. The multiplication requires only four instructions (t0sn, sl, t0sn, neg), thanks to the powerful bit-manipulation instructions of the architecture. The sign-extending addition (add, addc, sl, subc) also consists of four instructions, demonstrating the limitations of 8-bit architectures.

void fc_innerloop_mem(uint8_t loops) {
    sum = 0;
    do {
        weightChunk = *weightidx++;
__asm
    idxm  a, _activations_idx
    inc   _activations_idx+0

    t0sn  _weightChunk, #6
    sl    a                 ; if (weightChunk & 0x40) in = in + in;
    t0sn  _weightChunk, #7
    neg   a                 ; if (weightChunk & 0x80) in = -in;

    add   _sum+0, a
    addc  _sum+1
    sl    a
    subc  _sum+1

    ; ... 3x more ...

__endasm;
    } while (--loops);

    int8_t sum8 = ((uint16_t)sum) >> 3;  // Normalization
    sum8 = sum8 < 0 ? 0 : sum8;          // ReLU
    *output++ = sum8;
}

In the end, I managed to fit the entire inference code into 1 kword of memory and reduced SRAM usage to 59 bytes, as seen below. (Note that the output from SDCC assumes 2 bytes per instruction word, while the actual instruction word is only 13 bits wide.)

Success! Unfortunately, there was no ROM space left for the soft UART to output debug information. However, based on the verification on the PFS154, I trust that the code works, and since I don’t have any specific application in mind, I left it at that stage.

Summary

It is indeed possible to implement MNIST inference with good accuracy using one of the cheapest and simplest microcontrollers on the market. A lot of memory footprint and processing overhead is usually spent on implementing flexible inference engines that can accommodate a wide range of operators and model structures. Cutting away this overhead and reducing the functionality to its core allows for astonishing simplification at this very low end.

This hack demonstrates that there truly is no fundamental lower limit to applying machine learning and edge inference. However, the feasibility of implementing useful applications at this level is somewhat doubtful.

You can find the project repository here.

The pitfalls of Somanath calling Aditya L1 a “protector”

By: VM
11 June 2024 at 04:11

In a WhatsApp group of which I’m a part, there’s a heated discussion going on around an article published by NDTV on June 10, entitled ‘Sun’s Fury May Fry Satellites, But India Has A Watchful Space Protector’. The article was published after the Indian Space Research Organisation (ISRO) released images of the Sun that the Aditya L1 spacecraft’s instruments (including its coronagraph) captured during the May solar storm. The article also features quotes by ISRO chairman S. Somanath — and some of them in particular prompted the discussion. For example, he says:

“Aditya L1 captured when the Sun got angry this May. If it gets furious in the near future, as scientists suggest, India’s 24x7X365 days’ eye on the Sun is going to provide a forewarning. After all, we have to protect the 50-plus Indian satellites in space that have cost the country an estimated more than ₹ 50,000 crore. Aditya L1 is a celestial protector for our space assets.”

A space scientist on the group pointed out that any solar event that could fry satellites in Earth orbit would also fry Aditya L1, which is stationed at the first Earth-Sun Lagrange point (1.5 million km from Earth in the direction of the Sun), so it doesn’t make sense to describe this spacecraft as a “protector” of India’s “space assets”. Instead, the scientist said, we’re better off describing Aditya L1 as a science mission, which is what it’d been billed as.

Another space scientist in the same group contended that the coronagraph onboard Aditya L1, plus its other instruments, still gives the spacecraft a not insignificant early-warning ability, using which ISRO could consider protective measures. He also said not all solar storms are likely to fry all satellites around Earth, only the very powerful ones; likewise, not all satellites around Earth are engineered to withstand more intense solar radiation to the same extent. With these variables in mind, he added, Aditya L1 — which is protected to a greater degree — could give ISRO folks enough of a head start to manoeuvre ‘weaker’ satellites out of harm’s way or prevent catastrophic failures. By virtue of being ISRO’s eyes on the Sun, then, he suggested Aditya L1 was a scientific mission that could also perform some, but not all, of the functions expected of a full-blown early warning system.

(For such a system vis-a-vis solar weather, he said the fourth or the fifth Earth-Sun Lagrange points would have been better stations.)

I’m putting this down here as a public service message. Characterising a scientific mission — which is driven by scientists’ questions, rather than ISRO’s perception of threats or as part of any overarching strategy of the Indian government — as something else is not harmless because it downplays the fact that we have open questions and that we need to spend time and money answering them. It also creates a false narrative about the mission’s purpose that the people who have spent years designing and building the instruments onboard Aditya L1 don’t deserve, and a false impression of how much room the Indian space programme currently has to launch and operate spacecraft that are dedicated to providing early warnings of bad solar weather.

To be fair, the NDTV article says in a few places that Aditya L1 is a scientific mission, as does astrophysicist Somak Raychaudhury in the last paragraph. It’s just not clear why Somanath characterised it as a “protector” and as a “space-based insurance policy”. NDTV also erred by putting “protector” in the headline (based on my experiences at The Wire and The Hindu, most readers of online articles read and share nothing more than the headline). That it was the ISRO chairman who said these things is more harmful: as the person heading India’s nodal space research body, he has a protagonist’s role in making room in the public imagination for the importance and wonders of scientific missions.

The BHU Covaxin study and ICMR bait

By: VM
28 May 2024 at 04:51

Earlier this month, a study by a team at Banaras Hindu University (BHU) in Varanasi concluded that fully 1% of Covaxin recipients may suffer severe adverse events. One percent is a large number because the multiplier (x in 1/100 * x) is very large — several million people. The study first hit the headlines for claiming it had the support of the Indian Council of Medical Research (ICMR) and reporting that both Bharat Biotech and the ICMR are yet to publish long-term safety data for Covaxin. The latter is probably moot now, with the COVID-19 pandemic well behind us, but it’s the principle that matters. Let it go this time and who knows what else we’ll be prepared to let go.

But more importantly, as The Hindu reported on May 25, the BHU study is too flawed to claim Covaxin is harmful, or claim anything for that matter. Here’s why (excerpt):

Though the researchers acknowledge all the limitations of the study, which is published in the journal Drug Safety, many of the limitations are so critical that they defeat the very purpose of the study. “Ideally, this paper should have been rejected at the peer-review stage. Simply mentioning the limitations, some of them critical to arrive at any useful conclusion, defeats the whole purpose of undertaking the study,” Dr. Vipin M. Vashishtha, director and pediatrician, Mangla Hospital and Research Center, Bijnor, says in an email to The Hindu. Dr. Gautam Menon, Dean (Research) & Professor, Departments of Physics and Biology, Ashoka University shares the same view. Given the limitations of the study one can “certainly say that the study can’t be used to draw the conclusions it does,” Dr. Menon says in an email.

Just because you’ve admitted your study has limitations doesn’t absolve you of the responsibility to interpret your research data with integrity. In fact, the journal needs to speak up here: why did Drug Safety publish the study manuscript? Too often when news of a controversial or bad study is published, the journal that published it stays out of the limelight. While the proximal cause is likely that journalists don’t think to ask journal editors and/or publishers tough questions about their publishing process, there is also a cultural problem here: when shit hits the fan, only the study’s authors are pulled up, but when things are rosy, the journals are out to take credit for the quality of the papers they publish. In either case, we must ask what they actually bring to the table other than capitalising on other scientists’ tendency to judge papers based on the journals they’re published in instead of their contents.

Of course, it’s also possible to argue that, unlike, say, journalistic material, research papers aren’t required to be in the public interest at the time of publication. Yet the BHU paper threatens to undermine public confidence in observational studies, and that can’t be in anyone’s interest. Even at the outset, experts and many health journalists knew that observational studies don’t carry the same weight as randomised controlled trials, and that such studies still serve a legitimate purpose, just not the one to which the BHU study pressed its conclusions.

After the paper’s contents hit the headlines, the ICMR shot off a letter to the BHU research team saying it hasn’t “provided any financial or technical support” to the study and that the study is “poorly designed”. Curiously, the BHU team’s repartee to the ICMR’s letter makes repeated reference to Vivek Agnihotri’s film The Vaccine War. In the same point in which two of these references appear (no. 2), the team writes: “While a study with a control group would certainly be of higher quality, this immediately points to the fact that it is researchers from ICMR who have access to the data with the control group, i.e. the original phase-3 trials of Covaxin – as well publicized in ‘The Vaccine War’ movie. ICMR thus owes it to the people of India, that it publishes the long-term follow-up of phase-3 trials.”

I’m not clear why the team saw fit to appeal to statements made in this of all films. As I’ve written earlier, The Vaccine War — which I haven’t watched but which directly references journalistic work by The Wire during and of the pandemic — is most likely a mix of truths and fictionalisation (and not in the clever, good-faith ways in which screenwriters adapt textual biographies for the big screen), with the fiction designed to serve the BJP’s nationalist political narratives. So when the letter says in its point no. 5 that the ICMR should apologise to a female member of the BHU team for allegedly “spreading a falsehood” about her and offers The Vaccine War as a counterexample (“While ‘The Vaccine War’ movie is celebrating women scientists…”), I can’t but retch.

Together with another odd line in the letter — that the “ICMR owes it to the people of India” — the appeals read less like a debate between scientists on the merits and the demerits of the study and more like they’re trying to bait the ICMR into doing better. I’m not denying the ICMR started it, as a child might say, but this shouldn’t have prevented the BHU team from keeping things dignified. For example, the BHU letter reads: “It is to be noted that interim results of the phase-3 trial, also cited by Dr. Priya Abraham in ‘The Vaccine War’ movie, had a mere 56 days of safety follow-up, much shorter than the one-year follow-up in the IMS-BHU study.” Surely the 56-day period finds mention in a more respectable and reliable medium than a film that confuses you about what’s real and what’s not?

In all, the BHU study seems to have been designed to draw attention to gaps in the safety data for Covaxin — but by adopting such a provocative route, all that took centre stage was the team’s spat with the ICMR plus the study’s own flaws.

India can do it!

By: VM
23 May 2024 at 11:39
Against the background of the H5N1 pandemic in birds and an epidemic among cattle in the US, the Government of Victoria, in Australia, published a statement on May 21 that the province had recorded the country’s first human H5N1 case. This doesn’t seem to be much cause (but also not negligible cause) for concern because, according to the statement as well as other experts, this strain of avian influenza hasn’t evolved to spread easily between people. The individual in question (“a child”, according to Victoria’s statement) had a severe form of the infection but has since recovered fully.

But this story isn’t testament to Australia’s pathogen surveillance, at least not primarily; it’s testament to India’s ability to do it. In Vivek Agnihotri’s film The Vaccine War — purportedly about the efforts of Bharat Biotech, the ICMR, and the NIV to develop Covaxin during the COVID-19 pandemic — Raima Sen, who plays the science editor of a fictitious publication called The Daily Wire, says of the effort to develop the vaccine, in a moment of amusing cringe on a TV news show, that “India can’t do it”. Agnihotri didn’t make it difficult to see myself in Sen’s character: I was science editor of the very real publication The Wire when Covaxin was being developed. And I’m here to tell you that India, in point of fact, can: according to Victoria’s statement, the child became infected by a strain of the H5N1 virus in India and fell ill in March 2024.

And what is it that India can do? According to Victoria’s statement, spotting the infection required “Victoria’s enhanced surveillance system”. Further, “most strains don’t infect humans”; India was able to serve the child with one of the rare strains that could. “Transmission to humans” is also “very rare”, happening largely among people who “have contact with infected birds or animals, or their secretions, while in affected areas of the world”. Specifically: “Avian influenza is spread by close contact with an infected bird (dead or alive), e.g. handling infected birds, touching droppings or bedding, or killing/preparing infected poultry for cooking. You can’t catch avian influenza through eating fully cooked poultry or eggs, even in areas with an outbreak of avian influenza.”

So let’s learn our lesson: If we give India’s widespread dysregulation of poultry and cattle health, underinvestment in pathogen surveillance, and its national government’s unique blend of optimism and wilful ignorance a chance, the country will give someone somewhere a rare strain of an avian influenza virus that can infect humans. Repeat after me: India can do it!

The billionaire’s solution to climate change

By: VM
10 May 2024 at 14:14

On May 3, Bloomberg published a profile of Salesforce CEO Marc Benioff’s 1t.org project to plant or conserve one trillion trees around the world in order to sequester 200 gigatonnes of carbon. The idea reportedly came to Benioff from Thomas Crowther’s infamous September 2015 paper in Nature that claimed restoring trees was the world’s best way to ‘solve’ climate change.

Following pointed criticism of the paper’s attitude and conclusions, the authors revised it to a significant extent in October 2019, tempering predictions about the carbon sequestration potential of the world’s trees and withdrawing the assertion that no other solution could work better than planting and/or restoring trees.

According to Bloomberg’s profile, Benioff’s 1t.org initiative seems to be faltering as well, with unreliable accounting of the pledges companies submitted to 1t.org and, unsurprisingly, many of these companies engaging in shady carbon-credit transactions. This is also why Jane Goodall’s comment in the article is disagreeable: it isn’t better for these companies to do something vis-à-vis trees than nothing at all because the companies are only furthering an illusion of climate action — claiming to do something while doing nothing at all — and perpetuating the currency of counterproductive ideas like carbon-trading.

A smattering of Benioff’s comments to Bloomberg is presented throughout the profile, as a result of which he might come across as a sage figure — but take them together, in one go, and he actually sounds like a child.

“I think that there’s a lot of people who are attacking nature and hate nature. I’m somebody who loves nature and supports nature.”

This comment follows one by “the climate and energy policy director at the Union of Concerned Scientists”, Rachel Cleetus, that trees “should not be seen as a substitute for the core task at hand here, which is getting off fossil fuels.” But in Benioff’s telling, Cleetus is a [checks notes] ‘nature hater’. Similarly, the following thoughtful comment is Benioff’s view of other scientists who criticised the Crowther et al. paper:

“I view it as nonsense.”

Moving on…

“I was in third grade. I learned about photosynthesis and I got it right away.”

This amazing quote appears as the last line of a paragraph; the rest of it goes thus: “Slashing fossil fuel consumption is critical to slowing warming, but scientists say we also need to pull carbon that’s already in the air back out of it. Trees are really good at that, drawing in CO2 and then releasing oxygen.” Then Benioff’s third-grade quote appears. It’s just comedy.

His other statements make for an important reminder of the oft-understated purpose of scientific communication. Aside from being published by a ‘prestige’ journal — Nature — the Crowther et al. paper presented an easy and straightforward solution to reducing the concentration of atmospheric carbon: to fix lots and lots of trees. Even without knowing the specific details of the study’s merits, any environmental scientist in South and Southeast Asia, Africa, and South America, i.e. the “Global South”, would have said this is a terrible idea.

“I said, ‘What? One trillion trees will sequester more than 200 gigatons of carbon? We have to get on this right now. Who’s working on this?’”

“Everybody agreed on tree diplomacy. I was in shock.”

“The greatest, most scalable technology we have today to sequester carbon is the tree.”

The countries in these regions have become sites of aggressive afforestation that provide carbon credits for the “Global North” to encash as licenses to keep emitting carbon. But the flip sides of these exercises are: (i) only some areas are naturally amenable to hosting trees, and it’s not feasible to plant them willy-nilly through ecosystems that don’t naturally support them; (ii) unless those in charge plant native species, afforestation will only precipitate local ecosystem decline, which will further lower the sequestration potential; (iii) unafforested land runs the risk of being perceived as ‘waste land’, sidelining the ecosystem services provided by wetlands, deserts, grasslands, etc.; and (iv) many of these countries need to be able to emit more carbon before being expected to reach net-zero, in order to pull their populations out of poverty and become economically developed — the same right the “Global North” countries had in the 19th and 20th centuries.

Scientists have known all this from well before the Crowther et al. paper turned up. Yet Benioff leapt for it the moment it appeared, and was keen on seeing it through to its not-so-logical end. It’s impossible to miss that being worth $10 billion didn’t encourage him to use all that wealth and his clout to tackle the more complex actions in the soup of all actions that make up humankind’s response to climate change. Instead, he used his wealth to go for an easy way out, while dismissing informed criticism of it as “nonsense”.

In fact, a similar sort of ‘ease-seeking’ is visible in the Crowther et al. paper as well, as brought out in a comment published by Veldman et al. In response to this, Crowther et al. wrote in October 2019 that their first paper simply presented value-neutral knowledge and that it shouldn’t be blamed for how it’s been construed:

Veldman et al. (4) criticize our results in dryland biomes, stating that many of these areas simply should not be considered suitable for tree restoration. Generally, we must highlight that our analysis does not ever address whether any actions “should” or “should not” take place. Our analysis simply estimated the biophysical limits of global forest growth by highlighting where trees “can” exist.

In fact, the October 2019 correction to Crowther et al., in which the authors walked back on the “trees are the best way” claim, was particularly important because it has come to mirror the challenges Benioff has found himself facing through 1t.org: it isn’t just that there are other ways to improve climate mitigation and adaptation, it’s that those ways are required, and giving up on them for any reason could never be short of a moral hazard, if not an existential one.

Featured image credit: Dawid Zawiła/Unsplash.

The “coherent water” scam is back

By: VM
10 May 2024 at 14:05

On May 7, I received a press release touting a product called “coherent water” made by a company named Analemma Water India. According to the document, “coherent water” is based on more than “15 years of rigorous research and development” and confers “a myriad … health benefits”. This “rigorous research” is flawed research. There’s definitely such a thing as “coherent water” and it’s indistinguishable from regular water at all scales. The “coherent water” scam has reared its serpentine head before under the names “hexagonal water”, “structured water”, “polywater”, “exclusion zone water”, and water with one additional hydrogen and oxygen atom each, i.e. “H3O2”. Analemma’s “Mother Water”, its brand name for “coherent water”, is itself a rebranding of a product called “Somarka” that hit the Indian market in 2021.

The scam here is that the constituent molecules of “coherent water” get together to form hexagonal structures that persist indefinitely. And these structures distinguish “coherent water”, giving it wonderful abilities like possessing a greater energy content than regular water, boosting one’s “life force”, and — this one I love — being able to “encourage” other water molecules around it to form similar hexagonal assemblages.

I hope people won’t fall for this hoax but I know some will. But given the price of even the cheapest thing Analemma is offering — a vial of “Mother Water” that it claims is worth $180 (Rs 15,000) — the buyers will be some rich buggers, and I think that’s okay. Fools, their wealth, and all that. Then again, it’s somewhat saddening that while (some) people are fighting to keep junk foods and bad medicines out of the market, we have “coherent water” companies and their PR outfits bravely broadcasting their press releases to news publications (and at least one publishing it) at around the same time.

If you’re curious about the issue with “coherent water”: at room temperature and pressure, the hydrogen atoms of water molecules keep forming and breaking weak bonds, called hydrogen bonds, with the oxygen atoms of neighbouring molecules. These bonds last for a very short duration and give water its high boiling point and ice crystals their characteristic hexagonal structure.

Sometimes water molecules organise themselves using these bonds into a hexagonal structure as well. But these formations are very short-lived because the hydrogen bonds last only around 200 quadrillionths of a second at a time, if not lower. According to the hoax, however, in “coherent water”, the hydrogen bonds continue to hold such that its water molecules persist in long-lived hexagonal clusters. But this conclusion is not supported by research — nor is the claim that, “When swirled in normal water, the [magic water] encourages chaotic and irregular H2O molecules to rearrange into the same liquid crystalline structure as the [magic water]. What’s more, the coherent structure is retained over time – this stability is unique to Analemma.”

I don’t think this ability is unique to the “Mother Water”. In 1963, a scientist named Felix Hoenikker invented a variant of ice that, when it came in contact with water cooler than 45.8º C, quickly converted it to ice-nine as well. Sadly Hoenikker had to abandon the project after he realised the continued use of ice-nine would simply destroy all life on Earth.

Anyway, water that’s neither acidic nor basic also has a few rare hydronium (H3O+) and hydroxide (OH-) ions floating around as well. The additional hydrogen ion — basically a proton — from the hydronium ion is engaged in a game of musical chairs with the protons in the same volume of water, each one jumping to a molecule, dislodging a proton there, which jumps to another molecule, and so on. This is happening so rapidly that the hydrogen atoms in every water molecule are practically being changed several thousand times every minute.

In this milieu, it’s impossible for a fixed group of water molecules to hang around together. In addition, the ultra-short lifetime of the hydrogen bonds is what makes water a liquid: a thing that flows, fills containers, squeezes between gaps, collects into droplets, etc. Take this ability and the fast-switching hydrogen bonds away, as “coherent water” claims to do by imposing a fixed structure, and it’s no longer water — any kind of water.
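To put that timescale in perspective, here’s a back-of-envelope sketch, using the roughly 200-quadrillionths-of-a-second figure from above:

```python
# A hydrogen bond in liquid water lasts around 200 quadrillionths of a
# second (200 femtoseconds). So over a single second, any one bonding
# site can form and break a bond trillions of times over -- hardly the
# stuff of "long-lived hexagonal clusters".
bond_lifetime_s = 200e-15  # 200 quadrillionths of a second

rearrangements_per_second = 1 / bond_lifetime_s
print(f"{rearrangements_per_second:.0e}")  # on the order of 5e12 per second
```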

Analemma has links to some reports on its website; if you’re up to it, I suggest going through them with a simple checklist of the signs of bad research side by side. You should be able to spot most of the gunk.

Infinity in 15 kilograms

By: VM
19 April 2024 at 21:54

While space is hard, there are also different kinds of hardness. For example, on April 15, ISRO issued a press release saying it had successfully tested nozzles made of a carbon-carbon composite that would replace those made of Columbium alloy in the PSLV rocket’s fourth stage and thus increase the rocket’s payload capacity by 15 kg. Just 15 kg!

The successful testing of the C-C nozzle divergent marked a major milestone for ISRO. On March 19, 2024, a 60-second hot test was conducted at the High-Altitude Test (HAT) facility in ISRO Propulsion Complex (IPRC), Mahendragiri, confirming the system’s performance and hardware integrity. Subsequent tests, including a 200-second hot test on April 2, 2024, further validated the nozzle’s capabilities, with temperatures reaching 1216K, matching predictions.

Granted, the PSLV’s cost of launching a single kilogram to low-earth orbit is more than Rs 8 lakh (a very conservative estimate, I reckon) – so an additional 15 kg is worth at least an additional Rs 1.2 crore per launch. But finances alone are not a useful way to evaluate this addition: more payload mass could mean, say, one additional instrument onboard an indigenous spacecraft instead of waiting for a larger rocket to become available or postponing that instrument’s launch to a future mission.
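The arithmetic behind that figure, as a quick sketch (the per-kg cost is the conservative estimate above):

```python
# Value of 15 kg of extra PSLV payload capacity, at a conservative
# launch cost of Rs 8 lakh (800,000 rupees) per kg to low-earth orbit.
cost_per_kg_rs = 8e5       # Rs 8 lakh per kg
extra_payload_kg = 15

extra_value_rs = cost_per_kg_rs * extra_payload_kg
print(extra_value_rs / 1e7)  # in crore: 1.2
```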

But equally fascinating, and pride- and notice-worthy, to me is the fact that ISRO’s scientists and engineers were able to fine-tune the PSLV to this extent. This isn’t to say I’m surprised they were able to do it at all; rather, the feat is as much about the benefits accruing to the rocket, and the Indian space programme by extension, as about R&D advances on the materials science front. It speaks to the oft-underestimated importance of the foundations on which a space programme is built.

Vikram Sarabhai Space Centre … has leveraged advanced materials like Carbon-Carbon (C-C) Composites to create a nozzle divergent that offers exceptional properties. By utilizing processes such as carbonization of green composites, Chemical Vapor Infiltration, and High-Temperature Treatment, it has produced a nozzle with low density, high specific strength, and excellent stiffness, capable of retaining mechanical properties even at elevated temperatures.

A key feature of the C-C nozzle is its special anti-oxidation coating of Silicon Carbide, which extends its operational limits in oxidizing environments. This innovation not only reduces thermally induced stresses but also enhances corrosion resistance, allowing for extended operational temperature limits in hostile environments.

The advances here draw from insights into metallurgy, crystallography, ceramic engineering, composite materials, numerical methods, etc., which in turn stand on the shoulders of people trained well enough in these areas, the educational institutions (and their teachers) that did so, and the schooling system and socio-economic support structures that brought them there. A country needs a lot to go right for achievements like squeezing an extra 15 kg into the payload capacity of an already highly fine-tuned machine to be possible. It’s a bummer that such advances are currently largely vertically restricted, except in the case of the Indian space programme, rather than diffusing freely across sectors.

Other enterprises ought to have these particular advantages ISRO enjoys. Even should one or two rockets fail, a test not work out or a spacecraft go kaput sooner than designed, the PSLV’s new carbon-carbon-composite nozzles stand for the idea that we have everything we need to keep trying, including the opportunity to do better next time. They represent the idea of how advances in one field of research can lead to advances in another, such that each field is no longer held back by the limitations of its starting conditions.

The BHU Covaxin study and ICMR bait

By: V.M.
28 May 2024 at 03:51

Earlier this month, a study by a team at Banaras Hindu University (BHU) in Varanasi concluded that fully 1% of Covaxin recipients may suffer severe adverse events. One percent is a large number because the multiplier (x in 1/100 * x) is very large — several million people. The study first hit the headlines for claiming it had the support of the Indian Council of Medical Research (ICMR) and reporting that both Bharat Biotech and the ICMR are yet to publish long-term safety data for Covaxin. The latter is probably moot now, with the COVID-19 pandemic well behind us, but it’s the principle that matters. Let it go this time and who knows what else we’ll be prepared to let go.
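A sketch of the scale at stake (the recipient count below is a hypothetical figure for illustration, not one from the study):

```python
# Why a 1% adverse-event rate is alarming at vaccine scale: multiply
# the rate by a large recipient pool. The pool size here is assumed
# purely for illustration.
rate = 1 / 100
recipients = 10_000_000  # hypothetical pool of 10 million people

affected = rate * recipients
print(round(affected))  # 100000
```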

But more importantly, as The Hindu reported on May 25, the BHU study is too flawed to claim Covaxin is harmful, or claim anything for that matter. Here’s why (excerpt):

Though the researchers acknowledge all the limitations of the study, which is published in the journal Drug Safety, many of the limitations are so critical that they defeat the very purpose of the study. “Ideally, this paper should have been rejected at the peer-review stage. Simply mentioning the limitations, some of them critical to arrive at any useful conclusion, defeats the whole purpose of undertaking the study,” Dr. Vipin M. Vashishtha, director and pediatrician, Mangla Hospital and Research Center, Bijnor, says in an email to The Hindu. Dr. Gautam Menon, Dean (Research) & Professor, Departments of Physics and Biology, Ashoka University shares the same view. Given the limitations of the study one can “certainly say that the study can’t be used to draw the conclusions it does,” Dr. Menon says in an email.

Just because you’ve admitted your study has limitations doesn’t absolve you of the responsibility to interpret your research data with integrity. In fact, the journal needs to speak up here: why did Drug Safety publish the study manuscript? Too often when news of a controversial or bad study is published, the journal that published it stays out of the limelight. While the proximal cause is likely that journalists don’t think to ask journal editors and/or publishers tough questions about their publishing process, there is also a cultural problem here: when shit hits the fan, only the study’s authors are pulled up, but when things are rosy, the journals are out to take credit for the quality of the papers they publish. In either case, we must ask what they actually bring to the table other than capitalising on other scientists’ tendency to judge papers based on the journals they’re published in instead of their contents.

Of course, it’s also possible to argue that unlike, say, journalistic material, research papers aren’t required to be in the public interest at the time of publication. Yet the BHU paper threatens to undermine public confidence in observational studies, and that can’t be in anyone’s interest. Even at the outset, experts and many health journalists knew that observational studies don’t carry the same weight as randomised controlled trials, and that such studies still serve a legitimate purpose — just not the one to which the BHU study pressed its conclusions.

After the paper’s contents hit the headlines, the ICMR shot off a letter to the BHU research team saying it hadn’t “provided any financial or technical support” to the study and that the study was “poorly designed”. Curiously, the BHU team’s repartee to the ICMR’s letter makes repeated reference to Vivek Agnihotri’s film The Vaccine War. In the same point in which two of these references appear (no. 2), the team writes: “While a study with a control group would certainly be of higher quality, this immediately points to the fact that it is researchers from ICMR who have access to the data with the control group, i.e. the original phase-3 trials of Covaxin – as well publicized in ‘The Vaccine War’ movie. ICMR thus owes it to the people of India, that it publishes the long-term follow-up of phase-3 trials.”

I’m not clear why the team saw fit to appeal to statements made in this of all films. As I’ve written earlier, The Vaccine War — which I haven’t watched but which directly references journalistic work by The Wire during and of the pandemic — is most likely a mix of truths and fictionalisation (and not in the clever, good-faith ways in which screenwriters adapt textual biographies for the big screen), with the fiction designed to serve the BJP’s nationalist political narratives. So when the letter says in its point no. 5 that the ICMR should apologise to a female member of the BHU team for allegedly “spreading a falsehood” about her and offers The Vaccine War as a counterexample (“While ‘The Vaccine War’ movie is celebrating women scientists…”), I can’t but retch.

Together with another odd line in the letter — that the “ICMR owes it to the people of India” — the appeals read less like a debate between scientists on the merits and demerits of the study and more like an attempt to bait the ICMR into doing better. I’m not denying the ICMR started it, as a child might say, but that shouldn’t have prevented the BHU team from keeping things dignified. For example, the BHU letter reads: “It is to be noted that interim results of the phase-3 trial, also cited by Dr. Priya Abraham in ‘The Vaccine War’ movie, had a mere 56 days of safety follow-up, much shorter than the one-year follow-up in the IMS-BHU study.” Surely the 56-day period finds mention in a more respectable and reliable medium than a film that confuses you about what’s real and what’s not?

In all, the BHU study seems to have been designed to draw attention to gaps in the safety data for Covaxin — but by adopting such a provocative route, all that took centre stage was its spat with the ICMR plus its own flaws.

India can do it!

By: V.M.
23 May 2024 at 06:09
Against the background of the H5N1 panzootic in birds and an epidemic among cattle in the US, the Government of Victoria, in Australia, published a statement on May 21 saying the state had recorded the country’s first human H5N1 case. This doesn’t seem to be much cause (but also not negligible cause) for concern because, according to the statement as well as other experts, this strain of avian influenza hasn’t evolved to spread easily between people. The individual in question — “a child”, according to Victoria’s statement — had a severe form of the infection but has since recovered fully.

But this story isn’t testament to Australia’s pathogen surveillance, at least not primarily; it’s testament to India’s ability to do it. In Vivek Agnihotri’s film The Vaccine War — purportedly about the efforts of Bharat Biotech, the ICMR, and the NIV to develop Covaxin during the COVID-19 pandemic — Raima Sen, who plays the science editor of a fictitious publication called The Daily Wire, says about developing the vaccine in a moment of amusing cringe on a TV news show that “India can’t do it”. Agnihotri didn’t make it difficult to see myself in Sen’s character: I was science editor of the very real publication The Wire when Covaxin was being developed. And I’m here to tell you that India, in point of fact, can: according to Victoria’s statement, the child became infected by a strain of the H5N1 virus in India and fell ill in March 2024.

And what is it that India can do? According to Victoria’s statement, spotting the infection required “Victoria’s enhanced surveillance system”. Further, “most strains don’t infect humans”; India was able to serve the child with one of the rare strains that could. “Transmission to humans” is also “very rare”, happening largely among people who “have contact with infected birds or animals, or their secretions, while in affected areas of the world”. Specifically: “Avian influenza is spread by close contact with an infected bird (dead or alive), e.g. handling infected birds, touching droppings or bedding, or killing/preparing infected poultry for cooking. You can’t catch avian influenza through eating fully cooked poultry or eggs, even in areas with an outbreak of avian influenza.”

So let’s learn our lesson: If we give India’s widespread dysregulation of poultry and cattle health, underinvestment in pathogen surveillance, and its national government’s unique blend of optimism and wilful ignorance a chance, the country will give someone somewhere a rare strain of an avian influenza virus that can infect humans. Repeat after me: India can do it!

The billionaire’s solution to climate change

By: V.M.
10 May 2024 at 08:44

On May 3, Bloomberg published a profile of Salesforce CEO Marc Benioff’s 1t.org project to plant or conserve one trillion trees around the world in order to sequester 200 gigatonnes of carbon. The idea reportedly came to Benioff from the infamous July 2019 paper in Science by Thomas Crowther’s lab that claimed restoring trees was the world’s best way to ‘solve’ climate change.
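For a sense of what those figures imply per tree, a quick sketch (both numbers are the ones quoted above):

```python
# Implied sequestration per tree under the 1t.org premise:
# 200 gigatonnes of carbon spread across one trillion trees.
total_carbon_tonnes = 200e9  # 200 gigatonnes, in tonnes
n_trees = 1e12               # one trillion trees

carbon_per_tree_kg = total_carbon_tonnes / n_trees * 1000
print(carbon_per_tree_kg)  # ~200 kg of carbon per tree
```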

Following pointed criticism of the paper’s attitude and conclusions, the authors revised it significantly in October 2019, tempering its predictions about the carbon-sequestration potential of the world’s trees and withdrawing the assertion that no other solution could work better than planting and/or restoring trees.

According to Bloomberg’s profile, Benioff’s 1t.org initiative seems to be faltering as well, with unreliable accounting of the pledges companies submitted to 1t.org and, unsurprisingly, many of these companies engaging in shady carbon-credit transactions. This is also why Jane Goodall’s comment in the article is disagreeable: it isn’t better for these companies to do something vis-à-vis trees than nothing at all because the companies are only furthering an illusion of climate action — claiming to do something while doing nothing at all — and perpetuating the currency of counterproductive ideas like carbon-trading.

A smattering of Benioff’s comments to Bloomberg is presented throughout the profile, as a result of which he might come across as a sage figure — but take the comments together, in one go, and he sounds more like a child.

“I think that there’s a lot of people who are attacking nature and hate nature. I’m somebody who loves nature and supports nature.”

This comment follows one by “the climate and energy policy director at the Union of Concerned Scientists”, Rachel Cleetus, that trees “should not be seen as a substitute for the core task at hand here, which is getting off fossil fuels.” But in Bloomberg’s telling, Cleetus is a [checks notes] ‘nature hater’. Similarly, the following thoughtful comment is Benioff’s view of other scientists who criticised the Crowther et al. paper:

“I view it as nonsense.”

Moving on…

“I was in third grade. I learned about photosynthesis and I got it right away.”

This amazing quote appears as the last line of a paragraph; the rest of it goes thus: “Slashing fossil fuel consumption is critical to slowing warming, but scientists say we also need to pull carbon that’s already in the air back out of it. Trees are really good at that, drawing in CO2 and then releasing oxygen.” Then Benioff’s third-grade quote appears. It’s just comedy.

His other statements make for an important reminder of the oft-understated purpose of scientific communication. Aside from being published by a ‘prestige’ journal, the Crowther et al. paper presented an easy and straightforward solution to reducing the concentration of atmospheric carbon: plant lots and lots of trees. Even without knowing the specific details of the study’s merits, any environmental scientist in South and Southeast Asia, Africa, and South America, i.e. the “Global South”, would have said this is a terrible idea.

“I said, ‘What? One trillion trees will sequester more than 200 gigatons of carbon? We have to get on this right now. Who’s working on this?’”

“Everybody agreed on tree diplomacy. I was in shock.”

“The greatest, most scalable technology we have today to sequester carbon is the tree.”

The countries in these regions have become sites of aggressive afforestation that provide carbon credits for the “Global North” to encash as licenses to keep emitting carbon. But the flip sides of these exercises are: (i) only some areas are naturally amenable to hosting trees, and it’s not feasible to plant them willy-nilly through ecosystems that don’t naturally support them; (ii) unless those in charge plant native species, afforestation will only precipitate local ecosystem decline, which will further lower the sequestration potential; (iii) unafforested land runs the risk of being perceived as ‘waste land’, sidelining the ecosystem services provided by wetlands, deserts, grasslands, etc.; and (iv) many of these countries need to be able to emit more carbon before being expected to reach net-zero, in order to pull their populations out of poverty and become economically developed — the same right the “Global North” countries had in the 19th and 20th centuries.

Scientists have known all this from well before the Crowther et al. paper turned up. Yet Benioff leapt for it the moment it appeared, and was keen on seeing it through to its not-so-logical end. It’s impossible to miss the fact that being worth $10 billion didn’t encourage him to use all that wealth and his clout to tackle the more complex actions in the soup of all actions that make up humankind’s response to climate change. Instead, he used his wealth to go for an easy way out, while dismissing informed criticism of it as “nonsense”.

In fact, a similar sort of ‘ease-seeking’ is visible in the Crowther et al. paper as well, as brought out in a comment published by Veldman et al. In response to this, Crowther et al. wrote in October 2019 that their first paper simply presented value-neutral knowledge and that it shouldn’t be blamed for how it’s been construed:

Veldman et al. (4) criticize our results in dryland biomes, stating that many of these areas simply should not be considered suitable for tree restoration. Generally, we must highlight that our analysis does not ever address whether any actions “should” or “should not” take place. Our analysis simply estimated the biophysical limits of global forest growth by highlighting where trees “can” exist.

In fact, the October 2019 correction to Crowther et al., in which the authors walked back the “trees are the best way” claim, was particularly important because it has come to mirror the challenges Benioff has found himself facing through 1t.org: it isn’t just that there are other ways to improve climate mitigation and adaptation, it’s that those ways are required, and giving up on them for any reason would be nothing short of a moral hazard, if not an existential one.

Featured image credit: Dawid Zawiła/Unsplash.

Justice delayed but a ton of bricks await

By: V.M.
11 April 2024 at 11:46

From ‘SC declines Ramdev, Patanjali apology; expresses concern over FMCGs taking gullible consumers ‘up and down the garden path’’, The Hindu, April 10, 2024:

The Supreme Court has refused to accept the unconditional apology from Patanjali co-founder Baba Ramdev and managing director Acharya Balkrishna for advertising medical products in violation of giving an undertaking in the apex court in November 2023 prohibiting the self-styled yoga guru. … Justices Hima Kohli and Ahsanuddin Amanullah told senior advocate Mukul Rohatgi that Mr. Ramdev has apologised only after being caught on the back foot. His violations of the undertaking to the court was deliberate and willful, they said. The SC recorded its dissatisfaction with the apology tendered by proposed contemnors Patanjali, Mr. Balkrishna and Mr. Ramdev, and posted the contempt of court case on April 16.

… The Bench also turned its ire on the Uttarakhand State Licensing Authority for “twiddling their thumbs” and doing nothing to prevent the publications and advertisements. “Why should we not come down like a ton of bricks on your officers? They have been fillibustering,” Justice Kohli said. The court said the assurances of the State Licensing Authority and the apology of the proposed contemnors are not worth the paper they are written on.

A very emotionally gratifying turn of events, but perhaps not as gratifying as it might have been had it transpired at the government’s hands when Patanjali was issuing its advertisements of pseudoscience-backed COVID-19 cures during the pandemic. Or if the Supreme Court had proceeded to actually hold the men in contempt instead of making a slew of observations and setting a date for another hearing. Still, it’s something to cheer for and occasion to reserve some hope for the April 16 session.

But in matters involving Ramdev and Patanjali Ayurved, many ministers of the current government ought to be pulled up as well, including former Union health minister Harsh Vardhan, Union micro, small, and medium enterprises minister Nitin Gadkari, and Prime Minister Narendra Modi. Modi’s governance and policies, both written and unwritten, enabled Patanjali’s charlatanry, while Messrs Vardhan and Gadkari were present at an event in February 2021 when Patanjali launched a product it claimed could cure COVID-19, with Vardhan, who was health minister then, speaking in favour of people buying and using the unproven thing.

I think the Supreme Court’s inclination to hold Ramdev et al. in contempt should extend to Vardhan as well, not only because his presence at the event conferred a sheen of legitimacy on the product but also because of a specific bit of theatrics he pulled in May the same year involving Ramdev and former Prime Minister Manmohan Singh. Ramdev apologising because that’s more politically convenient rather than because he thinks he screwed up isn’t new. That May, he’d called evidence-based medicine “stupid” and alleged such medicine had killed more people than the virus itself. After some virulent public backlash, Vardhan wrote a really polite letter to Ramdev asking him to apologise, and Ramdev obliged.

But just the previous month, in April 2021, Manmohan Singh had written a letter to Modi suggesting a few courses of action to improve India’s response to the virus’s spread. Its contents were perfectly reasonable, yet Vardhan responded by accusing Singh of spreading “vaccine hesitancy” and alleging that Congress-ruled states were responsible for fanning India’s deadly second wave of COVID-19 infections in 2021. These were all ridiculous assertions. But equally importantly, his lashing out stood in stark contrast to his letter to Ramdev: respect for the self-styled godman and businessman whose company was attempting to corner the market for COVID-19 cures with untested, pseudo-Ayurvedic froth, versus unhinged rhetoric for a well-regarded economist and statesman.

For this alone, Vardhan deserves the “ton of bricks” the Supreme Court is waiting with.

The "coherent water" scam is back

By: VM
10 May 2024 at 05:03

On May 7, I received a press release touting a product called "coherent water" made by a company named Analemma Water India. According to the document, "coherent water" is based on more than "15 years of rigorous research and development" and confers "a myriad … health benefits".

This "rigorous research" is flawed research. There's definitely such a thing as "coherent water" and it's indistinguishable from regular water at all scales. The "coherent water" scam has reared its serpentine head before with the names "hexagonal water", "structured water", "polywater", "exclusion zone water", and water with one additional hydrogen and oxygen atom each, i.e. "H3O2". Analemma's "Mother Water", which is its brand name for "coherent water", itself is a rebranding of a product called "Somarka" that hit the Indian market in 2021.

The scam here is that the constituent molecules of "coherent water" get together to form hexagonal structures that persist indefinitely. And these structures distinguish "coherent water", giving it wonderful abilities like possessing a greater energy content than regular water, boosting one's "life force", and — this one I love — being able to "encourage" other water molecules around it to form similar hexagonal assemblages.

I hope people won't fall for this hoax but I know some will. Thanks to the price of the cheapest thing Analemma is offering – a vial of "Mother Water" that it claims is worth $180 (Rs 15,000) – it'll be some rich buggers, and I think that's okay. Fools, their wealth, and all that. Then again, it's somewhat saddening that while (some) journalists, policymakers, activists, and members of the judiciary are fighting to keep junk foods and bad medicines out of the market, we also have companies and their PR outfits bravely broadcasting press releases like this one to news publications (and at least one publishing it) at around the same time.

Anyway, if you're curious about the issue with "coherent water": At room temperature and pressure, the hydrogen atoms of water molecules keep forming and breaking weak bonds – hydrogen bonds – with the oxygen atoms of neighbouring molecules. These bonds last for a very short duration, and they give water its high boiling point and ice crystals their characteristic hexagonal structure.

Sometimes water molecules organise themselves using these bonds into a hexagonal structure as well. But these formations are very short-lived because the hydrogen bonds last only around 200 quadrillionths of a second at a time, if not less. According to the hoax, however, in "coherent water" the hydrogen bonds continue to hold such that its water molecules persist in long-lived hexagonal clusters. But this conclusion is not supported by research – nor is the claim that, "When swirled in normal water, the [magic water] encourages chaotic and irregular H2O molecules to rearrange into the same liquid crystalline structure as the [magic water]. What’s more, the coherent structure is retained over time – this stability is unique to Analemma."

I don't think this ability is unique to the "Mother Water". In 1963, a scientist named Felix Hoenikker invented a variant of ice that, when it came in contact with water cooler than 45.8º C, quickly converted it to ice-nine as well. Sadly Hoenikker had to abandon the project after he realised the continued use of ice-nine would simply destroy all life on Earth.

Anyway, water that's neither acidic nor basic also has a few rare hydronium (H3O+) and hydroxide (OH-) ions floating around as well. The additional hydrogen ion — basically a proton — from the hydronium ion is engaged in a game of musical chairs with the protons in the same volume of water, each one jumping to a molecule, dislodging a proton there, which jumps to another molecule, and so on. This is happening so rapidly that the hydrogen atoms in every water molecule are practically being changed several thousand times every minute.

In this milieu, it's impossible for a fixed group of water molecules to hang around for long. In addition, the ultra-short lifetime of the hydrogen bonds is what makes water a liquid: a thing that flows, fills containers, squeezes between gaps, collects into droplets, etc. Take this ability and the fast-switching hydrogen bonds away, as "coherent water" claims to do by imposing a fixed structure, and it's no longer water – any kind of water.

Analemma has links to some reports on its website; if you're up to it, I suggest going through them with a simple checklist of the signs of bad research side by side. You should be able to spot most of the gunk.

End of the line

By: VM
30 March 2024 at 15:00

The folks at The Wire have laid The Wire Science to rest, I’ve learnt. The site hasn’t published any (original) articles since February 2 and its last tweet was on February 16, 2024.

At the time I left, in October 2022, the prospect of it continuing to run on its own steam was very much in the picture. But I’ve also been out of the loop since and learnt a short while ago that The Wire Science stopped being a functional outlet sometime earlier this year, and that its website and its articles will, in the coming months, be folded into The Wire, where they will continue to live. The Wire must do what’s best for its future and I don’t begrudge the decision to stop publishing The Wire Science separately – although I do wonder if, even if they didn’t see sense in finding a like-for-like replacement, they could have attempted something less intensive with another science journalist. I’m nonetheless sad because some things will still be lost.

Foremost on my mind are The Wire Science‘s distinct sensibilities. As is the case at The Hindu, and at all publications whose primary journalistic product is ‘news’, the science coverage doesn’t have the room or license to examine a giant swath of the science landscape – material that, while in many ways being science news in the sense that it presents new information derived from scientific work, can only manifest in the pages of a news product as ‘analysis’, ‘commentary’, ‘opinion’, etc. The Wire has the latter – or had when I left; I don’t know how they’ll be thinking about that going ahead – but there is still the risk of science coverage there not being able to spread its wings nearly as widely as it could on The Wire Science.

I still think such freedom is required because we haven’t figured out how best to cover science, at least not without also getting entangled in questions about science’s increasingly high-strung relationship with society and whether science journalists, as practitioners of a science journalism coming of age anew in the era of transdisciplinary technologies (AI, One Health, open access, etc.), can expect to be truly objective, forget covering science by the same rules and expectations that guide the traditional journalisms of business, politics, sports, etc. If however The Wire‘s journalists are still thinking about these things, kudos and best wishes to them.

Of course, one thing was definitely lost: the room to experiment with forms of storytelling that better interrogate many of these alternative possibilities I think science journalism needs to embrace. Such things rarely, if ever, survive the demands of the everyday newsroom. Again, The Wire must do what it deems best for its future; doing otherwise would be insensible. But loss is also loss. RIP. I’m sad, but also proud The Wire Science was what it was when it lived.

The foundation of shit

By: VM
30 March 2024 at 14:30

I’ve been a commissioning editor in Indian science, health, and environment journalism for a little under a decade. I’ve learnt many lessons in this time but one in particular still surprises me. Whenever I receive an email, I’m quick to at least shoot off a holding reply: “I’m caught up with other stuff today, I’ll get back to you on this <whenever>”. Having a horizon makes time management much easier. What surprises me is that many commissioning editors don’t do this. I’ve heard the same story from scores of freelancing writers and reporters: “I email them but they just don’t reply for a long time.” Newsrooms are short-staffed everywhere and I readily empathise with any editor who says there’s just no time or mental bandwidth. But that’s also why the holding email exists and can even be automated to ask the sender to wait for <insert number here> hours. A few people have even said they prefer working with me because, among other things, I’m prompt. This really isn’t a brag. It’s a fruit hanging so low it’s touching the ground. Sure, it’s nice to have an advantage just by being someone who replies to emails and sets expectations – but if you think about it, especially from a freelancer’s point of view, it has a foundation of shit. It shouldn’t exist.

There’s a problem on the other side of this coin here. I picked up the habit of the holding email when I was with The Wire (before The Wire Science) – a very useful piece of advice SV gave me. When I first started to deploy it, it worked wonders when engaging with reporters and writers. Because I wrote back, almost always within less than half a day of their emails, they submitted more of their work. Bear in mind at this point that freelancers are juggling payments for past work (from this or other publications), negotiations for payment for the current submission, and work on other stories in the pipeline. In the midst of all this – and I’m narrating second-hand experiences here – to have an editor come along who replies possibly seems very alluring. Perhaps it’s one less variable to solve for. I certainly wanted to take advantage of it. Over time, however, a problem arose. Being prompt with emails means checking the inbox every <insert number here> minutes. I quickly lost my mind over having to check for new emails as often as I could, but I kept at it because the payoff stayed high. This behaviour also changed some writers’ expectations of me: if I didn’t reply within six hours, say, I’d receive an email or two checking in or, in one case, accusing me of being like “the others”.

I want my job to be about doing good science journalism as much as giving back to the community of science journalists. In fact, I believe doing the latter will automatically achieve the former. We tried this in one way when building out The Wire Science and I think we’ve taken the first steps in a new direction at The Hindu Science – yet these are also drops in the ocean. For a community that requires so, so much still, giving can be so easy that one loses oneself in the process, including on the deceptively trivial matter of replying to emails. Reply quickly and meaningfully and it’s likely to offer a value of its own to the person on the other side of the email server. Suddenly you have a virtue, and because it’s a virtue, you want to hold on to it. But it’s a pseudo-virtue, a false god, created by the expectations of those who deserve better and the aspirations of those who want to meet those expectations. Like it or not, it comes from a bad place. The community needs so, so much still, but that doesn’t mean everything I or anyone else has to give is valuable.

I won’t stop being prompt but I will have to find a middle-ground where I’m prompt enough and at the same time the sender of the email doesn’t think I or any other editor for that matter has dropped the ball. This is as much about managing individual expectations as about the culture of thinking about time a certain way, which includes stakeholders’ expectations of the editor-writer relationship in all Indian newsrooms publishing science-related material. (It also matters that India is the sort of country where the place you’re at – and increasingly the government there – is one of the first things getting in the way of life.) This culture should also serve the interests of science journalism in the country, including managing the tension between the well-being of its practitioners and sustainability on one hand and the effort and the proverbial extra push required for its growth on the other.

Neural Networks (MNIST inference) on the “3-cent” Microcontroller

By: cpldcpu
2 May 2024 at 23:59

Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network while still achieving good test accuracy on the MNIST dataset? When it comes to absolutely low-end microcontrollers, there is hardly a more compelling target than the Padauk 8-bit microcontrollers. These are microcontrollers optimized for the simplest and lowest-cost applications there are. The smallest device of the portfolio, the PMS150C, sports 1024 words of 13-bit one-time-programmable memory and 64 bytes of RAM – more than an order of magnitude smaller than the CH32V003. In addition, it has a proprietary accumulator-based 8-bit architecture, as opposed to the much more powerful RISC-V instruction set of the CH32V003.

Is it possible to implement an MNIST inference engine that can classify handwritten digits on the PMS150C as well?

On the CH32V003 I used MNIST samples that were downscaled from 28×28 to 16×16, so that every sample takes 256 bytes of storage. This is quite acceptable when 16kb of flash is available, but with only 1 kword of ROM it is too much. Therefore I started by downscaling the dataset to 8×8 pixels.

The image above shows a few samples from the dataset at both resolutions. At 16×16 it is still easy to discriminate different numbers. At 8×8 it is still possible to guess most numbers, but a lot of information is lost.
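The post does not show the preprocessing code, but one plausible way to go from 16×16 to 8×8 is a simple 2×2 box average; a minimal sketch (the function and buffer names here are mine, not from the project):

```c
#include <stdint.h>

/* Downscale a 16x16 grayscale image to 8x8 by averaging each 2x2 block.
   Illustrative sketch only; the project's actual preprocessing may differ. */
void downscale_16_to_8(const uint8_t in[16][16], uint8_t out[8][8]) {
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++) {
            uint16_t acc = in[2*y][2*x]     + in[2*y][2*x + 1]
                         + in[2*y + 1][2*x] + in[2*y + 1][2*x + 1];
            out[y][x] = (uint8_t)(acc / 4); /* mean of the 4 source pixels */
        }
    }
}
```

Averaging (rather than naive subsampling) keeps some of the stroke information, which matters at such low resolutions.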

Surprisingly, it is still possible to train a machine learning model to recognize even these very low-resolution numbers with impressive accuracy. It’s important to remember that the test dataset contains 10,000 images that the model does not see during training. The only way for a very small model to recognize these images accurately is to identify common patterns; the model capacity is too limited to “remember” complete digits. I trained a number of different network combinations to understand the trade-off between network memory footprint and achievable accuracy.

Parameter Exploration

The plot above shows the results of my hyperparameter exploration experiments, comparing models with different configurations of weights and quantization levels from 1 to 4 bits for input images of 8×8 and 16×16. The smallest models had to be trained without data augmentation, as they would not converge otherwise.

Again, there is a clear relationship between test accuracy and the memory footprint of the network. Increasing the memory footprint improves accuracy up to a certain point. For 16×16, around 99% accuracy can be achieved at the upper end, while around 98.5% is achieved for 8×8 test samples. This is still quite impressive, considering the significant loss of information for 8×8.

For small models, 8×8 achieves better accuracy than 16×16. The reason for this is that the size of the first layer dominates in small models, and this size is reduced by a factor of 4 for 8×8 inputs.

Surprisingly, it is possible to achieve over 90% test accuracy even on models as small as half a kilobyte. This means that it would fit into the code memory of the microcontroller! Now that the general feasibility has been established, I needed to tweak things further to accommodate the limitations of the MCU.

Training the Target Model

Since the RAM is limited to 64 bytes, the model structure had to keep the number of intermediate activations during inference to a minimum. I found that it was possible to use layers as narrow as 16. This reduces the buffer size during inference to only 32 bytes: 16 bytes each for one input buffer and one output buffer, leaving 32 bytes for other variables. The 8×8 input pattern is read directly from the ROM.
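With layers at most 16 units wide, two 16-byte buffers can alternate roles as a layer's input and output, so all intermediate activations fit in 32 bytes of RAM. A toy sketch of this ping-pong pattern (the layer routine here is a stand-in that just increments each activation so the buffer swapping is visible; real code applies the quantized weights, and all names are mine):

```c
#include <stdint.h>

static uint8_t act_a[16], act_b[16];

/* Toy stand-in for a fully connected layer: increments each activation. */
static void run_layer(const uint8_t *in, uint8_t *out, int n) {
    for (int i = 0; i < n; i++) out[i] = (uint8_t)(in[i] + 1);
}

/* Three layers, alternating the two 16-byte buffers as input/output. */
const uint8_t *run_network(const uint8_t *input) {
    run_layer(input, act_a, 16); /* first layer reads the ROM pattern */
    run_layer(act_a, act_b, 16); /* buffers swap roles each layer */
    run_layer(act_b, act_a, 16);
    return act_a;                /* final activations */
}
```

The first layer consumes the input directly from ROM, so no RAM copy of the 64-byte input pattern is ever needed.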

Furthermore, I used 2-bit weights with irregular spacing of (-2, -1, 1, 2) to allow for a simplified implementation of the inference code. I also skipped layer normalization and instead used a constant shift to rescale activations. These changes slightly reduced accuracy. The resulting model structure is shown below.
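A C sketch of how a 2-bit weight with the value set (-2, -1, 1, 2) can be decoded and applied without a multiplier, using only a shift and a negate (the exact bit assignment within the 2-bit code is my assumption for illustration):

```c
#include <stdint.h>

/* Decode one 2-bit weight code into the value set (-2, -1, 1, 2).
   Bit 0 selects the magnitude (1 or 2), bit 1 the sign. Assumed layout. */
int8_t decode_w2(uint8_t code) {
    int8_t v = (code & 1) ? 2 : 1;      /* magnitude via shift */
    return (code & 2) ? (int8_t)-v : v; /* sign via negate */
}

/* Multiply-accumulate without a multiply instruction: scale the
   activation by the decoded weight using shift and negate only. */
int16_t mac_w2(int16_t sum, int8_t act, uint8_t code) {
    int16_t a = act;
    if (code & 1) a <<= 1; /* x2 */
    if (code & 2) a = -a;  /* negate */
    return sum + a;
}
```

Because no code maps to zero, every weight contributes, which simplifies both the encoding and the inner loop.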

All things considered, I ended up with a model with 90.07% accuracy and a total of 3392 bits (0.414 kilobytes) in 1696 weights, as shown in the log below. The panel on the right displays the first layer weights of the trained model, which directly mask features in the test images. In contrast to the higher accuracy models, each channel seems to combine many features at once, and no discernible patterns can be seen.

Implementation on the Microcontroller

In the first iteration, I used a slightly larger variant of the Padauk Microcontrollers, the PFS154. This device has twice the ROM and RAM and can be reflashed, which tremendously simplifies software development. The C versions of the inference code, including the debug output, worked almost out of the box. Below, you can see the predictions and labels, including the last layer output.

Squeezing everything down to fit into the smaller PMS150C was a different matter. One major issue when programming these devices in C is that every function call consumes RAM for the return stack and function parameters. This is unavoidable because the architecture has only a single register (the accumulator), so all other operations must occur in RAM.

To solve this, I flattened the inference code and implemented the inner loop in assembly to optimize variable usage. The inner loop for memory-to-memory inference of one layer is shown below. The two-bit weight is multiplied with a four-bit activation in the accumulator and then added to a 16-bit register. The multiplication requires only four instructions (t0sn, sl, t0sn, neg), thanks to the powerful bit manipulation instructions of the architecture. The sign-extending addition (add, addc, sl, subc) also consists of four instructions, demonstrating the limitations of 8-bit architectures.

void fc_innerloop_mem(uint8_t loops) {
    sum = 0;
    do {
        weightChunk = *weightidx++;
__asm
    idxm   a, _activations_idx
    inc    _activations_idx+0

    t0sn   _weightChunk, #6
    sl     a            ;    if (weightChunk & 0x40) in = in+in;
    t0sn   _weightChunk, #7
    neg    a            ;    if (weightChunk & 0x80) in = -in;

    add    _sum+0, a
    addc   _sum+1
    sl     a
    subc   _sum+1

  ... 3x more ...

__endasm;
    } while (--loops);

    int8_t sum8 = ((uint16_t)sum) >> 3; // normalization
    sum8 = sum8 < 0 ? 0 : sum8;         // ReLU
    *output++ = sum8;
}

In the end, I managed to fit the entire inference code into 1 kword of memory and reduced SRAM usage to 59 bytes, as seen below. (Note that the output from SDCC assumes 2 bytes per instruction word, while each word is actually only 13 bits.)

Success! Unfortunately, there was no ROM space left for the soft UART to output debug information. However, based on the verification on the PFS154, I trust that the code works, and since I don’t have any specific application in mind, I left it at that stage.

Summary

It is indeed possible to implement MNIST inference with good accuracy using one of the cheapest and simplest microcontrollers on the market. A lot of memory footprint and processing overhead is usually spent on implementing flexible inference engines that can accommodate a wide range of operators and model structures. Cutting this overhead away and reducing the functionality to its core allows for astonishing simplification at this very low end.

This hack demonstrates that there truly is no fundamental lower limit to applying machine learning and edge inference. However, the feasibility of implementing useful applications at this level is somewhat doubtful.

You can find the project repository here.

Implementing Neural Networks on the “10-cent” RISC-V MCU without Multiplier

By: cpldcpu
24 April 2024 at 10:20

I have been meaning for a while to establish a setup to implement neural-network-based algorithms on smaller microcontrollers. After reviewing existing solutions, I found none that I was really comfortable with. One obvious issue is that flexibility is often traded for overhead. As always, for a really optimized solution you have to roll your own. So I did. You can find the project here and a detailed writeup here.

It is always easier to work with a clear challenge: I picked the CH32V003 as my target platform. This is the smallest RISC-V microcontroller on the market right now, addressing a $0.10 price point. It sports 2kb of SRAM and 16kb of flash. It is somewhat unique in implementing the RV32EC instruction set architecture, which does not even support multiplications. In other words, for many purposes this controller is less capable than an Arduino UNO.

As a test subject I chose the well-known MNIST dataset, which consists of images of handwritten digits that need to be classified from 0 to 9. Many inspiring implementations of MNIST on the Arduino exist, for example here. In that case, the inference time was 7 seconds and 82% accuracy was achieved.

The idea is to train a neural network on a PC and optimize it for inference on the CH32V003 while meeting these criteria:

  1. Be as fast and as accurate as possible
  2. Low SRAM footprint during inference to fit into the 2kb of SRAM
  3. Keep the weights of the neural network as small as possible
  4. No multiplications!

These criteria can be addressed by using a neural network with quantized weights, where each weight is represented with as few bits as possible. The best results are achieved when the network is trained on quantized weights from the start (quantization-aware training), as opposed to quantizing a model that was trained with high-accuracy weights. There is currently some hype around using binary and ternary weights for large language models. But indeed, we can also use these approaches to fit a neural network into a small microcontroller.

The benefit of only using a few bits to represent each weight is that the memory footprint is low and we do not need a real multiplication instruction – inference can be reduced to additions only.

Model structure and optimization

For simplicity, I decided to go for a network architecture based on fully connected layers instead of convolutional neural networks. The input images are reduced to a size of 16×16 = 256 pixels and are then fed into the network as shown below.

The implementation of the inference engine is straightforward since only fully connected layers are used. The code snippet below shows the inner loop, which implements multiplication by 4-bit weights using adds and shifts. The weights use a one's-complement-style encoding without a zero value, which helps with code efficiency. One-bit, ternary, and 2-bit quantization were implemented in a similar way.

    int32_t sum = 0;
    for (uint32_t k = 0; k < n_input; k += 8) {
        uint32_t weightChunk = *weightidx++;

        for (uint32_t j = 0; j < 8; j++) {
            int32_t in = *activations_idx++;
            int32_t tmpsum = (weightChunk & 0x80000000) ? -in : in;
            sum += tmpsum;                                  // sign*in*1
            if (weightChunk & 0x40000000) sum += tmpsum<<3; // sign*in*8
            if (weightChunk & 0x20000000) sum += tmpsum<<2; // sign*in*4
            if (weightChunk & 0x10000000) sum += tmpsum<<1; // sign*in*2
            weightChunk <<= 4;
        }
    }
    output[i] = sum;

In addition to the fully connected layers, normalization and ReLU operators are also required. I found that it was possible to replace a more complex RMS normalization with simple shifts during inference. Not a single full 32×32-bit multiplication is needed for the inference! Having this simple structure for inference means that the effort has to be focused on the training part.
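A sketch of what shift-based rescaling plus ReLU between layers might look like (the shift amount and the int8 saturation here are illustrative assumptions, not taken from the project code):

```c
#include <stdint.h>

/* Per-layer shift amount, a constant chosen at training time (assumed). */
#define NORM_SHIFT 3

/* Rescale a 32-bit accumulator with a fixed right shift (replacing a
   full RMS normalization), apply ReLU, and saturate to the int8 range. */
int8_t normalize_relu(int32_t sum) {
    int32_t s = sum >> NORM_SHIFT; /* divide by 2^NORM_SHIFT */
    if (s < 0)   s = 0;            /* ReLU clips negatives */
    if (s > 127) s = 127;          /* saturate for the next layer's input */
    return (int8_t)s;
}
```

Since the shift is a power-of-two division, the whole normalize-activate step costs a handful of cycles and, again, no multiplier.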

I studied variations of the network with different numbers of bits and different sizes by varying the number of hidden activations. To my surprise, I found that the accuracy of the prediction is proportional to the total number of bits used to store the weights. For example, when 2 bits are used for each weight, twice the number of weights is needed to achieve the same performance as a 4-bit-weight network. The plot below shows training loss vs. the total number of bits. We can see that for 1-4 bits, we can basically trade more weights for fewer bits. This trade-off is less efficient for 8 bits and no quantization (fp32).

I further optimized the training by using data augmentation, a cosine schedule, and more epochs. It seems that 4-bit weights offered the best trade-off.

More than 99% accuracy was achieved for a 12-kbyte model size. While it is possible to achieve better accuracy with much larger models, this is significantly more accurate than other on-MCU implementations of MNIST.

Implementation on the Microcontroller

The model data is exported to a C header file for inclusion into the inference code. I used the excellent ch32v003fun environment, which allowed me to reduce overhead enough to store 12kb of weights plus the inference engine in only 16kb of flash.

There was still enough free flash to include four sample images. The inference output is shown above. Execution time for one inference is 13.7 ms, which would actually allow the model to process moving image input in real time.

Alternatively, I also tested a smaller model with 4512 2-bit parameters and a flash memory footprint of only 1kb. Despite its size, it still achieves 94.22% test accuracy and executes in only 1.88 ms.

Conclusions

This was quite a tedious project, hunting many lost bits and rounding errors. I am quite pleased with the outcome, as it shows that it is possible to compress neural networks very significantly with dedicated effort. I learned a lot and am planning to use the data pipeline for more interesting applications.
