PART TWO–ANARCHY OF TRANSITION
In every age of well-marked transition there is the pattern of habitual dumb practice and emotion which is passing, and there is the oncoming of a new complex habit. Between the two lies a zone of anarchy.
–Alfred North Whitehead (1861–1947)1
The five chapters of Part One have followed a common pattern: building from historical context and example, and charting change over time, into and through the Information Age and connecting with health care.
Part Two adopts a different viewpoint: that of the impact of the adventure of ideas of Part One on life and medical sciences and health care services, in their connected transitions into and through the Information Age. Its two chapters might each arguably merit a book of their own. They tell stories of anarchic transition in which the author has been both eyewitness and participant, and are thus integral with the songline that the book traverses.
As expressed by Alfred North Whitehead (1861–1947), radically new ideas and the anarchic transitions they unleash create new contexts and opportunities that become the focus of programmes for reform. Part Three of the book peers ahead, imaginatively, towards a new era where the experiences and learning that feature in Parts One and Two will lead us to understand, create and sustain health care and its services differently.
1 Adventures of Ideas (New York: Macmillan, 1933), p. 14.
6. Life and Information–Co-evolving Sciences
© 2023 David Ingram, CC BY-NC 4.0 https://doi.org/10.11647/OBP.0384.01
This chapter steps away from the practical engineering of Chapter Five into a new dimension, to consider where information itself, as an idea, now connects within life science and medicine. The current era has seen radical transition in scientific understanding of the nature of both information and life. Like particles and waves in quantum theory, perhaps they will come to be seen, in some emergent way, as another example of complementarity. Life as somewhere between material entity and immaterial essence. Information as somewhere between material and measurable entity and immaterial abstraction.
The question ‘What is Life?’, and its connection with the nature of information as a scientific concept, has captivated luminary thinkers, who have informed and challenged one another, and written landmark books on this theme. I have a collection of these, written from physics, life science, mathematics, computer science and cognitive neuroscience perspectives. I look in turn at an eclectic selection, over time. My purpose is to illustrate how these great and imaginative contributors have applied their evolving insights to elucidate connection of their disciplines with ideas about the nature of information and life.
The chapter concludes with a reflection on information policy for health care services in the present era of still extremely rapid transition on all fronts of information technology and life science. There can be no more important global goals than those that seek balance, continuity and governance of the natural environment. In health care, these three also predominate as concerns of our age. They pose challenges that can only be tackled based on shared knowledge and methods that connect coherently and transcend from local to global scale, building on common ground.
What is mind? No matter.
What is matter? Never mind.
–attributed to George Berkeley (1685–1753)1
‘What is’ questions are not new! What is reality? What is life? What is information? These, too, perplex! Erwin Schrödinger (1887–1961) opened a new window and provided insight into (or should that be outlook onto?) the first of these questions, in the theoretical physics of the quantum era. The Schrödinger equation started wheels turning and gained experimental traction. In later years, he peered into the misty future in raising the second question and made some suggestions, too, about the Berkeley question, which he considered even harder. ‘Leave it to the computer’, as an answer to the third question, does not really equate to ‘leave it to the Schrödinger equation’, in answer to the first! And resolving to ‘never mind’ about it may turn out to matter a lot more in the computer age!
I used to sit eating with physics student colleagues, tired after a day battling problems connecting with quantum theory and the ‘What is reality?’ question. Nearby, a group of lawyer colleagues were often in more lively debate over their ‘What is law?’ question. I do not recall discussion among the mathematicians about ‘What is mathematics?’ No doubt, they were resting after their days deeply immersed in theory of number, topology and symmetry; like us aspirant physicists, numbed by our mental struggles with vector calculus, tensor algebra and analytical solutions of differential equations!
The question ‘What is life?’ became a preoccupation of mathematics, physics and chemistry, as they cross-fertilized with one another and spread their interests and influence further around the circle of knowledge, to biology and medicine. And now filtering to the top of the pile of ‘what is’ questions in both physical and life science is the question ‘What is information?’ Theory of information has evolved in multiple contexts of mathematics, science and engineering over the past one hundred and fifty years. Some believe it holds the key to clarity about other ‘What is…’ questions in science. It may have significant impact on what comes next in the evolution of life science and health care. Peering into the mists for an Occam’s razor moment, perhaps an answer could only ever emerge alongside the untangling of the first great unknown: ‘What is reality?’ Perhaps the Hitchhiker’s Guide answer, ‘forty-two’, will prove the best we can do!2 Now that we know more about information and what we can make and do, by way of data and knowledge, do we know more about life? Zobaczymy [we will see]!3
What is matter? Maybe it’s information
What is information? Maybe it’s matter
How does information matter in life?
Does any of this connect with health care?
As we have seen throughout Part One, the Information Age has been one of disruptive transition in science, technology and society. The anarchy has played out into health care services. Uncertain and experimental, new and evolving insight into the science of life has coupled with the equally new and evolving science and technology of information systems, which have themselves underpinned this scientific revolution. It looks rather like an engineering control system (with information feeding forward in some way into science, and science feeding back in some way into information), and such systems can be unstable. This scene in health care, and its context, look to have some features in common with what happened to the world’s monetary system in 2008, a thought that I explore further in Chapter Eight.
In such times, we must be cautious about digging too rapidly or deeply into ‘What is?’ conundrums. They can be tiring, costly and a bit beside the point. As the quotation leading into Part Three emphasizes, sustainable progress comes iteratively and incrementally, with the need for careful testing at each stage. The Information Age of science and technology can readily dig bottomless holes and endless tunnels of discourse, excavating more and more data, ever faster. There are swallow holes lurking when we dig deeply into ‘What is?’ questions–it is easy to fall in and get stuck underground. Swallow holes are called solution features in the technical jargon because percolating water dissolves the underlying chalk, and the earth above falls in. In some places, chalk is what supports our houses, and swallow holes can undermine these foundations, even when our intent in digging is to underpin and make them better. There was once a very small one halfway down our garden, which is maybe why that analogy came to my mind!
As we survey our current era of anarchic transitions in science, technology and society, we need to shore up their necessary foundations that have become exposed and weakened. How we do this, matters. We need to focus more on the practicalities of how, and less on what we’re trying to achieve and why. Health care at the front line is clearly operating on expensive and shaking foundations at present. Part Three proposes one approach to the ‘how’ question of health care information policy, about how to make and do things better.
Finding good answers will matter for health care. We must neither fall under the spell of Siren voices of technological utopia nor fail to find a safe course through the gap between beckoning rocks, which health care must navigate, seeking a way onto a stronger and more resilient future home base. We are not like Odysseus; his story is mythical, although the seascape he tells of may not be. Our encounter with an anarchic seascape of health care information is all real. The difference is that we create the sea and can navigate it better.
The range of ideas embraced in postulated answers to the ‘What is information?’ and ‘What is life?’ questions is considerable and continuously evolving. Here, I can only seek to outline and connect their history and scope. We can look at life in evolutionary, historical and scientific contexts. We can look at information in the context of physical science and engineering, and how it has interfaced with life science and health care. This is the scope I will now venture to outline. It is a challenge better tuned to my physics brain of yesteryear and I set it out, here, only to encourage more flexible and knowledgeable modern brains to reflect on it, pick it apart and improve it.
Life in Evolutionary Context
Our understanding of life may in time arise from way beyond our tiny Earth. The search for extra-terrestrial life is a fascinating astrobiology of our time, highlighted by the report, last week as I write, of the possible discovery of phosphine in the Venusian atmosphere, as a potential marker of microbial life there. Accounts of the experiments planned for the automated Mars lander when it arrives on Mars in a few years’ time also stir speculation. It will drill to collect samples and use Raman spectroscopy to characterize their minerals, looking for imprints of carbon-based materials that might also prove indicative of life.4 As I first revised this chapter on 18 December 2021, the National Geographic had an excited article about a pre-publication announcement of a strong candidate SETI intelligent signal from space, emanating from Proxima Centauri, our nearest stellar neighbour.5 In more recent months, the James Webb telescope has been successfully launched and positioned, starting to focus such investigation much further away. I will focus, here, on life on Earth and the connections made from mathematics and science to information and life.
Stephen Jay Gould (1941–2002), with Niles Eldredge, coined the term punctuated equilibrium to describe periods of quiescence and slow change, and periods of rapid and disruptive change, in the natural world. Over such eons, humankind has played almost no part in evolution. We are the tiniest of dots on this landscape. The Information Age is punctuated evolution of a new kind, in which humans are, or should be, in control.
Earth as a planet has a 4.6-billion-year history within the solar system and the earliest life forms may have originated between 3.77 and 4.5 Ga (billion years) ago, 100 million years or so after the first appearance of liquid water. These dates are continuously under review and are recalibrated as new evidence emerges and gains sway. With increasing complexity and diversity, life evolved and emerged from the sea onto land and into the sky. By the time of the Carboniferous period, 363 Ma (million years) ago, the earth began to look a bit like the earth today. Major extinction events 251.4 Ma and 66 Ma ago destroyed and rebooted life, eliminating ninety to ninety-five percent of marine species in the first event and half of animal species in the second. Life flourished again and the genus Homo appears 2 Ma ago in the fossil record. Anatomically modern humans appeared in Africa around 250 Ka (thousand years) ago, colonizing the other continents, replacing Neanderthals in Europe and other hominins in Asia.
Landscape can usually be relied on to evolve very slowly. Tectonic plates drift and collide, sometimes grumblingly in tremors and localized earthquakes, sometimes creating pressure valves released in volcanoes that spread their effects more widely around the planet. Humankind can do little if anything to influence geodynamic punctuations of our planetary history, save to seek out and build in safer places and employ robust and resilient construction methods and defences. The information landscape is now, and increasingly, intertwined with the physical environment, and shaping the living world and human experience. Its construction methods and defences are often proving inadequate for combatting the attrition it has engendered.
Major punctuations of evolution arise from beyond the earth as it is buffeted from elsewhere in the solar system. Meteors arrive in different sizes, as daily events. Much larger asteroid strikes, as at Chicxulub in Mexico, associated with the second major extinction of life 66 Ma ago, burned and scarred the earth for thousands of miles around and precipitated climate change that impacted life everywhere, immediately and over very long periods. This one’s size has been estimated in tens of kilometres and the energy it dissipated in tens of yottajoules–physical estimates of events on earth do not yet have units much larger! Sunspots, arising in the chaotic surface dynamics of the star, flare radiation that disrupts information systems on earth, a little over eight minutes later.
We learn about such events in history from the science of our era. This learning shapes and illuminates our understanding of Earth, and life on Earth. We can now predict and assess some such external risks, but, like Nassim Taleb’s black swans,6 they also arrive unannounced and unpredicted. We hope that we will be spared from them, and mostly do not think about them, or adopt self-comforting denial. We are consumed with surviving present storms and our perspectives and decisions are biased by recency.
Life in Historical and Scientific Context
The nature of life is of enduring interest to conscious minds. Ancient chemists sought an elixir of life. Mystics and worshippers found meaning in patterns of the world they inhabited, and ascribed hardships and extreme events to the will of all-powerful gods. Living and dying were understood and ritualized in relationship with unseen creators and takers of life.
Greek philosophers found beauty and symbolic order in the material world and living creatures, expressing this in their writing and through the arts. Other cultures decried this as idolatrous imagery. Mechanisms of living organisms and their dysfunctions provided rich material for these pursuits and preoccupations. Birth and death, the transitions into and out of life, now occupy centre stage–illness and disease likewise. The extant writings attributed to Hippocrates (c. 460 BCE–375 BCE)7 and those of Galen (c. 130 CE–210 CE),8 whose ancestral home of Pergamon housed a famous Aesculapian healing temple (Aesculapius, the Roman god of medicine), tell the story of the stirrings of medical discipline and practice, as symbolized in the classical iconography of the caduceus–a staff with twin serpents entwined around it.
Skipping forward many centuries, the Italian Renaissance imagery of Leonardo da Vinci (1452–1519) captured human anatomy in artistic detail, and his polymath range of interests made connections with mathematics, engineering, botany and astronomy. Jumping again, to the era of William Harvey (1578–1657), the inner function of the body was explored, and the circulatory system of blood discovered, as described in his book De Motu Cordis (1628). Medical practice slowly evolved from the mystical and pragmatic–leeching of blood and herbal medicaments and preoccupation with extracting vapours, seen as poisons–to an experimentally balanced science. The surgical profession started with the Barber-Surgeons in the mid-sixteenth century, as a trade guild and livery company of the City of London. As the invasiveness of interventions increased, they battled infection and outcomes were poor. Surgery separated into its own domain and acquired its Royal Charter in the mid-nineteenth century.
Identification of patterns of disease and their classifications by pathologists such as Thomas Hodgkin (1798–1866), dissecting the bodies of deceased patients, opened insight into a new world of ordered and disordered bodies, and the time course of their life, from birth to death. This branch of science was predicated on advances in optical devices for magnifying minute detail. Experienced clinical observation remained central to professionalism.
Health and disease were increasingly of wider interest in society. Adverse outcomes as well as beneficial ones attracted attention and were related to good and bad standards of the professions as well as good and bad prognoses of patients. Reputation and remuneration were at stake. The legal profession became more active on the scene.
As with any innovation impacting on established skills and practice, there was kickback. Stethoscopes were an unproven irrelevance–the physician’s hand and a chronometer were all that was needed–thus ruled the rulers of the times. Limitations in what can be done to support patients and improve their clinical situation may render additional measurement irrelevant. But, if new measurement can potentially cast new light on a clinical situation, or open new opportunities for treating it effectively, there is a case for experiment. In striking a balance in what was done to and for a patient, with many interests in play, the patient’s interest–only slowly well-represented–became of greater concern. How people concerned about loss of livelihood or influence may see that balance, may differ significantly from how the eager innovator or the patient may see it.
Evolving science and technology ushered in a new era of measurement devices. Physicists became increasingly interested and involved in medicine, heralding new methods of investigation and treatment of disease. X-ray imaging devices, recording penetrating radiation on photographic emulsions, allowed visualization of internal organs. New devices, such as for radiation therapy, required significant engineering expertise to design and operate them safely. Medical and surgical procedures increased in ambition, alongside widening knowledge of pharmacology and pharmaceutics. Other devices of potential relevance to medical practice continued to appear. An increasing range of national bodies extended the regulation of medical practice.
An innovative thrust came from a different direction in the first half of the twentieth century, again, in part, from the cross-fertilization of physics and mathematics with physiology. At the turn of the century, the electrophysiology of the integrated nervous system was the experimental domain of Charles Sherrington (1857–1952). With Edgar Adrian (1889–1977), he was awarded the Nobel Prize in 1933 for their work at the University of Cambridge on the function of neurons. At the start of my songline, another illustrious Hodgkin, Alan Hodgkin (1914–98), and Andrew Huxley (1917–2012), were Nobel Prize winners in 1963 for their work at the University of Cambridge and the Marine Biological Laboratory in Plymouth, which included the development of the original mathematical model of the propagation of nerve action potential. And in the 1970s, towards the end of his career, the zoologist and neurophysiologist John Zachary Young (1907–97), at University College London (UCL), updated Sherrington’s concept of an integrative nervous system in his 1975–77 Gifford Lectures on the Programs of the Brain, an inukbook that I discuss below. I shared his academic affiliation with Magdalen College at the University of Oxford and UCL, although my tiny and invisible village school was very different from the visible majesty of his Marlborough College, which I know from friends was also a wonderful academic environment to experience.
The scientific understanding of living organisms has advanced beyond recognition in the past seventy years. Foremost among the discoveries that opened a new window onto the unknown, in the memorable phrase of the physicist Max Born (1882–1970), was that of the double helix structure of DNA, by Francis Crick (1916–2004) and James Watson in 1953. Max Born’s son, Gustav Born (1921–2018), worked alongside the pharmacologist John Vane (1927–2004), in my years at St Bartholomew’s Hospital (Bart’s). Advances in laboratory science and technology and insights gained from the tracking of genetic mutations through successive generations of reproduction of living organisms were developed and brought to fruition by an outstanding generation of scientists, many of them Nobel Prize winners, combining with the emerging capabilities and analytical methods of information technology and computer science. Together, these paved and led the way to astonishingly rapid advances in the sequencing of genomes and mapping of their component structures and functions.
Over much the same period, parallel insights from mathematics, physics, computer science and engineering, linking with biology and cognitive neuroscience, have shone new light on ideas about the nature of living systems and the human brain, framed within concepts of information networks. The studies of information networks–computer networks, gene networks, health information networks, social networks–have proliferated and cross-fertilized.
These multidisciplinary efforts have brought increasing focus on the nature of information itself. We are in a scientific era that seeks understanding of complex systems by building models that draw on many domains of knowledge: the mathematics of symmetry, topology and calculus; the physics of order, information and energy; computer science and formal logic; and the engineering of control systems. It is marrying these with chemistry and life science, from the level of atoms, molecules and cells to organs, bodies, populations and ecosystems. Information in this holistic perspective is conceived as fundamental and quantifiable, and even perhaps a physical property of matter.
Underlying such quests for unification, nature seems to place some restrictions–energy is conserved, it is impossible to travel faster than light, the universe proceeds towards states of increasing disorder. Physics looks for theory consistent with such appearances and constraints–if there is breakage, the experiment is flawed, or it is the theory that is broken. Fragmented ideas about information, from many domains, are distilling and evolving towards a coherent core of information theory and methods. They have a two-hundred-year songline dating from the time of the early steam engines–nice to have a train of thought connecting steam with information!
In whatever way these ideas may ultimately connect within the practical domain of health informatics, there is urgency to progress ideas on multiple fronts to improve understanding. We need good and enabled teams and environments in which to draw them together. Downplaying the complexity and challenges involved in unifying health informatics, and opting for single and fragmented communities, cannot work. Left to academia, health informatics becomes distracted into more and more disengaged words and airmiles. Left to governments and NGOs, it becomes corralled into political power struggles. Left to consultants and industry, it engenders a wasteful gold rush. Left to managers and regulators, it becomes disconnected and unimplementable. Left to clinicians and technologists, it has remained intractable.
To date, an Institute of Life Science and Health Care Informatics with this broad scope would likely be seen as both unworldly and unacademic. It would be worth a try! How we tackle what we do not know about (but must act on, learn about and implement) boils down to good ideas, tractable goals, capable teams and richly endowed and protected environments fostering creativity, experiment and learning. Multiprofessional and interdisciplinary teamwork and good environments are crucial. I seek to draw these thoughts together and show them in action in Part Three of the book.
Information in Context of Physical, Engineering and Life Sciences
Bertrand Russell (1872–1970) thought of mathematics and logic as one and the same–the basis of clear, precise and consistent thought and reasoning about the world. The answer to the ‘What is mathematics?’ question seems to encompass whatever you need in your armoury, to support you in achieving that lofty goal–as fully as you can, but not with a sense of completeness or perfection. Mathematics is what mathematicians decide to do, as it were–quite an attractive perspective for the academic mind, and important for the rest of us to enable them to get on with it, unhindered! New problems encountered may require new ideas and methods of mathematics. In this way of thinking, mathematics is akin to a model of logical reasoning (not a model of the way the human brain works, although neuromorphic computation seems on the up again, now). Its corpus of ideas and methods is its discipline, positioned close by philosophy around the circle of knowledge.
Early science rattled the cages of religion and philosophy and was allowed out under the guise of the label ‘natural philosophy’. It entails many ‘What is?’ questions that still baffle. It evolves models and methods–ways of describing and reasoning–seeking to understand better, as times change, and the world moves on. The ‘What is reality?’ question may be destined never to be resolved, but theoretical physics continues to posit ideas and keep trying.
Physics poses other ‘What is?’ questions–what is gravity, quantum entanglement, dark matter, dark energy, entropy, time? Armed with the ideas and methods of mathematics to help keep it on track, it ventures to describe and tame the observed physical world, in the shape of theory and experiment that embrace manifolds of space and time, fields, forces, energies, elementary particles, nuclei, atoms, molecules and their ensembles, and now of information. Unanswered or partially answered questions spur new endeavours.
New mathematics and science are discovered, to model and simulate patterns and behaviours encountered, and bring rigour to their analysis. As Karl Popper (1902–94) is reported to have said, the essence of modelling is to discover what can safely be left out. Faced with increasing complexity, how can a problem be reframed, drawing on new ideas and methods, so as to simplify hitherto intractable descriptions and enhance our understanding and ability to reason consistently?
William of Ockham’s (c. 1285–1347) Razor points to virtue in simplicity in this process. In the physical world, there is poetic simplicity and profound science in hydrogen–the simplest element, just one proton and one electron, and the origin of all the other elements in the evolving universe. Its name means maker of water, itself a quite simple molecule–an assembly of two hydrogen atoms and one oxygen atom. The polarized charge distribution of the water molecule gives rise to its complex physical behaviours–water expands as it freezes, so ice floats on liquid water, and aqueous solutions exhibit complex behaviours which play out throughout the chemistry and complexity of life.
Early in my postgraduate career, I read beyond classical and quantum physics into the connections from mathematics into computer science and electrical engineering. A pivotal stage was the mathematician Claude Shannon’s (1916–2001) characterization of the information content of electrical signals and their digital communication through transmitted messages. John von Neumann (1903–57) advised him of the parallel with Ludwig Boltzmann’s (1844–1906) and James Clerk Maxwell’s (1831–79) earlier ground-breaking connection of theory of order and disorder of physical systems with the concept of entropy, which unfolded in the field of statistical mechanics and thermodynamics. Thus arose the term information entropy, and ideas connecting order, information and life, as I further describe, below. This has evolved into a complex chain of ideas, probing at the limits of what we know and can know, with new ideas extending into life science, medical science and health care. The early history is a great example of the pioneering connections that von Neumann made, between mathematics, science and engineering, leading into the Information Age. His early death from cancer was a great loss. The book of his 1956 Silliman Lectures on The Computer and the Brain, which he worked on as he came close to death, is one of the landmark contributions I introduce in this chapter.
I read further through the connections from mathematics and physics into theory of complexity, and emergent properties of physical systems in states of thermodynamic disequilibrium and irreversible change. Ilya Prigogine (1917–2003) and René Thom (1923–2002) are remembered storytellers. The Belousov–Zhabotinsky reaction (Boris Belousov (1893–1970) and Anatol Zhabotinsky (1938–2008)) was a captivating chemical example of non-equilibrium thermodynamics appearing as an oscillating perpetuum mobile. A hypothesis of the times was of life as an emergent property of dynamical systems far from equilibrium. And other conjectures arose.
Around the same time that Shannon alighted on his concept of information entropy, there was increasing cross-over into study of the living world, where the ‘What is life?’ question was in search of an answer. Human life plays out from the physics and chemistry of energy and membrane into the biology of organelles and cells, and into organs and organ systems, bodies, families, populations and species. The quest is for increasing precision and traction in ways of describing and reasoning about living systems. It may require new science and new mathematics. It is a field in which mathematics bridges into informatics, and physical science into biology and medicine, around the circle of knowledge. Informatics in this broad context might be characterized as a science of information that spans from mathematics, through natural science to engineering science.
‘What is informatics?’ is a question that I was teased about in my early medical school academic post. I decided to stick it out and not retreat to a safe distance from this sometimes indulgent, sometimes slightly menacing mockery, sheltered in the mathematics or science establishments. I wanted to find out what informatics is, by working inside the world of life science and medicine, engaging as broadly as possible with problems I came across there. I was given carte blanche to live out an experimental enactment of ‘informatics is what informaticians do’, in the way that the physicist John Archibald Wheeler (1911–2008) was content with mathematics being what mathematicians do.
As mentioned above, early perspectives about the scientific nature of information connected with the study of the behaviour of physical ensembles (groupings) and systems of things that interact with one another. This evolved from the study of the properties of gases and connected the experimental study of thermodynamics, as classically expressed in the laws governing their physical behaviour (measurements of pressures, volumes, temperatures and so on) with theory of statistical mechanics, which modelled the behaviour of the gas in terms of ensembles of gaseous molecules.
A key but elusive concept of classical thermodynamics, relating to the capacity of a heated gas in a steam engine to expand and thereby be organized to perform useful work, was entropy. It was quantified in terms of properties that could be measured, but the answer to the ‘What is entropy?’ question remained elusive, and conjectures about its connection with other ‘What is?’ questions persist today. But, as with quantum theory battling the ‘What is reality?’ question today, there was theory and method that enabled the classical thermodynamic system to be modelled mathematically and used to simulate and predict its observed behaviours, making use of this concept and calculating its changing value. In his kinetic theory of gases, Boltzmann’s crowning achievement in 1877 was to connect the entropy of the gas, seen as a macrosystem state in classical thermodynamics, with theory of statistical mechanics and the number of equiprobable microsystem states of the component gas molecules in which the system could exist. This was a measure of the order exhibited by the description of the microsystem: if highly ordered, only a few descriptive states are possible; if highly disordered, very many. Entropy emerged as a measure of disorder–the fewer the possible microsystem states, the greater the order and the lower the entropy, in a logarithmic relationship. James Clerk Maxwell had laid the earlier groundwork in kinetic theory, and Max Planck (1858–1947) later gave the relationship its familiar mathematical form.
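Boltzmann’s connection, in the mathematical form later associated with Planck (the modern notation here is added for illustration), can be written as:

```latex
% Entropy S grows with the logarithm of the number W of
% equiprobable microsystem states consistent with the macrostate.
S = k_B \ln W
```

Here W is the number of equiprobable microsystem states and k_B is Boltzmann’s constant. The logarithm makes entropy additive: combining two independent systems multiplies their microstate counts and so adds their entropies.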
Many decades later, as presaged above, a further connection was made between concepts of order and information, in the context of the communication of electrical signals. This contribution was made by Shannon, who arrived on the scene as the Third Industrial Revolution of electronics and communication devices came into view. He thought about electrical signals and their faithful transmission within telecommunications systems, as electronics was opening into the new world of the digitization and communication of signals. In 1948, he published his seminal paper entitled ‘A Mathematical Theory of Communication’.9 For Shannon, the thing communicated by the signal (its content) was information. Thinking about how to quantify this information, he alighted on a logarithmic measure, expressed in binary digits, that looked useful. Shannon’s fellow mathematician, von Neumann, knew the physics history and was deeply engaged with the emerging fields of computer science and engineering–and thinking about how brains worked, as well! Unbeknown to Shannon, but pointed out to him by von Neumann, his quantification of the information content of a communication mirrored that discovered by Boltzmann for entropy. ‘What is entropy?’ and ‘What is information?’ started to share common foundations. Information content became an entropy.10 Shannon took von Neumann’s advice and called his construct information entropy. This idea started to permeate into methods of statistical data analysis.
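Shannon’s measure can be sketched in a few lines of code (a minimal illustration of the idea, not anything from Shannon’s paper; the function name and the example distributions are my own):

```python
import math

def shannon_entropy(probabilities):
    """Shannon's information entropy, H = -sum(p * log2(p)), in bits.

    It mirrors Boltzmann's logarithmic measure: a distribution
    concentrated on few outcomes (high order) has low entropy, while a
    spread-out distribution (high disorder) has high entropy.
    """
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin resolves one full bit of uncertainty per toss ...
fair = shannon_entropy([0.5, 0.5])      # 1.0 bit
# ... while a heavily biased coin, being more predictable, resolves less.
biased = shannon_entropy([0.9, 0.1])    # about 0.47 bits
certain = shannon_entropy([1.0])        # 0.0 bits: no surprise at all
```

The logarithm is what makes the measure additive: two independent fair coin tosses carry two bits, just as the entropies of independent physical systems add.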
The burgeoning electronic and information technology worlds extended into characterizing and analyzing the behaviour of ever more complex electrical circuits and communications networks, and then, in recent decades, into the study of quantum computation and quantum circuits. Physics and informatics ‘What is?’ questions became further entrained. The enmeshing of theoretical and experimental quantum physics with the theory of information brought imaginative new conjectures about these connections, and extraordinarily precise new methods of experimental measurement arrived to test them. Perhaps the theory of information and entropy will emerge further as a unifying conceptual framework, linking thermodynamics and its second law with the nature of time and the other ‘What is?’ unknowns of physics and universal physical law. Where will the theory of information come to sit in relation to the basic measures of length, time, amount of substance, electric current, temperature, luminous intensity and mass? Is information an abstract concept or is it real? Is it an energy? Is it the same as entropy? And how may new discoveries about life and living systems reflect into new physics of the organization of complex systems? Zobaczymy–we shall see!
Wheeler is remembered for his words of wisdom about the ‘What is’ of reality and information, which he characterized as ‘It from Bit’.11 He described his career in three stages–‘Everything is Particles’, ‘Everything is Fields’, and ‘Everything is Information’. In summarizing this perspective, he wrote:
It from bit symbolises the idea that every item of the physical world has at bottom–at a very deep bottom, in most instances–an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.12
Biology has connected ever more closely with the physical sciences, mathematics and computer science. Mathematical and computational biology have extended into the modelling of networks of genes, and biochemical pathways have been reimagined as information circuits, by analogy with circuits in electronic engineering. Conjecture has extended to the connection of quantum-level processes with biological mechanisms, and from the emergent properties of complex systems to the evolution of living systems. Informatics has extended into the study of all manner of networks of communication–physical, biological and social.
At the start of my life and songline, Schrödinger was grappling with the question ‘What is Life?’ With amazing prescience, he reasoned from the physics and genetics of the time to envisage an information code of life, embodied in the chromatin and chromosomes of the cell nucleus. DNA was at that time revealing itself through the crystallographers’ images of X-ray diffraction patterns. And von Neumann, who, as we have seen in Chapter Five, conceived the simple model for the architecture of electronic computational machines that bears his name, was wrestling with the analogy between the computer and the brain. The double helix of DNA was described and characterized as information–as both Turing machine paper tape and von Neumann universal constructor, embodying also the self-referencing ability to reproduce itself. Here, the analogy made is that DNA is, in a sense, all three of knowledge, program and data. It is knowledge that enables the growth, maintenance and reproduction of the living world. It is program code that bootstraps those abilities, functions and actions. It is data on which those programs operate.
Skipping along the timeline to the 2020s, the language of life science is now the language of biomathematics, biophysics, biochemistry and bioinformatics. Electron transfer and energy gradients across membranes are minutely described as the flux and driving force of life. Quantum chemistry and bioinformatics have transformed pharmacology. And the nature of information, at the heart of all this, is a much-debated issue. The mathematics of symmetry and topology has advanced within particle physics and field theory. It has illuminated general principles and how simple rules and constraints can determine the envelope through which complex living systems emerge and evolve.
As well as ‘What is?’ questions there are also ‘Why are things the way they are?’ questions that relate to life and information. This pairing is illustrated by the progression in physics from the ‘What is reality?’ question to the question ‘Why is reality the way it is?’, posed as a mathematical and scientific, as well as a philosophical, question. The laws that appear to govern the physical world seem finely tuned to basic constants, such that, were they even slightly different, the best current models we have would break apart, predicting a destructive physical Armageddon that would have prevented anything of the observed universe from ever happening. Are there scientific principles that underpin the observed reality revealed by these constants? Are other realities possible, and do they exist?
My inukbook by Nick Lane, described in the section below on landmark contributions, argues that ‘Why is life the way it is?’ is as important a question as ‘What is life?’ and suggests how we should balance the two. Does biology need to look further than current physics–are more abstract models of information needed, bearing in mind the advice only to keep what is needed in the models we create? These questions cross into the realm of metaphysics and belief. Mathematics and science will always push on the boundaries of how fully and accurately they can describe the pattern of observed living systems, building on these insights and methods. In 2000, John Maddox (1925–2009) summarized What Remains to Be Discovered, and in 2019, Marcus du Sautoy summarized how such ambition eventually runs into the sands of What We Cannot Know.13 Others have speculated about how all this may connect over time with health care. I introduce inukbooks expressing these ideas in the section below on landmark contributions.
New Frontiers of Information
Just before I started studying physics at Oxford, Rolf Landauer (1927–99) showed something quite unexpected: that when we destroy information, we increase physical entropy. Information sounds abstract, but maybe it is real. That was 1961, and this insight registered nowhere within the information feeding to my heated brain, battling theoretical physics at that time. What was taking shape was the revisiting of a conundrum first explored by the Scottish physicist James Clerk Maxwell, who looked at Clausius’s entropy law and wondered whether intelligent life could defeat it. He envisioned an intelligent entity in the shape of a demon that could sit in the middle of a gas chamber and route gas molecules so as to sort them into a lower-entropy order, thus defying the second law of thermodynamics. Could life overcome thermodynamic law in this sort of way? A hundred years later, Charles Bennett showed it to be a consequence of Landauer’s insight, about the connection between information destruction and entropy production, that no intelligent entity can defeat the second law.
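Landauer’s bound can be stated in one line (again a sketch in modern conventions, which are my assumption rather than the layout of the 1961 paper):

```latex
% Landauer's principle: erasing one bit of information dissipates at
% least E_min of energy as heat into surroundings at absolute
% temperature T, where k_B is Boltzmann's constant.
E_{\min} = k_B T \ln 2
% Equivalently, the entropy of the environment grows by at least
% \Delta S = k_B \ln 2 for every bit destroyed: information and
% physical entropy are exchanged at a fixed rate.
```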
We have seen how entropy is related to increasing disorder. Since order and disorder are inversely related, the mathematics of logarithms enables us to relate order to negative entropy, termed negentropy. If living systems cannot buck physics, how do they acquire and sustain their order from their environment, thus compensating for the entropy they produce with the negentropy they acquire? It is apparent that they do succeed in ‘cheating’ this fundamental law, at least for a while. From the fertilized egg to the developing embryo and the growing and living body, animal life maintains its low entropy order and cohesion, and disorder of bodily function is in the realm of error rather than natural and progressive growth of entropy. Is there some unknown physics that can reconcile the observation of increasing disorder in physical systems with the observation of sustained order in living systems? What is the life that makes a system alive, and how can it be characterized and described within experimentally verified and consistent theory?
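The apparent ‘cheat’ can be made explicit in a line of entropy bookkeeping (a sketch in the spirit of Brillouin’s later formalization of negentropy, with labels of my own rather than anything from this chapter):

```latex
% The second law constrains only the total entropy of organism plus
% environment:
\Delta S_{\text{total}} =
  \Delta S_{\text{organism}} + \Delta S_{\text{environment}} \;\geq\; 0
% A living system may hold, or even lower, its own entropy
% (acquire negentropy) provided it exports at least as much entropy
% to its surroundings, for example as heat and waste.
```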
This was the conundrum that Schrödinger addressed, and his reasoning was set out seventy years ago, in the landmark inukbook discussed in the section below, in which he described a living system as feeding on negative entropy from its surroundings. Today the argument, in a refinement that Schrödinger himself accepted, would be phrased in terms of free energy exchange, which aligns more clearly with the now better understood biophysics of electron transfer and electrochemical energy gradients that characterize the bioenergetics and biochemistry of life science.
In this evolving story, the study of information has permeated from mathematics, physics and engineering, through life science and into the bioinformatics of living systems, as a unifying concept of science. It is now debated in the realm of neuroscience and is moving into medical science and towards health care. What is unknown, at this stage of the story, is how far this science will connect with the concept of information as a utility in everyday life, and specifically in support of health care. In this direction, the discussion of the nature of life moves up further levels in the brain, to the concept of mind–to consciousness and artificial intelligence. Here the contemporary interplay of neuroscience and computer science is very much alive.
As we drill down like this on information, we must keep in mind its dark side; it is not an assured good. It is sometimes harmful, and sometimes it is better not to know. An economy based on energy creates the utility of food, shelter and safety. It consumes oil to generate power, that consumption pollutes the environment–and oil runs out. There are no free meals and no free wheels. Francis Bacon (1561–1626) said that knowledge itself was a kind of power, and David Deutsch’s characterization of knowledge as information with causal power conveys much the same idea. Information is closely coupled with energy, health and economy, and the articulation and appreciation of these links are rapidly unfolding. An economy based on information systems consumes electrical power and thus pollutes. The Cloud is now said to be consuming some twenty percent of the energy distributed through the world electricity grid. Bitcoin mining currently consumes electrical power equivalent to that of the entire economy of Argentina, and ocean- and ice-buried data centres are warming the planet. Information technology consumes rare earths and these, too, run out. Data-ism creates noise and bias in actions based on information, and thus interacts with the power of knowledge.
From Life and Information to Mind and Intelligence
It is at the level where multidisciplinary science extends into matters of mind that the model and analogy of life as an information engine merges with matters of philosophy. Along my songline, this connected with Gilbert Ryle (1900–76) at Magdalen College, and his philosophy of mind. Another luminary figure encountered was Willard Van Orman Quine (1908–2000), whose perspective has been described as ‘naturalistic, empiricist, and behaviourist’.14
There is much drawing and defending of red lines. One common dividing line holds that consciously felt sensory experience is the hard problem to understand, unrelated to the mathematics of information flow, which has little to say about the deep problems of neuroscience and cognitive psychology. This perspective has been championed today by the neuropsychologist Nicholas Humphrey. Those trading accusations of egregious error talk past one another in these circles, as they do in all manner of deep discussions of the many ‘What is?’ and ‘Why is?’ questions that remain unplaced around the circle of knowledge. There is now either a great deal more or very little left to be said on the topic of ‘What is mind?’–a topic that Berkeley gave up on in the quotation that headed this chapter! Zobaczymy–we shall see!
Rather than trespassing foolishly onto these enduringly shifting philosophical sands, risking being swallowed there, it seems relevant to start from the perspective of an engineer. Placed at the interface of neuroscience, computer science and cognitive psychology, and with a keen eye on the health care needs of society, what can be built there, using the insights and methods of these disciplines, to improve and contribute usefully to health care? Now that artificial intelligence is advancing in such leaps and bounds, it seems much clearer that a lot can be done in this spirit–the challenge is to keep faith with both the science and the values in play, and in balance between them. As ever, how this balance is approached will be crucial to success and sustainability. As we have begun to see, and will see more, these are potentially very harmful and costly places in which to get things wrong.
The interplay of computer science, neuroscience and cognitive psychology presents a Popperian Open Society of the mind. An unbounded set of possibilities. A place for humble learning. Discussion of human and machine intelligence has brought experiment in this forest to forking paths in the way ahead. This was foreseen by Richard Feynman (1918–88), writing that:
Some people look at the activity of the brain in action and see that in many respects it surpasses the computer of today, and in many other respects the computer surpasses ourselves. This inspires people to design machines that can do more. What often happens is that an engineer makes up how the brain works in his opinion, and then designs a machine that behaves that way. This new machine may in fact work very well. But I must warn you that it does not tell us anything about how the brain actually works, nor is it necessary to ever really know that in order to make a computer very capable. It is not necessary to understand the way birds flap their wings and how the feathers are designed in order to make a flying machine. It is not necessary to understand the lever system in the legs of a cheetah, that is an animal that runs fast, in order to make an automobile with wheels that goes very fast. It is therefore not necessary to imitate the behaviour of nature in detail in order to engineer a device which can in many respects surpass nature’s abilities.15
There is an analogy, here, with how physics has come to terms with the extent of its unknowing, as previously described. Experimental and theoretical quantum science reached a boundary of understanding of the nature of reality and decided to duck the question and get on with calculating well-corroborated solutions of the Schrödinger equation, to learn what they tell us in specific cases. Continuing perplexity about the nature of the entanglement of quantum states has not held back advances in quantum computing, and this in turn has led to new ways of thinking about the issue. Continuing perplexity about the nature of gravity alongside the other fundamental forces and the relationships of concepts of mass, energy and time in the observed universe have not impeded space travel. Likewise, perplexity about the nature of mind has not held back interplay of computer science and neuroscience. There is rich potential for exploring the interplay of machine intelligence with theory of mind, and with other still perplexing problems in mathematics, science and medicine. There is similar potential to explore its interplay with problems of social and environmental policy and practice. Machine intelligence has the potential to change life in almost every way, but it cannot be allowed just to happen. It requires the mixture of enterprise and innovation anchored in a common ground of values, principles and goals.
Artificial Intelligence
The seventy years of my songline have seen the emergence of artificial intelligence (AI). It has variously been described and referred to in every chapter of this book. I came across it from the time of its origins in the expert systems of the 1960s: in Dendral, Meta-Dendral and Heuristic Dendral at Stanford University and Massachusetts Institute of Technology (MIT);16 in the LISP language, a pioneering language of computational method designed by John McCarthy (1927–2011); in the concerns about computer science and human reasoning raised by Joseph Weizenbaum (1923–2008);17 and in Donald Michie’s (1923–2007) work that developed from his wartime connections with Alan Turing (1912–54) in the United Kingdom.18
Generic and domain-specific systems of medical decision logic and decision making came and went: Caduceus/Internist-1,19 Iliad,20 DXplain,21 MYCIN.22 Methods of image classification–for example, automated chromosome karyotyping and identification of abnormal histopathology slides–came and went, too. The power of the computer industry created and disseminated powerhouse systems such as IBM’s Watson, of Jeopardy! fame. And seemingly more generic and agile methods, such as those that underpin AlphaFold and ChatGPT, are gaining traction.
The theory of machine learning has drawn on and evolved from the mathematics of Bayesian networks, neural networks, genetic algorithms and more. It is proving of increasingly high economic significance for the world of automation, robotics and autonomous systems. Demonstrations of its powerful applications are causing increasing concern about governance and impact on human society. The number of words said and written about AI systems is the latest explosion of the Information Age. Such patterns of verbal excess tend to reflect chaotic times and correlate inversely with what is known, experimentally. The problem is that the knowledge these systems draw on and express tends to be cloaked or hidden, often for reasons of commercial propriety. Mathematical methods are not patentable–kept secret, they would convey no advantage to mathematicians or to those who depend on mathematics. AI methods, by contrast, are being pursued as protected intellectual property. Kept secret, they confer commercial advantage but do not advance the common ground of knowledge on which all depend, in the way that shared and co-developed mathematical discipline and methods do. This is a revolution for which an assessment of implications and consequences is, Zhou Enlai-like about the French Revolution, ‘too early’!23
In his recent televised discussion with Alan Yentob, the prize-winning novelist Kazuo Ishiguro spoke softly and clearly about these concerns.24 They talked about his novels in the context of his life and times in Japan and England. Roughly one novel every five years–I like him for that and for his explanation of why his creativity evolves in five-year epochs. The most recent novel, Klara and the Sun, explores an evolving (he says it is not far in the future) world of humans and artificial friends (AFs).25 Klara is Josie’s AF, and Josie is ill and may not live. The book is Klara’s account. Her concern is whether she will evolve to become, and continue, sick Josie’s life. Ishiguro was pensive.
Landmark Contributions
Many who have specialized in mathematics, science and engineering have reflected on how their different disciplines can connect with and illuminate the origins and fundamental nature of life and living systems. As we have seen, the term information has travelled widely through these connections, in the search for greater understanding of what life is, how it came into being and how it functions and evolves. These connections link theory and experiment with concepts such as symmetry, topology, calculus, order, communication, control, energy and computation.
It seems fitting to celebrate here some of these pioneers, decade by decade over nearly eighty years. Some inukbooks that remind me of them every day are: What Is Life? by Schrödinger; The Computer and the Brain, by von Neumann; Programs of the Brain, by Young; Feynman Lectures on Computation, by Feynman; Life’s Other Secret, by Ian Stewart; I Am a Strange Loop, by Douglas Hofstadter; Information Theory and Evolution, by John Scales Avery; The Vital Question, by Lane; The Creativity Code, by du Sautoy; and The Demon in the Machine, by Paul Davies.
In drawing together this selection–a synthesis of syntheses–I risk making an even greater than usual fool of myself as their individual contents are, in themselves, wide-ranging and well beyond my detailed knowledge, such is the range and pace of advance. I have collected these books around me and consulted them for inspiration, as I write. Here they are:
![An image of ten books laid out in two rows of five. Top row, left to right: 1944–Erwin Schrödinger: What Is Life?, 1956–John von Neumann: The Computer and the Brain, 1978–John Zachary Young: Programs of the Brain, 1996–Richard Feynman: Feynman Lectures on Computation, 1998–Ian Stewart: Life’s Other Secret. Bottom row, left to right: 2007–Douglas Hofstadter: I Am a Strange Loop, 2012–John Scales Avery: Information Theory and Evolution, 2015–Nick Lane: The Vital Question, 2019–Marcus du Sautoy: The Creativity Code, 2020–Paul Davies: The Demon in the Machine.](image/Fig6.1Inukbooksfinal.jpg)
Fig. 6.1 The inukbooks that draw together a number of career-long contributions that have illuminated the ‘what is’ and ‘why is’ questions about life and information, discussed in this chapter. Photograph by David Ingram (2023), CC BY-NC.
The books track back nearly eighty years in exploring information as a fundamental concept in relation to living systems. This is a domain that Schrödinger brought to life in his series of landmark lectures published in 1944, entitled ‘What is Life?’.26 I then introduce the Silliman Lectures on The Computer and the Brain, which looked at the brain from the perspective of computer science, as a computational machine.27 These lectures were prepared by the mathematician von Neumann, the originator of the eponymous architecture of the computer Central Processing Unit (CPU).
I move next to Programs of the Brain by Young, remembered for his treatise of the times on the life of mammals and for research on nerve function.28 The book is based on his 1975–77 Gifford Lectures at the University of Aberdeen. I remember this tousled grey-haired figure, a legendary UCL personality, striding along Gower Street outside the medical school where I worked in my PhD days. His is the broadest-ranging review, embracing philosophy of mind and revisiting the pioneering work of Sherrington who, a hundred years ago, was the first to characterize the integrative nature of the human nervous system.
Next in the selection is the Feynman Lectures on Computation, edited by Anthony Hey and first published in 1996, which is the best entry route I know to the science of computation.29 Following this, Stewart’s Life’s Other Secret stands out by bringing a mathematical perspective on the subject.30 I also visit the polymath Hofstadter and his book I Am a Strange Loop, which weaves Gödel numbers with cognitive psychology in an imaginative conjecture about the nature of conscious thought.31 I include next the book by Avery, Information Theory and Evolution.32 This is a good source of reference that collates a wide range of materials and provides more mathematical content than the other books in the selection. I then introduce another UCL life science colleague, Lane, and his The Vital Question, which sets biology, bioinformatics and bioenergetics side by side in context of his question.33
Coming back to mathematicians, du Sautoy wrote The Creativity Code, which reaches beyond computation and machine intelligence to machine creativity.34 And coming back to physics, Davies has brought things up to date in 2020 with The Demon in the Machine, opening the subject out into speculation about how the story will evolve into medicine of the future.35 He was a physics PhD student at UCL at the time I was doing my own PhD there, as he told me when we met very briefly, when I was collecting this inukbook directly from him, at a New Scientist Live event in London where I heard him speak.
1944–Erwin Schrödinger: What Is Life?
Schrödinger has an amazing and vivid biography. It extends through two World Wars and the turmoil between them: studying physics in Vienna from 1906, then working in a succession of appointments in Austria, Germany, England and Ireland, succeeding or in parallel with the great names of Boltzmann, Einstein, Planck and Born. The intermissions for military service came first in 1910–11 (which he spent in what he describes as the beautiful old town of Krakow), and then from 1914, during subsequent service in Italy (which he describes as uneventful and as giving plenty of time for the study of Einstein’s 1916 paper on relativity theory, a subject which he struggled to understand). From 1933–36, he was a fellow of Magdalen College, Oxford, sponsored there by Frederick Lindemann (1886–1957, later Lord Cherwell, Winston Churchill’s science advisor). Around this time, he started to turn his mind to the connections of physics and chemistry with biology. Later, when settled in the wartime years in Ireland and working at Trinity College Dublin, he delivered a set of lectures on the question, ‘What is Life?’ In subsequent years, Trinity was a leading European centre in Health Informatics, under the computer scientist and my colleague, Jane Grimson, for whom I acted as a visiting examiner for some years. Jane was subsequently Vice-Provost of the College and a leader of the engineering profession across Europe. In the history of Trinity College, there is thus a close connection between the three dimensions of information–information and life science, information for health, and information technology. Schrödinger supplemented his lectures on life with further lectures on mind and matter, which he considered an even more exacting intellectual challenge!
The following excerpt is quoted at length from the Preface to What Is Life? to showcase his magnificent clarity of thought and style:
Let me use the word ‘pattern’ of an Organism in the sense in which the biologist calls it ‘the four-dimensional pattern’ meaning not only the structure and functioning of that Organism in the adult, or in any other particular stage, but the whole of its ontogenic development from the fertilized egg cell to the stage of maturity, when the organism begins to reproduce itself. Now, this whole four-dimensional pattern is known to be determined by the structure of that one cell, the fertilized egg. Moreover, we know that it is essentially determined by the structure of only a small part of that cell, its nucleus. This nucleus, in the ordinary ‘resting state’ of the cell, usually appears as a network of chromatin, distributed over the cell. But in the vitally important processes of cell division (mitosis and meiosis) it is seen to consist of a set of particles, usually fibre-shaped or rodlike, called the chromosomes […]36
To reconcile the high durability of the hereditary substance with its minute size, we had to evade the tendency to disorder by ‘inventing the molecule’, in fact, an unusually large molecule which has to be a masterpiece of highly differentiated order, safeguarded by the conjuring rod of quantum theory. The laws of chance are not invalidated by this ‘invention’, but their outcome is modified. The physicist is familiar with the fact that the classical laws of physics are modified by quantum theory, especially at low temperature. There are many instances of this. Life seems to be one of them, a particularly striking one. Life seems to be orderly and lawful behaviour of matter, not based exclusively on its tendency to go over from order to disorder but based partly on existing order that is kept up […]37
What is the characteristic feature of life? When is a piece of matter said to be alive? When it goes on ‘doing something’, moving, exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to ‘keep going’ under similar circumstances.38
He wrote of the chromosome as containing a ‘code-script’, the entire pattern of the individual’s future development and of its functioning in the mature state.
Schrödinger’s was, as he himself acknowledged, a bold foray into the domain of living systems. He argued from the outset that a living organism requires exact physical laws: without them, life could not be sustained, nor could humans be capable of orderly thought. He recognized the incompleteness of his analysis, and the implication that greater understanding would reveal a need for new physics.39
The question he posed at the outset of his lectures was: ‘How can the events in space and time which take place within the spatial boundaries of a living organism be accounted for by physics and chemistry?’ He took the issue of brain organization into the subsequent lectures on Mind and Matter, in 1956.
Schrödinger reasoned from principles of statistical thermodynamics, the quantum physics of the atom and the chemistry of molecules, showing the need for new scientific insight into how orderly life succeeded in persisting and reproducing, given the composition, size and sensitivity of the materials from which it was made.
The science of genetics had evolved to that point through experiment: selective breeding, microscopic analysis and studies of mutations induced by X-rays. And from these, a picture of genes structured within chromosomes had emerged, with experimental methods to relate changes in the band patterns in images of the chromosomes with changes in the molecular structure of specific genes. From microscopy images revealing the pattern of the fertilized egg cell, to the patterns of inheritance in breeding experiments, to the effect of different doses of ionizing radiation on mutation of the gene as revealed in images of subsequent development of the organism, he brought together estimates of numbers of genes present, and their sizes, in terms of number of atoms they contained.
From the quantum theory and experimental science of molecular mutations, he reasoned about the challenge living systems overcome in persisting for many years and over generations, inherited and communicated through fertilized eggs. He reasoned, from physical principles, about the scale and composition of the genetic material of the cell and its persistence over time. He argued that the classical statistical physics of the preceding century could not account for such reliable persistence being generated from the amount of material present at so small a scale. Reasoning then at the level of molecular chemistry, and using the explanation provided by quantum theory for the stability of chemical bonds and molecular structures, he went on to show that this theory, too, was insufficient to explain the genetic variety of expression. Reflecting on the X-ray crystallographers’ insights into material structure, he reasoned that a regular, periodic crystal structure for the chromatin would not suffice to account for the observed patterns of scale, variety and persistence.
From the estimates of the size of the genes he reasoned that they must have the form of what he termed an ‘aperiodic solid molecule’ (as opposed to liquid or gas). This molecule was non-repeating, in the sense that every element would be capable of carrying information (his coding script), enabling the relatively small number of atoms and genes comprising the molecule to code for the growth and variety of structures and functions of the living organism, as evidenced by the development of the embryo from a single egg cell, as observed in life.
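The force of the ‘coding script’ idea lies in simple combinatorics: a small aperiodic arrangement of distinct elements can label an enormous number of alternatives. A sketch in the spirit of Schrödinger’s Morse-code illustration (the code and arithmetic are mine, not quotations from the book):

```python
# With 2 signs (dot, dash) in groups of up to 4, one can form
# 2 + 4 + 8 + 16 = 30 distinct 'letters'; a third sign and longer
# groups multiply the repertoire enormously.

def code_capacity(signs, max_group):
    """Count distinct ordered groups of 1..max_group symbols from `signs` signs."""
    return sum(signs ** k for k in range(1, max_group + 1))

print(code_capacity(2, 4))    # -> 30
print(code_capacity(3, 10))   # -> 88572: three signs, in groups of up to ten
```

The capacity grows exponentially with group length, which is why a molecule of modest atomic count can, in principle, specify the development of an entire organism.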
Here were the origins of a theory that made connections between information and living systems. Schrödinger also reviewed in depth the connections of classical with statistical thermodynamics. To recap from the sections above: in the former, the measurable physical quantity of entropy is calculated in linear proportion to heat energy flux and in inverse proportion to temperature. Boltzmann’s fundamental advance described this system in terms of the dynamic distributions of gas molecules and their natural evolution from orderly towards disorderly states. He invented a mathematical model to characterize this disorder, which he connected with entropy measurement. In this way, entropy is characterized by a logarithmic relation with Boltzmann-derived disorder; and since order and disorder are inversely related to one another (high disorder implies low order, and vice versa), it follows that order may be quantified as a negative entropy. This opens the door to an image of living systems that appear to defy the second law of thermodynamics, whereby entropy can only increase. At issue was the question of how to reconcile the observed sustained order of living systems, from cell to embryo and living organism, with the classical physics of entropy as expressed in the second law. In looking at the energy balance of living systems, the thermodynamic view expressed was that a living organism ‘feeds’ on order, and the negative entropy thereby acquired balances the natural production of entropy in its everyday functions, thus preserving its living state of order. It was a subtle and contested argument, and he later adjusted it in response to criticism.40
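In modern notation, the chain of relations reviewed here might be sketched as follows (the symbols k for Boltzmann’s constant and D for the measure of disorder are the conventional ones; the compact presentation is mine):

```latex
% Clausius: entropy change is heat flux over absolute temperature
dS = \frac{\delta Q}{T}

% Boltzmann: entropy is logarithmically related to disorder D
S = k \log D

% Order as the inverse of disorder, hence 'negative entropy'
-S = k \log \frac{1}{D}
```

The last line is the form in which Schrödinger expressed the order on which the living organism ‘feeds’.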
1956–John von Neumann: The Computer and the Brain
In this book, based on a manuscript prepared for the Yale University Silliman Lectures of 1956, we encounter the voice of the Hungarian mathematician and early pioneer of computer science and technology, von Neumann. He was unable to deliver the lectures and died of cancer in early 1957. His wife, Klara, completed the manuscript and added a wonderful Preface, connecting the book with her husband’s work on mathematics and later as an early pioneer of electronic computer architecture. He made many contributions in pure and applied mathematics through the era in which Kurt Gödel (1906–78) upset the apple cart of Principia Mathematica; von Neumann was quick to appreciate Gödel’s reasoning about its incompleteness. This was the era in which mathematics and formal logic found their way into the foundations of computer science. After wartime work on the Manhattan Project, he led the team at the Institute for Advanced Study in Princeton that produced the early prototype stored-program electronic computer, the IAS machine, whose later copies included the JOHNNIAC, named after him. This work led him to study the brain and nervous system, looking for inspiration there for its design principles. He was clearly a towering figure in the optimistic postwar era of America–recognized in senior roles in the government of President Dwight Eisenhower.
As the title describes, the topic of the book is an analogy between brain and computer, seen from the perspective of computer science, as a computing machine. It is interesting in how it reveals the thinking that bootstrapped early machine architecture. It is a mindset focused on crafting technology to perform mathematics. This technology-inspired thread has been taken forward by Raymond Kurzweil in his How to Create a Mind, in 2012.41
Von Neumann’s book is quite short–just eighty-two pages–and in two halves. I read it again last night. The first is about machinery of numeric calculation. That is, about arithmetic and logic–the basic operations involved and how these have been enacted by different machines, starting from mechanical analogue computers, and translating on into the early world of electronic computers and hybrids of the two. He was clearly closely involved with the engineering involved, as he gives chapter and verse about kinds and numbers of components and the precision with which they worked and were coupled together in the machines. The picture is one of a selection from a toolbox of components, choosing and customizing them to perform calculations. He describes the requirements for arithmetic, memory and programming, and for honing these together. In the analogue case, he makes a connection with the Babbage engine-like world of differential gears used in car transmissions, showing their utility in combining and averaging inputs, and of rotating discs driven to integrate inputs, and how these were used as basic operations of the mechanical machine. He places himself in the middle, knowing what he needs as a mathematician and what the engineer can provide him with, as component methods, and marries the two.
Moving to electronic computers, he gives details of circuits of thermionic valves, rectifiers, capacitors, resistors and their component magnetic and electrical properties, size and speed of operation, and precision with which they worked. He sketches a hierarchy of devices for storing, processing and transporting data around the machine, considering which of these needed to operate rapidly on the critical path of the calculation, and which needed to operate more slowly in support, in the background of the calculation. The options ranged widely over acoustic delay lines, electromechanical storage devices and electronic components based on ferromagnetic and ferroelectric materials.
Von Neumann focuses on the fast-acting memory registers used for number crunching, showing the scale of logic operations required for the digital arithmetic that had to be performed on the numbers they contained. For example, he talks of a twelve decimal digit number system requiring 196-tube (thermionic valve) registers. These were huge, power-hungry and heat-producing machines: the ENIAC, built at the University of Pennsylvania, contained nearly eighteen thousand such valves. He works through design considerations around data and program, starting from early ‘plug and play’ programming, where the program consisted of wires patched between electronic components. From there he moves on to stored programs and the greater sophistication of calculations they enabled.
The first part of the book is interesting in showing the creative engineering involved in matching capability of components with design of machine, to meet requirements of calculation. The second part of the book is interesting in a completely different way–it reveals how von Neumann, as computer architect, was deeply engrossed in the structure and understanding of the human nervous system. In creative engineering, it is common to work from a prototype model, treating this as a test bed in which to explore further necessary refinement. The final design may have borne limited resemblance to the prototype, but arriving at that design depended on going through the prototype stage. It cannot be reasoned into existence because its design is an art of the possible, and possibility is only explored experimentally, by making and doing things, working with models and improving or rejecting methods.
Neurology at that time was much focused on sensory mechanisms, the action potential through which information is transmitted and the pathways of connection and interaction within the brain and nervous system. Neuroscience and philosophy of mind are in a wholly different era today, compared with the time of von Neumann, as is the world of nanoscale semiconductor and optical technology compared with that of thermionic valves and acoustic delay lines.42 Functional brain imaging methods have been pivotal to the evolving science. Von Neumann touches lightly on the connections of mathematics, physics and chemistry with description of sensory mechanisms. His commentary is interesting in the eye he casts over the analogy of computer and brain:43
- The speed of the computer processing unit being 10^4–10^5 times faster;
- The natural component of the brain being smaller by the order of 10^8–10^9;
- The brain as having more numerous processing units, operating slower and in parallel, as compared with the computer operating with fewer units, faster and serially;
- The neuron as the ‘basic digital organ’ of the brain, which he characterizes and compares with the computer circuits, in terms of threshold of activation and time to stabilize (‘summation time’);
- In discussing the nature and location of memory, he describes the modern computer as needing 10^5–10^6 bits of memory;
- He suggests ‘genetic memory’ in chromosomes as a component of the brain’s memory;
- He suggests a parallel between analogue/digital, hybrid processes and genes connecting with enzyme processes;
- In considering logical structure and arithmetic function, he compares the propagation of error in digital arithmetic, requiring 10–12 decimal digits of precision in number representation to keep it acceptably in check, with the human brain, which he describes as doing mental arithmetic with just 2–3 decimal digits of precision. By comparison, he believes the brain to achieve greater reliability in logical operations;
- He talks about messages in the brain communicated as periodic pulse-trains, conjecturing that statistical relationships between such time-series might also convey information–thinking there of ‘correlation-coefficients, and the like’.
In his summary,44 he talks about the language of the brain and how this differs from the language of the machine. He talks of the nervous system being based on two types of communication–what he calls orders (logical ones) and numbers (arithmetic ones). He suggests that the variety of spoken languages might indicate that there is nothing absolute and necessary about them, and that logic and mathematics are themselves historical and accidental forms of human expression, which might exist in other forms than those we are accustomed to. He uses the example of visual perception, comparing what the brain achieves in three synapses of logical processing along the optic nerve, followed by low-precision arithmetic in the central nervous system, with a machine built in an analogous manner, which would, he says, clearly fail to perform at all. His conclusion is that ‘logics and mathematics in the central nervous system, when viewed as languages, must structurally be essentially different from those languages to which our common experience refers’.
It would be so interesting to have him sitting here, now, reviewing how technology, computer science and machine architecture, neuroscience and machine intelligence have evolved in the sixty years that followed his own sadly shortened life.45 It would be interesting to have Noam Chomsky with us, as well, to add his thoughts on the language of the brain. It would be interesting to see how the two human personalities would have gelled. Von Neumann died a year after a diagnosis of prostate cancer that quickly spread to bone and brain. What he pioneered has been instrumental in today’s ability to prolong and save the lives of the patients who followed him.
1978–John Zachary Young: Programs of the Brain
The UCL anatomist Young is remembered for his major work, The Life of Mammals, a book I read when expanding my learning from mathematics and physics into biology, medicine and computer science, in 1971. This later inukbook is based on his Gifford Lectures of 1975–77 at the University of Aberdeen.46 He was for twenty-seven years the Head of the Department of Anatomy–quite a stint but in a different era when academic leaders answered to themselves, by and large, leading and managing royally. A bit like hospital consultants! Times have changed in academia, and in medicine, too–leaders cannot, should not and would not wish to persist that long. They answer more widely and lose energy through the exigencies of being royally managed, more than managing royally. Creative souls keep their heads down and away from management pressures, if they wish to survive the time it takes to make an enduring difference in their field of endeavour, the likes of which Young brilliantly exemplified.
Notwithstanding the advance of anatomical neuroscience since its publication in the late 1970s, unleashed by new experimental methods such as nuclear magnetic resonance (NMR) imaging, this book is still spellbinding in its breadth and majesty. I speed-read it again, last night, getting ready to write about it today. The chapter titles summarize the scope embraced and the author’s immense knowledge and wisdom:
1. What’s in a brain?; 2. Programs of the brain; 3. Living and choosing; 4. Growing, repairing, and ageing; 5. Beginning; 6. Evolving; 7. Controlling, coding and communicating; 8. Repeating; 9. Unfolding; 10. Learning, remembering, and forgetting; 11. Touching, feeling, and hurting; 12. Seeing; 13. Needing, nourishing, and valuing; 14. Loving and caring; 15. Fearing, hating and fighting; 16. Hearing, speaking, and writing; 17. Knowing and thinking; 18. Sleeping, dreaming, and consciousness; 19. Helping, commanding, and obeying; 20. Enjoying, playing, and creating; 21. Believing and worshipping; 22. Concluding and continuing.
The material draws widely on human biological science and places it within the framework of an integrated information system. It spans from nerve cells and human physiology to psychology and philosophy of mind. It seems invidious to paraphrase the author’s intent. Here are his words:
I propose to say that the lives of human beings and other animals are governed by sets of programs written in their genes and brains. Some of these programs may be called ‘practical’ or physiological and they ensure that we breathe, eat, drink, and sleep. Others are social, and regulate our speaking and other forms of communication, our agreeing, and our loving or hating. Perhaps the longest-term programs are those that ensure continuing not of ourselves but of the race, programs for sexual activity and mating, programmes for growth, adolescence, and, indeed, for senescence and dying. Perhaps the most important programs of all are those used for the activities that we call mental, such as thinking, imagining, dreaming, believing, and worshipping.47
Acknowledging that the nature of language underlies all discussion of knowledge, he embraces language as consisting of sets of signs, and the study of signs (semiotics) as throwing light on the nature of life and on the communication that pervades all of living. He quotes Charles Peirce (1839–1914), who originated the study of signs, writing in answer to the question, ‘What is man?’, that ‘Man is a symbol’.48 Young sees important truths contained in this, perhaps rather mysterious, way of looking at things. He describes the essence of a living thing to be that it is organized and maintains its organization and can only do so because it receives, from its past history, a plan, or as he puts it, a program, of what to do to keep alive. As an aside, it looks as though he would have found interest in Hofstadter’s later description of consciousness as an interacting ensemble of symbols.
By program (this spelling actually having preceded the French-inspired programme), he means and follows the definition: ‘a plan of procedure; a schedule or system under which action may be taken towards a desired goal’. He distances this sense of program from the purely logical steps of an algorithm enacted in computer software. But he reasons that the information in the program must have a physical embodiment as a system of signs that maintains the living system in line with its environment and provides a symbolic representation of what goes on there. He says: ‘I want especially to emphasize the importance of selection of objectives and of the historical influence on everything that we do. Some of the influences on selection of plans are recent, depending on what has happened in the last few minutes, hours, or day. Other influences stretch back through selections made in the years of our life, in childhood, and in prenatal life and in the DNA of our genes, by natural selection over countless generations’.49
The book develops a synthesis extending from programs governing biological mechanisms to programs conditioning and influencing human choices. This he illuminates as an information system that organizes and regulates the biological function of all living organisms and provides a framework of choices in human life. In this he wished to show how knowledge about cells in the brain can help our daily lives. He traces how René Descartes (1596–1650) compared operations of the brain with those of automata worked by hydraulics, and recalls Sherrington’s 1937 description of it as an enchanted loom, with lights flashing as messages weave around the brain.50 Again, akin to Hofstadter’s imagination!
Young distinguishes programs in four main languages followed in human life, expressed in distinct media, with the first two shared by all mammals:
- The fundamental program is inherited, written in the triplets of bases of the DNA code.
- The second language in a mammal is embodied in the structure of the brain. Its units are the groups of nerve cells so organized as to produce the various actions at the right times.
- Speech and culture represent the third level of the human life program, largely embodied in the organized sounds of spoken language.
- These programs find their physical expressions and codes not only in human habits and speech sounds but also in writing and other forms of recorded speech. These provide a fourth level of coding, also peculiar to man, enabling some of the information for living to be recorded outside of any living creature.51
In thinking about the origin of codes and their meanings, he describes mapping of brain structures to body anatomy (drawing on his deep knowledge of octopus and squid axon) and how brain function ‘must provide a faithful representation of events outside, and the arrangement of the cells in it provides a detailed model of the world. But the function of this model is to provide action suitable for survival, so this topographically organized representation somehow provides a set of hypotheses about what is likely to happen, and of programs for dealing with these events’.52
Through this, he joins language of information, symbol, sign and code, from cellular function and signal to knowledge and thinking, where a great deal of conscious life consists of testing hypotheses. He writes that: ‘complicated internal thinking probably involves processes similar to the active search for the meaning of sights or sounds’.53
His concluding chapter summarizes his thesis that: ‘Progress in evolution by accumulation of information is especially revealing to mankind because humans have achieved an exceptional capacity for gathering information’.54 With a beautiful sense of language and imagery, he writes:
We need all the knowledge we can collect about ourselves and our propensities for good and evil. We can see the biological foundations for these, but for wisdom about how to act we must continue to look largely to the traditional skills of philosophy, theology, and politics and the newer ones of anthropology, psychology, and sociology. These are cultural problems, and they require investigation mainly by those who study people as individuals and in groups, and their relations to each other. […] Anatomical studies provide the most valuable of all clues to the functioning of the cortex. They show that the information about the features of the world is projected onto the cortex and recombined in a series of detailed Maps. This analogue or model of the world is the basis of all powers of computation. So, we can combine the detailed knowledge of the sequence of events in a few nerve cells, that is given by the microelectrode, with the knowledge about the arrangement of many of them that is provided by the microscope. We may thus begin to decipher how individual cells and groups of them interact to provide the coded script in which the programs of the brain are written.55
1996–Richard Feynman: Feynman Lectures on Computation
The first volume of Feynman’s Lectures on Physics opened me to the world of physics in 1964.56 The maths was not too difficult, and I devoured the book in the summer before starting the physics course at Oxford. It was not until three decades later that Tony Hey edited and published this inukbook, based on a course of lectures on computation that Feynman had developed and recorded in the early 1980s, again for the students of California Institute of Technology (Caltech).57 It exemplifies a similar tour de force of the Feynman mind, connecting the mathematics, physics and engineering of computation and computing machines. Hey was a postdoctoral student at Caltech in the early 1970s, having studied physics at Oxford and overlapping my years there. He was Feynman’s choice as editor and started work in the year that Feynman died from cancer. Our paths crossed again, briefly, in the UK e-Science Programme that he led in the early 2000s.
Feynman had a unique ability to work things out from first principles. Here, he connects the theory of computation and the theory of information with the theory of physics. The book discusses the links with thermodynamics and lays prophetic foundations for the advent of quantum computation. Feynman also developed early ideas on parallel computation. It is not a computer science textbook but a masterly example of interdisciplinary connections that have shaped and are still transforming computer science and theoretical physics, connecting them through the higher-level abstraction of quantum information theory.
The book is equally impactful in shedding light on the environment that Feynman and his many illustrious colleagues created and worked in at Caltech. Surely one of the most exciting crucibles of physical science, ever. Feynman did not extend his interests into the world of bio- and life science. These connections are pursued in some other inukbooks I have drawn together, here. Some tread the same path as Feynman in connecting information with physics, before extending on to the ‘what is’ and ‘why’ questions about life and living systems.
1998–Ian Stewart: Life’s Other Secret
Stewart believes mathematics will have a lot more to say about life’s secrets. He quotes Galileo Galilei as an impressive provenance of that belief: ‘The book of Nature is written in the language of mathematics’. In his own words, Stewart says that ‘mathematics is the study of patterns, regularities, rules, and their consequences–the science of significant form–and nowhere is form more significant than in biology’.58
This book is a story of symmetry and pattern in the living world and the deep and simple principles that underlie the form and formation of living systems. These principles are expressed in their purest form in the language of mathematics and address the question of what life is by reframing it as a question of what life does. Stewart is fluent in the science of life and its languages in physics, chemistry and biology. The book is richly populated with examples that frame the study of a living system in terms of ‘the stuff that it happens to be doing right now’, within a context descriptive of all possible things that might happen to that system–a phase-space. He describes the approach thus: ‘Instead of looking at one water wave and wondering why it does what it does, we look at an entire space of possible shapes and movements for water, seek relationships among them, and work out how simple natural rules pick out the behaviour that actually occurs’.59
This level of knowledge cannot, he argues, be found rooted in the science of genetics, without taking on board the mathematics of patterns which constrain possible life forms and are observed in living systems. This abstract endeavour is termed ‘morphomatics’ and the possible designs of living systems populate ‘morphospaces’. In this domain, mathematical language expresses principles of continuity, connectivity, feedback, information, order, disorder, bifurcation, learning, autonomy and emergence of living systems and the symmetries of their patterns.
The book is anchored in examples from the plant and animal worlds, ranging from molecular and cellular level systems and processes, through to ecosystems and their simulation. Examples that struck me most vividly were those tracing plant structures to Fibonacci numbers and magic numbers to the structure of viruses.
The governing principle of the three-dimensional structure of viruses is akin to that of a crystal, where atoms adopt a lattice structure constrained to minimize energy. Many viruses are observed to form in a pattern of approximately spherical icosahedrons. This is a minimum energy structure, much as a drop of water constrained by intermolecular forces at the surface adopts a spherical form exhibiting minimum energy, within the pattern of all possible forms that a drop of that volume might potentially adopt.
Likewise with the multiple copies of different protein units that form a virus: a spherical form is preferred, and the mathematics of the icosahedron with increasing numbers of six-sided faces is illuminating. The angular icosahedron is smoothed in the truncated icosahedron–as in a football, which mixes twelve five-sided faces with twenty six-sided faces. The number of units allowed mathematically, fitting together (tessellating) smoothly and approximating ever more closely to a sphere, follows a number series called the magic numbers. The series up to 300 is 12, 32, 42, 72, 92, 122, 132, 162, 192, 212, 252, 272.
Turnip yellow mosaic virus has 32 units, human wart virus has 72 units, reovirus has 92 units, Herpes simplex has 162 units, chicken adenovirus has 252 units, and infectious canine hepatitis has 362 units, another magic number. As Stewart says, ‘It would be difficult to find more compelling evidence than this pattern of DNA, RNA, and viruses to show the importance of mathematical patterns in making life possible–certainly earthly life, the only kind we know’.60
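One way to generate such a series is via the triangulation numbers of the Caspar-Klug description of icosahedral shells, a standard result in structural virology that is assumed here rather than spelled out in the text: allowed shells have T = h² + hk + k² for non-negative integers h and k, and contain 10T + 2 units. A sketch:

```python
# Caspar-Klug sketch (an assumption beyond the text): allowed icosahedral
# shells have T = h^2 + h*k + k^2 and hold 10*T + 2 protein units.

def allowed_unit_counts(limit):
    """All unit counts 10*T + 2 up to `limit`, with T = h^2 + h*k + k^2."""
    counts = set()
    for h in range(20):
        for k in range(h + 1):          # k <= h avoids duplicate (h, k) pairs
            t = h * h + h * k + k * k
            if t > 0 and 10 * t + 2 <= limit:
                counts.add(10 * t + 2)
    return sorted(counts)

series = allowed_unit_counts(400)
print(series[:6])                       # -> [12, 32, 42, 72, 92, 122]
# Every unit count quoted for the real viruses appears in the series:
assert all(n in series for n in (32, 72, 92, 162, 252, 362))
```

The series begins exactly as Stewart lists it, and the unit counts observed in real viruses all belong to it.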
The Fibonacci number series has long found success in describing the branching hierarchical structures of the developing plant world. The Game of Life program shows how simple repeating rules applied across a two-dimensional lattice can lead to images traversing and replicating on the computer screen. Stewart draws on examples of cellular division in embryos, showing patterns of numbers that shape and constrain their three-dimensional symmetry. The precise correspondence between what the mathematics of symmetry would suggest and what is observed in successive generations of cells is, again, striking.
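The rules in question are Conway’s Game of Life: a cell is born with exactly three live neighbours and survives with two or three. A minimal sketch, using the standard ‘glider’ pattern, whose shape reappears one cell diagonally onward every four generations:

```python
from collections import Counter

def step(cells):
    """One generation. `cells` is a set of (x, y) live-cell coordinates."""
    # Count how many live neighbours each candidate cell has
    counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in cells)   # birth, or survival
    }

# The standard glider: five cells whose pattern recurs, shifted, every 4 steps
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after = glider
for _ in range(4):
    after = step(after)
print(after == {(x + 1, y + 1) for x, y in glider})   # -> True
```

A handful of counting rules, applied uniformly, produce a shape that ‘travels’ across the lattice, which is the point Stewart draws from the program.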
We further find an illuminating perspective on how symmetry constrains possible patterns of life forms but does not homogenize living systems.61 The diversity of life forms evolves as transformations arise. There is mathematics descriptive of these transformations of systems–those that maintain symmetry (rotation, translation, reflection and dilation) and those that break symmetry. Instabilities arise that break symmetries and lead to new life forms. Physical laws are also understood in the mathematical language of symmetry and broken symmetry.
A system constrained to spherical symmetry and form can break into a new form, following a new symmetry, when it experiences a perturbing force that breaks the spherical structure. Stewart uses the example of a squashed ping-pong ball being distorted into a circular symmetry as it becomes unstable and collapses under an applied pressure. Life forms maintain stability but undergo a change of pattern as they grow. Spherical symmetry encapsulates the growing frog embryo to a size of about one thousand cells. At the stage of gastrulation, the pattern of development breaks from this symmetry into the circular symmetry, described mathematically in like manner to that governing symmetry breakage in the compressed ping-pong ball. The embryo and living frog come into being as a stable living entity, through the breakages and transformations of symmetry that govern its permitted forms and stability at different stages of development.
The book poses challenges to the pursuit of human knowledge and understanding on several levels. The breadth of examples is evidence of the importance of multidisciplinarity in science. But Stewart is critical of what he sees as the genetic determinism of our times, where everyone is ‘determined’ (maybe predetermined), whether it be in arguments of philosophy, mathematics, science, engineering or health care. There is an important debate about connection–of measurement and modelling of systems studied, perspective of discipline applied, and to what end, and how these cross-fertilize with one another to achieve insightful and useful ends. Here lie perennial problems of finding tractable mathematics that can unify basic principles of living systems from the level of molecules and cells to human and global ecosystems. One argument goes that the pursuit of such knowledge should proceed through disciplines characterized by their purity of pursuit, leaving aside other cooperative endeavours characterized as ‘trade’. The counter argument is that this approach comes to mean less and less about more and more. Determinism is in part philosophy and in part mindedness–holistic or narrow.
Mathematicians as accomplished as Stewart, and interested as widely across disciplines, are amazing people. But they are fewer and further between in worlds that constrain choice and narrow perspective. His book is a lifetime of application and an invaluable songline of links among the people and disciplines he has connected with. His academic awards and prominence in communication of science attest to that. He is an extraordinary polymath. Further big questions arise after ‘What is life?’ which he reframes into the question, ‘What does life do?’ This is also a question of how.
Stewart’s vision is of a unified theory of deep mathematical laws behind growth and form. How might we connect such a powerful and persuasive pattern of knowledge, in depth, with living systems and social life, in breadth? How can we picture such a time? Does society of the Information Age exhibit such mathematical patterns of instability and breaking symmetry–and what might mathematics tell us about the patterns of its future symmetry and stability?
2007–Douglas Hofstadter: I Am a Strange Loop
Hofstadter is another polymath inukbook author, spanning computer and cognitive sciences. His quest as a cognitive scientist was first expressed in the symphony in numbers, pictures and music of the book that made his name: Gödel, Escher, Bach.62 In that book, he explored patterns that persist through mathematics, art and music, associated with human creativity. Hofstadter’s Law states that ‘It always takes longer than you expect, even when you take into account Hofstadter’s Law’. It is a clever play on self-reference, and he sees analogy with self-reference in human consciousness.
In I Am a Strange Loop, he embraces an imaginative conjecture about the nature of conscious thought, seen from the perspective of computer science. Drawing on the theory of Gödel numbers and self-reference in logical statements, he makes the analogy of the concept of ‘I’, and its expression of consciousness, as a self-referential system. Cogito, ergo sum, as it were. Hofstadter describes the brain as a ‘careenium’ billiard table with ricocheting ‘simmballs’–a play on cranium and symbol.63 He pictures symbols as the driving force of brain function, top down, rather than the physics of cells and signals driving bottom up. I mentioned the synergy I discovered between his ideas and those of Young and his imagined programs of the brain. The ‘I’ drives human consciousness. Is it real? Can multiple ‘strange’ self-referential loops occupy a brain? Does the strange loop illuminate feelings?
Hofstadter answers that consciousness is the dance of symbols in the brain; consciousness is thinking–cogito, ergo sum.64 He discusses how the dance of symbols enables the brain to simplify its models, while holding on to the essence needed to operate in the world.65
2012–John Scales Avery: Information Theory and Evolution
Avery’s book is included for its coverage of both historical and mathematical detail, brought together, now, in a second edition. It brings the discipline of theoretical chemistry to the table, joined with those of mathematics, physics, computer science and bioscience.66
The scope of the book is the most wide-ranging of all the selection here. It has chapters on Charles Darwin’s life and work, molecular biology and evolution, statistical mechanics and information, information flow in biology, cultural evolution and information, information technology and bio-information technology, and a glimpse into the future. There are appendices on entropy, information, biosemiotics and economics.
2015–Nick Lane: The Vital Question
The biochemist Lane both challenged and inspired with his book–just read the superlatives of those who reviewed it across the world. The book is another breath-taking tour de force among my inukbooks. His vital question–why is life the way it is?–is a grand challenge, and the book a UCL-centred story of wide-ranging collaborations of people, connections of discipline and synthesis of insights that he presents as evidence for his answer. It resonates with Young’s story of the programs of the brain, but is a contrast, with Lane still at a formative stage of his research career, intent on new questions and experiments and not yet ready for the grandee status of the ‘philosopause’. He was awarded patronage and given free rein to explore his ideas at UCL, resulting over the following six years in the publication of this book. It touches only lightly on the concept of information, and I include it here to provide context for and balance with the more information-centred perspectives of my other inukbooks.
Reading it took me back to the first time I listened to the British chemist Leslie Orgel (1927–2007) describing his experiments directed towards understanding the origins of life, in the years from 1964, working under the blue skies of the Chemical Evolution Laboratory at the Salk Institute in California.67 There he created experimental simulations of lightning in a primeval atmosphere and observed spontaneous creation of organic molecules.
Lane is refreshingly bold and clear in setting out his wares: ‘Few biologists are more than dimly aware of the black hole at the heart of their subject’, he says. Contrasting the billions of dollars now spent each year in measuring and unravelling the complexity of systems of genes, proteins and regulatory networks, he asks ‘How can we hope to understand disease if we have no idea of why cells work the way they do?’68 He sees the understanding of how the component parts of living systems evolved as biology’s grand challenge, and the book as his attempt to frame and start a journey into this conceptual black hole. This is quite an opener!
Lane describes the living cell in the language of energy and bioenergetics. He places this description alongside, and in contrast to, what he sees as the present-day over-preoccupation with genetic determinism. The structures and functions of living cells are described in terms of protons and electrons and the electrical potential field gradients that shape and facilitate their flux–across membranes, through internal spaces, in exchanges with local environments, along cascades of connected transport mechanisms and chemical reactions that power and enact living processes. He describes a landscape characterized by energy equilibria and disequilibria.69 His forte and focus is evolution and the stages through which the components and behaviours of living systems came to be. He quotes the biochemist Albert Szent-Györgyi (1893–1986), who observed that ‘life is nothing but an electron looking for a place to rest’.
Lane is challenging, in the spirit that his UCL sabbatical award was designed to encourage. He talks about textbooks and journals that fail to engage with the question at this basic level and an Internet that overloads and swamps with ‘indiscriminate facts, mixed with varying proportions of nonsense’.70 There is, he says, a huge knowledge base about natural selection and random processes that ‘sculpt genomes’, all consistent with the evolution of cells. Adding, however, that this ‘encyclopaedia’ of facts and knowledge becomes a ‘straitjacket’ when it fails to address the question why life took the course it did. Bioenergetics is the theme and unifying signal he is seeking to identify and tune from within the resulting noise.
The book places its subject within a firm historical and evolutionary context, quoting Crick and Watson’s Nature paper of 1953.71 He quotes their conclusion that ‘It therefore seems likely that the precise sequence of the bases is the code which carries the genetical information’, asserting that ‘that sentence is the basis of modern biology today. Biology is information, genome sequences are laid out in silico, and life is defined in terms of information transfer’. He dissents from this worldview, and, in support of his own, which is centred on bioenergetics, responds as follows: ‘Well, biology is not only about genes and environment, but also cells and the constraints of their physical structure, which we shall see have little to do with either genes or environment directly. The predictions that arise from these disparate worldviews are strikingly different’.72
Lane is fulsome and careful in his acknowledgements of landmark revolutionary contributions from the 1960s of Lynn Margulis (1938–2011), Carl Woese (1928–2012), Peter Mitchell (1920–92) and Bill Martin. The book describes stepping-stones towards a synthesis of knowledge about life and living systems in the unfolding and connecting stories of biology, biochemistry, biophysics, biomathematics and bioinformatics. It adopts a position close to that of Stewart, in its downplaying of genetics and emphasis on the spatial and physical pathways and constraints underlying the components, shaping the patterns in and through which life and living systems have evolved.
Lane’s team is intent on building new experimental bioenergetic devices for his research. His focus is that of an experimentalist; he is not tuned to mathematics and models, and he is firmly in the camp of doubters that information holds the key to answering his question. He maintains: ‘If life is all about information, these are deep mysteries. I do not believe this story could be foretold, predicted as science, on the basis of information alone. The quirky properties of life would have to be ascribed to the contingencies of history, the slings and arrows of outrageous fortune. We would have no basis for predicting the properties of life on other planets’.73
Lane picks up on Schrödinger’s book, saying, from seventy years on, that it ‘asked the wrong question altogether. Add in energy, and the question is much more telling: What is living?’74 This rhymes with Stewart’s disdain for genetic determinism and his reframing of Schrödinger’s question about what life is, into a question about what life does. The categorical phrase ‘wrong altogether’ surprises, here–incomplete is certainly true, but Schrödinger did himself recognize that he would better have discussed the tendency towards disorder in terms of Gibbs energy rather than negative entropy, and he was, as Lane acknowledges, reasoning very far ahead of present-day knowledge of bioenergetics, played out in proton and electron transport in the cell. Schrödinger’s incisive analytical mind would have had much to say about bioenergetics in the context of present-day bioscience. And, certainly, neither quantum tunnelling nor the discussion of entropy and order would have found their place in Lane’s book, unshaped by Schrödinger’s scientific legacy, which endures to this day.
In setting out his own scope, Lane acknowledges of Schrödinger that:
When he was writing, nobody knew much about the biological currency of energy. Now we know how it all works in exquisite detail, right down to the level of atoms. The detailed mechanisms of energy harvesting turn out to be conserved as universally across life as the genetic code itself, and these mechanisms exert fundamental structural constraints on cells. But we have no idea how they evolved, nor how biological energy constrained the story of life. This is the question of this book.75
Elsewhere, he traces the rise of genetic determinism to Schrödinger’s aperiodic crystal and its ‘code-script’, saying ‘Yet DNA, the beguiling code-script which seems to promise every answer, has made us forget Schrödinger’s other central tenet–that life resists entropy, the tendency to decay’.76 Lane sets out a fascinating and persuasive case for bioenergetics as a complementary core discipline of biology. He looks at evolution in terms of core mechanisms that power and enact living systems and why and how these came to be. It is a story of sunlight, minerals, water, carbon dioxide and other key molecules; of carbon, hydrogen and oxygen, and other key atoms, bound together in organic matter; and of protons and electrons, pumped and pumping across membrane barriers, shunted and shunting along chemical pathways and across distances within the cell and between cells. It abuts quantum tunnelling of electrons over Angstrom distances in the cell but the journey of physical reality into quantum entanglement and computation is not (yet!) in his scope. So, the probing of linkage with other domains of mathematics, computer science, information, mind and intelligence, which absorbed the writing of others of my inukbook writers, here, does not feature.
Lane writes vividly, of pumps and pulsating power stations as metaphors for the systems of the cell, likening them to the engine and engineering that underpin life and living. He writes of the water molecule splitting into a two-electron supplemented oxygen ion, eager to offload those electrons once more, and two protons, each eager to share again an electron partner. It is a scientific story of energy gradients and fluxes, of protons and electrons, oxidative (electron loss) and reductive (electron gain) chemical reactions, and chains of chemical reactions that pump and channel electrons through conformational and related energy state changes of proteins, mediated through permeability and impermeability of narrow membranes. With millivolt electrical potentials, these protons exert forces over short distance through potential gradients equal to those of lightning, in trillions and quadrillions of events within every cell. Protons are pumped across membranes and release energy back in cycles. Electrons flow in cascades, channeling and energizing this flux of protons and energizing molecules and reactions that channel and enact the chemistry of oxidation and reduction, whereby the body feeds, breathes, lives and works, and where the dictates of physics and constraints of mathematics are followed, and order is preserved.
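Lane’s comparison with lightning can be sanity-checked with a rough calculation. The figures here–about 150 mV across a membrane roughly 5 nm thick–are illustrative textbook values for a mitochondrial membrane, not numbers taken from his book:

```python
# Back-of-envelope check of the electric field across a cell membrane,
# which Lane compares with lightning. Values are illustrative assumptions.
membrane_potential_v = 0.15   # ~150 mV across the inner membrane
membrane_thickness_m = 5e-9   # ~5 nm lipid bilayer

field_v_per_m = membrane_potential_v / membrane_thickness_m
print(f"Field across membrane: {field_v_per_m:.1e} V/m")

# Dielectric breakdown field of air, at which lightning can strike:
air_breakdown_v_per_m = 3e6
print(f"Ratio to lightning threshold: {field_v_per_m / air_breakdown_v_per_m:.0f}x")
```

On these assumptions the membrane field comes out at about 3 × 10⁷ V/m, of the order of ten times the breakdown field of air, which is the sense in which millivolt potentials over nanometre distances rival a lightning bolt.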
Lane’s central question is rhetorical, but its challenge to the scientific relevance of information is important–in what kinds of way will the answer matter, in practical terms? How does lack of knowledge of what lies within the ‘conceptual black hole’ that Lane identifies in biology impact on the science and practice of medicine, where action or inaction are central concerns? The worry expressed implicitly in Lane’s critique of biological dataism (it used to be dismissed as stamp collecting, I recall!), in its framing of the science of medicine, is that, devoid of an answer to his question, it spends too much on acquiring detail that explains wrongly or inadequately, and thus guides action inappropriately and achieves too little. But bioinformatics has transformed the capacity of medicine to do better–in designing and proving vaccines in record time, for example. Time will tell whether a different worldview, focused through the lens of bioenergetics, might direct attention differently and better. It seems that the two perspectives are mutually consistent and differently, but both usefully, explanatory–rather as, in my time studying these matters, quantum mechanical and liquid drop models of the nucleus cast differently useful light on experimental data in nuclear physics.77 Reading this book again made me think of Robert Oppenheimer’s (1904–67) remarks about complementarity, in his 1953 Reith Lectures, as discussed in the introduction of Chapter One.
2019–Marcus du Sautoy: The Creativity Code
The mathematician du Sautoy combines a razor-sharp mind with wide-ranging and penetrating vision. He wears distinguished hats, as both Professor of Mathematics and Professor of Public Understanding of Science at Oxford. His books What We Cannot Know and The Creativity Code engage, entertain and educate a very wide audience.78 He keeps his focus humbly in the world of knowledge and understanding and asks questions to elucidate issues. His writing style is not quite as magisterial and combative as that of his predecessor in the latter role, Richard Dawkins, who, equally brilliant and incisive in his field of human evolution, appeared rather to stoke and revel in controversy. The Selfish Gene and The God Delusion challenged and expressed strong views on religious belief about what we do not, and possibly also cannot know.79 Du Sautoy does not enter this territory as a gladiator–Dawkins agitated and du Sautoy soothed.
The Creativity Code explores beyond machine intelligence to machine creativity and identifies creativity as a product of the conscious mind. He asks whether machine intelligence can move to machine creativity. Its accomplishments in games, writing, painting and music-making are now of a quality that passes empirical test, whereby a blinded observer might take them as exhibiting human creativity (just as, early on, machine intelligence was judged by whether a blinded user concluded that they were interacting with a human being, rather than a machine). Joseph Weizenbaum (1923–2008), a founder of modern-day AI who I introduce in Chapter Seven, demonstrated with his ELIZA program that quite simple program heuristics were sufficient to dupe human users into behaving as if they were in conversation with another human. And Cass Sunstein, who comes on the scene in Chapter Nine, reasoned persuasively about how humans easily fall foul of bias, through dysfunction of the groups deliberating issues and problems, failing to share information effectively, and propagating bias because of perceived reputation, expertise and behaviour of actors.
A Pause for Reflection
Both Schrödinger and Davies, whose inukbook is discussed next, take their physics to the level of the workings of the mind, and park their thoughts in the realm of unknown physics, and that which is possibly unknowable. Davies and Stewart both chart a still largely hidden pathway towards greater understanding, beyond the level of the machinery of the cell to the properties of networks of cells and organs, and to the symbols they operate and function with and are constrained by, described in language of mathematics and information theory.
Just as the logic and operation of a high-level computer program is implemented through a computing machine and its machine code, so nervous systems and brains are described in terms of machinery of life and living systems. The functions and capabilities of a computer program depend for their enactment on the functions and capabilities of the computer machine on which they run. One can go back further and observe the design that this machine embodies and its component electrical circuits, described in the language of logic gates, filters, rectifiers, amplifiers and so on. One can dig deeper and observe the electrical potentials and currents, and the component resistors, capacitors, inductors and transistors comprising each electrical circuit of the computer machine. And from there we can step down to description of each such component in the language of the physics of electromagnetism, which determines electron flow throughout.
Abstraction of the biological machine as information circuit is analogous to abstraction of the electrical circuit above the electrical component, and of the computer program above the computer machine: each abstraction informs understanding of designs and integrated functions. Young uses the analogy of a computer program to describe the function of the integrated nervous system and brain machine. In cell biology, it is the information network of genes that Davies looks towards, as the higher level of abstraction required, over and above the machinery of cells and organs, to understand their integrated function. There is then another jump to understand all of this, again as machinery, when taken to the level of the conscious and creative mind.
Hofstadter rose to this level in I Am a Strange Loop. What it is to be conscious, intelligent and creative are symbols in play within his concept of ‘I’; symbols ricochet on his mental billiard table and round his ‘Strange Loop’ of consciousness. They are waves flowing in an ocean of ideas and appearances. We map and describe them with functional MRI and locate and ascribe them to regions of the brain machine, much as we might track the symbols manipulated in a computer program to the active registers, memory addresses and logic circuits of the computer machine or to the regions of the cell and the machinery of cell function that Lane describes. With Davies, we inch upwards to information networks and the functions and meanings they drive and oversee. We piece together the sailors, the ships, the waves and the oceans, and seek to infer about charts, weather, voyages and storms. There is an interesting experience to learn from in the design of computer processor chips. These are now so minutely complex and extensive that no human designer understands them in full–the computer manages and guarantees the whole. Perhaps human understanding of the biology of living systems will reach a similar tipping-point, too.
2020–Paul Davies: The Demon in the Machine
Davies has taken on the mantle of guru connecting theories of information, life and mind, from the perspective of physics. The timely publication of this book enables my collection of inukbooks, here, which started with Schrödinger, to end with Davies. His periscope peers further into the medical science of the future, projected as an information science and technology–in this perspective, gene therapy arguably already is.
This inukbook is the most fluent and up-to-date account that I know of, explaining how information is defined and measured experimentally, and how, as a concept, it has gravitated to the centre-ground of theory of physical and biological science. Indeed, some envisage an information theory connecting concepts spanning from mathematics and physics to cellular function of the body, and even beyond, to social and economic domains. As he observes, ‘the challenge to science is to figure out how to couple abstract information to the concrete world of physical objects’.80
Working from Shannon’s 1949 paper on The Mathematical Theory of Communication, which he describes as ‘a pivotal event in science’, he gives a powerful example of its application to the information content of DNA:
Every cell in your body contains about a billion DNA bases arranged in a particular sequence of the four-letter logical alphabet. The number of possible combinations is 4 raised to the power of 1 billion, which is 1 followed by about 600 million zeros. Compare that to the paltry number of atoms in the universe–one followed by about 80 zeros. Shannon’s formula for the information contained in this strand of DNA is to take the logarithm, which gives about 2 billion bits–more than the information contained in all the books in the Library of Congress. This information is packed into a trillionth of the volume of a match head. And the information contained in DNA is only a fraction of the total information in a cell. All of which goes to show how deeply life is invested in information.81
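Davies’s arithmetic follows directly from Shannon’s formula: the information content of a message drawn from equally likely possibilities is the base-2 logarithm of their number. A minimal sketch reproducing his figures:

```python
import math

# Reproducing Davies's numbers: a billion bases, a four-letter alphabet.
n_bases = 1_000_000_000
alphabet = 4

# The number of possible sequences is 4**n_bases; its decimal length is
# n_bases * log10(4) digits, i.e. roughly 600 million zeros.
decimal_digits = n_bases * math.log10(alphabet)
print(f"~1 followed by {decimal_digits / 1e6:.0f} million zeros")

# Shannon information: log2 of the number of combinations.
# log2(4**n) = 2n, hence 2 billion bits.
bits = n_bases * math.log2(alphabet)
print(f"Information content: {bits / 1e9:.0f} billion bits")
```

Each base carries log₂ 4 = 2 bits, so the billion-base strand carries 2 billion bits, exactly as the quoted passage states.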
The demon of his title is the Maxwell demon–after James Clerk Maxwell, the physics great–reenvisaged in the light of thought experiments of Leo Szilard (1898–1964) and Rolf Landauer (1927–99). He explores the three-way trade-off, as he describes it, between information, work and heat energy, and the ways in which information shares some of the properties of energy. He poses the question ‘So is information real, or just a convenient way to think about complex processes?’ and finds, ‘There is no consensus on this matter, though I am going to stick my neck out and answer yes, information does have a type of independent existence and it does have causal power’.82
Citing theory of the generation of entropy in the erasure of information, and recognizing the continuing controversies about its meaning, he goes on to describe an imaginary device called an ‘information engine’, as imagined by Christopher Jarzynski and colleagues,83 that ‘systematically withdraws energy from a single thermal reservoir, delivers that energy to lift a mass against gravity, while writing information to a memory register’. This is theory inching towards David Deutsch’s causative power of information, in an area of fundamental principles of science. He brings together examples from the realm of nanoscale engineering that are likewise inching towards real information engines–‘applied demonology’, he calls it!–reporting conversion of information into energy with twenty-eight percent efficiency and envisaging a future nano-engine running on ‘information fuel’. Quantum computing has also entered the world of statistical thermodynamics and information, using entangled particles to induce heat flow from a colder to a hotter system.
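The entropy cost of erasure that Davies cites is Landauer’s principle: erasing one bit of information must dissipate at least k·T·ln 2 of heat. A minimal sketch of the size of that limit, taking 300 K as an illustrative room temperature:

```python
import math

# Landauer's principle: minimum heat dissipated in erasing one bit.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature in kelvin (illustrative value)

e_landauer = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_landauer:.2e} J per bit")

# For scale: the theoretical minimum energy to erase one gigabyte.
joules_per_gigabyte = e_landauer * 8e9
print(f"Erasing 1 GB at the limit: {joules_per_gigabyte:.1e} J")
```

The limit is minuscule, around 3 × 10⁻²¹ joules per bit, which is why Davies can treat information, work and heat as entries in a single three-way trade-off rather than as separate currencies.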
Davies lays out his conception of the future digital doctor as follows:
The study of information flow and information clustering would provide a diagnostic tool far more powerful than the battery of chemical tests used today. Treatment would focus on establishing healthy, balanced information patterns, perhaps by attending to, or even re-engineering, some defective modules, much as an electronic engineer (of old) might replace a transistor or a resistor to restore a radio to proper functionality.84
He reasons beyond the DNA triplet code towards a higher-level computer language of life. Just as software engineers use higher level language and have left the underlying binary machine code far behind, so he suggests that:
The cell as a unit operates at a much higher level to manage its physical and informational states, deploying complex control mechanisms. These regulatory processes are not arbitrary but obey their own rules, as do the higher-level computer languages used by software engineers. And just as software engineers are able to reprogram advanced code, so will bioengineers redesign the more sophisticated features of living systems.
Paul Nurse, founding Director of the Francis Crick Institute, linked with UCL, was awarded the 2001 Nobel Prize in Physiology or Medicine, with Davies’s colleague Leland Hartwell, and his UCL colleague Tim Hunt, for their work on gene networks that control the cell cycle of yeast. This approach has evolved into the tracking of patterns of information flow in gene networks. These appear to follow their own rules, independently of what Davies calls the ‘circuit topology’–like the function performed by a computer program bearing little relationship to the hardware on which it is run. ‘Only if something goes wrong is it necessary to worry about the actual wiring’, he says. And then, ‘Cells are beginning to look like bottomless pits of complexity. The discovery of all these causal factors which are not located on the actual genes is part of the field known as epigenetics. It seems that epigenetics is at least as important as genetics as far as biological form and function are concerned’.85
In drawing to his conclusions, Davies writes:
Looking back over the past 3.5 billion years, the origin of life was the first, and most momentous, transformation. However, the history of evolution contains other major transitions, critical steps without which further advance would be impossible […] Eukaryogenesis, sex and multicellularity: all involved marked physical alterations. But the true significance lay not with changes in form or complexity but with the concomitant reorganization of informational architecture. Each step represents a mammoth ‘software upgrade’. And the biggest upgrade of all began about 500 million years ago with the appearance of a primitive central nervous system. Fast-forward to today, and the human brain is the most complex information processing system known. From that system stems what is undoubtedly the most astonishing phenomenon of all in life’s magic puzzle box–consciousness.86
His epilogue appropriately quotes Einstein as its banner: ‘One can best feel in dealing with living things how primitive physics still is’.
The Magic Mirror of Maurits Escher
I conclude my landmark book list, rather quirkily, perhaps, with The Magic Mirror of M. C. Escher, by Bruno Ernst, which is rich in symbolism of life.87 The eminent mathematician Roger Penrose engaged with Escher’s visual paradoxes, so I feel in good company in the way I, too, use these images. The historian Norman Davies, who featured in the Introduction, wrote of the importance of presenting history through art and poetry, saying that the historian must collate the widest range of sources, that every source of information is a distortion, and absolute objectivity unattainable: ‘Every technique has its strengths and its weaknesses. The important thing is to understand where the value and the distortions of each technique lie, and to arrive at a reasonable approximation’.88 Mervyn King wrote about the importance of getting to grips with complex issues of monetary policy, through storytelling as much as analysis. Effort to understand, live with and work through complex problems can also benefit from this breadth of approach. And communication of personal health care histories, through narrative, adds important detail and context that goes beyond data and analysis.
Works of art are expressions and experiences of life, connecting artist, subject, media and audience. They are works of hedgehogs and foxes–those who connect deeply and those who connect widely, in Isaiah Berlin’s (1909–97) classification of great authors. Some are simple, some very complex. Our mappings of knowledge and reason with health care systems and services are partial pieces of an unclear picture puzzle. The pieces fit together up to a point, shaped by our vision of the picture itself. The description and relevance of the pieces may change and fit together differently, according to a different vision of the picture. We talk about a picture of health. In all connections there is limitation and imprecision of representation and reason, and bias in presentation and interpretation.
One of my great friends during my nearly twenty years at Bart’s was the medical artist Peter Cull (1927–2012). He had spent some of his early career working in Africa and had a highly individual artistic style, producing striking pictures of the human anatomy of disease, to complement pathology museum sample collections. He and his colleague David Tredinnick (1922–2005) were leaders of their time in the medical artist and medical photography professions.89 Before the computer era, preparation of slides for projection at meetings used the skills of both these arts. I worked closely with them in finding a pathway for each of their departments into the computer age, which was a tricky and sometimes fraught professional transition. We worked together on computer-based learning, for which Peter’s team produced illustrative visuals of complex clinical procedures for display within the software, and on computer methods to support the creation of thirty-five-millimetre projector slides and computer archives from the huge photographic image collections used for teaching and publication.
I discovered from them the power of visual image in illustrating the wicked problems of health informatics. For example: reconciling privacy concerns with the need to aggregate and share data; central and global versus distributed and local policy for implementation of IT systems; market-based versus politically mandated adoption of data standards. In this quest, I alighted on the mathematically inspired woodcuts and lithographs of Escher. These play with competing geometrical perspectives within a single design. For me, they illustrate competition among different perspectives in a more human way. Escher was loved by mathematicians but ostracized for most of his career by the arts community. Here I give a wider appreciation, drawing on the published compendium of Ernst.
The idea of using these images came to me when planning the presentation of the GEHR project to the final conference of the AIM 2 Programme, in Brussels in 1993. I decided to show two of them. The first was Ascending and Descending (1960), depicting two groups of people: one walking up a staircase around the outside of a tower and the other walking down the steps. The geometry of the tower was an illusion.90 The successive flights of stairs that the eye is tricked into following connect from top to bottom. The ascenders reaching the top emerge onto the bottom of the staircase, and the descenders reaching the bottom emerge at the top. I used this image to parody the purposeful activities of standards makers and innovators in health care IT of those times: all striding purposefully, passing one another on the steps at each circuit round the tower, and neither ascending nor descending. The need was for a realistic staircase on which there could be common ascent, so that their two essentially connected endeavours could play out in concert.
The second slide, which I introduced in Chapter Five, was the illusion of Drawing Hands (1948), depicting two hands: each clasping a pencil, and each appearing to be drawing the other hand. I used this when talking about medicine and information technology, saying that the study of medical informatics was a middle ground where insights from information technology were leading to new methods in medicine and the challenge of accommodating the complexity of medicine was leading to new methods of information technology. This was a visual metaphor for the coevolution of information technology and health care, with each in some part writing the other’s story.
After the talk in Brussels, the stand we had set up in the conference exhibition to show the work of the project was thronged by members of that audience, several hundred strong, coming to see what we had done in creating the GEHR (Good European Health Record) architecture for electronic health records. Many came to talk to me about the impact of the Escher slides I had shown.
Here are some further artistic metaphors I have found in other Escher images, illustrating themes encountered along the songline of the book.
Tower of Babel (1928) illustrates the confusion of tongues.91 A building progresses upwards towards ever-increasing structural instability, with increasing panic of bricklayers at each higher level, mushrooming from a narrow base into a toppling upper edifice. Woe is the tower of hundreds of thousands of terms in medical terminology!
In Relativity (1953), Escher builds three separate perspectives of a hallway and connecting staircases into a single incongruous whole.92 Woe is the information engine combining multiply redundant information models in one program! In Up and Down (1947), he depicts a courtyard viewed from high up and at ground level in a single integrated image.93 It shows a small boy looking up and his parent looking down. Woe is the lot of the clinician, seeing and coping with the world they connect with at ground level, constrained to work within an architecture framed by the helicopter view of an information system designer or service manager peering down from on high! In other images, Escher illustrates information systems where information cannot flow (Waterfall (1961))94 and an organic information system at once bounded and infinitely variable (Circle Limit III (1959)),95 discussed further below.
The Singularity
A landmark on the other side of the transition into the Information Age is embodied in the concept of ‘the singularity’. It is the topic addressed by Ray Kurzweil in his book The Singularity Is Near: When Humans Transcend Biology; the term refers to the point in evolution when information technology can model and mirror biology at scale and detail that matches human form and human reasoning capacity.96 This vision is also described by James Lovelock, author of The Ages of Gaia, in his 2019 book Novacene,97 which I return to in Chapter Ten.
Kurzweil credits von Neumann as the first to envisage this point in the evolution of life.98 The power and significance of such a vision are now highly influential in conjecture about the rapid evolution of AI. The mathematician Roger Penrose is a notable doubter that such a stage of evolution can or will ever occur. He was right about the emergence of black holes (another kind of singularity), as his recent Nobel Prize attests. However, the nature of consciousness and mind remains a controversial matter, in philosophy as much as science, and is probably not an early candidate for Nobel Prizes!
Many predict and imagine this future reality. Novelists like Ian McEwan and Kazuo Ishiguro write and worry about it.99 Along my songline, the analogy of the evolution from the early chess-playing machines to AlphaGo, AlphaFold and ChatGPT has been instructive. The human brain employs a somewhat nonlinear calibration of the quality of its own achievements compared with those of the machine. Early efforts amused and attracted derision. Machines performing nearer to human capability were still judged rather dense. When starting to perform competitively with humans, they quickly caused alarm and sweat! And passing beyond that stage, they progressed to inspire human awe, when they started to beat human experts and win world tournaments, communicate with humans in fluent natural language, solve hitherto intractable puzzles, and baffle and bemuse us. Today, quantum computation is forecast to solve, in seconds or minutes, hitherto intractable combinatorial problems that would take today’s most powerful mainframes many thousands of years.
We may speculate how machines that win at the game of Jeopardy and can instantly synthesize material collected from across encyclopaedias of modern-day knowledge will influence health care over the next fifty years and beyond. How will the health care revolution of today appear, looking back, Edward Gibbon-like, should we get to such a reflective place, from hundreds of years ahead? How will the science and engineering, and the beliefs, myths and magical thinking of our own age stand up? As von Neumann purportedly speculated, when foreseeing the singularity where machines overtake humans, ‘The ever-accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’.100
My inukbooks on the future to come are numerous and varied. They bridge technological and human perspectives. McEwan wrote Machines Like Me to explore the interface and relationship between humans and cyborg beings. Du Sautoy wrote The Creativity Code to re-explore ideas of creativity in thought and art: especially pertinent, since machines have begun to extend through and beyond winning at chess, Go and Jeopardy, to paint pictures and make music, as well as solve mathematical problems and mimic verbal and literary styles. Maddox’s What Remains to Be Discovered, written on his retirement as Editor of Nature, is about the extraordinary stretches of mind and imagination up to the Year 2000, and what lies beyond.101 The magic of the present era is captured in Marcus Chown’s Infinity in the Palm of Your Hand.102 Escher’s Circle Limit III (1959) woodcut is a striking image of life and art that is finitely constrained and infinitely variable.103 On the Future: Prospects for Humanity, by the UK Astronomer Royal and former President of the Royal Society Martin Rees, is a 2018 tour d’horizon of what may lie ahead.104 He was kindly and indulgent in signing copies for me and all my children, as Christmas presents, when he spoke at New Scientist Live last year. And most recently, I have Lovelock’s Novacene vision of future artificial intelligence, which may prove our making or breaking as humans, or hopefully our remaking.
Parenthesis–Information Policy
The duty of tolerance is our finite homage to the abundance of inexhaustible novelty, which is awaiting the future, and to the complexity of accomplished fact which exceeds our stretch of insight.105
This quotation reflects a patient and careful perspective on how we should approach the future: a tolerant balance of speculation and scepticism. How well does it stand up against experience of the dramatically anarchic transition of society through the Information Age? How should it be reflected in the policy adopted for addressing and coping with the potentially breaking changes in the everyday practice of its health care services, poised midway in this transition?
Insurance policies help protect us from future downside risks that we could not face alone. The actuaries who underwrite insurance policies are good at mathematics and cautious about risk. They are brainy people. A close friend from student days switched from a PhD unravelling numerical solutions of Schrödinger’s equation to a stellar career as an actuary. They know a lot about life and death, know what they cannot know, and are seasoned accordingly to act wisely, cautiously and safely. If not, their insurance policies fail.
Information policy for health care needs a careful balance of both upside potential and downside risk, combining imagination, creativity, realism and caution. In this, it must combine the Barack Obama audacity of hope and the Mervyn King audacious pessimism that have already cropped up several times in the book. The influential philosopher Antonio Gramsci (1891–1937) famously wrote of pessimism of the intellect and optimism of the will. To cope well, information policy must be informedly pessimistic about downside risk and optimistically determined about upside potential. Such policy can fail, too–overestimating future benefit and underestimating related harm, and proving inadequate to the task in both capability and will. We need an information policy for health care that readies us as a society, as best possible, to cope with and use gainfully the unfolding insights of life and medical science and new information technology, and their impact on the health care needs and services of the Information Society. These fields will likely continue as an unfolding anarchy, through transition still to come.
Alfred North Whitehead talked of the anarchy of transitions, but experience of the Information Age might have blown even his tolerant and sanguine mind off course. Would he have been with George Orwell (1903–50) and Aldous Huxley (1894–1963) in imagining a technology that became a vehicle of malign official censorship, restricting access to information to control and enchain society, or that conditioned human life to become trivialized and egotistical, surrounding itself and drowning in a sea of false, misleading and irrelevant information? What would he have had to say about the downside risk combined with upside potential of a universal communication network? That it would give rise so swiftly to global cybercrime, political manipulation and titanic battles in the law courts? That academia from the east to the west coasts of the United States would inflate and conflate so rapidly into the global powerhouses of IBM, Apple, Microsoft, Google, Meta and Amazon? That machines would master chess and Go, and unfold maps of the molecular biology of life, playing out in a Cloud of calculating machines and data stores persisting deep in the sea? That computers would fluently translate language, mimic literary and artistic style and content, and write program code, as they now do?
This chapter has ranged widely over the theory of information and the science of life. It has collected diverse ideas and perspectives: order and disorder, equilibrium and disequilibrium, symmetries and broken symmetries, and the emergent behaviours of complex systems. It has connected insights from the mathematical, physical, biological and computer sciences. It has traced hierarchies of abstraction, ascending towards description of biological systems and their functions in terms of information networks and an integrated nervous system rising into the conscious mind. It has gone a long way beyond DNA coding sequence characterized as information, and life thought of, by analogy, as combining the characteristics of the Turing machine, an abstract model of computation conceived in the 1930s, and the von Neumann universal constructor, an abstract model of self-replicating cellular automata conceived in the 1940s.
AI and quantum computation dangle promises of unfathomable new worlds of insight and capacity that will outshine everything we currently know and experience. The subconscious and conscious processes of the human brain and mind shape and determine the actions whereby they pursue their purposes and goals. Discoveries in the virtual world are connecting the human body with information that flows within and around it and characterizes its functions. We are seeing ever closer connections of these real and virtual worlds.
How these insights and capacities will connect with everyday health care in the future Information Society is unknown, but potentially highly consequential. There is much speculation and a mix of perspectives about future health care. These face towards the concern to remedy the health inequalities summarized in the Michael Marmot Reviews, which have provided a modern-day overview in the same spirit that motivated William Beveridge (1879–1963).106 They face, in parallel, towards the prospect of radically improved prevention, surveillance, mitigation and treatment of disease, powered by new therapeutic interventions made possible by the science of the Information Age, and the prospect of a coming era where AI exceeds the capability of humans.
The interaction of information with health care impacts immediately and personally on each of us and those we care for, who in turn care for us. It has immense professional, societal and economic contexts. Much of today’s information policy has focused on improving the industrial age of hospital medicine and linking this, from the top down and centre outwards, with primary care, and on new technology, such as population-level informatics and artificial intelligence, that remains to be created, implemented, proven and adjusted to at scale. And, all the while, unsafe environments and behaviours impair the balance, continuity and governance of the health care of citizens, making some of their lives, cumulatively, rather worse. This policy focus leaves more immediate service needs relatively unattended and unmet, yet still to be coped with, burdening services nonetheless.
Information policy for health care in the UK has been established in what has often been a high-level battleground of related professions, services and politicians. Solutions that cohere and are sustainable have not come, and likely will not come, from the top down in this way. They will be created by the enablement of individuals, organizations and industries, with goals focused through closer engagement with the wishes and needs of citizens and their local communities, connecting outwards and upwards, iteratively and incrementally. Complementing this local endeavour, national information policy should best be directed towards enablement and governance, focusing on the discovery, support and protection of a new common ground of values, principles, methods and approaches to care services, supported by a care information utility that reflects and mirrors them. This poses challenges that can only be tackled with methods that bridge local and global scale.
Having embarked on this connecting chapter into Part Two of the book, describing transition in understanding of both information and life in the Information Age, the storyline of the book now comes back down to earth with a bump. Chapter Seven focuses on the transition of health care services in the Information Age, and the role of information technology in services that guide, enable and support health. Reflecting, again, Deutsch’s sense of information as a causative agent, we must consider the wider context of what things we need and wish to create or cause to happen, how new knowledge and capability can help and equip us to make them happen, how we choose to generate and use information to these ends, and how well we are succeeding. These comprise one question of what, and three questions of how. Current dilemmas have substantially derived from policies preoccupied with conjecture about what might be; yet, over decades, there has been a consistent lack of attention to learning about and improving the how of its becoming. It has been a costly and bumpy transition.
1 Sometimes also attributed to Samuel Johnson (1709–84).
2 D. Adams, The Hitch Hiker’s Guide to the Galaxy: A Trilogy in Five Parts (London: Random House, 1995).
3 On this Polish expression, see Preface.
4 The name here connects to the experiments, a hundred years ago, of the Indian physicist Chandrasekhara Venkata Raman (1888–1970), on the interaction of phonon and photon in the scattering of light in the solid state, for which he was awarded the Nobel Prize in 1930. I encountered the quantum theory of the Raman effect in my university days. Its place now, in a robot-controlled experimental laboratory on Mars, is astonishing to think about! There is a man still alive in Japan today who was a teenager when Raman reported his findings.
5 N. Drake, ‘Alien Hunters Detect Mysterious Radio Signal from Nearby Star’, National Geographic (18 December 2020), https://www.nationalgeographic.com/science/article/alien-hunters-detect-mysterious-radio-signal-from-nearby-star
6 N. N. Taleb, The Black Swan: The Impact of the Highly Improbable (London: Random House, 2007).
7 Just this month, as I write, the classical historian Robin Lane Fox, a fellow student of mine at Magdalen College, University of Oxford, published The Invention of Medicine (London: Penguin Books, 2020), tracing this story to manuscripts of Hippocrates dating from an earlier time than previously believed by scholars of ancient history.
8 Galen was a highly accomplished Greek physician, surgeon and philosopher, working in Rome. He was personal physician to some Roman Emperors, and a prolific author: about 20,000 pages of his work survive. He is still known, among other things, for demonstrating that the arteries carry blood and for his dissection of the cranial nerves. Some years ago, my wife and I visited Bergama (Pergamon) in Turkey and the Asclepieion of Pergamon, where classical Greek medicine played out alongside ancient practices of religion and civic power. The guide told us of the ill-treatment of the mentally ill by priests, an illustration of the interplay of mysticism and early science. They whispered suggestions, as if from gods, through holes in the roof of the tunnel through which patients walked on their way for treatment, to destabilize and control them. Galen was a surgeon. Surgical interventions, the earliest thought to have been trepanation, dated from many centuries before. This remarkable historic site—described to us by a very knowledgeable guide who went with us for the day trip from our sailing club (which was the real reason we were there!)—inspires awe. The principal remaining artefacts from its Temple of Zeus were taken to Germany and remain in the Pergamon Museum, in what was East Berlin. That is where I saw them, in 1984, when visiting to give a conference talk.
9 C. E. Shannon, ‘A Mathematical Theory of Communication’, The Bell System Technical Journal, 27.3 (1948), 379–423.
10 Von Neumann, in recommending the term ‘information entropy’ to Shannon, suggested that: ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage’ (quoted in M. Tribus and E. C. McIrvine, ‘Energy and Information’, Scientific American, 225.3 (1971), 179–88 (p. 180)). It is amusing that physicists struggle to understand the meaning of theories but are confident in computing with and accepting their predictions, and informatics now likewise struggles. I live in hope of seeing what comes next in unifying the theories of general relativity and quantum reality, with theory of information.
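Behind von Neumann’s quip lies an exact formal parallel, stated here for comparison: Shannon’s uncertainty function for a source with symbol probabilities \(p_i\) and the Gibbs entropy of statistical mechanics differ only in the base of the logarithm and a physical constant:

```latex
H = -\sum_i p_i \log_2 p_i \qquad \text{(Shannon entropy, in bits per symbol)}
```

```latex
S = -k_B \sum_i p_i \ln p_i \qquad \text{(Gibbs entropy, in joules per kelvin)}
```

Both quantities are maximized when all outcomes are equally probable, which is why the one name fits the two theories.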
11 J. A. Wheeler, ‘Information, Physics, Quantum: The Search for Links’, in Feynman and Computation, ed. by A. Hey (Boca Raton, FL: CRC Press, 2018), pp. 309–36, https://doi.org/10.1201/9780429500459-19
12 Ibid., p. 311.
13 J. Maddox, What Remains to Be Discovered: Mapping the Secrets of the Universe, the Origins of Life, and the Future of the Human Race (New York: Macmillan, 1998); M. du Sautoy, What We Cannot Know: Explorations at the Edge of Knowledge (London: Fourth Estate, 2016).
14 B. Duignan, ‘Willard Van Orman Quine’, Encyclopaedia Britannica (21 June 2023), https://www.britannica.com/biography/Willard-Van-Orman-Quine
15 Quoted in R. E. Susskind and D. Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts (Oxford: Oxford University Press, 2015), p. 276.
16 B. G. Buchanan, G. Sutherland and E. A. Feigenbaum, Heuristic DENDRAL: A Program for Generating Explanatory Hypotheses in Organic Chemistry (Stanford, CA: Stanford University Department of Computer Science, 1968); R. K. Lindsay, B. G. Buchanan, E. A. Feigenbaum and J. Lederberg, ‘DENDRAL: A Case Study of the First Expert System for Scientific Hypothesis Formation’, Artificial Intelligence, 61.2 (1993), 209–61; B. G. Buchanan and E. A. Feigenbaum, ‘DENDRAL and Meta-DENDRAL: Their Applications Dimension’, Artificial Intelligence, 11.1–2 (1978), 5–24.
17 J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (Harmondsworth: Penguin Books, 1993).
18 S. Muggleton, ‘Obituary: Donald Michie’, The Guardian (10 July 2007), http://www.theguardian.com/science/2007/jul/10/uk.obituaries1
19 R. A. Miller, ‘INTERNIST-1/CADUCEUS: Problems Facing Expert Consultant Programs’, Methods of Information in Medicine, 23.01 (1984), 9–14.
20 H. R. Warner Jr., ‘Iliad: Moving Medical Decision-Making into New Frontiers’, Methods of Information in Medicine, 28.04 (1989), 370–72.
21 E. P. Hoffer, M. J. Feldman, R. J. Kim, K. T. Famiglietti and G. O. Barnett, ‘DXplain: Patterns of Use of a Mature Expert System’, AMIA Annual Symposium Proceedings (2005), 321–24.
22 E. Shortliffe, Computer-Based Medical Consultations: MYCIN (New York: Elsevier, 2012).
23 See Chapter Two.
24 Available at ‘Kazuo Ishiguro: Remembering and Forgetting’, BBC One (28 March 2021), https://www.bbc.co.uk/programmes/m000tqn0
25 K. Ishiguro, Klara and the Sun (New York: Knopf, 2021).
26 E. Schrödinger, What Is Life? (Cambridge: Cambridge University Press, 1948).
27 J. von Neumann, The Computer and the Brain, Mrs. Hepsa Ely Silliman Memorial Lectures (London: Yale University Press, 1958).
28 J. Z. Young, Programs of the Brain: Based on the Gifford Lectures, 1975‒7 (Oxford: Oxford University Press, 1978).
29 R. P. Feynman, Feynman Lectures on Computation (New York: CRC Press, 2018).
30 I. Stewart, Life’s Other Secret: The New Mathematics of the Living World (New York: John Wiley and Sons, 1998).
31 D. R. Hofstadter, I Am a Strange Loop (New York: Basic Books, 2007).
32 J. S. Avery, Information Theory and Evolution, 2nd ed. (Singapore: World Scientific Publishing, 2012).
33 N. Lane, The Vital Question: Energy, Evolution, and the Origins of Complex Life (New York: W. W. Norton and Company, 2015).
34 M. du Sautoy, The Creativity Code: How AI Is Learning to Write, Paint and Think (Cambridge, MA: Harvard University Press, 2019).
35 P. Davies, The Demon in the Machine: How Hidden Webs of Information Are Solving the Mystery of Life (Chicago, IL: University of Chicago Press, 2021).
36 Schrödinger, What Is Life?, p. 20.
37 Ibid., p. 68.
38 Ibid., p. 69.
39 Ibid., p. 68.
40 From the thermodynamics perspective argued by Schrödinger, the system of living organism and its environment is viewed as one. The heat energy of the sun’s radiation incident on the living organism is associated with a low entropy component of this one system, because of the extremely high temperature at which it originates in the sun (entropy being calculated as heat energy divided by temperature). The same quantity of heat generated in a living organism constitutes a much higher entropy component because it is associated with a very much lower temperature—that of the living organism. In this way, the same heat input and output components are associated with different entropy components, and thus the system as a whole is seen to have increasing entropy, as required by the second law of thermodynamics, without an associated increase in disorder of the living component. Discussed without all the mathematics, this is inevitably a rather convoluted verbal handwaving, and probably not a very satisfying one from either physics or life science perspectives. Schrödinger qualified his position in response to criticism about his coverage of entropy, arguing later that the system should be analyzed from the perspective of Gibbs energy—a concept associated with Josiah Gibbs (1839–1903), a pioneer of statistical mechanics and thermodynamics and their application to physical chemistry.
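The bookkeeping in this footnote can be made concrete with illustrative figures (the temperatures here are my assumptions for illustration, not Schrödinger’s own numbers). Using the entropy of a heat transfer, \(S = Q/T\), for the same quantity of heat \(Q\) arriving from the sun and leaving the organism:

```latex
% Q arrives from the solar surface at roughly T_sun ~ 5800 K,
% and is re-radiated by the organism at roughly T_org ~ 310 K.
\Delta S_{\text{total}}
= \frac{Q}{T_{\text{org}}} - \frac{Q}{T_{\text{sun}}}
= Q\left(\frac{1}{310\,\mathrm{K}} - \frac{1}{5800\,\mathrm{K}}\right) > 0
```

The same heat thus leaves the organism carrying far more entropy than it arrived with, so total entropy rises, as the second law requires, even while the organism maintains or increases its own internal order.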
41 R. Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed (New York: Viking Books, 2012).
42 M. Cobb, The Idea of the Brain: The Past and Future of Neuroscience (New York: Basic Books, 2020). This historical overview of the brain and neuroscience appeared in 2020, as I wrote. In it, Matthew Cobb describes how brains have observed brains and reasoned about their function: Galen pressing on a pig’s brain to render it unconscious; surgery of 1940 to relieve temporal lobe epilepsy; electrical stimulation creating scenes of piano playing, a man and dog walking, a telephone conversation. He describes how early ideas of brain function focused on electrophysiology and coding mechanisms linking stimulus to action of neurons (the work of the 1932 Nobel Prize winners Adrian and Sherrington), and on the characterization of the stimulus itself. These led to later ideas of the brain in some way creating, as opposed to just representing, information.
43 Von Neumann, Computer and the Brain, pp. 50–70.
44 Ibid., p. 80.
45 Just ten years after von Neumann’s death, the emerging semiconductor industry was fabricating transistor-based electrical circuits on wafers of silicon, as this new era of computer technology gained scale and traction. The mighty Intel Company pioneered by Gordon Moore saw its capabilities doubling every year, in terms of the density with which circuit elements could be fabricated and connected in two-dimensional arrays on a silicon wafer substrate. Within the subsequent decade, this number settled and remained at a doubling every two years, over four decades—a phenomenon characterized as Moore’s Law. In his 1965 paper, Moore described the packing of seventy circuits on a single silicon chip; today that number is two billion. The circuit dimension achievable today is around ten nanometres—a red blood cell has a four-thousand-nanometre diameter, which corresponds to four hundred such circuits. New and more efficient semiconductor technologies continue to emerge from advances in devices exploiting quantum physics phenomena, and the ability to compress them continues to evolve through the three-dimensional packing of circuit layers, which moves the metric of circuit density to a volume, rather than area, basis of comparison. The William Blake poem quoted in Chapter Two, in which he saw the world in a grain of sand, is now a new kind of metaphor for the evolving virtual world of information. Silicon-based semiconductor technology still cannot approach the information storage density of DNA in the living cell, however, as noted in Paul Davies’ inukbook, covered below! The wider application and impact of this technology is also now stretching towards a one-hundred-fold reduction in the cost of solar cell energy conversion, achieved since its early stages of development.
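As a rough consistency check on the figures in this footnote (the arithmetic is mine), the growth from seventy circuits per chip to two billion corresponds to about twenty-five doublings:

```latex
\frac{2\times 10^{9}}{70} \approx 2.9\times 10^{7} \approx 2^{25},
\qquad
25 \text{ doublings} \times 2\ \tfrac{\text{years}}{\text{doubling}} \approx 50 \text{ years}
```

which matches the span from the mid-1960s to the present era, at the settled rate of one doubling every two years.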
46 Young, Programs of the Brain.
47 Ibid., p. 7.
48 Ibid., p. 10.
49 Ibid., p. 8.
50 Ibid., p. 7.
51 Ibid., p. 10.
52 Ibid., p. 11.
53 Ibid., p. 193.
54 Ibid., p. 262.
55 Ibid., p. 264.
56 R. P. Feynman, R. B. Leighton and M. Sands, The Feynman Lectures on Physics (Beijing: Beijing World Publishing Corporation, 2004).
57 Feynman, Lectures on Computation.
58 Stewart, Life’s Other Secret, p. 30.
59 Ibid., p. 246.
60 Ibid., p. 71.
61 Ibid., p. 38.
62 D. R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979).
63 I have taken the liberty of inventing words like ‘omnuscle’ and ‘inukbook’, following the liberty he expresses!
64 Hofstadter, I Am a Strange Loop, p. 276.
65 Ibid., p. 279.
66 I learned from the book that Avery has been a stalwart of the Pugwash peace movement, led from Bart’s in my time there, by Joseph Rotblat (1908–2005).
67 Orgel graduated from Oxford in 1948 and was made a fellow of Magdalen College in 1951, two years before gaining his PhD. An unconstrained and unencumbered, Guyton-like, meteoric rise!
68 Lane, The Vital Question, p. 2.
69 Ibid., p. 28.
70 Ibid., p. 2.
71 Ibid., p. 22.
72 Ibid., p. 32.
73 Ibid., pp. 51–52.
74 Ibid., p. 52.
75 Ibid., p. 52.
76 Ibid., p. 51.
77 And today, as modelling of weather systems combines with machine intelligence, both are proving differently advantageous in the forecasting of weather and in what is being called its ‘nowcasting’. For an immediate (now) prediction of local weather trends, machine intelligence can nowcast based on measurements of the current weather, including wind, temperature, cloud cover and time of day, to outperform complex physics-based model predictions. Longer-term and wide-area forecasts are still the preserve of complex models of atmospheric physics.
78 M. du Sautoy, What We Cannot Know: Explorations at the Edge of Knowledge (London: Fourth Estate, 2016); du Sautoy, The Creativity Code.
79 R. Dawkins, The Selfish Gene (Oxford: Oxford University Press, 1976); R. Dawkins, The God Delusion (Boston, MA: Houghton Mifflin Company, 2006).
80 Davies, Demon in the Machine, p. 35.
81 Ibid., pp. 38–39.
82 Ibid., p. 47.
83 Z. Lu, D. Mandal and C. Jarzynski, ‘Engineering Maxwell’s Demon’, Physics Today, 67.8 (2014), 60–61.
84 Davies, Demon in the Machine, pp. 92–93.
85 Ibid., p. 113.
86 Ibid., p. 183.
87 B. Ernst, The Magic Mirror of M. C. Escher, trans. J. E. Brigham (New York: Barnes and Noble, 1994).
88 N. Davies, Europe: A History (Oxford: Oxford University Press, 1996), p. 5.
89 David’s dedication to the National Medical Slide Bank is recorded in an appreciation: ‘David Tredinnick FBPA, FRPS, Hon FIMI (1922–2005)’, Journal of Visual Communication in Medicine, 28.4 (2005), 166–67, https://doi.org/10.1080/01405110600575928
90 M. C. Escher, ‘Ascending and Descending’, Digital Commonwealth, https://ark.digitalcommonwealth.org/ark:/50959/3r076s51v
91 M. C. Escher, ‘Tower of Babel’, Digital Commonwealth, https://www.digitalcommonwealth.org/search/commonwealth:3r076t25f
92 M. C. Escher, ‘Relativity’, Digital Commonwealth, https://www.digitalcommonwealth.org/search/commonwealth:3r076s67r
93 M. C. Escher, ‘Up and Down’, National Gallery of Art, https://www.nga.gov/collection/art-object-page.47950.html
94 M. C. Escher, ‘Waterfall’, Digital Commonwealth, https://ark.digitalcommonwealth.org/ark:/50959/3r076s93c
95 M. C. Escher, ‘Circle Limit III’, Wikimedia Commons (3 February 2015), https://en.wikipedia.org/wiki/Circle_Limit_III#/media/File:Escher_Circle_Limit_III.jpg
96 R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking Books, 2005).
97 J. Lovelock, The Ages of Gaia: A Biography of Our Living Earth (Oxford: Oxford University Press, 2000); J. Lovelock, Novacene: The Coming Age of Hyperintelligence (Cambridge, MA: MIT Press, 2019).
98 Kurzweil, Singularity Is Near, p. 194.
99 I. McEwan, Machines like Me (Toronto: Knopf Canada, 2019); Ishiguro, Klara and the Sun; L. Allardice, ‘Kazuo Ishiguro: AI, Gene-editing, Big Data ... I Worry We Are Not in Control of These Things Anymore’, The Guardian (20 February 2021) https://www.theguardian.com/books/2021/feb/20/kazuo-ishiguro-klara-and-the-sun-interview
100 S. Ulam, ‘John von Neumann 1903–1957’, Bulletin of the American Mathematical Society, 64.3 (1958), 1–49, https://www.ams.org/journals/bull/1958-64-03/S0002-9904-1958-10189-5/S0002-9904-1958-10189-5.pdf
101 J. Maddox, What Remains to Be Discovered: Mapping the Secrets of the Universe, the Origins of Life, and the Future of the Human Race (New York: Macmillan, 1998).
102 M. Chown, Infinity in the Palm of Your Hand: Fifty Wonders That Reveal an Extraordinary Universe (London: Michael O’Mara Books, 2018).
103 M. C. Escher, ‘Circle Limit III’, Wikimedia Commons (3 February 2015), https://en.wikipedia.org/wiki/Circle_Limit_III#/media/File:Escher_Circle_Limit_III.jpg
104 M. Rees, On the Future: Prospects for Humanity (Princeton, NJ: Princeton University Press, 2018).
105 A. N. Whitehead, Adventures of Ideas (New York: Macmillan, 1933), p. 56.
106 M. Marmot, Fair Society, Healthy Lives: The Marmot Review: Strategic Review of Health Inequalities in England Post-2010 (London: Marmot Review, 2010); M. Marmot, ‘Health Equity in England: The Marmot Review 10 Years On’, BMJ, 368 (2020), m693, https://doi.org/10.1136/bmj.m693