10. Half and Whole–Halfway between Information Age and Information Society
© 2023 David Ingram, CC BY-NC 4.0 https://doi.org/10.11647/OBP.0384.06
The previous chapter will have left no doubt that there remains a huge amount still to do. Echoing Bon Jovi, this final chapter builds on a theme of being halfway there! We are at halftime in the transition from Information Age to Information Society health care. The chapter is a halftime report to the new teams girding their loins to come on for the second half. Human societies define themselves by their values and traditions and how they adapt and change in times of anarchic transition.
In whatever way we evolve as individuals and communities in the coming years, the information technology and utility that inform, support and enact health care systems and services will only contribute in half measure to what is needed to create and sustain health and health care for the Information Society. I recount, with her approval, the personal struggle through medical accident, intensive care and prolonged rehabilitative care of my doctor wife, over a two-year period. The story of her survival and recovery is bipartite: half about the health care services and support she experienced and half about her character, struggle and determination to get well.
The book comes full circle, having connected around Shiyali Ramamrita Ranganathan’s (1892–1972) circle of knowledge and a cycle of learning about the coevolution of health care with the science and technology of the Information Age. This has been a first half of transition played out on a landscape populated with emerging and immature information technology. It is a preface to a second half, yet to come, to be played out in the context of maturing information technology and new bioscience, artificial intelligence and robotics, accompanied by an emerging and supportive, citizen-centred information utility. It will play out in the context of new device technologies, information systems and networks that enable much more of health care to be based at home and in the local community, be that in city centres or the most remote of outback communities in the world. There will be a continuing adventure of ideas, anarchy of transition and reform, played out around new circles of knowledge and cycles of learning.
To my brothers and sisters–the half and the whole.1
This, my first school, was a school for four-year-olds to twelve-year-olds. I remember the windows looking out onto the valley. We were half prisoners and also half special, to be able to see the valley and everything that was happening.2
Woah, we’re halfway there
Woah-oh, livin’ on a prayer
Take my hand, we’ll make it, I swear
Woah-oh, livin’ on a prayer.3
Writing this book has been a process of discovery. Looking back on how it has unfolded, it is striking how many times the issues discussed, chapter by chapter, have played out in half and half stories: theory and practice, lifespan and lifestyle, local and global information governance and standardization, discipline and profession, health and social care, Localton and Globalton village life,4 science and engineering, Grand Challenge and Wicked Problem, Big Data and Little Data, defeasible and indefeasible knowledge, object orientation and functional programming, information models and message protocols… the list goes on! The novelist Charles Dickens (1812–70) wrote in the language of half and half when describing the era of the French Revolution as the best and worst of times.
We might think of two halves as a dichotomy–one or the other, either a) or b). But as Robert Oppenheimer (1904–67) set out insightfully in his 1953 Reith Lectures,5 which I have drawn on in several chapters of the book, they often appear, and can more usefully be engaged with, as complementarities, reflecting that the two in combination describe something more whole, encompassing different perspectives and points of view–both a) and b), as in Dickens’s description of the best and worst of times.
The ongoing Information Revolution and the anarchy of transition that it has unleashed might also be described as the best and worst of times. Laurie Lee felt he and his classmates to be ‘half prisoners and also half special’ in their rural village school. We all might somewhat echo that feeling, in how we experience the information technology ‘school’ that both corrals our lives and widens our view of the world! In concert with this revolution, we are living through the best and worst of times in health care.
Information technology has changed everything and will continue to do so. Having traversed seven decades of the Information Age, all around the circle of learning, drawing this book to a conclusion, here, brings a sense of an ending, but not of completion. Realising my dilemma over when to call a halt to the three years of work that it has involved, my astute consultant cardiologist son, Tom, advised me two weeks ago that it was ‘Time to bookend your book, Dad!’ Best to follow one’s wise children’s advice! No doubt there will be odd stray bits of its evolving DNA still floating around in the text. At least that would be true to life; there is no advantage in being too pedantically tidy in telling such an evolving story!
This book ending is where we are, today, in exploring the relationship of information technology with the reform and reinvention of health care. The field seems to have metamorphosed significantly, even as I have been writing the book. Some of the topics covered have donned new colours, chameleon-like, and might already be ripe for some reinvention, too! That is inevitable in such a fast-moving field and the book is offered more as a personal career songline than a definitive history–I doubt that any such history could yet be written. I have recently added some further reflections and speculations about artificial intelligence (AI), in the context of the 2023 debate about its feared considerable downside potential. This builds on coverage of the topic in Chapters One, Two and Eight. I will aim to add useful updating commentary, from time to time, in the online additional resources for the book.6
Today’s reality feels like a Bon Jovi ‘halfway there’ stage in the adventure of ideas that has been unfolding in the encounter of the computer with health care, in the transition through the Information Age towards the Information Society. Health care services and professional practice face the challenge of integrating various threads of this adventure together, weaving without knots, shaping the many halves into useful wholes, and keeping clear of rabbit holes and black holes! They must explore how to deploy and enhance the best, while correcting, mitigating or avoiding the worst.
Health care services, among many components of the systems and services that are ‘there’ for us in life, are crucial in enabling us all to be and keep well, and to flourish to the best of our opportunities and abilities. But they only get us or keep us, in Bon Jovi style, ‘halfway there’. The other half concerns what we do, and are enabled and capable of doing, for ourselves. This involves many dimensions of awareness and discovery about our bodies and ourselves, our individual capabilities and interests, and the circumstances in which we live, which change and evolve uncertainly, as do our needs, expectations and behaviours, too, as we grow, live more fully and age longer.
It is best not to get too fixated on binary divisions in this discussion, conditioned into thinking of the whole as somehow comprising two separate and equally sized parts. We have plenty of that kind of thinking with the digital world of the computer and it is seldom true to life, except perhaps in the most abstract of realms of John Archibald Wheeler’s (1911–2008) ‘It from bit’, which I discussed in Chapter Six!7 It might be better to think of parts of a whole, as formal logic often does, but the language of halves is pervasive and persuasive in everyday life and behaviour. There are many ways in which we use this imagery.
The most recently contested Presidential election in Poland was decided 51.2 percent against 48.8 percent. Brexit, which set a generational change of course in UK national life as we left the European Union, was considered a clear mandate at 52 against 48. These might rather be described as half and half, noisy judgements. Binary choices made at random would lead close to a 50:50 result. ‘Random’ implies complete uncertainty and, less charitably, could suggest a lack of care or consideration. Some situations are described as a glass half full or half empty, signifying optimistic and pessimistic predispositions when thinking about them. 50:50 describes an equal cleaving towards alternative perceptions or predilections; it can signify dualism as well as dichotomy; complementarity as well as difference. 50:50 seesaw arguments between opposing viewpoints sometimes escalate, being expressed with increasing intensity on either side. As with the placing of increasing weight at each end of a real seesaw, when seeking a dominant position, the outcome is often neither dominance nor balance but a no-fun, broken seesaw! Much of politics in these stormy times feels rather like a broken seesaw, and, sadly, much of health care, too. The computer has been closely implicated in the breaking, albeit in good ways as well as bad. In many dimensions and localities of health care services, it is the best of times, too, with achievably better to come, very widely.
We also talk about being too clever by half. This was the message of Mervyn King, when he suggested that sophistication in modelling and analysis was less useful in managing national finances and economies than it was given credit for, and that storytelling and ability to cope with, as much as shape, uncertain events, were also important.8 Norman Davies argued for the greater use of art and storytelling as sources in the writing of history, and less dependence on retrospective analysis and historicism.9 I have taken their learned and experienced advice to heart while writing this book, and now in recounting some half-and-half personal stories, here.
Last year, I received a letter from a former colleague who had suffered a heart attack and was in continuing poor health. He commented on the sorry state of information systems in use in the wards where he was cared for, and how much time was devoted to battling them. It brought home for both of us the ‘halfway there’ stage in achieving the professional goals we had shared through our careers. We may not be ‘livin’ on a prayer’, but we are, for sure, only ‘halfway there’ to health care services that meet the challenge and opportunity of the Information Age and match the needs and opportunities of the Information Society that we are creating. It is halftime in the match. The second half and the first half are different phases of a game. The first half may not go so well but the second can prosper nonetheless, overcoming and adapting to adversity, entering new spaces and finding new personal resources and fulfilment. Human nature and community are good like that.
I tell, now, a deeply personal and emotional half and half story, encouraged to do so by my Polish doctor wife, Bożena. It is half-and-half about her survival and recovery from a critical illness. I do so, not to dramatize or critique the painful and harrowing issues it exposed, but to give a detailed, albeit extreme, example of where information utility is fundamental in support of health care, and how its lack can greatly amplify the inevitable difficulty and distress of coping with a prolonged emergency, as patient and carer, as well as hamper and compromise the ability of professional teams to function effectively. It is also a story of the half-and-half of what medicine can do to both harm us and keep us alive and what we can and must do for ourselves, to recover and keep well. Bożena wanted me to tell this half-and-half story like this, as it was experienced by us both.
Five years ago, Bożena was in life-threatening haemorrhagic shock in provincial Poland, after emergency abdominal surgery that should never have been needed, nowadays. The ensuing struggle over four months–within and between two countries with different languages and contrasting clinical cultures, through two intensive care units and in wards of four hospitals, and in blue-light road and air ambulances–was a life and death experience never to be forgotten, from frozen November to Spring-like Easter. It progressed through all professions and levels of care, and, throughout, the struggle was impeded and exacerbated by the lack, non-communication and non-coherence of information. Lack of mutual fluency in spoken language was also an impediment that I had to struggle with each day, for two months in Poland. I was on my own there, visiting and staying with her through the day, based at night in a flat rented nearby to the specialist centre she was transferred to in Warsaw, remote from but supported amazingly by family and friends in the two countries.
Medical insurance company communication between the two health services was almost non-existent and depended solely on me, standing in busy hospital corridors outside my critically ill wife’s wards, piecing together communications by mobile phone. I needed to keep in touch with family and friends, medical teams and colleagues, nearby and far away, to enable her to receive the care needed to save her, persuading one level after another of services to cooperate and then get her transferred back to England, to weeks of specialist care there and then home. It was often touch and go, throughout.
In the rescue stage of Bożena’s critical care in Warsaw, secured for her by a close clinical academic colleague of mine, the intensive care unit (ICU) was exemplary. When conscious again, she could scarcely move and only with great pain and unsteadiness. As time went by and she was cared for in an acute surgical ward, her urgent needs for nursing attention–overflowing abdominal drains, frequent nausea, ataxia–stretched capacity. The ward nursing and post-operative rehabilitation care was wonderfully, and sometimes quite fiercely, thorough. This approach was both effective and reassuring! Through the months in Poland, I worked as one of the hard-pressed ward team during my day-long visits. For example, they would ask me to move her through long underground corridors for extremely onerous investigations, and help her, hour by hour and day by day, massaging her depleted limbs and supporting her in slow, incremental, faltering steps away from her bed and up and down the ward corridor.
Bożena’s condition first improved and then deteriorated again. Laboratory measurements and scan images came back from computers but the underlying damage to her gastrointestinal (GI) tract, resulting from the surgery, was not clear, although the ICU chief had suspected it. Clinically, there was evidence of abdominal fistula, because of the considerable fluid leakage now present, but the team had been unable to discern its origins. With no clear action plan, it was decided to pause further action through the two-week Christmas holiday period, when only a skeleton staff team were on duty.
Over Christmas, Bożena gradually became extremely ill once more and I asked an empathetic, more junior clinician on duty, who had befriended us, if it would be okay to provide me with the CT images on a compact disc. I uploaded them from my laptop in the rented apartment, via the Cloud, to the UK, where my doctor daughter was able to draw on her own tertiary care network to assist in getting a rapid specialist review of them. This helped to clarify and stabilize matters, by pinpointing the location of two abdominal fistulas through which fluid was leaking inside and outside Bożena’s body. Potential professional and legal sensitivities feel somewhat blurred in such situations! One wondered whether a faster referral for a second opinion like this, perhaps even to an AI algorithm, might have led to the fistulas being spotted and reported on the original scan. That might have avoided three weeks of considerable distress and the need for further, extremely uncomfortable radioisotope scans, which she could hardly endure.
After nearly two months of this oscillating clinical improvement and decline, and continuous parenteral nutrition, a clinical transfer to England, between the two health systems, was agreed. The transfer itself was very professionally executed by an air ambulance team that flew in from Germany to collect Bożena from the ward in Warsaw and deliver her under the care of the NHS in England. I followed on to the next commercial flight. On arrival in England, the air ambulance doctor and nurse were professionally bound to stay with her until she was finally through the delayed and protracted process of admission to an isolation ward. None of the extensive clinical information that had been provided to the insurers–when they were agreeing and subsequently arranging the transfer, in multiple texts, emails and phone calls–had reached the admitting doctors on duty there and she was placed in an hours-long Accident and Emergency triage system. A further protracted queue of administrative delays ensued while a bed was arranged. The air ambulance had to miss its return time slot, and this no doubt escalated the insurance bill! Another part of the NHS subsequently investigated and sought proof of her eligibility for free treatment, although a British citizen and resident here for twenty-five years!
The receiving English district hospital clinical team quickly argued for and sought transfer for her to a specialist centre, the need for which had been clear from the previous history. This information had escaped the administrative protocols in operation between the insurance company and the two country health systems, when deciding where to receive her into the NHS. This further transfer was eventually accomplished, and everything quickly improved with the confident and calm treatment she received there, after the fistula fluid leakage had been endoscopically stemmed and the persistent infections defeated. Phew! Hard to write about, even five years on!
In the many hospitals and wards through which Bożena passed, and the insurance, airline and intergovernmental systems dealing with the transfers, her personal data must have been keyed, processed and transferred through, I would estimate, a hundred or more mutually incompatible information systems and onto inches-thick piles of paper. And in each ward where she lay, much professional and administrative time and capacity were devoted to battling with antiquated, poorly and slowly performing computers. Continuity of care from service to service was pieced together through human contact. Information utility failed to deliver on any of the monads, but not for lack of anyone’s best efforts. Everyone was trying to help her and that did help, hugely.
But this emergency was only one half of the story. The struggles did not end after final discharge home, months later. Home-based services were still needed for stoma management and potentially for parenteral nutrition, which had continued, inevitably risking infection and other complications, for three months until just before her final discharge. There was a 50:50 chance that further reparative surgery would be needed. Fortunately, in the end it was not, although damage from the original surgical misadventure and its emergency postoperative management persists.
In retrospect, arriving home, together again, we were still only halfway there. From this time, my wife’s iron will kicked in–herself a doctor, and so all too aware, gradually, of her situation. She was determined to get well and restore her disintegrated, bedraggled and shattered body. Always an exercise acolyte, she was determined to walk, once free from the beds that had imprisoned and disabled her, in extreme discomfort, for so long. We walked around the lake and park of St Albans every day for the next four months, watched the fish and the herons, geese, ducks, moorhens and coots, as they emerged into Spring and Summer, with their new broods. She spurred us on to two or three circuits, where I–tired, as well–was ready to give up for a coffee and baked apple at the nearby mill. Once more around and we will go there, she would say!
She made herself well and we started our dance classes again. We met a most lovely and skilful former paramedic Pilates teacher with experience of surgical rehabilitation. Through her, my wife found her way to a nurse specialist offering abdominal massage for relieving postoperative adhesion, which was a significant problem. Her inspirational ballet teacher, who had retired from the Royal Ballet and now ran classes, lovingly took us both on with weekly balance and posture exercises, based on ballet.
This was the other half of getting to the Bon Jovi ‘there’. The goal of getting better and moving forward; my wife’s knowledge of her situation and of services that might help her; our shared professional network enabling us to find our way to them; our brilliant network of family and friends willing us on and supporting us–all of these were essential. But at the centre was my wife’s ownership of the quest to get ‘there’. It challenged even her iron but adaptable will, steeled in the forging of her resilient and independent character in a former life lived under the repressions of Communism. She did, and does, extraordinarily well. Our wonderfully balanced and insightful general practitioner (GP) blinked when he saw her months after the critical events and, having heard the history, told her she was a miracle. Her specialists wonder whether they are needed any more–well, we still need them, even if just to be ‘there’!
A less personal story, now. The Covid-19 pandemic had been exactly one year in duration as I wrote the first tentative draft of this concluding chapter. It has been a half-and-half story about treatment and containment. At the outset, it seemed that two years would prove a likely timespan of coping and recovery towards a more stable daily life. It seemed that we were, indeed, halfway towards that after one year, but the virus and the problems it presented continued to mutate, as they do still–a third year having elapsed as I finalize the manuscript, now. Given the global interconnectedness of both science and societies, and compared with the 1918 Spanish Flu, the duration of today’s pandemic looks to have been lessened by science and industry and increased by globally rapid transmission. It seems that the two pandemics have thus followed somewhat similar trajectories, over time and season. Thankfully, thus far it seems that fewer have died before their time.
The key questions now are not about why preventive measures were not in place, that could arguably have enabled containment of the infection more effectively and manageably, but how new capabilities can and should be built into the health care services of tomorrow, to ensure things are better managed next time–half-and-half supporting treatment and prevention. A coherent care information utility will be central to such capabilities. Non-coherent data collected around the world has clouded understanding of the current pandemic. The global openEHR community had systems in place for devising, capturing and sharing a coherent and clinically standardized dataset to record the phenotype of the disease within a few weeks, working across the world where information systems already existed to capture it. This was because it had created and put in place the elements of a care record platform infrastructure and method, able to frame and host such coherent, vendor- and technology-independent, care records.
And a final half-and-half story on a lighter note. Science learns a lot from the study of twins (they, too, being akin to two halves of a whole). The elements twins share give extra opportunity for the more accurate study of those elements they do not. We have two pairs of twins in one of our families. It is wonderful to have seen them grow from childhood, through bonded years of development where identity is more at one, into differentiation of personality, sparring with one another as they grow, both in themselves and into the outside world. The oneness is balanced against the twoness. And the twinness of the girl twins and the twinness of the boy twins is a new pairing of relationship. They spark uniquely and differently, just as in any family. They are a kind of half and whole, one on one, and two on two. The whole is greater than the sum of its two half parts.
As the foregoing examples and stories have illustrated, health care is replete with half-and-half complementarities. Health care intervention and the body and mind getting better, are half-and-half. Dependency and self-reliance are half-and-half. Effective action is half about what we know and can respond to, evidentially, and half about what we do not and cannot–thus acknowledging and coping with the implicit uncertainty of many consequential judgements and decisions that must be made. The idea that such half-and-half human balances could, foreseeably, be wished under the sole purview of AI, looks too clever by half!
A quotation from Whitehead, that featured also in the Introduction, seems prescient of the dilemmas now surfacing in relation to the human connectedness of machine learning and AI.
It is the first step in sociological wisdom, to recognize that the major advances in civilization are processes which all but wreck the societies in which they occur […] Those societies which cannot combine reverence to their symbols with freedom of revision, must ultimately decay either from anarchy, or from the slow atrophy of a life stifled by useless shadows.10
Equally pertinent were the imaginings before the Information Age, of such as E. M. Forster (The Machine Stops), Aldous Huxley (Brave New World) and George Orwell (1984), and the recent novels of Ian McEwan (Machines Like Me) and Kazuo Ishiguro (Klara and the Sun), which have been referred to at several points in the unfolding storyline of this book. Today, in 2023, there is a new crescendo of concerns about AI. The difference is that the dark imaginings of yesterday are rapidly emerging into the stark light of today. These are half-and-half expressions of optimistic concern that we realize the immense potential benefits of machine learning, and apprehensive concern about a potential Pandora’s box of unregulated, or impossible to regulate, AI unleashing a chaotic evolution towards the Neocene.
In Chapter Seven, I visited Eric Topol’s perspective of the constructive potential of AI to enable the rescue and reform of health care, as set out in his 2019 landmark book.11 We are today starting to think more cautiously about the half-and-half of artificial intelligence and human intelligence. About what they are, and, probably more importantly, how their wholeness might play out in the balance of computer and human reasoning in the context of health care, for both patients and professionals. These issues have started to look more consequential as AI accelerates towards the Neocene and its potential impact on society is compared with that of the Internet. If comparison with the Internet’s societal impact on machine and human communication and computation is the yardstick, surely an AI that burrows deeply into, and may take over, substantial domains of human skill and cognition, is doubly deserving of cautious concern. Revisiting Joseph Weizenbaum’s (1923–2008) story of Computer Power and Human Reason, first published in 1976, is a good starting point.12 My browning copy of this book dates from those early times. Will these two halves function as a complementary whole or as conflicting and destabilizing forces? AI is a very rapidly evolving domain. As the song goes, ‘You ain’t seen nothin’ yet’!
The first small academic unit that I created at Bart’s, in around 1991, was called Clinical Skills and Informatics. I explained its origins in Chapter Four. As I wrote there, little could I or my close colleague in creating the Bart’s Clinical Skills Centre, Jane Dacre, have imagined how rapidly and remarkably the interrelationship of health care, professional practice and computer technology (including AI) would advance over the years, with profound implications for health care education, service delivery, governance, regulation and legal accountability, as well as for citizens’ access to and expectations of health care services and their management.
My principal thoughts in relation to the use of AI in medicine are twofold. They revolve around the performance of AI algorithms in the defined context of performance of tasks that are currently the domain of clinically qualified and regulated specialists. And then around what might prove a Pandora’s box of potential consequences that this could set in chain, reaching deep into the heart of health care education, professionalism, governance, regulation and legislation, on which rest checks and balances in the assessment of skills, adjudication of fitness to practice, litigation of clinical risk and harm to patients, and the trusted relationship of citizen and professional.
Alan Turing (1912–54) proposed a test to determine whether a computer-mediated dialogue is being controlled by a human behind the scenes, or just by a computer program. Could the user detect the difference? Things moved on. Joseph Weizenbaum’s experience of how human subjects ‘conversed’ with his ELIZA program, which I discussed in Chapter Eight, gave him pause for concern. Some were, he reported, easily drawn into engaging quite intently with ELIZA’s simplistic level of machine-simulated ‘empathy’. And other investigators reported, in the context of the elicitation of clinical histories of sensitive personal matters–for example, relating to alcoholism–that many of us are quite happy, even happier and more truthful, when interrogated by a machine algorithm rather than by a human. Much as when we now give details of pre-existing medical conditions when taking out travel insurance.
Chatbots such as ChatGPT seem to have passed the Turing test threshold and now opine fluently on all subjects under the sun, even if a bit repetitive after a while–but aren’t we all, especially as we get older; I certainly am! In his ‘The Crack-Up’ article, published in Esquire (1936), which I also discussed in the previous two chapters, F. Scott Fitzgerald (1896–1940) proposed a test of a different and more embedded ranking of intelligence, saying that ‘The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function’. Fitzgerald’s point was, I think, that we need good intelligence to guide us in how we function in the face of inevitable half-and-half matters that point us in different directions. This is reflected, perhaps, in the way the two halves of our brains work–for example, in Kahneman’s thinking and acting fast and slow. I wonder how such thinking about human intelligence will play out alongside the machine’s intelligence.
It will be interesting, for example, to see how AI advisors might respond to a request for an opinion about the complex and multi-faceted ethical dilemmas concerning, say, a sick pregnant woman requiring aggressive treatment for an ovarian cancer and parallel concern for her near-term but still premature unborn child. Both will potentially be highly affected, whatever decision is taken. There may be urgent decisions to be made about inducing early birth and delaying treatment, weighing the mother’s wishes, her clinical condition and that of her unborn child, alongside the term of pregnancy, risk of harm and maybe also culture and religion. In what ways, and to what extent, are we ready for an AI engagement with such complexity of clinical and ethical dilemmas?
It may not be too difficult to keep the real and virtual world of intelligence apart, at a careful distance, in this case. But that may not be so easy when seeking to draw boundaries between where AI does and does not engage in the broad spectrum of health care education and assessment, clinical practice, governance, regulation and litigation, anticipating the consequences that then might flow. If I were naming an AI health care guru that might be consulted for an opinion on complex and multi-faceted clinical decisions, not that I would feel at all qualified or competent to enter such a contentious domain, I think I might call it PandoraDoc! That name came to mind remembering Sam Heard and Dipak Kalra’s ParaDoc GP practice management system, from the 1980s and 1990s, which featured in my profile of Sam’s contribution to health informatics, in Chapter Eight!
Over recent times there has been a growing emphasis on comparing AI system and clinician performance in the context of well-framed clinical domain tasks. These typically require an extensive reference data set, first to develop the AI method and then to test its performance prospectively, comparing it with that of clinicians in terms of the sensitivity and specificity of the results obtained. It is very early days in which to review this domain, since AI methods are advancing so rapidly. In a July 2022 study of breast cancer diagnosis, Christian Leibig and colleagues showed that radiologist performance consistently outperformed the stand-alone AI in use.13 They conducted a carefully constructed set of trials where the AI was first used to triage cases and bank its answers for those where it was confident of its predictions, while referring the less certain ones to a radiologist for the decision. A variety of different triage criteria were investigated. This resulted in the combined system consistently outperforming both AI and radiologist acting alone, achieving improvements of several percentage points. This finding held true across many different clinical subgroups and imaging device subdomains. Such exhaustive study of the Receiver Operating Characteristics (ROCs) of human and AI systems is the right way to proceed, as highlighted in Chapter Two, but it would also be important to know, in more granular clinical detail, which patients were misdiagnosed and whether any patterns are evident there.
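The triage-and-referral logic described above can be sketched in outline. This is a minimal illustration only: the confidence thresholds, the decision rule and the toy data are my own assumptions for exposition, not the operating points or method of the Leibig study.

```python
# Sketch of confidence-based triage: the AI 'banks' its answer for cases
# where its score falls outside an uncertainty band, and refers the rest
# to a radiologist. Thresholds here are illustrative assumptions.

def triage_decision(ai_score, radiologist_call, lower=0.2, upper=0.8):
    """Return the combined system's call (True = recall for assessment)."""
    if ai_score <= lower:      # AI confidently negative: bank its answer
        return False
    if ai_score >= upper:      # AI confidently positive: bank its answer
        return True
    return radiologist_call    # uncertain band: defer to the radiologist

def sensitivity_specificity(cases):
    """cases: list of (ai_score, radiologist_call, ground_truth) tuples."""
    tp = fn = tn = fp = 0
    for score, rad, truth in cases:
        call = triage_decision(score, rad)
        if truth:
            if call: tp += 1
            else: fn += 1
        else:
            if call: fp += 1
            else: tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative toy data: (AI score, radiologist call, ground truth)
cases = [(0.05, False, False), (0.95, True, True),
         (0.50, True, True), (0.40, False, False)]
sens, spec = sensitivity_specificity(cases)
```

Sweeping the `lower` and `upper` thresholds over a validation set, and plotting the resulting sensitivity against one minus specificity, is one way of tracing the combined system’s ROC behaviour that the passage above refers to.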
More significant consequences would likely flow from similar studies where the AI is found always, or much more often, to outperform expert clinicians significantly. In that scenario, the standing of the human expert, which is very influential in the context of traditional clinical governance, risk management and the assessment of the clinical skills and competence of clinicians to practice, will come centre stage. In the context of my leadership of the Centre for Health Informatics and Multiprofessional Education (CHIME) at University College London (UCL), I was close to the unfolding domains of clinical risk and professional regulatory practice, with senior scientific and clinical colleagues of mine playing leading national roles close by. In the litigation of cases involving potential clinical negligence, a common yardstick is ‘how a competent professional might have been expected to act in the situation being discussed’. If the AI consistently outperforms professionals in the domain of action being litigated, are either the expert witness giving evidence about practice norms or the clinician under investigation still deemed competent to act? Would this start to raise legitimate concerns about clinicians’ ‘competence to practice’, more generally?
I could invent more such hypotheticals and others could produce better ones. AI-related issues that bridge between professional practice, regulation and law will likely have snowballing and potentially far-reaching sequelae in the context of the regulatory and legal norms and legislation that apply. Further risks and conundrums then arise. What happens if the AI goes down for forty-eight hours and harm to patients is caused as a direct result? What human clinician backup must be kept in reserve, to take over? All the passport e-gates in the United Kingdom failed for twenty-four hours yesterday, as I write, and caused many hours of delay for travellers, who were likely to be in an angry, holiday-spoiled, litigious mood!
Two final, clown-like and off-the-cuff, Aunt Sally hypotheticals!
Radiology reporting for a hospital is outsourced to a company using proprietary AI software. The company starts to underperform and goes bankrupt. The hospital, like others, has economized on radiologists, and fewer doctors, nationally, have developed such skills. Does the hospital switch to a new AI provider, if there is one, albeit risking that the company, perhaps sensing a monopolistic opportunity, might promptly double its price per report? Does it alternatively attract scarce radiologists to its side by doubling the fee it pays to them? If it finds and signs up to use another proprietary and equally opaquely reasoning AI, what about the coherence of the decision making of the two algorithms, by now persisting into the hospital’s digital care records? Given their opaqueness, how can a case review decide what to do if the two disagree about historic clinical decisions and recommendations? Maybe the AI algorithm used by the now bankrupt former AI provider is no longer accessible for checking. Such investigations of different clinicians’ practice do occur and usually involve scrutiny of care records. By then, maybe other AI chatdocs will be involved in creating these, too. How is the performance of such clinically engaged but opaquely reasoning AI algorithms to be weighed, and who will decide whether they are and remain ‘fit to practise’?
Will AI chatdocs appear to prosecute in ‘courts’ adjudicating damage to humans in the context of medical accidents in this now heavily chatdoc-populated domain? Will they be ‘called’ as ‘expert’ witnesses to ‘give evidence’ in relation to a review of disputed practice and what was and was not done at the time? Will another expert chatdoc be the judge in this court and yet another be defendant, or act for the defence? How many chatdocs will be selected for a panel of ‘expert’ ‘adjudicators’ or members of a jury, and which chatdoc(s) will choose them?
These may be fanciful hypotheticals, but they could be improved on to illustrate more clearly the issues faced when deciding what can and should be safely delegated to the opaquely reasoning AI machine. And how the professionally regulated clinicians, and those who regulate them, should relate to the machine. What should remain wholly in the human domain? Health care interacts all around the circle of knowledge and is based on many checks and balances of discipline, profession, organization, industry and governance. Clinical governance and regulation are complex human concerns.
No one has answers to these sorts of hypothetical questions, and indeed they may not yet be well framed. There are likely no right or wrong answers, just answers adjudged consistent with law and the values and regulations implicit in law. Both the questions and the answers must be learned, as we will surely be presented with such grey-area scenarios and half-and-half boundary zones. It is not enough that the AI works better than humans in a specific domain of competency. What are the consequences for health care and society if human professionals are no longer there, able to do things and to be involved in critiquing how they are done?
Some of the AI of the future could play out in focused and bounded tools that relieve human professionals of much time spent performing well-characterized tasks that the computer can be relied on to perform better. Some could tend towards non-coherent and proprietary machine-devised and enforced decision-making protocols, based on opaque AI-based virtual caricatures of complex human decisions and judgements. Some could delve deep into the human language whereby people express their health needs and concerns, missing or insensitive to nuances of a patient’s usage of terms and non-verbal cues, lacking contextual knowledge about the patient and their home situation, constructing a virtual-reality caricature of the patient and acting accordingly, but actually getting things quite wrong. We need to think carefully about the human relationships and the fallback position if the AI fails. AI methods and their balance with human skills must be very carefully framed and their safe application assured.
Daniel Kahneman, Olivier Sibony and Cass Sunstein’s recent book expressed caution about how AI will play out in society, while at the same time giving much evidence of demonstrably flawed human judgements and how they range far and wide in their knock-on consequences in society.14 Topol’s recent book proposed a bounded and evidenced use of AI as the saviour of health care from ‘Shallow Medicine’, by releasing much time for doctors to reengage more fully with direct care of patients.15
There will be much new learning needed in shaping the beneficial application of AI and adapting our human practice and governance alongside. Once again, a coherent care information utility seems a sine qua non of progress. I find it hard to imagine how something as potentially disruptive as AI could be constrained to evolve safely without a very large amount of coherently structured data on which to develop, test and assure it. One might anticipate that this will take decades to assemble and for the methods to be proved safe and digested into routine practices. It seems inevitable that AI will be widely experimented with, for better or worse, and regulatory ground rules in guiding this will be crucial. It will be by far preferable that testing be conducted in situ, in the context of representative everyday practice and mutually coherent care record systems, rather than in a great number of non-coherent bespoke clinical trials.
As discussed in Chapter Two, formal logic has encountered much difficulty in bridging across defeasible and indefeasible domains in its reasoning about clinical knowledge–about particularities and generalities, as it were. One might reasonably anticipate that different but comparable difficulty may also beset AI methods, as they encounter the same noisy world of clinical appearances, as reported to them in omnuscles of observation and measurement collected from and about patients, in diverse everyday contexts. This will seep and accumulate into their empirically trained neural networks or whatever methodology they use, to hone their skills. The AI will then deploy its resulting virtual skills based on these virtual caricatures of the patients whose data it has been trained with, to make decisions about prospectively encountered real patients, where at one level or another every patient is in a sample space of one.
Clinical science relies on the methodology of randomized clinical trials to temper extraneous variability when tying down cause and effect in relation to clinical interventions. Over four or five recent decades there has been much focus on what became known as evidence-based medicine, which relies heavily on the yardstick of this methodology, albeit that some have argued that routinely collected data can be statistically modelled and analyzed, to achieve comparable reliability in estimation of the cause and effect of interventions.
In Chapter Seven, I traced this movement back to its founder, David Sackett, at McMaster University, whom I used to meet, both there and when he came to establish the field in the UK and lecture at St Bartholomew’s Hospital (Bart’s), at John Dickinson’s invitation. Its motivation and importance are unimpeachable, but how, then, is it that luminary authors like Topol, and the Deloitte team’s report that I also introduced in Chapter Seven, describe so many of today’s medical interventions as ineffective and so much of the money spent on today’s health care as wasted? Presumably they trust the evidence for these assertions, but I am always puzzled how this can be the case, if evidence truly does count in the way claimed for it. The situation they describe might suggest that formal evidence is not as powerful a driver as might be thought, in relation to care quality. For me, the answer to this seeming contradiction seems likely to lie, to a significant extent, in the non-coherent and disjoint scope and quality of information systems in use in health care, in how they have been envisaged and implemented, and in how they perform and are used. I am not so involved nowadays and stand to be corrected in that impression.
We will likely always be faced with coping with health care’s intrinsically defeasible knowledge base and the difficulties in formal reasoning arising from the complexity reflected in the uniqueness of each presenting case, with its different contingencies, in different contexts, at different times. Oppenheimer also focused on this in his discussion of the uniqueness of how general laws play out in particular circumstances, and Bertrand Russell (1872–1970) in saying how all knowledge must be placed in a clearly defined context. Health care connects with multiple other services and the knowledge and skills they, in turn, encompass. In the face of this uniqueness of individual patients and their wide-ranging connections, we must be very careful in testing at each stage, as we open the door and give the floor to my AI PandoraDoc’s paws and flaws. The Whitehead quotation at the head of Part Three of this book captures this reality rather emphatically well!
There is one final half-and-half that seems relevant to highlight here. This is one of both Grand Challenge and wicked problem. Deliberations about health care reinvention and reform often assume the language of ‘Grand Challenge’. And the cap does fit up to a point, especially in the context of science and technology. Taming Big Data and AI and collecting population genomics datasets are major scientific enterprises. But we also need to think in the language of ‘wicked problem’, where everything affects everything else, and all manner of more human factors and uncertainties tend to assert themselves. How will the presence of whole genome sequences in digital care records play out in practice? How will the anonymization of unencrypted data be approached, in the face of the uniqueness of each patient’s genome, assuming, of course, that encryption itself can still be kept secure in the quantum computing world?
The citizen-centred care information utility proposed in Part Three of this book can contribute a great deal to how successfully we tackle the combination of Grand Challenge and wicked problem of tomorrow’s health care. To have a chance of successful implementation, its values, principles, goals and methods need to be centre stage, guiding and communicating endeavours, coherently. And the teams and environments where they come together, iteratively and incrementally, likewise. Otherwise, as in the past, the problems of non-coherence, discontinuity, fragmentation and cost of services will continue.
This book has come full circle–from Shiyali Ramamrita Ranganathan (1892–1972) and the circle of knowledge, with its Grand Challenges and wicked problems, from Localton, to Globalton, from the invention of medicine to its reinvention in health care for the Information Society, from the foundations of knowledge and reason to the Pandora’s box of the James Lovelock Novacene. We have moved from terrains of practical cubits to abstract qubits, and enriched and enhanced our scales of measurement and data to the zepto small and zetta large. Information technology and ambition have expanded our attempted gait, from shoes fit to step a metre to imagined seven-league boots. We exist locally and imagine and project ourselves globally. Ernst Schumacher (1911–77) wrote Small Is Beautiful to warn us about slippery slopes, there.16 We have segued too far from acting locally and thinking globally, to thinking locally and acting globally.
I have gone full circle along my personal songline and am back enjoying the Oxford Physics Alumni Osborne Society, with the time and opportunity to tour the sites and meet the teams at places like the Culham Centre for Fusion Energy, the Harwell Diamond Light Source and the quantum computer laboratories in the Beecroft Centre in Parks Road. The artful design of this new centre is an architectural reminder of the interplay of theoretical musing and practical experiment. The labs must have extremely tiny levels of vibration as the entangled qubits get motion sickness! In order to work, they need peace and quiet, like any thinking brain, and are thus located deep underground. The building flows upwards through multiple levels, interconnected through central open wooden stairways within an atrium. The theorists and luminary sages of today and yesteryear live in the upper levels towards the clouds, where they, too, find their peace. And of course, there is an information network connecting throughout, communicating over the heights and depths. The building is an architectural parable of form and function, with the civil engineer’s knowledge of foundations and structures and their vibrations underpinning it all. It is an ongoing story of people and computing machines, and their goals and capabilities. And it is advancing towards the Novacene–we do not know where to, but somewhere. Will the machines break from the shackles of NP-completeness? Will they break the security of today’s data transmission? Will they render humanly intractable wicked problems, machine intelligently tractable? Zobaczymy [we will see]!17
The parallel of all this with the atomic physics of the 1930s and Los Alamos and nuclear weapons is sobering. Did we need to learn how to make and deliver these weapons, and did a Hiroshima inevitably happen, before sentiments to contain dangerous political adventurism asserted themselves in treaties and cooperation, much weakened though those now appear to be? Does the marrying of theory and engineering of quantum computation carry Los Alamos-like risk that we should be preparing for? John Houghton, in a pessimistic remark regarding climate change, wrote that humankind only takes issues seriously after a major disaster. Will the Covid-19 pandemic prove to have been such an event? Will AI?
Advanced technology advances the cost and impact of mistakes and the difficulty of containing and reversing them. One difference, today, is in the power and footprint of international corporations–Google, Microsoft, IBM, Meta, Amazon, Alibaba, Huawei, Twitter. These have immense and beguiling clout. They can commandeer and cultivate talent, dominate and sequester markets and revenues, and innovate within the private realm to defend and secure these positions. They can outspend governments in pursuit of transforming information technology. They can do it more efficiently because they are able and free to establish an organic and supported culture of talented people, not unduly influenced or constrained by the wider world around them. But at the same time, they are legally obligated to commercial and not social ends. Perhaps, and more hopefully, ESG (Environment, Social and Governance) awareness, born of VUCA (Volatility, Uncertainty, Complexity, Ambiguity) experience, will change that, as Gillian Tett surmises and hopes it will.18
A deep commercial enclosure of knowledge might have happened in molecular genetics were it not for its pioneers, notably the luminaries–Fred Sanger, Sydney Brenner, Max Perutz, John Sulston, Francis Crick, James Watson, Paul Nurse, Janet Thornton and more–who cracked and shared the codes and the methods to exploit them. Of equal importance was the philanthropy and dedication of such as the Wellcome Trust, in helping to see off the Craig Venter ambition to patent and enclose the human genome for commercial exploitation. Thank goodness for the balancing of private wealth and collaborative social endeavour that people like Bill and Melinda Gates have seeded in their Foundation.
We need to remember these stories and seek to better understand and support the information commons, assert the value and values of its communal ownership, and provide for its continuing and collaborative improvement and enjoyment. This is a central task of the second half of the information revolution, and we are halfway there. It is about caring for the health of the information world. Good information is not cheap or easy to acquire and will have diseases–fakery and falsity, overload and obscurity; these are givens in any times. We are all prone to them; they are malignant mutations in our genome and destructive memes in our minds. We can all do better in our personal second half and improve and build on what has been achieved in the first. Only when we have better understood the first half of the Information Age can we build forward and safely confront the challenge of reinventing health care services for the future. To do this we must learn, iteratively and incrementally, the new balance, continuity and governance they require and how this can and must be supported by a new citizen-centred care information utility, matching to and evolving with the Information Society of the future.
In this, we must not dream of any coming Utopia; Gulliver discovered its wicked features. As in A Tale of Two Cities, the best and worst will sometimes occur simultaneously. Human problems will remain wicked, the solving of one leading to another. Our challenge is to stop them from becoming an ever greater and more complex danger to human wellbeing and survival. Good ideas can create global utility and bad ideas can unleash global disaster. Good ideas start locally. The little and local and the big and global must be made to balance, over time. In locally centred contexts, crashes are mostly local, too. In globally centred contexts, crashes are global, too. Big Data is not a panacea. An idea that scales ‘bigly’ from the little, becoming global and destroying the local that it grew from, is not a good idea for the future. The wake-up call of the 2020s is to use the local or lose the local, and thereby lose the global as well. AI must prove its potential and its fit for human needs. Care information utility is an idea that will depend on its participants feeling part of it and playing, and being enabled to play, their part.
And politics as ever poses its own wicked dilemma, intrinsic since the Greek demos and polis. It alone cannot resolve wicked problems, and those problems cannot be resolved without it. We thus face a choice about where to work and what to do. In health care we must focus on making wholes from its half-and-half components, in concrete and useful ways, recognizing the paradox that one writes the script of the other. Health writes the script of care and vice-versa. Independence writes that of dependence and also vice versa. But the one common language of health care is a half-and-half, too. It is human contact and information, joining the two halves of every citizen’s lifespan and lifestyle and of their carers’ everyday lives.
Is the idea of care information utility an example of Yuval Noah Harari’s dataism? We must work to create it, and thereby show and engender trust, that it is not! Things don’t need to progress as he fears. We can look back in anger or look forward with a mixture of audacious hope and pessimism–those, too, are half and whole. We must do our bit and stay the course. That is something we can all make and do. In creating the care information utility, we can all aspire to be one of Elena Rodriguez-Falcon’s ‘ingeniators’, reflecting ingenuity, imagination and proficiency as a community of openCarers, bringing health care to a safe landing in the Information Society of the future.
1 Laurie Lee, dedication of his book Cider with Rosie (London: Penguin Books, 1959).
2 L. Lee, Down in the Valley: A Writer’s Landscape (London: Penguin Books, 2019), p. 55.
3 Bon Jovi, ‘Livin’ on a Prayer’, Slippery When Wet (1986).
4 On Globalton and Localton, see Chapter Seven.
5 J. R. Oppenheimer, Science and the Common Understanding (Oxford: Oxford University Press, 1954).
7 J. A. Wheeler, ‘Information, Physics, Quantum: The Search for Links’, in Feynman and Computation, ed. by A. Hey (Boca Raton, FL: CRC Press, 2018), pp. 309–36, https://doi.org/10.1201/9780429500459-19
8 M. King, The End of Alchemy: Money, Banking and the Future of the Global Economy (New York: W. W. Norton and Company, 2016).
9 N. Davies, Europe: A History (Oxford: Oxford University Press, 1996).
10 A. N. Whitehead, Symbolism, its Meaning and Effect (New York: Macmillan, 1927), p. 88.
11 E. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (London: Hachette, 2019).
12 J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (Harmondsworth: Penguin Books, 1993), p. 209.
13 C. Leibig, M. Brehmer, S. Bunk, D. Byng, K. Pinker and L. Umutlu, ‘Combining the Strengths of Radiologists and AI for Breast Cancer Screening: A Retrospective Analysis’, The Lancet Digital Health, 4.7 (2022), e507–19.
14 D. Kahneman, O. Sibony and C. R. Sunstein, Noise: A Flaw in Human Judgment (New York: Little, Brown Spark, 2021).
15 Topol, Deep Medicine.
16 E. F. Schumacher, Small Is Beautiful: A Study of Economics as if People Mattered (London: Abacus, 1973).
17 On this Polish expression, see Preface.
18 G. Tett, Anthro-Vision: A New Way to See in Business and Life (New York: Simon and Schuster, 2021).