AI-powered personalised medicine could revolutionise healthcare (and no, we’re not putting ChatGPT in charge)
From the soaring costs of US healthcare to the recurrent NHS crisis, it can often seem that effective and affordable healthcare is impossible. This will only get worse as chronic conditions grow in prevalence and we discover new ways to treat previously fatal diseases. These new treatments tend to be costly, while new approaches can be hard to introduce into healthcare systems that are either resistant to change or fatigued by too much of it. Meanwhile, growing demand for social care is compounding funding pressure and making the allocation of resources even more complicated.
Artificial intelligence (AI) is often glibly posed as the answer for services that are already forced to do more with less. Yet the idea that intelligent computers could simply replace humans in medicine is a fantasy. AI tends not to work well in the real world, where complexity proves an obstacle. So far, AI technologies have had little impact on the messy, inherently human world of medicine. But what if AI tools were designed specifically for real-world medicine – with all its organisational, scientific and economic complexity?
This “reality-centric” approach to AI is the focus of the lab I lead at Cambridge University. Working closely with clinicians and hospitals, we develop AI tools for researchers, doctors, nurses and patients. People often think the principal opportunities for AI in healthcare lie in analysing images, such as MRI scans, or finding new drug compounds, but the opportunities extend well beyond these. One of the things our lab studies is personalised, or precision, medicine. Rather than one-size-fits-all, we look at how treatments can be tailored to reflect an individual’s unique medical and lifestyle profile.
Using AI-powered personalised medicine could allow for more effective treatment of common conditions such as heart disease and cancer, or rare diseases such as cystic fibrosis. It could allow clinicians to optimise the timing and dosage of medication for individual patients, or screen patients using their individual health profiles, rather than the current blanket criteria of age and sex. This personalised approach could lead to earlier diagnosis, prevention and better treatment, saving lives and making better use of resources.
Many of these same techniques can be applied in clinical trials. Trials sometimes falter because the average response to a drug fails to meet the trial’s targets. If some people on the trial responded well to treatment, though, AI could help to find those groups within the existing trial data. Creating data models of individual patients, or “digital twins”, could allow researchers to conduct preliminary trials before embarking on an expensive one involving real people. This would reduce the time and investment it takes to create a drug, making more life-enhancing interventions commercially viable and allowing treatments to be targeted at those they will help the most.
In a complex organisation such as the NHS, AI could help to allocate resources efficiently. During Covid, our lab created a tool to help clinicians predict demand for ventilators and ICU beds; the same approach could be extended across the health service to allocate healthcare staff and equipment. AI technologies could also support doctors, nurses and other health professionals to improve their knowledge and combine their expertise, and they could help with conundrums such as patient privacy. The latest AI techniques can create what is called “synthetic data”, which mirrors the patterns within real patient data while replacing all identifiable information, allowing clinicians to draw insights from it without compromising privacy.
Clinicians and AI specialists are already considering the potential of large language models such as ChatGPT for healthcare. These tools could help with the paperwork burden, recommend drug-trial protocols or propose diagnoses. But although they have immense potential, the risks and challenges are clear. We can’t rely on a system that regularly fabricates information, or that is trained on biased data. ChatGPT is not capable of understanding complex conditions and nuances, which could lead to misinterpretations or inappropriate recommendations. It could have disastrous implications if it were used in fields such as mental health.
If AI is used to diagnose someone and gets it wrong, it needs to be clear who is responsible: the AI developers or the healthcare professionals who use it. Ethical guidelines and regulations have yet to catch up with these technologies. We need to address the safety issues around using large language models with real patients, and ensure that AI is developed and deployed responsibly. To that end, our lab is working closely with clinicians to make sure that models are trained on accurate and unbiased data. We’re developing new ways to validate AI systems so that they are safe, reliable and effective, and techniques to ensure that the predictions and recommendations generated by AI can be explained to clinicians and patients.
We must not lose sight of the transformative potential of this technology. We need to make sure that we design and build AI to help healthcare professionals be better at what they do. This is part of what I call the human-AI empowerment agenda – using AI to empower humans, not to replace them. The aim should not be to construct autonomous agents that mimic and supplant humans, but to develop machine learning that allows humans to improve their cognitive and introspective abilities, enabling them to become better learners and decision-makers.
Mihaela van der Schaar is the John Humphrey Plummer professor of machine learning, AI and medicine, and director of the Cambridge Centre for AI in Medicine at the University of Cambridge