Healthcare is the next big market for AI agents and LLMs. That became clear a few weeks ago when OpenAI announced the release of ChatGPT Health, followed just days later by Anthropic’s announcement of Claude for Healthcare. So what should you expect from a health AI? Is it safe to use, and is your data at risk?
How health AIs can help
Healthcare AI can be broadly divided into three types:
- LLM-based health assistants, such as the new ChatGPT Health
- Agentic AI systems designed to help you manage a diagnosed condition
- AI decision-support systems designed to support health professionals
Let’s look at some examples of how each type is used.
LLM AI health assistants
Large language models and intelligent personal assistants have come a long way since OpenAI released ChatGPT just a few years ago. We now expect these systems to apply human-like reasoning, to have up-to-date information, and to answer almost any question we put to them. So it’s no surprise that many people already use them for health-related questions. ChatGPT Health takes that a step further. It will be able to draw on your own medical records, along with knowledge of your lifestyle and insights from any fitness apps you use. It will be able to offer guidance on health matters but, importantly, it cannot recommend treatments or make diagnoses.
AI support agents
Companies like Mama Health are tackling a different problem. Many people with chronic diseases feel lonely and unheard, and may not be aware of the latest advice for managing their conditions. Mama Health collects a detailed history of your condition, your treatment, and how you feel about it. That history then feeds an agentic AI system that aims to become your trusted advisor on managing your condition.
AI decision support systems
We often expect doctors to have almost superhuman levels of skill and knowledge, and to be completely up to date with the latest developments in their field. The reality, of course, is that doctors are only human. They can and do make mistakes, and they may not have all the answers. AI decision support systems aim to help them make better decisions based on the patient’s record and a wide knowledge of the latest literature. They may also help identify potentially significant patterns in a patient’s observations or scans.
Is it safe to trust AI with our health?
Many people are worried about how rapidly AI is taking over our lives. Teachers and college professors are having to rethink how they grade students because of AI. Many of us now rely on AI summaries instead of reading original sources when researching a topic. Lawyers have even been caught using AI to write court submissions. So what are the risks of using AI in healthcare?
Hallucinations
LLMs have become infamous for their ability to hallucinate: in effect, they confidently tell us things that are not true. Some of the best-known cases involve lawyers who relied on made-up judgments in their submissions to court. So what happens when an AI system invents a non-existent condition? Or worse, promotes an unproven cure or treatment?
HIPAA and data issues
One challenge doctors face is taking a holistic approach to their patients: they need to understand as much as possible about the person in front of them. But in the US, health data is often extremely fragmented or incomplete. HIPAA deliberately makes it hard to share identifiable health data, and even when data is shared, it is often only a limited summary. This creates a challenge for AI. A model is only ever as good as the data it has, and no healthcare AI today can get a current, complete picture of your health.
Trying to please the user
If you are familiar with ChatGPT, Grok, or Meta AI, you may have noticed that these systems often flatter the user. This eagerness to please can lead an AI to soften or distort its answers to avoid upsetting you. Imagine how much worse that gets when the AI is dealing with something as sensitive as your health!
What will happen to our health data?
Health data is among the most private and valuable data there is. Many security experts will tell you that a leaked health record is worth many times more than a leaked credit card number, because it can be used for identity theft or large-scale medical insurance fraud. Now companies like OpenAI are asking us to entrust our health data to them. In OpenAI’s case, the company says the data will be stored in a completely separate system and will not be used to train its AI models. Even so, we are handing an ever greater amount of our data to private companies.
Other ways AI is impacting healthcare
Of course, there are many other ways in which AI will have a major impact on our future healthcare.
Identifying new medicines
Increasingly, AI is being used in pharmaceutical research to identify compounds of interest. For instance, existing medicines often have secondary effects that turn out to be more useful than the condition they were originally developed to treat. Spotting candidates for this kind of repurposing requires processing huge amounts of health data to find patterns and correlations. AI can also identify promising new compounds for drug development.
Protein folding
Protein folding is a crucial part of modern medicine. Proteins are long chain molecules that fold into unique three-dimensional shapes, and those shapes determine what job a protein can do in your body, such as fighting off infections or binding to receptors in your organs. Scientists used to spend years working out these complex structures. Now, AI systems such as AlphaFold can predict a protein’s likely 3D shape in minutes. This matters enormously for understanding diseases caused by small errors in how proteins fold.
Looking ahead
AI is clearly here to stay, and it will become an ever more important part of our everyday lives. Healthcare AI could revolutionize how we treat people by letting doctors provide truly personalized care, identifying potential illnesses before they become symptomatic, and dramatically speeding up the development of new drugs. However, we are still some way from that future, and many of the recent announcements are not going to deliver this golden age any time soon.
