It took me almost a decade to get diagnosed.
It took ChatGPT ten seconds.
I fed my symptoms into an AI — just as a test, years after I'd already done the work myself — and watched it arrive at POTS in seconds. The diagnosis that thirty-three doctors missed. The condition I'd had to teach myself medicine to identify. Ten seconds.
I stared at the screen and felt two things at once. Relief that this tool exists for the next person. And grief for the decade I lost because it didn't exist for me.
That moment gave me a framework I've been thinking about ever since. Machines handle information. Humans handle transformation.
AI can pattern-match across millions of data points faster than any doctor. It can cross-reference symptoms, flag rare conditions, catch the thing that a tired physician in minute fourteen of a fifteen-minute appointment might miss. This is real. This saves lives. I'm not arguing against it.
But here's what AI cannot do.
It cannot understand what "I'm fine" actually means when a patient says it while gripping the sides of the chair. It cannot hold space for grief. It cannot recognise that a patient's "non-compliance" with treatment is because they can't afford the medication, or because the side effects make them unable to care for their child, or because taking the pills every day is a reminder that their body has failed them.
AI sees the data. Humans see the person inside the data.
A machine can tell you what's wrong with your heart. A human can sit with you while you process what that means for your life.
A machine can recommend the optimal treatment protocol. A human can understand why you won't follow it and help you find the version you actually can.
A machine can flag that a patient's vitals are declining. A human can notice that a patient's hope is declining, which often happens first.
The question we should be asking about AI in healthcare is not whether it can replace doctors. It can already replace parts of what doctors do: the information parts. The diagnosis, the pattern recognition, the data analysis.
The question is: what should only humans do?
Hold grief. Create meaning. Understand context that doesn't fit in a data field. Notice the thing that nobody reported because they didn't have language for it. Sit with uncertainty without rushing to resolve it.
Those are the human tasks. And they're the ones the system is already failing at — not because of technology gaps, but because we've built a system that treats medicine as information delivery rather than human transformation.
AI won't fix that. Only humans can. But AI can free up enough time and cognitive load that humans might finally have space to do the human work.
That's the real promise. Not that machines replace us. That machines handle the information so we can handle each other.