A viral post on Reddit is reigniting debate around the role of artificial intelligence (AI) chatbots in medical advice after a user claimed that ChatGPT helped diagnose a condition that had eluded doctors for over a decade.
According to the post, which was shared recently on X by OpenAI President Greg Brockman, the user experienced unexplained symptoms for more than 10 years, undergoing spinal MRIs, CT scans, blood tests, and even checks for Lyme disease — all to no avail.
After the user entered lab results and symptom history into ChatGPT, the AI flagged a potential connection to the A1298C mutation in the MTHFR gene. A physician later confirmed the diagnosis, and B12 supplementation “largely resolved” the symptoms.
The doctor was “super shocked” to learn that ChatGPT had correctly identified the condition. “Not sure how they didn’t think to test me for MTHFR mutation,” the post said.
About 1 in 6 adults ask AI chatbots for health information and advice at least once a month, according to the KFF Health Misinformation Tracking Poll. Among adults aged 18 to 29, the share rises to 25%, followed by 19% of those aged 30 to 49, 15% of those 50 to 64, and 10% of those 65 and older.
Trusting that information is another matter: 56% of those who use AI are not confident in its accuracy. Adults under age 50, as well as Black and Hispanic adults, tend to trust the data more than older white respondents.
Kim Rippy, practice owner and licensed counselor at Keystone Therapy Group, told PYMNTS that clients have used ChatGPT or AI for “substitute therapy,” which is “both helpful and dangerous at the same time.”
ChatGPT can help people with ADHD summarize or organize their thoughts, Rippy said. “You can ‘thought dump’ into the system and the AI program can return your thoughts to you in a clear, succinct format. This can help you better understand your own thoughts and potentially improve your ability to communicate.”
AI Chatbots Can Miss Nuances
But the danger is that ChatGPT can never fully understand the patient’s experience and “can’t pick up on nuances of language, behaviors, nonverbals, tone, syntax and emotion that a human therapist can,” Rippy said.
“ChatGPT can’t challenge unhealthy cognitions, or even pick up on when those may be occurring for someone. ChatGPT can’t gauge when someone is at-risk and may push someone past their ability to safely regulate themselves. … [It] also doesn’t hold people accountable.”
In the end, AI chatbots should be “recognized as a coping tool for organizing thoughts, just as journaling or meditation can,” Rippy said.
A survey on AI and mental health by Iris Telehealth, shared with PYMNTS, showed that 65% of patients feel comfortable using AI assessment tools and chatbots before speaking with a human provider. But 70% worry about the privacy and security of their data, and 55% question the accuracy of the chatbot’s assessment of their condition.
Dr. Angela Downey, a family physician, told PYMNTS that AI can be helpful in guiding people toward possible diagnoses, especially if they’ve felt “dismissed or overlooked” in the past. These chatbots work around the clock and process a lot of information quickly.
“But there are limits,” Downey said. “AI can’t examine you or pick up on subtle cues, and it can delay proper care if taken as a substitute for medical advice. It can offer a list of possibilities, but you still need a trained clinician to put the full picture together.”
But for Gil Spencer, CTO of WitnessAI, an AI chatbot proved to be a lifesaver.
He told PYMNTS that he had injured his knee skiing and that radiologists’ readings of his MRI scans were inconclusive. So he turned to ChatGPT, uploading the scans through a multimodal prompt workflow he had created. The AI correctly diagnosed a major meniscus tear and confirmed his ACL was intact. His surgeon later validated the diagnosis.
Source: https://www.pymnts.com/