Is It Safe to Use AI Chatbots to Make Health Care Decisions?
May 08, 2026
Key Takeaways
- AI chatbots provide health information quickly, but their confident answers can be misleading.
- To use AI safely when making health decisions, always verify clinical information with a doctor and avoid sharing personal health information.
- AI can support healthcare decisions, but it should never replace the experience and judgment of a trusted healthcare provider.
According to OpenAI, health and wellness questions are among the most common people ask ChatGPT. The popularity of using AI chatbots for health-related information isn’t surprising. People have been asking “Dr. Google” medical questions for decades.
But AI chatbots work differently from traditional internet search engines. Instead of giving you a list of links, a chatbot quickly delivers a single answer. That answer can feel personalized, as if it were created just for you.
“The speed and confidence of AI chatbots can be deceptive and have serious health consequences,” says Jennifer Goldman, DO, family medicine physician and Chief Medical Information Officer at Memorial Healthcare System. “Despite the risks, AI is here to stay and will continue to grow. We need to find safe ways to use it.”
Concerns About Using AI for Health Information
AI has evolved at record speed, faster than the government’s ability to set rules for its use. “If you’re turning to AI for health reasons, it’s important to understand its limitations,” says Dr. Goldman.
Inaccurate and Incomplete Information
Accuracy has long been a concern with AI. Chatbots are trained on text drawn from across the internet, where misinformation is common.
A 2026 study evaluated answers from five popular chatbots to questions about cancer, vaccines, stem cells, nutrition and athletic performance. Researchers rated nearly half of the answers as “problematic,” meaning they were either inaccurate or missing important details.
One error noted in the study was listing alternative cancer treatments as legitimate. Mistakes like this can lead people to make harmful decisions.
Overconfidence
AI systems are designed to sound confident and authoritative. However, their responses often lack explanations or precautions. Without these warnings, users may assume the information is completely accurate.
This perceived credibility may explain why some people turn to chatbots. If you’ve struggled to get answers from a healthcare provider, a confident-sounding response can feel reassuring.
“It’s important to remember that medicine is both a science and an art,” says Dr. Goldman. “We look at your medical history, family history, physical exam and test results through the lens of training and experience. We move carefully because health conditions don’t always follow the textbook.”
Chatbots don’t understand these nuances or know your full story. With limited information about you and no real-world experience, they can easily miss crucial details.
Lack of Privacy
Information you share with a chatbot is not protected in the same way it is in your provider’s electronic medical record system. Under the Health Insurance Portability and Accountability Act (HIPAA), healthcare systems must keep your medical information private. Chatbots are not HIPAA-protected and may use what you share to improve their systems.
Even so, a KFF poll found that many people upload their personal health data to AI tools. Among those who use chatbots for health purposes, 40 percent have uploaded doctors’ notes, test results or other sensitive information.
How to Use AI to Your Advantage
When used carefully and without sharing personal information, AI chatbots can be a helpful way to gather general health information.
One benefit is how easy they are to interact with. If you don’t understand a response, Dr. Goldman advises asking follow-up questions. You can also guide the chatbot by:
- Asking it to take on the persona of a clinician
- Requesting answers at a fifth-grade reading level
How you ask a question also matters. Posing one question at a time can help keep answers focused. Specific questions can lead to clearer and more reliable responses than open-ended ones.
Verify AI Results With Your Provider
In her practice, Dr. Goldman welcomes conversations about AI use. The greater risk, she says, is relying on AI when you don’t have a primary care provider or don’t feel comfortable discussing what you’ve learned.
Results from the KFF poll show that many people who consult with AI do not follow up with a healthcare provider, including:
- 58 percent of those asking about mental health
- 42 percent of those asking about physical health
“Healthcare hasn’t always provided patients with quick answers,” says Dr. Goldman. “That’s something AI does well, and it’s changing expectations. Ironically, being open to discussing AI has helped me create more meaningful conversations with my patients.”
Always bring clinical questions to your doctor, however, to ensure you’re getting an accurate answer that’s personalized to you.
Healthcare Providers Are Using AI Too
As the Chief Medical Information Officer, Dr. Goldman is helping introduce AI tools across Memorial Healthcare System, including:
- Scribe: Writing visit notes takes time. This tool listens during appointments and drafts a summary of the discussion and next steps. The provider reviews and finalizes the note after the visit, saving time on documentation.
- Text assistant: Medical notes often include complex language. This tool simplifies the wording. “Memorial patients can view all of their notes,” says Dr. Goldman. “When patients understand them, they’re more likely to follow the plan we’ve created together.”
- Medical record summary tool: Reviewing every document in a medical record can take hours. This tool pulls together key information, helping providers prepare more efficiently before meeting with patients.
“When providers ask how they can use AI effectively, I give them the same advice I give my patients,” says Dr. Goldman. “AI doesn’t replace human judgment. But when used thoughtfully, it can help analyze information, save time and support better care.”
Medically Reviewed by Jennifer Goldman, DO. This content has been medically reviewed to ensure clinical accuracy.