# “Better than waiting half a year”: medical professionals support the launch of ChatGPT Health
Medical experts have backed the release of ChatGPT Health for health consultations despite the risk of hallucinations in neural networks, TechCrunch reports.
Sina Bari, a practicing surgeon and head of AI at iMerit, shared how his patient consulted with ChatGPT:
“Recently, he came to me after I had recommended a medication and showed me a printout of his conversation with the chatbot. It claimed the drug carries a 45% risk of pulmonary embolism.”
Dr. Bari checked the sources and found that the statistic came from an article about the drug’s effects in a narrow subgroup of patients with tuberculosis; it did not apply to his patient’s case.
Despite the inaccuracy, the doctor welcomed the launch of ChatGPT Health. In his view, the service gives users a more private setting in which to discuss health issues.
“I think it’s great. This is already happening, so formalizing the process to protect patient information and adding some safety measures will make it more effective for patients,” said Dr. Bari.
Users can get more personalized recommendations from ChatGPT Health by uploading medical records and syncing the app with Apple Health and MyFitnessPal. Such deep access to personal data has raised privacy concerns.
“Suddenly, medical data is moving from HIPAA-compliant organizations to providers that are not. It will be interesting to see how regulators respond,” noted Ittay Schwarz, co-founder of MIND.
Over 230 million people discuss their health with ChatGPT every week. Many have stopped googling their symptoms, turning to the chatbot as their source of information instead.
“This is one of the largest use cases for ChatGPT. So it makes sense that they want to create a more private, secure, and optimized version of the chatbot for healthcare questions,” Schwarz added.
## The Hallucination Problem
The main issue with chatbots remains “hallucinations,” a risk that is especially critical in healthcare. A Vectara study showed that OpenAI’s GPT-5 “hallucinates” more often than competing models from Google and Anthropic.
However, Stanford University medical professor Nigam Shah considers these concerns secondary. In his view, the real problem is the healthcare system’s poor access to doctors, not the risk of getting incorrect advice from ChatGPT.
“Right now, if you contact any healthcare system and want to see a primary care doctor, you will have to wait three to six months. If you have a choice between waiting half a year to see a real specialist and talking immediately to someone who can do something for you, which would you choose?” he said.
Administrative tasks can consume about half of a doctor’s time, significantly reducing the number of appointments they can take. Automating these processes would let specialists devote more attention to patients.
Dr. Shah leads a team at Stanford developing ChatEHR, software that helps doctors work efficiently with electronic health records.
“Making it more user-friendly means doctors spend less time searching for the information they need,” said Dr. Sneha Jain, one of ChatEHR’s first testers.
Earlier, in January, Anthropic announced Claude for Healthcare, a set of tools for healthcare providers and patients.