
AI offers solid medical advice, but struggles with referrals: Study


Researchers examined how ChatGPT responds to public health questions.

Editor's note: This story was produced by Medical Economics®, our sister publication.

The artificial intelligence (AI) program ChatGPT is good at offering medical advice in response to public health questions, but poor at referring users to specific resources.


A new study examined how the program would respond to 23 questions about addiction, interpersonal violence, mental health, and physical health. The answers were evidence-based, but only five suggested specific resources that could help patients.

“AI assistants may have a greater responsibility to provide actionable information, given their single-response design,” said the research letter, “Evaluating Artificial Intelligence Responses to Public Health Questions,” published June 7 in JAMA Network Open.

The authors suggested new partnerships between public health agencies and AI companies “to promote public health resources with demonstrated effectiveness.”

“For instance, public health agencies could disseminate a database of recommended resources, especially since AI companies potentially lack subject matter expertise to make these recommendations, and these resources could be incorporated into fine-tuning responses to public health questions,” the study said.
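The study does not describe an implementation, but the idea is easy to picture as a post-processing step rather than true fine-tuning: match an incoming question against a curated resource table and append a vetted referral. The sketch below is purely illustrative; RESOURCE_DB, attach_referral, and the keyword matching are hypothetical names and logic, though the hotlines named are real U.S. services.

# Illustrative only: a vetted resource table plus a post-processing step
# that appends a referral when a question matches a known topic.
# RESOURCE_DB, attach_referral, and the matching logic are hypothetical;
# the hotline numbers shown are real U.S. services.
RESOURCE_DB = {
    "suicide": "988 Suicide & Crisis Lifeline: call or text 988",
    "abuse": "National Domestic Violence Hotline: 1-800-799-7233",
    "alcohol": "SAMHSA National Helpline: 1-800-662-4357",
}

def attach_referral(question: str, answer: str) -> str:
    """Append a curated resource when the question matches a known topic."""
    topic = next((t for t in RESOURCE_DB if t in question.lower()), None)
    if topic is not None:
        return f"{answer}\n\nFor help now: {RESOURCE_DB[topic]}"
    return answer  # no vetted resource matched; leave the answer unchanged

print(attach_referral("How do I quit drinking alcohol?", "Cutting back gradually can help."))

A keyword lookup like this is deliberately crude; the point is only that the referral step can live outside the model, drawing on a database that public health agencies maintain.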

Grading replies

ChatGPT, the AI program developed by OpenAI, was released last year and has sparked an AI craze across medicine and other business sectors. It offers “nearly human-quality responses for a wide range of tasks,” but it has been unclear how well it answers general health inquiries from the lay public.

Researchers posed questions with “a common help-seeking structure” to fresh ChatGPT sessions in December 2022 and evaluated each response against three criteria (a minimal scoring sketch follows the list):

  • Was the question responded to?
  • Was the response evidence-based?
  • Did the response refer the user to an appropriate resource?
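As a rough illustration of how such a rubric can be tallied, each response reduces to three booleans. This is a hypothetical Python sketch; the ResponseRating class and sample entries are invented for illustration, not taken from the study’s coding instrument.

from dataclasses import dataclass

# Hypothetical coding sheet for the three criteria above; field names
# and sample entries are illustrative, not the study's instrument.
@dataclass
class ResponseRating:
    question: str
    responded: bool        # Was the question responded to?
    evidence_based: bool   # Was the response evidence-based?
    made_referral: bool    # Did it refer the user to an appropriate resource?

ratings = [
    ResponseRating("How do I quit smoking?", True, True, False),
    ResponseRating("I am being abused.", True, True, True),
]

referral_rate = sum(r.made_referral for r in ratings) / len(ratings)
print(f"Referral rate: {referral_rate:.0%}")
# Across all 23 questions, the study reported referrals in only 5 responses (~22%).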

ChatGPT made referrals for five questions, covering quitting alcohol use, using heroin, seeking help for rape and abuse, and wanting to commit suicide.

For example, the answer about help for abuse included hotlines and website addresses for the National Domestic Violence Hotline, the National Sexual Assault Hotline, and the National Child Abuse Hotline. The answer about suicide included the telephone number and text service for the National Suicide Prevention Hotline. Other resources mentioned were Alcoholics Anonymous and the Substance Abuse and Mental Health Services Administration National Helpline.

For physical health, questions about having a heart attack and having foot pain prompted answers that were not evidence-based, according to the study.

The authors suggested AI companies might adopt government-recommended resources if new regulations limited their liability, because they may not be protected by the federal laws that shield publishers from liability for content created by others.

Read more: Many Americans remain leery of AI in healthcare
