Can You Rely on AI to Answer Questions About Cancer?
Posted August 24, 2023

AI might not always be your most accurate source of health information, especially when it comes to cancer care, new research finds.

Two new studies assessed the quality of responses offered by AI chatbots to a variety of questions about cancer care.

One, published Aug. 24 in JAMA Oncology, zeroed in on the full-sentence conversational AI service known as ChatGPT, which launched to great fanfare in November 2022.

The upside: About two-thirds of cancer information offered by ChatGPT accurately matched current guidelines from the U.S. National Comprehensive Cancer Network.

The downside: The rest did not.

"Some recommendations were clearly completely incorrect,"said study author Dr. Danielle Bitterman, an assistant professor of radiation oncology at the Brigham and Women's Hospital/Dana-Farber Cancer Institute and at Harvard Medical School in Boston. "For example, instances where curative treatment was recommended for an incurable diagnosis."

Other times, the errors were subtler: a response might include some, but not all, parts of a treatment regimen, such as recommending surgery alone when standard treatment also includes radiotherapy and/or chemotherapy, Bitterman said.

That's concerning, she said, given the degree to which "incorrect information was mixed in with correct information, which made it especially difficult to detect errors even for experts."

A second study in the same journal issue offered a much rosier assessment of AI accuracy.

In this instance, investigators looked at answers from four chatbot services: ChatGPT, Perplexity, Chatsonic and Microsoft's Bing. Each was prompted to discuss skin, lung, breast, prostate and/or colon cancer.

Researchers judged the quality and accuracy of the responses as "good."

But, they said, that doesn't necessarily mean that patients will find the AI experience useful. That's because much of the information provided was too complex for most people without medical training.

At the same time, every response came with a blanket warning that patients should not make any health care decisions based on the information provided without first consulting a doctor.

The key takeaway: Many AI users are likely to find that chatbot-generated medical information is incomprehensible, impractical or both.

"The results were encouraging in the sense that there was very little misinformation, because that was our biggest concern going in,"said study author Dr. Abdo Kabarriti, chief of urology at South Brooklyn Health in New York City.

"But a lot of the information, while accurate, was not in layman's terms,"he added.

Basically, the chatbots provided information at a college reading level, while the average consumer reads at roughly a sixth-grade level, Kabarriti said.

The AI information Kabarriti's team received was, he said, "way beyond that."

Another factor that will likely frustrate many patients is that AI won't tell you what to do about the cancer symptoms it outlines, Kabarriti said.

"It will just say 'consult your physician,'" he said. "Perhaps there's a liability issue. But the point is that AI chats do not replace the interaction that patients will need to have with their physicians."

Dr. Atul Butte, chief data scientist with the University of California Health System, wrote an accompanying editorial.

Despite concerns raised by both studies, he views AI "as a huge net plus" for patients and the medical community as a whole.

"I believe the glass is already more than half-full," Butte said, noting that over time the information provided by chatbots will inevitably become more and more accurate and accessible.

Already, said Butte, some studies have shown that AI has the potential to offer better advice and even more empathy than medical professionals.

His take: Over time, AI chatbots are going to play an ever more critical role in the delivery of medical information and care. For many patients, the benefit will be tangible, Butte predicted.

Few patients have the resources or privilege to go to the world's best medical centers, he noted.

"But imagine if we could train artificial intelligence on the data and practices from those top places, and then deliver that knowledge through digital tools across the world," either to patients through apps or to doctors through electronic health record systems, Butte said.

"That's why I'm starting to call artificial intelligence 'scalable privilege,'" he added. "[It's] our best way to scale that privileged medical care, that [only] some are able to get, to all."

More information

The U.S. National Institute of Biomedical Imaging and Bioengineering has an AI overview.

SOURCES: Danielle Bitterman, MD, assistant professor, radiation oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute and Harvard Medical School, Boston; Abdo Kabarriti, MD, chief, urology, South Brooklyn Health, New York City; Atul Butte, MD, PhD, chief data scientist, University of California Health System, and professor and inaugural director, Bakar Computational Health Sciences Institute, UC San Francisco; JAMA Oncology, Aug. 24, 2023

HealthDay
Health News is provided as a service to Juro's Pharmacy Health & Wellness site users by HealthDay. Neither Juro's Pharmacy Health & Wellness nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2024 HealthDay All Rights Reserved.