ChatGPT performs well as a partner in patient diagnosis

Mondo Health Updated on 2024-01-29

A recent study shows that doctors are usually good decision-makers, but even the brightest doctors may benefit greatly from the diagnostic help provided by ChatGPT.

The main benefit comes from a thought process known as "probabilistic reasoning" – knowing the odds that something will (or won't) happen.

"Humans struggle with probabilistic reasoning, the practice of making decisions based on calculated probabilities," explains Dr. Adam Rodman, lead author of the study at Beth Israel Deaconess Medical Center in Boston.

"Probabilistic reasoning is one of several key components of diagnosis, an incredibly complex process that involves many different cognitive strategies," he explained in a press release issued by Beth Israel. "We chose to evaluate probabilistic reasoning separately because it is a well-known area where humans need support."

Beth Israel's team used data from a previously published survey of 550 healthcare practitioners. They were all asked to perform probabilistic reasoning to diagnose five different cases.

However, in the new study, Rodman's team provided ChatGPT's large language model (LLM), ChatGPT-4, with the same five cases.

These cases included information from routine medical tests, such as chest scans for pneumonia, mammograms for breast cancer, exercise stress tests for coronary artery disease, and urine cultures for urinary tract infections.

Based on this information, the chatbot used its own probabilistic reasoning to re-evaluate the likelihood of various patient diagnoses.
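As a rough illustration of the kind of probabilistic reasoning being tested (the numbers below are hypothetical and not from the study), a diagnosis probability can be updated after a test result using Bayes' theorem:

```python
def post_test_probability(pretest_p, sensitivity, specificity, test_positive):
    """Bayes update of disease probability given a test result.

    pretest_p:   estimated probability of disease before the test
    sensitivity: P(test positive | disease present)
    specificity: P(test negative | disease absent)
    """
    if test_positive:
        true_pos = pretest_p * sensitivity
        false_pos = (1 - pretest_p) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    else:
        false_neg = pretest_p * (1 - sensitivity)
        true_neg = (1 - pretest_p) * specificity
        return false_neg / (false_neg + true_neg)

# Hypothetical example: 20% pretest probability of pneumonia,
# and a chest scan with 90% sensitivity and 80% specificity.
p_pos = post_test_probability(0.20, 0.90, 0.80, test_positive=True)
p_neg = post_test_probability(0.20, 0.90, 0.80, test_positive=False)
print(f"after positive test: {p_pos:.2f}")  # after positive test: 0.53
print(f"after negative test: {p_neg:.2f}")  # after negative test: 0.03
```

Note how a negative result drops the probability from 20% to about 3% in this sketch; misjudging that drop is exactly the kind of error the study found humans prone to after negative tests.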

In two of the five cases, the chatbot was more accurate than the human doctors; in two, it was as accurate; and in one, it was less accurate. When researchers compared humans to chatbots for medical diagnoses overall, they considered it a "draw."

However, the ChatGPT-4 chatbot excelled when patients had negative (rather than positive) test results, surpassing the doctors' diagnostic accuracy in all five cases.

"Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to overtreatment, more testing, and excessive medication use," noted Rodman, an internist and researcher in the Department of Medicine at Beth Israel.

The study was published Dec. 11 in the open-access online journal JAMA Network Open.

Researchers say that one day doctors may work in tandem with artificial intelligence, making patient diagnoses more accurate.

Rodman called that prospect "exciting."

"Although they [chatbots] aren't perfect, their ease of use and ability to be integrated into clinical workflows could theoretically allow humans to make better decisions," he said. "Future research on the collective intelligence of humans and artificial intelligence is sorely needed."

More information: Learn more about AI and medicine at Harvard.

SOURCE: Beth Israel Deaconess Medical Center, press release, Dec. 11, 2023.

Copyright 2023 HealthDay. All rights reserved.
