Researchers at the University of Maryland School of Medicine warned against using ChatGPT for medical advice after a study found that the artificial intelligence (AI)-powered chatbot fabricated health facts when asked for information about cancer.
According to a Daily Mail report citing the research, the chatbot answered one in 10 questions about breast cancer screening incorrectly, and its correct answers were not as “complete” as those found through a simple Google search.
The researchers said that, in some cases, the AI chatbot even cited fake journal articles to back up its claims.
This study comes amid warnings that users should treat the software with caution, as it has a tendency to hallucinate, or in other words, make stuff up.
The study that exposed ChatGPT
Researchers asked ChatGPT to answer 25 questions related to advice on getting screened for breast cancer. Because the chatbot was known to vary its responses, each question was asked three separate times. The results were then analyzed by three radiologists trained in mammography.
Eighty-eight percent of the responses were appropriate and easy to understand. Some of the answers, however, were inaccurate or even fictitious, the researchers warned.
One answer, for example, was based on outdated information. It advised delaying a mammogram for four to six weeks after receiving a COVID-19 vaccine; that guidance was revised more than a year ago, and women are now advised not to wait.
ChatGPT also provided inconsistent answers to questions about breast cancer risk and where to get a mammogram. The study found that the answers “varied significantly” each time the same question was asked.
Study co-author Dr. Paul Yi stated: “We have seen in our experience that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims. Consumers should be aware that these are new, unproven technologies, and should still rely on their doctor, rather than ChatGPT, for advice.”