A recent study tested ChatGPT's ability to answer patients' questions about medications and found that the artificial intelligence model's answers were wrong or incomplete in about 75% of cases. The findings, presented this week at the Midyear Clinical Meeting of the American Society of Health-System Pharmacists, have attracted considerable attention.
The study tested the free version of ChatGPT, which has more than 100 million users, and warned providers to be wary because many patients may rely on the chatbot to answer health-related questions. It was conducted by pharmacy researchers at Long Island University, who first collected 45 questions that patients had asked the university's drug information service in 2022 and 2023 and wrote answers to them, each of which was reviewed by a second researcher.
The research team then fed the same questions to ChatGPT and compared its answers with those produced by the pharmacists. Six questions were dropped because no published literature existed from which ChatGPT could draw a data-based answer, so the researchers ultimately posed 39 of the original 45 questions to the model.
The study found that only about a quarter of ChatGPT's answers were satisfactory. Specifically, ChatGPT did not directly answer 11 questions, gave incorrect answers to 10, and provided incomplete answers to another 12. For example, one question asked whether there is a drug interaction between the blood pressure drug verapamil and Pfizer's COVID-19 antiviral Paxlovid. ChatGPT stated that there was no interaction between the two drugs, which is incorrect: taking them together can dangerously lower a person's blood pressure.
In some cases, the AI model generated fabricated scientific references to support its responses. In each prompt, the researchers asked ChatGPT to cite sources for the information in its answer, but the model provided references in only eight answers, and every one of them was fictitious.
"Healthcare professionals and patients should be cautious when using ChatGPT to obtain medication-related information, and anyone using ChatGPT to obtain medication-related information should verify the information with a trusted source," Sara Grossman, a physician and one of the study's lead authors, said in a statement.
Notably, ChatGPT’s usage policy also echoes Dr. Grossman’s sentiments, noting that the model is “not tailored to provide medical information” and that people should not rely on it when seeking “diagnosis or treatment services for a serious medical condition.”