Despite ChatGPT's rapid rise as one of the most widely known technological innovations in recent times, it has also generated considerable controversy, much of which stems from its propensity to provide inaccurate answers. A study conducted by Long Island University found that ChatGPT gave an alarmingly high rate of false or incomplete answers to questions about medications.
The large language model chatbot was given 39 questions about various medications and their appropriate use. ChatGPT provided a wrong or incomplete answer, or failed to address the question altogether, for roughly 74% of them (29 of the 39).
When ChatGPT was asked to give citations or references for the information it provided, it did so for only 8 of the 39 questions, and in most cases those references appeared to be nonexistent, invented by ChatGPT itself. This trend is worrying because it delivers drug misinformation to unsuspecting consumers.
For example, when asked about a potentially harmful interaction between Paxlovid, an antiviral drug used for COVID-19, and the blood pressure medication verapamil, the chatbot claimed that no such interaction exists, even though combining the two drugs can intensify verapamil's blood-pressure-lowering effect.
It is important to note, however, that OpenAI itself does not recommend using ChatGPT for medical purposes, and the chatbot is quick to state that it is not a doctor before answering such questions.
Even so, many consumers may not realize how inaccurate the answers can be, and acting on ChatGPT's erroneous instructions could lead to real harm. Educating consumers about these pitfalls is therefore crucial.