Generative artificial intelligence may feel like it’s progressing at a breakneck pace, with new and more sophisticated large language models released every year.
But when it comes to providing accurate medical information, these models leave a lot to be desired, according to a new study from researchers at Western University in London, Ont.
Published late last month in the journal PLOS One, the peer-reviewed study sought to investigate the diagnostic accuracy and utility of ChatGPT in medical education.
Developed by OpenAI, ChatGPT uses a large language model trained on massive amounts of data scraped from the internet to quickly generate conversational text in response to user queries.
“This thing is everywhere,” said Dr. Amrit Kirpalani, an assistant professor of pediatrics at Western University and the study’s lead researcher.
“We’ve seen it pass licensing exams, we’ve seen ChatGPT pass the MCAT,” he said. “We wanted to know, how would it deal with more complicated cases, those complicated cases that we see …