ChatGPT can help empower patients and improve health literacy for different populations
A new study by Cedars-Sinai investigators describes how ChatGPT, an artificial intelligence (AI) chatbot, may help improve health outcomes for patients with cirrhosis and liver cancer by providing easy-to-understand information about basic knowledge, lifestyle and treatments for these conditions.
The findings, published in the peer-reviewed journal Clinical and Molecular Hepatology, highlight the AI system’s potential to play a role in clinical practice.
Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about managing and preventing complications of their disease. We found ChatGPT—while it has limitations—can help empower patients and improve health literacy for different populations.”
Brennan Spiegel, MD, MSHS, Director of Health Services Research, Cedars-Sinai and Co-corresponding Author of the Study.
Patients diagnosed with liver cancer and cirrhosis, an end-stage liver disease that is also a major risk factor for the most common form of liver cancer, often require extensive treatment that can be complex and challenging to manage.
The complexity of the care required for this patient population makes patient empowerment with knowledge about their disease crucial for optimal outcomes. While there are currently online resources for patients and caregivers, the literature available is often lengthy and difficult for many to understand, highlighting the limited options for this group.”
Alexander Kuo, MD, Medical Director of Liver Transplantation Medicine, Cedars-Sinai and Co-corresponding Author of the Study.
AI models that deliver personalized education could help increase patient knowledge, noted Kuo.
One of those is ChatGPT, whose GPT stands for generative pre-trained transformer. It has quickly become popular for the human-like text it produces in chatbot conversations: users can enter any prompt and it will generate a response based on patterns learned from its training data.
It has already shown some potential for medical professionals by writing basic medical reports and correctly answering medical student examination questions.
ChatGPT has been shown to provide professional yet highly comprehensible responses. However, this is one of the first studies to examine the ability of ChatGPT to answer clinically oriented, disease-specific questions correctly and to compare its performance to physicians and trainees.”
Yee Hui Yeo, MD, First Author of the Study and a Clinical Fellow in the Karsh Division of Gastroenterology and Hepatology, Cedars-Sinai.
To verify the accuracy of the AI model in its knowledge about both cirrhosis and liver cancer, investigators presented ChatGPT with 164 frequently asked questions in five categories. The ChatGPT answers were then graded independently by two liver transplant specialists.
Each question was posed twice to ChatGPT and was categorized as either basic knowledge, diagnosis, treatment, lifestyle or preventive medicine.
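The paper as summarized here does not describe the exact query pipeline, but the procedure of posing each question to ChatGPT twice and collecting both responses for later grading can be illustrated with a short, hypothetical Python sketch against the OpenAI chat API. The model name, the ask_twice helper and the sample question are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: poses a FAQ to a ChatGPT model twice and keeps
# both replies for later manual grading by specialists. Assumes the OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY in the environment; the
# study does not specify how the model was queried.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_twice(question: str, model: str = "gpt-3.5-turbo") -> list[str]:
    """Pose the same question to the chat model twice and return both replies."""
    replies = []
    for _ in range(2):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        replies.append(response.choices[0].message.content)
    return replies


if __name__ == "__main__":
    # Hypothetical example question from the "basic knowledge" category; the
    # five study categories were basic knowledge, diagnosis, treatment,
    # lifestyle and preventive medicine.
    first, second = ask_twice("What is cirrhosis and what causes it?")
    print("Response 1:\n", first)
    print("Response 2:\n", second)
```

Posing each question twice in this way lets the graders check whether the model’s answers stay consistent across runs before scoring them.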
Study results include:
- ChatGPT answered about 77% of the questions correctly, providing high levels of accuracy in 91 questions from a variety of categories.
- The specialists grading the responses rated 75% of the answers on basic knowledge, treatment and lifestyle as either comprehensive or correct but inadequate.
- The proportion of responses that were “mixed with correct and incorrect data” was 22% for basic knowledge, 33% for diagnosis, 25% for treatment, 18% for lifestyle and 50% for preventive medicine.
The AI model also provided practical and useful advice to patients and caregivers about next steps and adjusting to a new diagnosis.
Still, the study left no doubt that advice from a physician was superior.
“While the model was able to demonstrate strong capability in the basic knowledge, lifestyle and treatment domains, it struggled to provide tailored recommendations according to the region where the inquirer lived,” said Yeo. “This is most likely due to the varied recommendations for liver cancer surveillance intervals and indications reported by different professional societies. But we are hopeful that it will become more accurate in addressing questions according to the inquirer’s location.”
“More research is still needed to better examine the tool in patient education, but we believe ChatGPT can be a very useful adjunctive tool for physicians, not a replacement, one that provides access to reliable and accurate health information that is easy for many to understand,” Spiegel said. “We hope this can help physicians empower patients and improve health literacy for patients facing challenging conditions such as cirrhosis and liver cancer.”
Other Cedars-Sinai authors are Jamil Samaan, Hirsh Trivedi, Aarshi Vipani, Walid Ayoub, Ju Dong Yang and Omer Liran.
Yeo, Y. H., et al. (2023). Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clinical and Molecular Hepatology. https://doi.org/10.3350/cmh.2023.0089