Access to accurate, reliable information about radiation protection is essential for patients undergoing medical imaging. A recent study compared how effectively an AI language model, ChatGPT, provides this information against human expert answers published on institutional websites.
Why This Study Was Conducted
Medical imaging is crucial for diagnosis and treatment, but public understanding of radiation risks often lags behind. Patients frequently seek answers online, where information quality can vary. This study aimed to evaluate whether ChatGPT, an AI language model developed by OpenAI, could deliver scientifically adequate and easily understandable information about radiation protection, and how it compares to human expert responses.
The Team Behind the Study
The research was led by experts from the Department of Diagnostic and Interventional Radiology at Lausanne University Hospital (CHUV) in Switzerland and the Department of Radiology at Duke University Health System in the United States. The team included Sofyan Jankowski, MSc, David Rotzinger, MD, PhD, Francesco Ria, PhD, and Chiara Pozzessere, MD.
Study Methodology
The study involved retrieving 12 common radiation protection-related questions from institutional websites and entering them into ChatGPT (version 3.5). The AI-generated responses were compared to those from human experts. Twelve expert participants, including radiologists, medical physicists, and radiographers, evaluated the responses based on scientific adequacy, public comprehension, overall satisfaction, and whether the response appeared AI-generated.
Types of Queries Tested
Here are some examples of the types of queries that were tested in the study:
- Is the radiation from CT harmful? (Source: Cancer.gov)
- What are the risks of CT scans for children? (Source: Stanford Health Care)
- What are the risks related to mammograms? (Source: Mayo Clinic)
- What are the possible effects of radiation exposure from interventional procedures? (Source: IAEA)
- Is MRI safer than CT? (Source: Stanford Health Care)
Key Findings
The study revealed several key insights:
- Scientific Adequacy: ChatGPT performed comparably to human experts, with no significant differences in scientific adequacy.
- Public Comprehension: Both human and AI responses were similarly understandable to the general public.
- Overall Satisfaction: Satisfaction levels were similar for both ChatGPT and human responses.
- Identification of AI-generated Responses: Experts identified ChatGPT responses more often than human ones, largely because of their greater length and level of detail.
Implications and Future Directions
This study highlights the potential of AI tools such as ChatGPT in patient education, particularly around radiation protection. While AI can enhance the quality of information available to patients, it should complement, not replace, direct communication between healthcare providers and patients.
At INFAB, providing accurate and comprehensible information about radiation safety is crucial for both healthcare providers and patients. Embracing AI technology could further enhance the quality of information and support that we offer to our clients.
As we continue to explore the intersection of AI and healthcare, studies like this pave the way for more integrated and effective communication tools, ultimately benefiting patient care and safety.
This article was inspired by the study “ChatGPT vs Radiology Institutional Websites: Comparative Analysis of Radiation Protection Information Provided to Patients,” published in Radiology by Jankowski et al. (2024). For more detailed insights, refer to the full study in Radiology.
