A comprehensive evaluation of ChatGPT consultation quality for augmentation mammoplasty: A comparative analysis between plastic surgeons and laypersons
- Abstract
- Objectives
ChatGPT has gained significant popularity as a source of healthcare information among the general population. Evaluating the quality of chatbot responses is therefore crucial and requires comprehensive, qualitative analysis. This study assesses the answers provided by ChatGPT during hypothetical breast augmentation consultations across various categories and depths. The evaluation uses validated tools and compares scores between plastic surgeons and laypersons.
Methods
A panel of five plastic surgeons and five laypersons evaluated ChatGPT's responses to 25 questions spanning consultation, procedure, recovery, and sentiment categories. The DISCERN and PEMAT tools were employed to assess the responses, while emotional context was examined through ten specific questions. Additionally, readability was measured using the Flesch Reading Ease score. Qualitative analysis was performed to identify overall strengths and weaknesses.
Results
Plastic surgeons generally scored lower than laypersons across most domains. Scores for each evaluation domain varied by category, with the consultation category demonstrating lower scores in terms of DISCERN reliability, information quality, and DISCERN score. Plastic surgeons assigned significantly lower overall quality ratings to the procedure category compared to other question categories. They also gave lower emotion scores in the procedure category compared to laypersons. The depth of the questions did not impact the scoring.
Conclusions
Existing health information evaluation tools may not be entirely suitable for comprehensively evaluating the quality of individual responses generated by ChatGPT. Consequently, the development and implementation of appropriate evaluation tools to assess the appropriateness and quality of AI consultations are necessary.
- Issued Date
- 2023
- Author
Ji Young Yun
Dong Jin Kim
Nara Lee
Eun Key Kim
- Type
- Article
- Keyword
- Artificial intelligence; Consumer health information; Health literacy; Internet
- DOI
- 10.1016/j.ijmedinf.2023.105219
- URI
- https://oak.ulsan.ac.kr/handle/2021.oak/16828
- Publisher
- INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS
- Language
- English
- ISSN
- 1386-5056
- Citation Volume
- 179
- Citation Number
- 179
- Citation Start Page
- 1
- Citation End Page
- 9
Appears in Collections:
- Medicine > Nursing
- Access & License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.