The Utility of ChatGPT in Venous Education

Document Type

Conference Proceeding

Publication Date

5-1-2024

Publication Title

Journal of Vascular Surgery: Venous and Lymphatic Disorders

Abstract

Objectives: ChatGPT is an artificial intelligence-powered language model that is increasingly used in medical settings. Although quick access to large amounts of information is promising for vascular surgical education, the quality and depth of the information provided by the current ChatGPT model are not well understood. We aimed to study the utility of ChatGPT in teaching medical students and vascular surgery residents about varicose veins. We hypothesized that ChatGPT could provide a basic overview suitable for medical student and, possibly, resident education.

Methods: We generated two learning documents using ChatGPT 3.5: one for medical students and one for residents. We asked it to produce a document for “varicose veins explained to a medical student” and another for “varicose veins explained to a vascular surgery resident,” and to “include background, anatomy, pathophysiology, risk factors, clinical presentation, complications, diagnostic evaluation, and management.” The texts generated for students and residents were compared and reviewed by seven academic vascular surgeons practicing in a teaching hospital. Five-point Likert scales were used to rate the accuracy, completeness, complexity, and applicability of each text (Table I). Average values for each survey question were compared using Mann-Whitney U tests.

Results: Aside from greater use of advanced medical terminology in the residents’ text, content was similar in the two texts. Overall, scores were slightly higher for the text generated for residents (average, 3.91) than for students (average, 3.71) (Table II). All surgeons considered the information accurate (average, 4.5), although more so for residents (average, 4.71) than for students (average, 4.29). Most surgeons believed the information was not advanced enough (average, 3.21), albeit slightly more advanced for residents (average, 3.29 vs 3.14 for students). Most surgeons were undecided about whether they would use the text to teach medical students (average, 3.43) or residents (average, 3.57).

Conclusions: Although ChatGPT offers promising prospects in venous education, the current model does not meet the standards required for medical education. Although the information was accurate and concise, it was not advanced enough, and most surgeons were not enthusiastic about using it to teach students or residents. Optimizing ChatGPT-generated content and expanding its applicability to specialized education are subjects for future development and research.

Volume

12

Issue

3
