Performance of ChatGPT in ophthalmology exam; human versus AI
International Ophthalmology · December 2024
YÖKSİS Record
International Ophthalmology · 2024 · SCI-Expanded
Professor Banu Turgut Öztürk
Article Information
Journal: International Ophthalmology
Publication Date: December 2024
Volume / Page: 44
Scopus ID: 2-s2.0-85208603164
Abstract
Purpose: This cross-sectional study evaluates the success rate of ChatGPT in answering questions from the 'Resident Training Development Exam' and compares these results with the performance of ophthalmology residents.

Methods: The 75 exam questions, spanning nine sections and three difficulty levels, were presented to ChatGPT, and its responses and explanations were recorded. The readability and complexity of the explanations were analyzed, and the Flesch Reading Ease (FRE) score (0–100) was obtained using the program Readable. Residents were categorized into four groups based on seniority, and their overall and seniority-specific success rates were each compared with ChatGPT.

Results: Out of 69 questions, ChatGPT answered 37 correctly (53.62%). Its highest success was in Lens and Cataract (77.77%) and its lowest in Pediatric Ophthalmology and Strabismus (0.00%). Among the 789 residents, overall accuracy was 50.37%; seniority-specific accuracy rates were 43.49%, 51.30%, 54.91%, and 60.05% for 1st- to 4th-year residents, respectively. ChatGPT ranked 292nd among the residents. By difficulty, 11 questions were easy, 44 moderate, and 14 difficult, with ChatGPT's accuracy at 63.63%, 54.54%, and 42.85%, respectively. The average FRE score of ChatGPT's responses was 27.56 ± 12.40.

Conclusion: ChatGPT correctly answered 53.6% of the questions in an exam designed for residents, a lower average success rate than a 3rd-year resident. The readability of ChatGPT's responses is low, making them difficult to understand, and its success decreases as question difficulty increases. These results are expected to change as more information is incorporated into ChatGPT.
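The FRE score cited above comes from the standard Flesch formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). A minimal sketch of that computation is shown below; the vowel-group syllable heuristic is a rough approximation, not the method used by the Readable tool in the study, so exact scores will differ.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels; at least 1 per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch Reading Ease formula (higher = easier to read)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short, simple sentences score high; dense jargon scores low,
# consistent with the low average (27.56) reported for ChatGPT.
easy_score = flesch_reading_ease("The cat sat on the mat. It was happy.")
hard_score = flesch_reading_ease(
    "Consequently, multifactorial ophthalmological considerations "
    "complicate interpretation.")
```

Scores below roughly 30 are conventionally read as "very difficult" (college-graduate level), which is the band the study's average of 27.56 falls into.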
Authors (4)
1. Ali Safa Balci
2. Zeliha Yazar
3. Banu Turgut Ozturk (ORCID: 0000-0003-0702-6951)
4. Cigdem Altan
Keywords
Artificial intelligence
ChatGPT
Education
Exam
Resident
Institutions
Selçuk Üniversitesi, Selçuklu, Turkey
University of Health Sciences, Istanbul, Turkey
Metrics
Citations: 1
Authors: 4
Keywords: 5