Turkish Journal of Medical Sciences
Abstract
Background/aim: This study evaluated the accuracy rates and response consistency of four large language models (ChatGPT-4o, Gemini 2.0, Claude 3.5, and DeepSeek R1) in answering questions from the Emergency Medicine Fellowship Examination (YDUS), which was administered for the first time in Türkiye.
Materials and methods: In this observational study, 60 multiple-choice questions from the Emergency Medicine YDUS administered on 15 December 2024 were classified as knowledge-based (n = 26), visual-content (n = 2), or case-based (n = 32). Each question was presented three times to each of the four large language models. Accuracy was evaluated against overall-accuracy, strict-accuracy, and ideal-accuracy criteria, and response consistency was measured with Fleiss’ Kappa.
Results: ChatGPT-4o achieved the highest overall accuracy (90.0%), while DeepSeek R1 showed the lowest (76.7%); Claude 3.5 (83.3%) and Gemini 2.0 (80.0%) performed moderately. By category, ChatGPT-4o again led, with 92.3% accuracy on knowledge-based questions and 90.6% on case-based questions. Claude 3.5 showed the highest response consistency (Fleiss’ Kappa = 0.68) and Gemini 2.0 the lowest (Fleiss’ Kappa = 0.49). Inconsistent hallucinations were more frequent in Gemini 2.0 and DeepSeek R1, whereas persistent hallucinations were comparatively rare in ChatGPT-4o and Claude 3.5.
Conclusion: Large language models can achieve high accuracy on knowledge and clinical reasoning questions in emergency medicine but differ in response consistency and hallucination tendency. While these models hold significant potential for medical education and clinical decision support systems (CDSS), they require further development before they can reliably provide up-to-date, accurate information.
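To make the consistency metric concrete, the following is a minimal, hypothetical Python sketch of how agreement across three repeated presentations of the same questions could be scored with Fleiss’ Kappa. This is not the authors’ analysis code: the answer matrix is randomly generated, and the assumption of five answer options (A–E) per question is ours; only the statsmodels functions (aggregate_raters, fleiss_kappa) are real.

```python
# Hypothetical sketch of the consistency analysis: Fleiss' Kappa over three
# repeated presentations of 60 multiple-choice questions. The answer data
# below are randomly generated placeholders, not the study's responses.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(42)

# Rows = 60 questions, columns = 3 repetitions by the same model;
# each cell is the chosen option encoded 0..4 for choices A..E (assumed).
answers = rng.integers(0, 5, size=(60, 3))

# Convert the 60x3 answer matrix into a 60x5 table counting how often each
# option was chosen per question, then compute Fleiss' Kappa on the counts.
counts, _ = aggregate_raters(answers, n_cat=5)
kappa = fleiss_kappa(counts, method="fleiss")
print(f"Fleiss' Kappa = {kappa:.2f}")  # ~0 for random data; the paper reports 0.49-0.68
```

A Kappa of 1 would mean a model chose the same answer on all three presentations of every question; the reported range of 0.49–0.68 corresponds to moderate-to-substantial agreement on the conventional Landis–Koch scale.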
Author ORCID Identifier
İSHAK ŞAN: 0000-0002-9658-9010
MEDİNE AKKAN ÖZ: 0000-0002-6320-9667
MEHMET YORTANLI: 0000-0002-6744-2423
MURAT GENÇ: 0000-0003-3407-1942
BENSU BULUT: 0000-0002-5629-3143
AYŞENUR GÜR: 0000-0002-9521-1120
RAMİZ YAZICI: 0000-0001-9210-914X
HÜSEYİN MUTLU: 0000-0002-1930-3293
MUSTAFA ÖNDER GÖNEN: 0000-0002-6059-4387
DOI
10.55730/1300-0144.6083
Keywords
artificial intelligence, emergency medicine, fellowship, language
First Page
1292
Last Page
1299
Publisher
The Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
ŞAN, İ., AKKAN ÖZ, M., YORTANLI, M., GENÇ, M., BULUT, B., GÜR, A., YAZICI, R., MUTLU, H., & GÖNEN, M. (2025). AI performance in emergency medicine fellowship examination: comparative analysis of ChatGPT-4o, Gemini 2.0, Claude 3.5, and DeepSeek R1 models. Turkish Journal of Medical Sciences 55 (5): 1292-1299. https://doi.org/10.55730/1300-0144.6083