Artificial intelligence in dental education: can AI-based chatbots compete with general practitioners?


BULUT A. C., BAHADIR H. S., ATEŞ G.

BMC Medical Education, vol.25, no.1, 2025 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 25 Issue: 1
  • Publication Date: 2025
  • DOI Number: 10.1186/s12909-025-07880-7
  • Journal Name: BMC Medical Education
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Social Sciences Citation Index (SSCI), Scopus, Biotechnology Research Abstracts, EMBASE, MEDLINE, Veterinary Science Database, Directory of Open Access Journals
  • Keywords: AI-based chatbots, Artificial intelligence, ChatGPT, Claude, Dental education, Dental specialization exam, Google Gemini, Microsoft Bing, Prosthetic dentistry
  • Ankara Yıldırım Beyazıt University Affiliated: Yes

Abstract

Background: This study aimed to evaluate the performance of seven AI-based chatbots (ChatGPT-4, ChatGPT-3.5, ChatGPT o1-Preview, ChatGPT o1-Mini, Microsoft Bing, Claude, and Google Gemini) in answering multiple-choice questions on prosthetic dentistry from the Turkish Dental Specialization Mock Exam (DUSDATA TR). The study also investigated whether these chatbots could answer with an accuracy comparable to that of general practitioners.

Methods: Ten multiple-choice questions on prosthetic dentistry were selected from a preparatory exam administered by a private educational institution. Two groups were formed: (1) general practitioners (Human Group, N = 657) and (2) AI-based chatbots. Each question was manually entered into each chatbot, and the responses were recorded; correct responses were scored as "1" and incorrect responses as "0". The consistency and accuracy of chatbot responses were analyzed using Fisher's exact test and Cochran's Q test. Statistical significance was set at p < 0.05.

Results: A statistically significant difference was found among the accuracy rates of the chatbots (p < 0.05). ChatGPT-3.5, ChatGPT-4, and Google Gemini failed to answer questions 2, 5, 7, 8, and 9 correctly, while Microsoft Bing failed on questions 5, 7, 8, and 10. None of the chatbots answered question 7 correctly. General practitioners demonstrated the highest accuracy rates, particularly for question 10 (80.3%) and question 9 (44.4%). Despite variations in accuracy, chatbot responses remained consistent over time (p > 0.05); however, Bing produced the highest number of incorrect responses.

Conclusion: The findings indicate that the performance of AI-based chatbots on prosthetic dentistry exam questions varies significantly and remains inconsistent across models, necessitating further improvement before implementation in dental education.
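
As a rough illustration of the scoring and statistical workflow described in the Methods, the Python sketch below builds a hypothetical 1/0 score matrix for a subset of the chatbots, runs Cochran's Q test across them, and compares one chatbot against the human group on a single question with Fisher's exact test. The score matrix, the human-group counts, and the pairing of tests to comparisons are assumptions made for illustration only; they are not the study's actual data or analysis code.

    """Illustrative sketch (hypothetical data): binary scoring of ten questions,
    Cochran's Q across chatbots, and Fisher's exact test vs. the human group."""
    import numpy as np
    from scipy import stats

    # Hypothetical 1/0 score matrix: rows = 10 questions, columns = chatbots.
    # Failure patterns loosely follow those reported in the abstract; all other
    # cells are invented for the example.
    chatbots = ["ChatGPT-4", "ChatGPT-3.5", "Bing", "Claude", "Gemini"]
    scores = np.array([
        [1, 1, 1, 1, 1],
        [0, 0, 1, 1, 0],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0],   # question 7: no chatbot answered correctly
        [0, 0, 0, 1, 0],
        [0, 0, 1, 1, 0],
        [1, 1, 0, 1, 1],
    ])

    def cochrans_q(binary_matrix):
        """Cochran's Q test for differences among k related binary samples."""
        k = binary_matrix.shape[1]                 # number of chatbots
        col_totals = binary_matrix.sum(axis=0)     # correct answers per chatbot
        row_totals = binary_matrix.sum(axis=1)     # correct answers per question
        grand_total = binary_matrix.sum()
        q = (k - 1) * (k * np.sum(col_totals ** 2) - grand_total ** 2) / (
            k * grand_total - np.sum(row_totals ** 2)
        )
        p = stats.chi2.sf(q, df=k - 1)             # right-tail chi-square p-value
        return q, p

    q_stat, q_p = cochrans_q(scores)
    print(f"Cochran's Q = {q_stat:.2f}, p = {q_p:.4f}")

    # Fisher's exact test: one chatbot vs. the human group on a single question.
    # Counts are hypothetical; the paper reports N = 657 practitioners overall.
    humans_correct, humans_total = 292, 657        # roughly 44.4%, as on question 9
    chatbot_correct, chatbot_attempts = 0, 10      # e.g. 10 repeated prompts, all wrong
    table = [
        [chatbot_correct, chatbot_attempts - chatbot_correct],
        [humans_correct, humans_total - humans_correct],
    ]
    odds_ratio, fisher_p = stats.fisher_exact(table)
    print(f"Fisher's exact test: p = {fisher_p:.4f}")

The manual Cochran's Q implementation is used here only to make the formula explicit; an equivalent test is also available in common statistics packages, and the choice does not affect the logic of the comparison.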