TY - JOUR
T1 - Performance of ChatGPT in answering the oral pathology questions of various types or subjects from Taiwan National Dental Licensing Examinations
AU - Wu, Yu Hsueh
AU - Tso, Kai Yun
AU - Chiang, Chun Pin
N1 - Publisher Copyright:
© 2025 Association for Dental Sciences of the Republic of China
PY - 2025/7
Y1 - 2025/7
N2 - Background/purpose: ChatGPT, a large language model, can provide instant, personalized answers in a conversational format. Our study aimed to assess the potential of ChatGPT-4, ChatGPT-4o without a prompt (ChatGPT-4o-P-), and ChatGPT-4o with a prompt (ChatGPT-4o-P+) to help dental students study oral pathology (OP) by evaluating their performance in answering OP multiple-choice questions (MCQs) of various types and subjects. Materials and methods: A total of 280 OP MCQs were collected from Taiwan National Dental Licensing Examinations. ChatGPT-4, ChatGPT-4o-P-, and ChatGPT-4o-P+ were each instructed to answer the OP MCQs of various types and subjects. Results: ChatGPT-4o-P+ achieved the highest overall accuracy rate (AR) of 90.0 %, slightly outperforming ChatGPT-4o-P- (88.6 % AR) and significantly exceeding ChatGPT-4 (79.6 % AR, P < 0.001). For odd-one-out questions, the AR of ChatGPT-4 (77.2 %) was significantly lower than that of ChatGPT-4o-P- (91.3 %, P = 0.015) and ChatGPT-4o-P+ (92.4 %, P = 0.008). However, there was no significant difference in AR among the three models for image-based and case-based questions. Of the 11 single-disease OP subjects, all three models achieved a 100 % AR in three subjects; ChatGPT-4o-P+ outperformed ChatGPT-4 and ChatGPT-4o-P- in three other subjects; ChatGPT-4o-P- was superior to ChatGPT-4 and ChatGPT-4o-P+ in another three subjects; and ChatGPT-4o-P- and ChatGPT-4o-P+ performed equally, and both better than ChatGPT-4, in the remaining two subjects. Conclusion: In the overall evaluation, ChatGPT-4o-P+ performed better than ChatGPT-4o-P- and ChatGPT-4 in answering the OP MCQs.
AB - Background/purpose: ChatGPT, a large language model, can provide instant, personalized answers in a conversational format. Our study aimed to assess the potential of ChatGPT-4, ChatGPT-4o without a prompt (ChatGPT-4o-P-), and ChatGPT-4o with a prompt (ChatGPT-4o-P+) to help dental students study oral pathology (OP) by evaluating their performance in answering OP multiple-choice questions (MCQs) of various types and subjects. Materials and methods: A total of 280 OP MCQs were collected from Taiwan National Dental Licensing Examinations. ChatGPT-4, ChatGPT-4o-P-, and ChatGPT-4o-P+ were each instructed to answer the OP MCQs of various types and subjects. Results: ChatGPT-4o-P+ achieved the highest overall accuracy rate (AR) of 90.0 %, slightly outperforming ChatGPT-4o-P- (88.6 % AR) and significantly exceeding ChatGPT-4 (79.6 % AR, P < 0.001). For odd-one-out questions, the AR of ChatGPT-4 (77.2 %) was significantly lower than that of ChatGPT-4o-P- (91.3 %, P = 0.015) and ChatGPT-4o-P+ (92.4 %, P = 0.008). However, there was no significant difference in AR among the three models for image-based and case-based questions. Of the 11 single-disease OP subjects, all three models achieved a 100 % AR in three subjects; ChatGPT-4o-P+ outperformed ChatGPT-4 and ChatGPT-4o-P- in three other subjects; ChatGPT-4o-P- was superior to ChatGPT-4 and ChatGPT-4o-P+ in another three subjects; and ChatGPT-4o-P- and ChatGPT-4o-P+ performed equally, and both better than ChatGPT-4, in the remaining two subjects. Conclusion: In the overall evaluation, ChatGPT-4o-P+ performed better than ChatGPT-4o-P- and ChatGPT-4 in answering the OP MCQs.
UR - https://www.scopus.com/pages/publications/105001947335
U2 - 10.1016/j.jds.2025.03.030
DO - 10.1016/j.jds.2025.03.030
M3 - Article
AN - SCOPUS:105001947335
SN - 1991-7902
VL - 20
SP - 1709
EP - 1715
JO - Journal of Dental Sciences
JF - Journal of Dental Sciences
IS - 3
ER -