Researchers from Karabuk University and Antalya Oral and Dental Health Hospital assess how well ChatGPT 3.5 and Google Gemini address parent queries on pediatric dental trauma. They employ the DISCERN instrument and the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) to evaluate response quality, understandability, and actionability. Both chatbots deliver comparable guidance, with Gemini scoring marginally higher on reliability and ChatGPT on clarity, yet neither system can substitute for professional dental consultation.
Key points
- ChatGPT 3.5 and Google Gemini are evaluated using the DISCERN instrument, with Gemini achieving marginally higher mean reliability scores.
- PEMAT-P analysis shows ChatGPT delivers superior understandability, while both chatbots provide similar actionability for pediatric dental trauma guidance (a scoring sketch follows this list).
- The study uses 17 case scenarios based on International Association of Dental Traumatology (IADT) guidelines, with inter-rater Cohen's kappa of 0.72–0.78, and applies parametric statistical tests to compare chatbot performance (see the statistical sketch after this list).
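As context for the key points above, here is a minimal sketch of how the two instruments are typically scored, assuming the standard versions: DISCERN items are rated 1–5 (items 1–8 form the reliability subscale, often reported as a mean), and PEMAT-P reports the percentage of applicable items rated "agree". The item ratings below are illustrative placeholders, not the study's data.

```python
# Illustrative scoring helpers for the two instruments; all ratings
# below are hypothetical, not taken from the study.

def discern_reliability(item_scores: list[int]) -> float:
    """DISCERN items 1-8 (each rated 1-5) assess reliability;
    the subscale is commonly summarized as a mean."""
    assert len(item_scores) == 8
    return sum(item_scores) / len(item_scores)

def pemat_p_score(item_ratings: list[int | None]) -> float:
    """PEMAT-P items are rated agree (1) or disagree (0); items marked
    not applicable (None) are excluded. The score is the percentage
    of applicable items rated 'agree'."""
    applicable = [r for r in item_ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Hypothetical ratings for one chatbot response to one scenario
print(discern_reliability([4, 4, 3, 5, 4, 3, 4, 4]))      # 3.875
print(pemat_p_score([1, 1, 0, 1, None, 1, 1, 0, 1, 1]))   # ~77.8
```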
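The statistical comparison mentioned in the last key point can be sketched as follows, under assumptions the summary does not specify: unweighted Cohen's kappa for inter-rater agreement, and a paired t-test (one plausible parametric choice, since both chatbots answered the same 17 scenarios) for the score comparison. All scores are illustrative placeholders.

```python
# A sketch of the study's statistical approach with made-up data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import ttest_rel

# Two raters' DISCERN item ratings over the same set of responses
rater_a = [4, 3, 5, 4, 4, 2, 5, 3, 4, 4, 3, 5, 4, 2, 4, 5, 3]
rater_b = [4, 3, 5, 4, 3, 2, 5, 3, 4, 4, 3, 5, 4, 3, 4, 5, 3]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # the study reports 0.72-0.78

# Total DISCERN scores per scenario for each chatbot (17 scenarios)
chatgpt_scores = [52, 48, 55, 50, 47, 53, 49, 51, 54,
                  50, 48, 52, 51, 49, 53, 50, 52]
gemini_scores = [53, 51, 55, 54, 50, 54, 52, 52, 56,
                 53, 49, 55, 54, 50, 56, 53, 55]
t_stat, p_value = ttest_rel(chatgpt_scores, gemini_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```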
Why it matters: The study positions AI chatbots as accessible, consistent sources of pediatric dental trauma guidance, pointing to scalable support that complements rather than replaces clinical expertise.
Q&A
- What is the DISCERN instrument?
- How does PEMAT-P measure actionability?
- Why can’t AI chatbots replace dentists?
- What factors influence chatbot reliability?
- How were the case scenarios designed?