bmcmededuc.biomedcentral.com


Researchers at Sultan Qaboos University's College of Medicine and Health Sciences used the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) to evaluate students' AI readiness after preclinical exposure. The survey revealed moderate preparedness overall but notable gaps in the cognition domain, particularly in AI terminology and data science.

Key points

  • Students scored lowest in the cognition domain (mean=3.52), reflecting gaps in AI terminology and data-science knowledge.
  • Vision domain achieved the highest score (mean=3.90), indicating strong ability to anticipate AI’s applications, risks, and limitations.
  • No statistically significant differences in overall AI readiness were found based on gender or prior exposure to AI topics.

Why it matters: Assessing medical students' AI readiness exposes specific training gaps and gives curriculum designers concrete targets as AI tools enter clinical practice.

Q&A

  • What is the MAIRS-MS questionnaire?
  • Why focus on preclinical AI exposure?
  • What do the cognition and vision domains measure?
  • How reliable are the survey results?
Assessing medical students' readiness for artificial intelligence after pre-clinical training

A cross-sectional study led by Zagazig University and collaborators surveyed 423 medical students from ten Egyptian universities through a structured online questionnaire, assessing their knowledge, attitudes, and practices regarding generative artificial intelligence. Findings indicate that 61.5% of students had satisfactory knowledge, with higher scores among males and clinical-phase students, and widespread use of ChatGPT for academic tasks.

Key points

  • Generative-AI competencies of the 423 Egyptian medical students were assessed with an 8-question knowledge score, a 13-item Likert attitude scale, and a 7-item practice-frequency scale.
  • Binary logistic regression revealed male gender (OR=1.87), 6th October University affiliation (OR=3.55), and clinical-phase status (OR=0.54) as significant predictors of satisfactory AI knowledge (p<0.05).
  • Students primarily used ChatGPT 3.5 (37.1%) and ChatGPT 4 (35.2%) for grammar correction, assignment preparation, research, and idea generation; practice frequency correlated with knowledge scores (r=0.303, p<0.001).
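
The odds ratios above come from a fitted logistic regression model. As a rough illustration of what an odds ratio expresses, here is a minimal sketch computing an unadjusted OR and a Wald 95% confidence interval from a 2x2 table; the counts are invented for demonstration and are not the study's data:

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    exposed:   a with the outcome, b without
    unexposed: c with the outcome, d without
    """
    return (a / b) / (c / d)

def or_95ci(a, b, c, d):
    """Odds ratio with a Wald 95% CI computed on the log-odds scale."""
    or_ = odds_ratio(a, b, c, d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)

# Hypothetical counts (NOT taken from the study):
# 120 male students, 70 with satisfactory knowledge;
# 303 female students, 130 with satisfactory knowledge.
or_, lo, hi = or_95ci(70, 50, 130, 173)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An OR above 1 (with a CI excluding 1) would indicate higher odds of satisfactory knowledge in the exposed group; the study's reported ORs are adjusted estimates from binary logistic regression, not raw 2x2 ratios like this sketch.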

Why it matters: Understanding medical students’ readiness for generative AI informs curriculum design for future healthcare education and practice.

Q&A

  • What is generative artificial intelligence?
  • How were knowledge, attitude, and practice measured?
  • Which factors influenced AI knowledge levels?
  • Why do students use generative AI in academics?
  • How can medical curricula integrate generative AI?
Medical students' knowledge, attitudes, and practices toward generative artificial intelligence in Egypt 2024: a cross-sectional study

A recent scoping review in BMC Medical Education examines how generative AI, particularly ChatGPT, is transforming psychiatric education. By generating case vignettes, simulations, and refined assessments, the reviewed approaches mirror the challenges of clinical reasoning. One example is AI-built illness scripts that supplement traditional teaching, offering nuanced insight for evolving medical training.

Q&A

  • How does generative AI improve psychiatric education?
  • What methods were analyzed in the study?
  • What challenges are highlighted in the review?
The role of generative artificial intelligence in psychiatric education - a scoping review