A team from Gachon University, Al-Ahliyya Amman University, Chitkara University, and others presents a NASNet Large deep learning model integrated with explainable AI (XAI) techniques, LIME and Grad-CAM. Trained on augmented MRI datasets, the framework achieves 92.98% accuracy and visualizes the tumour features behind each prediction to support informed clinical decisions.
Key points
- Integration of NASNet Large with depthwise separable convolutions for efficient feature extraction from MRI scans.
- Application of XAI methods LIME and Grad-CAM to highlight critical tumour regions, enhancing model transparency.
- Use of Monte Carlo Dropout to quantify prediction uncertainty; the model reaches 92.98% accuracy, a 7.02% miss rate.
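The Monte Carlo Dropout idea in the last bullet can be sketched as follows: keep dropout active at inference, run many stochastic forward passes, and read the spread across passes as uncertainty. The tiny two-layer classifier head and random weights below are illustrative stand-ins, not the paper's NASNet Large model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a hypothetical 8-input, 3-class classifier head
# (illustrative only; the paper uses a full NASNet Large network).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate       # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_rate)             # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # softmax probabilities

def mc_dropout_predict(x, n_samples=100):
    """Average many stochastic passes; the std across passes is the
    per-class uncertainty estimate."""
    probs = np.stack([forward(x) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=(1, 8))
mean_p, std_p = mc_dropout_predict(x)
print("mean probs:", mean_p.round(3))
print("per-class uncertainty (std):", std_p.round(3))
```

A high std on the winning class flags a prediction the clinician should double-check, which is how the uncertainty estimate complements the raw accuracy figure.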
Why it matters: This approach integrates interpretability into high-performance deep learning, fostering clinician trust and accelerating accurate neuro-oncology diagnostics.
Q&A
- What is NASNet Large?
- How do LIME and Grad-CAM differ?
- Why is interpretability crucial in medical AI?
- What is Monte Carlo Dropout uncertainty estimation?
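To make the LIME question above concrete, here is a minimal sketch of its core idea: perturb the input around one instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients act as local feature importances. The `black_box` function and all constants are hypothetical stand-ins for the paper's MRI classifier (Grad-CAM, by contrast, needs no surrogate; it backpropagates gradients to the last convolutional layer):

```python
import numpy as np

rng = np.random.default_rng(42)

def black_box(X):
    # Hypothetical stand-in for a trained classifier's probability output;
    # in the paper this role is played by the NASNet Large model.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1])))

def lime_explain(x, n_samples=500, kernel_width=1.0):
    """Simplified tabular LIME: sample around x, weight samples by
    proximity to x, and fit a weighted linear surrogate locally."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturbations
    y = black_box(Z)                                         # model queries
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)             # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])              # intercept column
    # Weighted least squares via the normal equations.
    beta = np.linalg.solve((A.T * w) @ A, A.T @ (w * y))
    return beta[:-1]                                         # local importances

importance = lime_explain(np.zeros(2))
print("local feature importances:", importance.round(3))
```

The surrogate's coefficients recover which features push the prediction up or down near this instance; for images, real LIME perturbs superpixels rather than raw features, which is what yields the region highlights described above.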