A team led by Khon Kaen University applies an EfficientNetB7 convolutional neural network to color fundus photographs, classifying glaucoma severity according to the Hodapp-Parrish-Anderson criteria via transfer learning and fine-tuning. This approach offers accurate, single-image glaucoma screening in low-resource settings.
Key points
- EfficientNetB7 CNN, pre-trained on ImageNet, classifies 2,940 fundus images into three glaucoma stages.
- Transfer learning freezes 61% of the backbone layers and fine-tunes the remaining layers for domain adaptation.
- The model achieves an overall accuracy of 0.871 and per-class AUCs of 0.988 (normal), 0.932 (mild-moderate), and 0.963 (severe).
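The setup in the key points can be sketched in Keras. This is a minimal illustration, not the authors' code: the input size, dropout rate, optimizer, and head architecture are assumptions; only the EfficientNetB7 backbone, the 61% layer freeze, and the three-class output come from the summary.

```python
import tensorflow as tf

# The study uses ImageNet pre-training (weights="imagenet"); weights=None
# here only avoids the large pretrained-weight download in this sketch.
base = tf.keras.applications.EfficientNetB7(
    weights=None, include_top=False, input_shape=(224, 224, 3)
)

# Freeze the first 61% of backbone layers; fine-tune the rest.
cutoff = int(0.61 * len(base.layers))
for layer in base.layers[:cutoff]:
    layer.trainable = False

# Three-class head: normal, mild-moderate, severe (HPA stages).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),  # assumed regularization choice
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing early layers keeps the generic ImageNet features intact while the later, more task-specific layers adapt to fundus images.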
Why it matters: This AI-driven grading tool enhances early glaucoma detection and prioritizes severe cases, improving vision-loss prevention in resource-limited clinical settings.
Q&A
- What is fundus photography?
- What are Hodapp-Parrish-Anderson criteria?
- How does transfer learning improve model performance?
- Why use EfficientNetB7 specifically?
- What do AUC and accuracy metrics indicate?
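On the last question: the per-class AUCs reported above are one-vs-rest scores, computed by treating each stage in turn as the positive class. A toy illustration with made-up labels and softmax outputs (not the study's data):

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.preprocessing import label_binarize

# Hypothetical ground-truth stages (0=normal, 1=mild-moderate, 2=severe)
# and model softmax probabilities over the three stages.
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0])
y_prob = np.array([
    [0.90, 0.08, 0.02], [0.70, 0.20, 0.10], [0.20, 0.70, 0.10],
    [0.10, 0.60, 0.30], [0.05, 0.15, 0.80], [0.10, 0.30, 0.60],
    [0.60, 0.30, 0.10], [0.30, 0.50, 0.20], [0.20, 0.20, 0.60],
    [0.80, 0.10, 0.10],
])

# Overall accuracy: fraction of images whose argmax matches the label.
acc = accuracy_score(y_true, y_prob.argmax(axis=1))

# One-vs-rest AUC per class, mirroring the per-class AUCs reported.
y_bin = label_binarize(y_true, classes=[0, 1, 2])
aucs = [roc_auc_score(y_bin[:, k], y_prob[:, k]) for k in range(3)]
```

Accuracy measures hard-label correctness, while each AUC measures how well the model's probability for one stage separates that stage from the other two across all thresholds.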