Researchers at the University of Kentucky and collaborators designed MyoVision-US, software that combines DeepLabV3 semantic segmentation with a ResNet50 backbone and a post-processing pipeline to quantify quadriceps and tibialis anterior thickness, cross-sectional area, and echo intensity. The AI achieves excellent consistency (ICC >0.92) and reduces analysis time by 99.8%, aiding muscle assessment in critical and chronic illness.
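For orientation, here is a minimal sketch of how a DeepLabV3-ResNet50 model can segment a single ultrasound frame using torchvision. It is illustrative only, not the MyoVision-US code: the checkpoint path, class count, and preprocessing choices are assumptions.

```python
# Minimal sketch: segmenting one ultrasound frame with DeepLabV3-ResNet50
# via torchvision. Paths, class count, and preprocessing are assumptions,
# not the released MyoVision-US implementation.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

NUM_CLASSES = 2  # assumed: background vs. muscle

model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=NUM_CLASSES)
# model.load_state_dict(torch.load("quadriceps_model.pt"))  # hypothetical checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),      # ultrasound frames are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("ultrasound_frame.png")        # hypothetical input image
batch = preprocess(frame).unsqueeze(0)            # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                  # shape: (1, NUM_CLASSES, H, W)
mask = logits.argmax(dim=1).squeeze(0).numpy()    # per-pixel class labels
```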
Key points
- DeepLabV3-ResNet50 models segment quadriceps complex and tibialis anterior ultrasound images.
- Post-processing uses contour extraction, morphological opening/closing, and cubic spline smoothing to refine masks (see the mask-refinement sketch after this list).
- Software calculates muscle thickness, cross-sectional area, and echo intensity via pixel counts and grayscale averaging (see the measurement sketch after this list).
- Validation shows Dice ~0.90, IoU ~0.88, and ICCs of 0.92–0.99 compared to manual analysis.
- Automated pipeline analyzes 180 images in 247 s versus 24 h manually, saving 99.8% of analysis time.
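A minimal sketch of the mask-refinement step, assuming OpenCV for morphology and contour extraction and SciPy for the cubic spline; the kernel size and smoothing factor are illustrative choices, not values reported for MyoVision-US.

```python
# Minimal sketch: refine a binary segmentation mask via morphological
# opening/closing, contour extraction, and cubic spline smoothing.
# Kernel size and smoothing factor are assumed values for illustration.
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

def refine_mask(mask: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

    # Keep the largest contour (assumed to be the muscle of interest).
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return closed
    contour = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) points

    # Fit a periodic cubic spline to the contour and resample it densely.
    tck, _ = splprep([contour[:, 0], contour[:, 1]], s=len(contour), per=True, k=3)
    xs, ys = splev(np.linspace(0, 1, 1000), tck)
    smooth_contour = np.stack([xs, ys], axis=1).astype(np.int32)

    # Rasterize the smoothed contour back into a filled binary mask.
    refined = np.zeros_like(closed)
    cv2.fillPoly(refined, [smooth_contour], 1)
    return refined
```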
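A companion sketch for the measurement step: thickness from the mask's vertical extent, cross-sectional area from a scaled pixel count, and echo intensity from the mean grayscale value inside the mask. The pixels-per-centimeter calibration and the thickness definition (maximum column-wise extent) are simplifying assumptions, not the authors' exact protocol.

```python
# Minimal sketch: compute muscle thickness, cross-sectional area (CSA),
# and echo intensity from a non-empty binary mask and the original
# grayscale image. The pixels-per-centimeter calibration is assumed.
import numpy as np

def muscle_metrics(mask: np.ndarray, image: np.ndarray, px_per_cm: float = 40.0) -> dict:
    ys, xs = np.nonzero(mask)

    # Thickness: maximum vertical extent of the mask along any image column.
    thickness_px = max(
        (ys[xs == col].max() - ys[xs == col].min() + 1) for col in np.unique(xs)
    )

    # Cross-sectional area: pixel count scaled by the calibration factor.
    area_px = mask.astype(bool).sum()

    # Echo intensity: mean grayscale value (0-255) inside the mask.
    echo_intensity = float(image[mask.astype(bool)].mean())

    return {
        "thickness_cm": thickness_px / px_per_cm,
        "csa_cm2": area_px / (px_per_cm ** 2),
        "echo_intensity": echo_intensity,
    }
```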
Why it matters: Automating muscle ultrasound analysis transforms bedside assessments by delivering rapid, reproducible measurements that previously required expert manual effort. This scalability can improve monitoring of muscle wasting in critically ill and cancer patients, reduce human bias, and pave the way for real-time clinical integration.
Q&A
- What is semantic segmentation?
- How does echo intensity reflect muscle quality?
- Why use Intraclass Correlation Coefficient (ICC)?
- What roles do Dice coefficient and IoU play?
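For the last question, Dice and IoU both compare an automated mask against a manual reference mask, rewarding pixel-level overlap. A quick sketch of the standard definitions on binary arrays (not the authors' evaluation code):

```python
# Minimal sketch: Dice coefficient and IoU (Jaccard index) for binary masks.
# Dice = 2|A ∩ B| / (|A| + |B|); IoU = |A ∩ B| / |A ∪ B|.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    if union == 0:                      # both masks empty: treat as perfect agreement
        return 1.0, 1.0
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    iou = intersection / union
    return float(dice), float(iou)
```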