A cross-disciplinary team working with Sichuan University's NICUs built a machine learning pipeline that classifies neonatal intestinal diseases from bowel sound recordings captured with a digital stethoscope. The audio is filtered and preprocessed, time-frequency features such as MFCCs are extracted, and a Random Forest is trained to detect disease while a transformer-based model classifies conditions such as necrotizing enterocolitis (NEC), food protein-induced allergic proctocolitis (FPIAP), and intestinal obstruction, with the aim of supplementing subjective clinical assessment with objective, automated diagnostics.
Key points
- Collected neonatal bowel sounds with a 3M Littmann 3200 digital stethoscope, recording 2 minutes at each of six abdominal regions; recordings with more than 30% noise were excluded.
- Extracted acoustic features (zero-crossing rate, spectral centroid, chroma, and MFCCs) after pre-emphasis, framing, and Hamming windowing, then concatenated them into a multidimensional feature vector per recording (see the extraction sketch after this list).
- Trained a Random Forest for disease detection and a transformer-based network for multi-class classification (NEC, FPIAP, volvulus, obstruction), validated with tenfold cross-validation and on external cohorts, reporting high AUC (model sketches below).
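A minimal sketch of the feature-extraction step using librosa, assuming an illustrative sample rate, frame length, and MFCC count that the summary does not specify; frame-level features are summarized by their mean and standard deviation to form the per-recording vector.

```python
import numpy as np
import librosa

def extract_features(wav_path, sr=4000, pre_emphasis=0.97):
    """Build a per-recording feature vector from a bowel-sound clip.

    Mirrors the steps above: pre-emphasis, framing with a Hamming window,
    then time-frequency descriptors. Sample rate and coefficient counts
    are illustrative choices, not values reported by the study.
    """
    y, sr = librosa.load(wav_path, sr=sr)

    # Pre-emphasis to boost high-frequency components
    y = np.append(y[0], y[1:] - pre_emphasis * y[:-1])

    frame_len, hop_len = 1024, 512  # framing parameters (assumed)
    window = "hamming"

    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_len, hop_length=hop_len)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=frame_len,
                                                 hop_length=hop_len, window=window)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=frame_len,
                                         hop_length=hop_len, window=window)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=frame_len,
                                hop_length=hop_len, window=window)

    # Summarize each frame-level feature by its mean and standard deviation
    parts = [zcr, centroid, chroma, mfcc]
    return np.concatenate([np.r_[p.mean(axis=1), p.std(axis=1)] for p in parts])
```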
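A sketch of the detection stage with tenfold cross-validation, assuming feature vectors like the ones above; the file paths and Random Forest hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# X: per-recording feature vectors from extract_features(); y: 1 = intestinal
# disease present, 0 = healthy control. Both paths are hypothetical.
X = np.load("features.npy")
y = np.load("labels.npy")

clf = RandomForestClassifier(n_estimators=500, random_state=0)  # settings assumed
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold ROC AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```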
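For the multi-class stage, a minimal BERT-style encoder in PyTorch over sequences of frame-level acoustic features (e.g., MFCC frames before pooling); the layer counts, embedding size, and four-class output head are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class BowelSoundTransformer(nn.Module):
    """Minimal BERT-style encoder over frame-level acoustic features.

    Dimensions and the class list are illustrative; the study's
    architecture details are not reproduced here.
    """
    def __init__(self, n_feats=40, d_model=128, n_heads=4, n_layers=2, n_classes=4):
        super().__init__()
        self.proj = nn.Linear(n_feats, d_model)               # per-frame embedding
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # [CLS]-like summary token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)             # NEC / FPIAP / volvulus / obstruction

    def forward(self, x):                     # x: (batch, frames, n_feats)
        h = self.proj(x)
        cls = self.cls.expand(h.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, h], dim=1))
        return self.head(h[:, 0])             # logits from the summary token

# Usage: two dummy recordings, 200 frames of 40-dim features each
logits = BowelSoundTransformer()(torch.randn(2, 200, 40))
print(logits.shape)  # torch.Size([2, 4])
```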
Why it matters: An AI-based bowel sound diagnostic tool offers rapid, noninvasive neonatal intestinal disease screening, potentially reducing delays and improving outcomes compared with subjective auscultation.
Q&A
- What are bowel sounds?
- How does a digital stethoscope record sound?
- What are Mel-frequency cepstral coefficients (MFCCs)?
- What is a BERT-inspired transformer in this context?