A team led by the University of California, Santa Barbara and UMBC applies convolutional neural networks to one-second segments of pupil diameter and gaze data to detect stimulus onsets, revealing both generalizable and task-specific patterns in cognitive event recognition, with Matthews correlation coefficients of up to 0.75.
Key points
- Five CNN models (four task-specific, one generalized) process 1 s windows of 250 Hz pupil diameter and gaze data to detect stimulus onsets.
- SMOTE oversampling rebalances training data for unbiased binary classification, achieving MCC scores from 0.43 to 0.75 across tasks.
- Permutation feature importance shows task-specific models focus on gaze and pupillary light reflex, while the generalized model balances pupil dilation and gaze contributions.
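The 1 s / 250 Hz framing above implies each model input is a fixed-length window of a few channels. A minimal NumPy sketch of that windowing step, assuming (hypothetically) one pupil-diameter channel plus x/y gaze, with non-overlapping windows for simplicity:

```python
import numpy as np

# Hypothetical continuous recording: 10 s of pupil diameter plus x/y gaze
# sampled at 250 Hz. The sampling rate and 1 s window length come from the
# summary; the channel layout and window stride are assumptions.
fs = 250
signal = np.random.default_rng(0).normal(size=(10 * fs, 3))  # (samples, channels)

# Slice into non-overlapping 1 s windows; each window is one CNN input.
n_windows = signal.shape[0] // fs
windows = signal[: n_windows * fs].reshape(n_windows, fs, 3)
print(windows.shape)  # (10, 250, 3)
```

Each window would then be labeled by whether a stimulus onset falls inside it, turning onset detection into the binary classification task described above.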
Why it matters: This method enables rapid, individualized detection of cognitive events via ML-driven pupillometry for real-time attention and workload monitoring.
Q&A
- What is pupillometry? The measurement of pupil diameter and its changes over time; because pupil size responds to light, arousal, and cognitive load, it offers a non-invasive window into cognitive processing.
- Why use Matthews Correlation Coefficient (MCC)? MCC summarizes all four cells of the confusion matrix in a single score from -1 to +1, so it remains informative even when onset and non-onset windows are heavily imbalanced, where accuracy alone would mislead.
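The MCC reported in the study can be computed directly from the confusion matrix. A minimal sketch in plain Python (the counts below are illustrative, not from the paper):

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from a 2x2 confusion matrix.

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    ranging from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to chance-level performance. Returns 0.0 when any
    marginal is empty (the conventional choice).
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(tp=40, fp=10, fn=10, tn=40))  # a reasonably good classifier
print(mcc(tp=0, fp=0, fn=50, tn=450))   # always-negative predictor scores 0.0
```

Note that the always-negative predictor would score 90% accuracy on this imbalanced example while its MCC is 0, which is why MCC suits rare-event detection like stimulus onsets.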
- What role does SMOTE play in this study? SMOTE (Synthetic Minority Over-sampling Technique) generates synthetic minority-class examples to rebalance the training data, so the classifier is not biased toward the majority (no-onset) class.
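The study uses SMOTE for rebalancing; a production pipeline would typically call imblearn's `SMOTE`, but the core idea, interpolating between a minority sample and one of its nearest minority-class neighbours, can be sketched in NumPy (function name, data, and parameters here are illustrative):

```python
import numpy as np

def smote_like(X_minority, n_new, k=5, rng=None):
    """Generate synthetic minority samples by linear interpolation between
    a random minority sample and one of its k nearest minority neighbours.
    A simplified sketch of the SMOTE idea, not the library implementation."""
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # Euclidean distances from sample i to every other minority sample.
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        d[i] = np.inf  # exclude the sample itself
        j = rng.choice(np.argsort(d)[:k])
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.vstack(synthetic)

# Four minority samples at the corners of the unit square; new points land
# on segments between neighbours, so they stay inside the data's range.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_like(X_min, n_new=8, k=2, rng=0)
print(X_new.shape)  # (8, 2)
```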
- How do task-specific and generalized models differ? The four task-specific models are each trained on one task and lean on gaze and the pupillary light reflex, while the single generalized model is trained across tasks and balances pupil dilation and gaze contributions.
- What is permutation feature importance? A model-agnostic technique that shuffles one input feature at a time and measures how much the model's performance drops; a large drop means the model relies on that feature.
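Permutation feature importance, which the authors use to compare what the task-specific and generalized models attend to, can be sketched with a toy model and NumPy (the data and stand-in classifier here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on feature 0; feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    """Stand-in 'trained' classifier: thresholds feature 0."""
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, model(X))
drops = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
    drops.append(baseline - accuracy(y, model(Xp)))
    print(f"feature {j}: importance (accuracy drop) = {drops[-1]:.3f}")
```

Shuffling feature 0 destroys the model's only useful signal, so its score drops sharply, while shuffling the ignored noise feature changes nothing. In the study, this kind of comparison is what reveals the gaze-versus-pupil reliance of the different models.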