A collaborative team at Université Paris-Est Créteil and Children’s National Medical Center introduces a multichannel convolutional transformer for EEG-based mental disorder classification. The model preprocesses signals with CSP, SSP, and wavelet filters, tokenizes via convolutional layers, and employs self- and cross-attention to detect PTSD, depression, and anxiety. Evaluations on three datasets yield accuracies up to 92%, showcasing its potential for reliable, noninvasive diagnostics.

Key points

  • Combined CSP, SSP, and wavelet denoising filters attenuate noise by an average of 17.4 dB.
  • Convolutional blocks tokenize scaleograms derived via continuous Morlet wavelet transforms for localized feature extraction.
  • Transformer encoder applies multi-head self- and cross-attention across five EEG channels (Cz, T3, Fz, Fp1, F3).
  • Fusion block uses element-wise multiplication, max-pooling, and multi-head attention to integrate channel representations.
  • Achieves accuracies of 92.28% on EEG Psychiatric, 89.84% on MODMA, and 87.40% on Psychological Assessment datasets.
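The first two key points — a continuous Morlet wavelet transform producing scaleograms, followed by convolutional tokenization — can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the function names (`morlet_scaleogram`, `conv_tokenize`), the scale range, and the patch size are all assumptions, and a real tokenizer would use learned convolutional filters rather than simple patch flattening.

```python
import numpy as np

def morlet_scaleogram(x, scales, w0=6.0):
    """Magnitude of a continuous wavelet transform with a Morlet mother wavelet.

    Returns an array of shape (len(scales), len(x)). The wavelet support
    (8 * scale samples) must stay shorter than the signal for mode="same".
    """
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        # complex Morlet: a carrier at w0/s windowed by a Gaussian envelope
        wav = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, wav, mode="same"))
    return out

def conv_tokenize(scaleogram, patch=(8, 16)):
    """Chop the scaleogram into non-overlapping patches and flatten each into
    a token vector -- a stand-in for a learned convolutional tokenizer."""
    ph, pw = patch
    h, w = scaleogram.shape
    h, w = h - h % ph, w - w % pw              # crop to a multiple of patch size
    patches = scaleogram[:h, :w].reshape(h // ph, ph, w // pw, pw)
    return patches.transpose(0, 2, 1, 3).reshape(-1, ph * pw)

# toy single-channel "EEG" segment: a 10 Hz tone at 128 Hz sampling
sig = np.sin(2 * np.pi * 10 * np.arange(256) / 128.0)
scaleo = morlet_scaleogram(sig, np.arange(2, 30))   # (28 scales, 256 samples)
tokens = conv_tokenize(scaleo)                       # (48 tokens, 128 dims)
```

Each token then plays the role of one input position for the transformer encoder described in the next bullets.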
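The remaining key points — multi-head self- and cross-attention across the five channels, then fusion by element-wise multiplication and max-pooling — can likewise be sketched with random weights. This is a simplified, assumption-laden sketch: shared projection matrices, 4 heads, 16 tokens of dimension 32 per channel, and the choice of the first channel as the cross-attention reference are all illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    return softmax(q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])) @ v

def multi_head(x_q, x_kv, n_heads, Wq, Wk, Wv):
    """Project, split the feature dimension into heads, attend, re-concatenate.
    Self-attention when x_q is x_kv; cross-attention otherwise."""
    def split(z, W):
        t, _ = z.shape
        return (z @ W).reshape(t, n_heads, -1).transpose(1, 0, 2)
    out = attention(split(x_q, Wq), split(x_kv, Wk), split(x_kv, Wv))
    return out.transpose(1, 0, 2).reshape(x_q.shape[0], -1)

# five channels (Cz, T3, Fz, Fp1, F3), each already tokenized to (tokens, dim)
channels = [rng.standard_normal((16, 32)) for _ in range(5)]
Wq, Wk, Wv = (rng.standard_normal((32, 32)) * 0.1 for _ in range(3))

# self-attention within each channel, then cross-attention in which each
# channel's tokens query a reference channel's tokens
selfed = [multi_head(c, c, 4, Wq, Wk, Wv) for c in channels]
crossed = [multi_head(s, selfed[0], 4, Wq, Wk, Wv) for s in selfed[1:]]

# fusion: element-wise product across channel representations, then
# max-pooling over tokens to get one feature vector for a classifier head
fused = np.prod(np.stack(selfed), axis=0)   # (16, 32)
pooled = fused.max(axis=0)                  # (32,)
```

The element-wise product emphasizes features that all channels agree on, while max-pooling keeps the strongest activation per feature; the summary's fusion block additionally applies multi-head attention over the fused representation.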

Why it matters: This approach integrates convolutional tokenization with transformer-based attention to improve EEG analysis, offering a scalable framework for accurate, real-time mental disorder detection. By outperforming existing LSTM and SVM methods across multiple datasets, it paves the way for reliable, noninvasive diagnostic tools in clinical and remote settings.

Q&A

  • What is a convolutional transformer?
  • How do CSP and SSP filters enhance EEG signal quality?
  • Why use scaleograms in EEG classification?
  • What is the role of cross-attention across EEG channels?
  • How robust is the model’s performance across datasets?


Multichannel convolutional transformer for detecting mental disorders using electroencephalography records