Firat University researchers in digital forensics and neuroscience introduce FriendPat, a new, explainable feature-engineering model for EEG-based epilepsy detection. FriendPat computes channel distance matrices, applies voting-based feature extraction, and pairs CWINCA feature selection with a t algorithm-based k-nearest neighbors (tkNN) classifier. Integrated with the Directed Lobish symbolic language, it produces interpretable connectomes for accurate epilepsy diagnosis.

Key points

  • FriendPat uses L1-norm channel distance matrices and pivot-based voting to generate 595-dimensional feature vectors from 35-channel EEG signals (see the sketch after this list).
  • The CWINCA self-organized selector reduces the 595 features to 82 through cumulative weight thresholds, ensuring linear time complexity and an optimal feature subset.
  • The tkNN ensemble classifier, coupled with Directed Lobish symbolism, achieves 99.61% accuracy under 10-fold CV and generates interpretable cortical connectome diagrams.
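
The 595-dimensional vector matches the number of unordered channel pairs among 35 channels (35 × 34 / 2 = 595), which suggests the features correspond to the upper-triangular entries of the channel distance matrix. The sketch below shows only that distance step in Python; the pivot-based voting stage is omitted, and the function name and array shapes are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def channel_distance_features(eeg):
    """Pairwise L1 distances between EEG channels.

    eeg: array of shape (n_channels, n_samples), e.g. (35, N).
    Returns the upper-triangular entries of the channel distance
    matrix: 35 channels yield 35 * 34 / 2 = 595 features.
    """
    n_ch = eeg.shape[0]
    dist = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            dist[i, j] = np.sum(np.abs(eeg[i] - eeg[j]))  # L1 norm
    iu = np.triu_indices(n_ch, k=1)  # indices above the diagonal
    return dist[iu]

segment = np.random.randn(35, 1024)  # toy 35-channel EEG segment
features = channel_distance_features(segment)
print(features.shape)  # (595,)
```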

Why it matters: This explainable, lightweight EEG classification approach could transform clinical epilepsy diagnostics by combining high accuracy with interpretable neural connectome insights.

Q&A

  • What is FriendPat?
  • How does Directed Lobish (DLob) improve interpretability?
  • Why use CWINCA over standard NCA for feature selection?
  • Why does LOSO cross-validation show lower accuracy?

Electroencephalography (EEG): Brainwave Monitoring Explained

Electroencephalography (EEG) is a non-invasive method for recording the brain’s electrical activity. Sensors called electrodes are placed on the scalp to detect voltage fluctuations produced by neuronal firing. These signals are then amplified and digitized for analysis. EEG is widely used in medical diagnostics, research, and brain–computer interface (BCI) applications.

  • How EEG works: Groups of neurons in the brain generate tiny electrical currents when they fire. EEG electrodes capture these currents through the scalp.
  • Common applications: Epilepsy diagnosis, sleep studies, cognitive research, seizure monitoring, and brain–computer interfaces.
  • Signal characteristics: EEG signals are sampled at rates from 128 to 5,000 Hz and have amplitudes in the microvolt range.

Because the skull and scalp attenuate and diffuse signals, raw EEG data can be noisy. Preprocessing steps—such as band-pass filtering, artifact removal, and baseline correction—are essential before interpretation or automated analysis.
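
As an illustration of that preprocessing, here is a minimal band-pass filtering step using SciPy. The 0.5–40 Hz passband, the 256 Hz sampling rate, and the synthetic signal are illustrative choices, not values taken from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(signal, fs, low=0.5, high=40.0, order=4):
    """Band-pass filter one EEG channel between low and high Hz."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # zero-phase filtering avoids lag

fs = 256  # Hz, a common EEG sampling rate
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise
clean = bandpass_eeg(raw - raw.mean(), fs)  # baseline-correct, then filter
```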

Explainable AI in Medical Diagnostics

Explainable Artificial Intelligence (XAI) refers to machine learning and deep learning methods designed to provide transparent and interpretable results. In healthcare, transparency is critical: clinicians need to understand why a model made a particular decision to trust and act on it.

  1. Feature engineering-based XAI: Classical models extract handcrafted features and use interpretable algorithms, such as decision trees or k-nearest neighbors, to classify conditions like epilepsy (a minimal kNN sketch closes this section).
  2. Symbolic languages: Techniques like Directed Lobish convert numerical features into human-readable symbols, enabling sequence-based interpretation and entropy analysis (see the sketch after this list).
  3. Self-organized selection: Algorithms like CWINCA automatically choose informative features without manual tuning, ensuring reproducibility.
  4. Clinical insight: Explainable outputs—such as connectome diagrams or symbol sequences—help neurologists relate findings to brain anatomy and function.
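
To make item 2 concrete, the sketch below computes the Shannon entropy of a symbol sequence. The eight-token alphabet and the sequence are hypothetical stand-ins; Directed Lobish's actual symbol set and generation rules are defined in the paper.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical tokens standing in for Directed Lobish symbols
# (one per cortical lobe, signed by direction); NOT the paper's
# actual alphabet, only an illustration.
sequence = ["F+", "T-", "P+", "F+", "O-", "T-", "F+", "C+"]
print(f"Sequence entropy: {shannon_entropy(sequence):.3f} bits")
```

A lower entropy indicates a more repetitive, ordered symbol stream; comparing entropies across recordings is one way such symbolic outputs support interpretation.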

By combining EEG with XAI, researchers and clinicians can achieve both high diagnostic accuracy and meaningful explanations. This hybrid approach enhances trust, supports personalized treatment planning, and accelerates the integration of AI into routine neurological care.
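
As a closing illustration of the feature-engineering route in item 1, here is a minimal kNN classification loop under 10-fold cross-validation. The data are synthetic placeholders, and plain kNN is a simplification of the paper's tkNN ensemble, which additionally tunes k, distance metrics, and weighting.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data: 200 EEG segments x 595 distance features
# (random here; real features would come from the channel-distance
# step sketched earlier).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 595))
y = rng.integers(0, 2, size=200)  # toy labels: 0 = normal, 1 = epileptic

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10)  # 10-fold cross-validation
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```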

Source: An explainable EEG epilepsy detection model using friend pattern