total_perspective_vortex

Project Description

The goal of this project is to create an ML BCI classifier able to distinguish between motor/imagery tasks as well as the resting state. For the scope of this project, I chose to focus on three classification tasks:

  • Movement of left fist vs right fist
  • Movement of feet vs fists
  • Any movement vs resting state

The educational focus of this project is to implement a dimensionality reduction algorithm, Common Spatial Patterns (CSP) in my case. I also implemented a wrapper that performs CSP on a selected set of frequency bands and stacks the learned filters. Call it Multi-Frequency Band CSP if you like.
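The idea can be sketched without MNE using plain NumPy/SciPy. The names below (`csp_filters`, `multiband_csp`), the band edges, and the 160 Hz sampling rate are illustrative assumptions, not the project's actual API:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def csp_filters(X1, X2, n_components=4):
    """Learn CSP spatial filters from two classes of trials.
    X1, X2: arrays of shape (n_trials, n_channels, n_samples)."""
    cov1 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X1], axis=0)
    cov2 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X2], axis=0)
    # Generalized eigendecomposition: cov1 w = lambda (cov1 + cov2) w.
    vals, vecs = eigh(cov1, cov1 + cov2)
    # Keep filters from both ends of the eigenvalue spectrum:
    # maximal variance for one class, minimal for the other.
    order = np.argsort(vals)
    pick = np.concatenate([order[:n_components // 2], order[-(n_components // 2):]])
    return vecs[:, pick].T  # shape (n_components, n_channels)

def log_var_features(W, X):
    """Project trials through filters W and take normalized log-variance."""
    proj = np.einsum('ck,nkt->nct', W, X)
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

def multiband_csp(X1, X2, bands=((8, 12), (12, 16), (16, 24)), fs=160.0):
    """Band-pass the trials per frequency band, fit CSP on each band,
    and collect the learned filters for feature stacking."""
    filters = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
        F1 = filtfilt(b, a, X1, axis=2)
        F2 = filtfilt(b, a, X2, axis=2)
        filters.append((b, a, csp_filters(F1, F2)))
    return filters
```

Keeping filters from both ends of the eigenvalue spectrum is the standard CSP heuristic: those components have the largest variance ratio between the two classes, which is exactly what a variance-based classifier can exploit.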

The learned filters are then used to extract features from the data, and the features are used to train a classifier. Several classifiers were tested; Linear Discriminant Analysis (LDA) is the default choice, at least in my implementation.
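As a minimal sketch of the classification stage, the scikit-learn LDA default is enough; the feature matrix here is random stand-in data (one row of stacked log-variance features per trial), not real EEG:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-in features: 80 trials x 12 stacked multi-band CSP features (hypothetical sizes).
rng = np.random.default_rng(42)
X = rng.standard_normal((80, 12))
y = rng.integers(0, 2, size=80)  # two-class labels, e.g. left fist vs right fist

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

On random features like these the score hovers around chance; on real CSP features the same two lines are all the model fitting there is.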

Setup

The dataset used is the EEG Motor Movement/Imagery Dataset from PhysioNet (https://physionet.org/content/eegmmidb/1.0.0/). You can download it with the following command:

wget -r -N -c -np -P "/DIRECTORY_OF_YOUR_CHOICE/data" https://physionet.org/files/eegmmidb/1.0.0/

The Jupyter notebooks were written to run with Python 3.12.7. Requirements can be installed using pip:

pip install -r requirements.txt

Notebooks

  • explore.ipynb: Loading the data for selected subject(s), plotting the data in several introspective ways.

  • train.ipynb: Training the classifier for selected experiment. The classifiers are then saved to a file in /model directory.

  • train_and_evaluate_multi.ipynb: Training subject-specific classifiers for multiple subjects at a time. Displays the average test accuracy afterwards.

  • evaluate.ipynb: Evaluating classifier performance on selected subjects (useful if you have subject-non-specific models; I don't).

  • simulate_stream.ipynb: Simulating a real-time stream of data. The classifier is used to predict the class of the data in "real-time".
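The streaming simulation boils down to sliding a window over a continuous recording and classifying each window as it "arrives". A minimal sketch, with a hypothetical `predict` standing in for the trained CSP+LDA model:

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Yield consecutive windows from a continuous multichannel signal.
    signal: (n_channels, n_samples); win and step in samples."""
    for start in range(0, signal.shape[1] - win + 1, step):
        yield start, signal[:, start:start + win]

def predict(window):
    """Hypothetical stand-in for the trained classifier."""
    return int(window.var() > 1.0)

fs = 160  # EEGMMIDB sampling rate
stream = np.random.default_rng(1).standard_normal((8, 10 * fs))  # 10 s of fake EEG

# Classify 2-second windows, hopping by 0.5 s, as a real-time loop would.
preds = [predict(w) for _, w in sliding_windows(stream, win=2 * fs, step=fs // 2)]
```

The window length and hop size are assumptions for illustration; the real trade-off is latency (shorter hop) versus feature stability (longer window).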

Results

Results vary considerably by subject. Subject-specific classification accuracy ranges from 50% (chance level) to 90% (e.g., subjects 1 and 4), with an average of around 70%. The non-subject-specific classifiers perform worse, barely reaching 60% accuracy, which is the pass threshold for the project. Every brain is different, so why bother with generalized models? Not for me.
