This project was presented as a poster at the Cyberworlds 2024 international conference held in Yamanashi, Japan.
This is an LLM fine-tuning project that classifies four motor-imagery movements (left hand, right hand, tongue, foot) from EEG signals.
- The LLM itself performs the classification, taking features extracted from EEG data as input.
- We fine-tuned the gpt-4o model to improve classification performance.
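The README does not show the data-preparation step, so here is a minimal sketch of how per-trial feature vectors might be serialized into the chat-format JSONL that the OpenAI fine-tuning API expects. The prompt wording, feature formatting, and helper names are illustrative assumptions, not the exact prompts used in this project:

```python
import json

LABELS = ["left hand", "right hand", "tongue", "foot"]

def make_finetune_record(features, label):
    """Serialize one trial into an OpenAI chat-format fine-tuning record.

    `features` is a 1-D sequence of floats (e.g. PSD/CSP features for one
    trial); `label` is one of LABELS. The prompt text is a placeholder.
    """
    feature_str = ", ".join(f"{x:.4f}" for x in features)
    return {
        "messages": [
            {"role": "system",
             "content": "Classify the motor imagery class from EEG features. "
                        "Answer with one of: left hand, right hand, tongue, foot."},
            {"role": "user", "content": f"Features: [{feature_str}]"},
            {"role": "assistant", "content": label},
        ]
    }

def write_jsonl(records, path):
    """Write one JSON record per line, as the fine-tuning API requires."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Example: one trial with a 4-dimensional feature vector
record = make_finetune_record([0.12, -0.34, 0.56, 0.78], "left hand")
```

The resulting JSONL file would then be uploaded with `client.files.create(file=..., purpose="fine-tune")` and a job started with `client.fine_tuning.jobs.create(training_file=..., model=...)`; the exact gpt-4o snapshot name available for fine-tuning depends on your account.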
Python>=3.8, openai>=1.30.2, mne>=1.6.1
You can install all required libraries by running:
pip install -r requirements.txt
Data description: https://www.bbci.de/competition/iii/desc_IIIa.pdf
- cued motor imagery (multi-class) with 4 classes (left hand, right hand, foot, tongue), three subjects (ranging from quite good to fair performance)
- EEG, 60 channels, 60 trials per class
- performance measure: kappa-coefficient
Download : BBCI Competition III (https://www.bbci.de/competition/iii/download/index.html?agree=yes&submit=Submit)
1) Power spectral density (PSD) is computed in 2 Hz steps from 4 Hz to 36 Hz.
For feature selection, the Fisher ratio is used.
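The PSD and Fisher-ratio steps above can be sketched as follows. This is an illustrative implementation using `scipy.signal.welch`, not necessarily the exact routine used in the project (window lengths, normalization, and helper names are assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_power_features(trial, sfreq, fmin=4, fmax=36, step=2):
    """Average PSD power per channel in 2 Hz bins from 4-36 Hz.

    trial: array (n_channels, n_samples).
    Returns a flat feature vector of length n_channels * n_bands.
    """
    freqs, psd = welch(trial, fs=sfreq,
                       nperseg=min(trial.shape[-1], 256), axis=-1)
    feats = []
    for lo in range(fmin, fmax, step):
        mask = (freqs >= lo) & (freqs < lo + step)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

def fisher_ratio(X, y):
    """Fisher score per feature: between-class variance / within-class variance.

    Higher scores mean better class discriminability; features can then be
    ranked and the top-k kept.
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)
```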
2) Common spatial pattern (CSP) is used to extract spatial features that maximize discriminability between classes.
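A minimal two-class CSP can be written directly as a generalized eigenvalue problem on the class covariance matrices; this sketch assumes numpy/scipy and is equivalent in spirit to `mne.decoding.CSP`, which also handles the one-vs-rest extension needed for the 4-class case:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_components=4):
    """Two-class CSP spatial filters.

    trials_*: arrays (n_trials, n_channels, n_samples).
    Returns W of shape (n_components, n_channels); projected signals have
    maximal variance for one class and minimal for the other.
    """
    def avg_cov(trials):
        # Trace-normalized covariance averaged over trials
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    # Filters from both ends of the spectrum discriminate best
    order = np.argsort(eigvals)
    k = n_components // 2
    picks = np.concatenate([order[:k], order[-(n_components - k):]])
    return eigvecs[:, picks].T

def csp_features(trial, W):
    """Standard CSP feature: normalized log-variance of filtered signals."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```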
To compare the fine-tuned LLM classifier's performance with traditional ML models, we additionally trained an SVM, a random forest (RF), and an MLP on the same data with the same preprocessing.
Performance metrics: Accuracy, F1 score, ROC-AUC
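The baseline comparison can be sketched with scikit-learn; hyperparameters below are illustrative defaults, not the settings tuned for this project:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate_baselines(X, y, seed=42):
    """Train SVM/RF/MLP on the same features and report the three metrics."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=seed)
    models = {
        "SVM": SVC(kernel="rbf", probability=True, random_state=seed),
        "RF": RandomForestClassifier(n_estimators=200, random_state=seed),
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                             random_state=seed),
    }
    results = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        proba = model.predict_proba(X_te)
        results[name] = {
            "accuracy": accuracy_score(y_te, pred),
            "f1": f1_score(y_te, pred, average="macro"),
            # One-vs-rest ROC-AUC for the 4-class setting
            "roc_auc": roc_auc_score(y_te, proba, multi_class="ovr"),
        }
    return results
```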
- Although the GPT-4o-based classifier slightly underperformed the traditional machine learning models, this project demonstrates the potential of using LLMs as supervised learning models.
- It also suggests that as LLM performance continues to improve, the capabilities of LLM-based supervised classifiers will advance as well.