Learning representations from EEG with deep recurrent-convolutional neural networks #30

Paper

Link: https://arxiv.org/pdf/1511.06448.pdf
Year: 2016

Summary

  • designed to preserve the spatial, spectral, and temporal structure of EEG, which yields features that are less sensitive to variations and distortions within each dimension
  • robust to inter- and intra-subject differences, as well as to measurement-related noise

Contributions and Distinctions from Previous Works

  • none of the previous studies attempted to jointly preserve the structure of EEG data across space, time, and frequency

Methods

  • transform EEG activity into a sequence of topology-preserving multi-spectral images (a minimal sketch of this step follows this list)
  • train a deep recurrent-convolutional network, inspired by state-of-the-art video classification techniques, to learn robust representations from the sequence of images
  • evaluated on a working memory experiment: 15 subjects, 64 electrodes, 500 Hz sampling rate, 240 trials per subject, 3.5 seconds per trial, 4 categories
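
A rough sketch of the image-construction step, not the authors' exact pipeline: per-electrode spectral power in a few frequency bands is interpolated over 2D-projected electrode locations to form one image channel per band. The electrode coordinates, band power input, grid size, and interpolation method here are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def eeg_to_image(electrode_xy, band_power, grid_size=32):
    """Interpolate per-electrode band power onto a grid_size x grid_size image.

    electrode_xy: (n_electrodes, 2) 2D-projected electrode positions (assumed given)
    band_power:   (n_electrodes, n_bands) power per electrode and frequency band
    returns:      (n_bands, grid_size, grid_size) multi-spectral image
    """
    xs = np.linspace(electrode_xy[:, 0].min(), electrode_xy[:, 0].max(), grid_size)
    ys = np.linspace(electrode_xy[:, 1].min(), electrode_xy[:, 1].max(), grid_size)
    grid_x, grid_y = np.meshgrid(xs, ys)
    channels = []
    for band in range(band_power.shape[1]):
        # Scattered-data interpolation onto the regular grid; points outside
        # the electrode hull are filled with zeros.
        img = griddata(electrode_xy, band_power[:, band],
                       (grid_x, grid_y), method='cubic', fill_value=0.0)
        channels.append(img)
    return np.stack(channels, axis=0)
```

Applying this to the band power of each time window within a trial produces the sequence of images fed to the recurrent-convolutional network.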

Results

The hybrid model combining a per-frame ConvNet with an LSTM and a 1D temporal convolution performed best.
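
A minimal PyTorch sketch of that hybrid, with layer sizes and frame count chosen here as assumptions rather than the paper's exact configuration: a ConvNet with shared weights encodes each frame, and the resulting feature sequence feeds both an LSTM and a 1D temporal convolution whose outputs are concatenated before the classifier.

```python
import torch
import torch.nn as nn

class ConvLSTM1DConv(nn.Module):
    def __init__(self, n_classes=4, n_frames=7, feat_dim=128):
        super().__init__()
        # Shared ConvNet applied independently to every frame in the sequence.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, 128, batch_first=True)
        self.temporal_conv = nn.Conv1d(feat_dim, 64, kernel_size=3, padding=1)
        self.classifier = nn.Linear(128 + 64 * n_frames, n_classes)

    def forward(self, x):  # x: (batch, n_frames, 3, H, W)
        b, t = x.shape[:2]
        # Encode each frame with the shared ConvNet, then restore the time axis.
        feats = self.frame_cnn(x.flatten(0, 1)).view(b, t, -1)
        # Temporal modeling path 1: LSTM, keep the last time step.
        lstm_out, _ = self.lstm(feats)
        # Temporal modeling path 2: 1D convolution across the frame axis.
        conv_out = self.temporal_conv(feats.transpose(1, 2))
        merged = torch.cat([lstm_out[:, -1], conv_out.flatten(1)], dim=1)
        return self.classifier(merged)
```

For example, `ConvLSTM1DConv()(torch.randn(8, 7, 3, 32, 32))` returns logits of shape `(8, 4)` for a batch of 8 image sequences.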

Code

https://github.com/VDelv/EEGLearn-Pytorch
