dheerajkallakuri/Posture-Classification-with-Neural-Networks

Posture Classification with Neural Networks & Activation Functions Comparison

Overview

This project extends the previous lying posture tracking project by incorporating more classes, collecting additional data, and designing a machine learning algorithm for posture classification. The goal is to build and evaluate a machine learning model offline using collected IMU sensor data, focusing on five postures: supine, prone, side (either right or left), sitting, and an unknown posture.

Project Phases

Phase 1: Reading IMU Sensor Data

Run the readData.ino code to read the IMU sensor data from the Arduino board and store the signal readings.

Phase 2: Data Collection

Run readData.py to collect data for different scenarios by simulating various postures without actually wearing the board. Be sure to collect data for each posture in multiple orientations so the dataset is robust.
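
A minimal sketch of what the host-side collection script might look like, assuming the board prints comma-separated accelerometer readings (one `ax,ay,az` sample per line) over serial. The pyserial usage, port name, and CSV column names are assumptions, not taken from the project:

```python
import csv

def parse_sample(line):
    """Parse one serial line like '0.12,-0.98,0.05' into three floats.
    Returns None for malformed lines so logging can simply skip them."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        return tuple(float(p) for p in parts)
    except ValueError:
        return None

def log_samples(lines, out_path, label):
    """Write parsed samples to a CSV file with the posture label appended."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ax", "ay", "az", "label"])
        for line in lines:
            sample = parse_sample(line)
            if sample is not None:
                writer.writerow([*sample, label])

if __name__ == "__main__":
    # Reading from the Arduino could look like this (port name is a guess):
    # import serial
    # with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
    #     log_samples((raw.decode("ascii", "ignore") for raw in port),
    #                 "supine.csv", "supine")
    pass
```

Skipping malformed lines matters in practice, since serial reads often start mid-line and produce a truncated first sample.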

Phase 3: Dataset Construction

Create SampleData.csv by combining the collected data for all postures, then split it into training, validation, and test sets.
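
The split step can be sketched as below, assuming each row of SampleData.csv has already been loaded as one labeled sample. The 70/15/15 ratios and fixed seed are assumptions for illustration, not taken from the project:

```python
import random

def split_dataset(rows, train=0.7, val=0.15, seed=0):
    """Shuffle labeled samples and split them into train/validation/test
    lists. A fixed seed keeps the split reproducible across runs."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * train)
    n_val = int(len(rows) * val)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])
```

Shuffling before splitting is important here, since samples collected per posture arrive in contiguous blocks and an unshuffled split would leave some classes out of the training set.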

Phase 4: Model Architecture Selection

Decide on a neural network architecture to train your model for posture classification. Consider architectures suitable for processing sequential data.
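As an illustration of the kind of architecture evaluated in the Results section, here is the forward pass of a small 3→16→5 fully connected network with ReLU hidden units and a softmax output, written in plain Python with placeholder random weights. This is only a sketch of the computation; the actual project would train such a network with an ML framework:

```python
import math
import random

def relu(x):
    return [max(0.0, v) for v in x]

def softmax(x):
    m = max(x)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

def dense(x, weights, biases):
    """Fully connected layer: y_j = b_j + sum_i x_i * w_ji."""
    return [b + sum(xi * wi for xi, wi in zip(x, row))
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out, rng):
    """Placeholder random weights; a real model would learn these."""
    weights = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
               for _ in range(n_out)]
    return weights, [0.0] * n_out

def forward(x, layers):
    """3 accelerometer inputs -> 16 ReLU units -> 5 class probabilities."""
    (w1, b1), (w2, b2) = layers
    hidden = relu(dense(x, w1, b1))
    return softmax(dense(hidden, w2, b2))
```

The softmax output gives one probability per class (supine, prone, side, sitting, unknown), so the predicted posture is simply the index of the largest output.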

Phase 5: Model Training and Evaluation

Train your chosen neural network model on the training data and evaluate its performance using the validation set. Make adjustments to the architecture and dataset as needed to prevent overfitting or underfitting.

Phase 6: Testing

Test your final model on the test dataset to assess its performance and generalization capabilities.

Robustness Considerations

  • Ensure that the model is insensitive to changes in sensor orientations by collecting data with various orientations representing the same posture.
  • Label signals with the same class label for similar postures in different orientations (e.g., 'side' for both right and left side lying).
  • Make assumptions about possible ways the sensor unit can be worn and discuss these assumptions, operating points, and corner cases in the project report.
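
The second point above can be sketched as a simple scenario-to-label mapping; the scenario names here are illustrative, not taken from the project:

```python
# Map each recorded scenario to its training label; left- and right-side
# recordings collapse into a single 'side' class.
SCENARIO_TO_LABEL = {
    "supine": "supine",
    "prone": "prone",
    "left_side": "side",
    "right_side": "side",
    "sitting": "sitting",
    "unknown": "unknown",
}

def label_for(scenario):
    # Anything not explicitly listed falls back to the 'unknown' class.
    return SCENARIO_TO_LABEL.get(scenario, "unknown")
```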

Results

  1. Activation function: ReLU
    Layers: 3 inputs, 16 neurons in the first layer, and 5 neurons in the second layer (output).
    Test accuracy for this model on test data was about 99.55%, and the validation loss was about 0.0018.
    (figure: r1)

  2. Activation function: ReLU
    Layers: 3 inputs, 16 neurons in the first layer, 16 neurons in the second layer, and 5 neurons in the third layer (output).
    Test accuracy for this model on test data was about 99.89%, and the validation loss was about 7.2×10⁻⁵.
    (figure: r2)
    This model was overfitting.

  3. Activation function: ReLU
    Layers: 3 inputs, 16 neurons in the first layer, 8 neurons in the second layer, and 5 neurons in the third layer (output).
    Test accuracy for this model on test data was about 99.89%, and the validation loss was about 0.0011.
    (figure: r3)

  4. Activation function: Sigmoid
    Layers: 3 inputs, 16 neurons in the first layer, and 5 neurons in the second layer (output).
    Test accuracy for this model on test data was about 99.79%, and the validation loss was about 0.0023.
    (figure: r4)

  5. Activation function: Tanh
    Layers: 3 inputs, 16 neurons in the first layer, and 5 neurons in the second layer (output).
    Test accuracy for this model on test data was about 99.69%, and the validation loss was about 0.0034.
    (figure: r5)
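
For reference, the three activation functions compared in these experiments have the following scalar forms (a plain-Python sketch):

```python
import math

def relu(x):
    """max(0, x): cheap to compute; gradient does not saturate for x > 0."""
    return max(0.0, x)

def sigmoid(x):
    """1 / (1 + e^-x): squashes to (0, 1); saturates for large |x|."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Zero-centred squashing to (-1, 1); also saturates for large |x|."""
    return math.tanh(x)
```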
