This project extends the work done in the previous lying posture tracking project by incorporating more classes, collecting additional data, and designing a machine learning algorithm for posture classification. The goal is to build and evaluate a machine learning model offline using collected IMU sensor data, focusing on five postures: supine, prone, side (either right or left side), sitting, and an unknown posture.
Run readData.ino
Upload this sketch to the Arduino board; it reads the IMU sensor data and streams the signal readings over the serial port.
Run readData.py
This script collects data for the different scenarios; postures can be simulated by reorienting the board rather than actually wearing it. Be sure to collect data for each posture in multiple orientations so the model is robust.
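A minimal sketch of the parsing side of readData.py. The comma-separated "ax,ay,az" line format is an assumption about what readData.ino prints; adjust to match the actual sketch output.

```python
def parse_imu_line(line: str):
    """Parse one serial line of the assumed form 'ax,ay,az' into a float triple.

    Returns None for malformed or partial lines so the caller can skip them.
    """
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        return tuple(float(p) for p in parts)
    except ValueError:
        return None

# With the board attached, readings could be collected via pyserial, e.g.:
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200) as port:  # port name is an assumption
#       sample = parse_imu_line(port.readline().decode("ascii", "ignore"))
```

Skipping malformed lines matters in practice: the first line read after opening the port is often a partial one.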
Create SampleData.csv
Combine the collected data for all postures into a single dataset and split it into training, validation, and test sets.
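The shuffle-and-split step can be sketched as follows; the 70/15/15 ratio is an illustrative assumption, not the project's actual split.

```python
import numpy as np

def split_dataset(X, y, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle the combined dataset and split it into train/validation/test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))  # shuffle so each split mixes all postures
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test, val = idx[:n_test], idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```

Fixing the seed keeps the split reproducible, so validation losses from different architectures are compared on the same data.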
Decide on a neural network architecture to train your model for posture classification. Consider architectures suitable for processing sequential data.
Train your chosen neural network model on the training data and evaluate its performance using the validation set. Make adjustments to the architecture and dataset as needed to prevent overfitting or underfitting.
Test your final model on the test dataset to assess its performance and generalization capabilities.
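The train/evaluate loop above can be sketched with a minimal NumPy MLP. The 3-16-5 layer sizes match the smallest architecture evaluated below, but the toy clustered data, learning rate, and epoch count are illustrative assumptions; real training would use the SampleData.csv splits.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in data: 5 well-separated clusters in 3-D accelerometer space.
means = rng.normal(0.0, 3.0, (5, 3))
y = rng.integers(0, 5, size=500)
X = means[y] + rng.normal(0.0, 0.3, (500, 3))
Y = np.eye(5)[y]  # one-hot labels

# 3 inputs -> 16 ReLU neurons -> 5 posture classes
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 5)); b2 = np.zeros(5)

lr = 0.2
for epoch in range(500):
    h = relu(X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    g2 = (p - Y) / len(X)          # gradient of mean cross-entropy w.r.t. logits
    dW2 = h.T @ g2; db2 = g2.sum(axis=0)
    g1 = (g2 @ W2.T) * (h > 0)     # backprop through ReLU
    dW1 = X.T @ g1; db1 = g1.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = (p.argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.3f}")
```

In the actual workflow the accuracy would of course be computed on the held-out validation and test splits rather than on the training data.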
- Ensure that the model is insensitive to changes in sensor orientations by collecting data with various orientations representing the same posture.
- Label signals with the same class label for similar postures in different orientations (e.g., 'side' for both right and left side lying).
- Make assumptions about possible ways the sensor unit can be worn and discuss these assumptions, operating points, and corner cases in the project report.
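The labeling convention in the notes above can be expressed as a simple mapping. The raw tag names such as 'side_left'/'side_right' are hypothetical; the key point is that both side-lying orientations collapse into the single 'side' class.

```python
# Map raw recording-session tags to the five training classes.
# Tag names ('side_left', 'side_right') are assumptions for illustration.
LABEL_MAP = {
    "supine": "supine",
    "prone": "prone",
    "side_left": "side",
    "side_right": "side",
    "sitting": "sitting",
    "unknown": "unknown",
}

def normalize_label(raw: str) -> str:
    """Map a raw recording tag to one of the five training classes."""
    return LABEL_MAP[raw.strip().lower()]
```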
Model 1
- Activation function: ReLU
- Layers: 3 inputs, 16 neurons in the first layer, and 5 neurons in the output layer.
- Test accuracy: about 99.55%; validation loss: about 0.0018.
Model 2
- Activation function: ReLU
- Layers: 3 inputs, 16 neurons in the first layer, 16 neurons in the second layer, and 5 neurons in the output layer.
- Test accuracy: about 99.89%; validation loss: about 7.2×10⁻⁵.
- This model was overfitting.

Model 3
- Activation function: ReLU
- Layers: 3 inputs, 16 neurons in the first layer, 8 neurons in the second layer, and 5 neurons in the output layer.
- Test accuracy: about 99.89%; validation loss: about 0.0011.
Model 4
- Activation function: Sigmoid
- Layers: 3 inputs, 16 neurons in the first layer, and 5 neurons in the output layer.
- Test accuracy: about 99.79%; validation loss: about 0.0023.
Model 5
- Activation function: Tanh
- Layers: 3 inputs, 16 neurons in the first layer, and 5 neurons in the output layer.
- Test accuracy: about 99.69%; validation loss: about 0.0034.