
Explore the world of non-verbal communication like never before with our Body Language Detection solution. Utilizing the advanced capabilities of MediaPipe and OpenCV, we provide real-time insights into human gestures, postures, and facial expressions.


Body-Language-Detection-with-MediaPipe-and-OpenCV

Demo video: Body.Language.Decoder.mp4

This Jupyter Notebook provides the code and instructions for implementing body language detection using MediaPipe and OpenCV. The project incorporates two distinct models, giving users a comprehensive approach to body language analysis.

1. Scikit-Learn (.pkl)

The first model is built using Scikit-Learn and is stored in a .pkl (Python Pickle) format.

  1. It employs pipelines to encapsulate the preprocessing and modeling steps for multiple algorithms.

     from sklearn.pipeline import make_pipeline
     from sklearn.preprocessing import StandardScaler
     from sklearn.linear_model import LogisticRegression, RidgeClassifier
     from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

     pipelines = {
         'lr': make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
         'rc': make_pipeline(StandardScaler(), RidgeClassifier()),
         'rf': make_pipeline(StandardScaler(), RandomForestClassifier()),
         'gb': make_pipeline(StandardScaler(), GradientBoostingClassifier()),
     }
  2. It systematically trains and evaluates the different models, using accuracy as the metric (a training-and-scoring sketch follows this list).

     lr 0.995260663507109
     rc 0.985781990521327
     rf 0.9881516587677726
     gb 0.9928909952606635
    
  3. It saves the best-performing model for later use with pickle.

     import pickle

     with open('body_language.pkl', 'wb') as f:
         pickle.dump(fit_models['rf'], f)
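
A minimal sketch of the training-and-scoring step referenced above, assuming X holds the flattened landmark features and y the class labels from the collected dataset:

    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Assumed: X is the landmark feature matrix, y the label column.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=1234)

    # Fit every pipeline and keep the trained models keyed by algorithm code.
    fit_models = {algo: pipeline.fit(X_train, y_train)
                  for algo, pipeline in pipelines.items()}

    # Score each trained model on the held-out test set.
    for algo, model in fit_models.items():
        print(algo, accuracy_score(y_test, model.predict(X_test)))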

2. TensorFlow-Keras (.tflite)

The second model is built using TensorFlow-Keras and is stored in a TensorFlow Lite (.tflite) format.

  1. It builds and compiles a neural network model for classification (a build-and-train sketch follows this list).

     model.compile(optimizer='adam',
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])

  2. It trains the model and tracks the relevant metrics.

  3. It converts and saves the model in TensorFlow Lite format for mobile deployment.

     import tensorflow as tf

     converter = tf.lite.TFLiteConverter.from_keras_model(model)
     tflite_model = converter.convert()
     open("body_language.tflite", "wb").write(tflite_model)

Features ⭐

1. Create the training dataset using both a webcam and recorded video data (.mp4), extracting the relevant frames and annotating them with the corresponding labels (a landmark-export sketch follows the snippet below).

View Folder: Video Decoder

Modify the code for MP4 input:

import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

class_name = "Happy"
# Replace 'path_to_your_video_file' with the actual path to your video file
cap = cv2.VideoCapture('path_to_your_video_file')
# Initiate holistic model
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # Convert BGR to RGB and run the holistic model on the frame
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

cap.release()
2. Trained models to recognize 10 distinct body language and facial expression categories, enabling the automated recognition of emotions and gestures in videos (an inference sketch follows the label list).

Class Labels

  1. Happy
  2. Sad
  3. Angry
  4. Surprised
  5. Confused
  6. Tension
  7. Surprised
  8. Excited
  9. Pain
  10. Depressed
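
For reference, a minimal sketch of classifying one of these categories with the saved Scikit-Learn model. The classify helper is hypothetical; it assumes the same flattened landmark row used during training and that the saved pipeline (e.g. the random forest) supports predict_proba:

    import pickle
    import numpy as np
    import pandas as pd

    with open('body_language.pkl', 'rb') as f:
        model = pickle.load(f)

    def classify(results):
        # Build the same flattened feature row used during training (no label column).
        pose = results.pose_landmarks.landmark
        face = results.face_landmarks.landmark
        row = list(np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in pose]).flatten())
        row += list(np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in face]).flatten())
        X = pd.DataFrame([row])
        body_language_class = model.predict(X)[0]
        body_language_prob = model.predict_proba(X)[0].max()
        return body_language_class, body_language_prob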

3. Visual representation of the different emotional expressions, each depicted in a separate chart or plot using the Matplotlib library in Python (a plotting sketch follows the chart list).

Pie plot

Bar plot

Horizontal bar plot

Horizontal bar plot in increasing order of sizes
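
A minimal sketch of how such charts can be produced, assuming a pandas DataFrame df whose 'class' column holds the expression labels (the names df and 'class' are illustrative):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed: df is the training DataFrame and 'class' is the label column.
    sizes = df['class'].value_counts()

    # Pie plot of the class distribution.
    sizes.plot.pie(autopct='%1.1f%%')
    plt.title('Pie plot')
    plt.show()

    # Vertical bar plot.
    sizes.plot.bar()
    plt.title('Bar plot')
    plt.show()

    # Horizontal bar plot, sorted into increasing order of sizes.
    sizes.sort_values().plot.barh()
    plt.title('Horizontal bar plot in increasing order of sizes')
    plt.show()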
