How the system currently uses pose landmarks (for videos, not images; image processing is a simple single-step process)
For every frame:
Each person in the frame is detected.
For each person detected, their pose landmarks are extracted.
Both wrists are saved with their confidence scores (how confident the model is that the wrist exists), even if a wrist is not found/visible; a missing wrist will have a score close to 0.
Any wrist data point with a high enough confidence score is processed in the next step; otherwise it is ignored.
Each wrist that passes the threshold is cropped, and the cropped hand is processed by the hand landmark and gesture recognition model (see the first sketch after this list).
The system assumes the ordering of the people detected in each frame stays the same (i.e. no one switches spots). This lets the system relate each wrist and person to its counterpart in the previous frame.
Each person/wrist detected is saved into the system's history as the video is processed, simply by storing the current frame's data in variables that are compared against the next frame, and the cycle repeats.
The “best” gesture for each person/wrist is saved by comparing the gesture confidence scores of the current frame against previous frames (see the second sketch after this list).
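A minimal sketch of the per-frame wrist extraction above, assuming the Ultralytics YOLOv8 pose API (keypoints follow the COCO layout, where indices 9 and 10 are the wrists); the confidence threshold, crop size, and function names are hypothetical:

```python
from ultralytics import YOLO  # YOLOv8 pose model, as used in the current system

WRIST_IDS = (9, 10)   # COCO keypoint indices: left wrist, right wrist
CONF_THRESHOLD = 0.5  # hypothetical cutoff for "high enough" confidence
CROP_HALF = 64        # hypothetical half-size (px) of the square hand crop

pose_model = YOLO("yolov8n-pose.pt")

def wrists_for_frame(frame):
    """Yield (person index, wrist id, (x, y), confidence) for each confident wrist."""
    kpts = pose_model(frame)[0].keypoints
    for person_idx in range(kpts.xy.shape[0]):
        for wrist_id in WRIST_IDS:
            x, y = kpts.xy[person_idx, wrist_id].tolist()
            conf = float(kpts.conf[person_idx, wrist_id])
            # A wrist that is not found/visible scores near 0 and is skipped.
            if conf >= CONF_THRESHOLD:
                yield person_idx, wrist_id, (x, y), conf

def crop_hand(frame, xy):
    """Square crop centered on the wrist, clamped to the frame bounds."""
    h, w = frame.shape[:2]
    x, y = int(xy[0]), int(xy[1])
    return frame[max(y - CROP_HALF, 0):min(y + CROP_HALF, h),
                 max(x - CROP_HALF, 0):min(x + CROP_HALF, w)]
```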
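And a sketch of the per-person/wrist history and “best” gesture bookkeeping; `classify_gesture` is a hypothetical stand-in for the hand landmark and gesture recognition step:

```python
# Hypothetical per-video history: key = (person index, wrist id), value = the
# highest-confidence (gesture, score) seen so far. This relies on the ordering
# assumption above: person i in this frame is person i in the previous frame.
best_gestures = {}

def process_frame(frame, classify_gesture):
    """classify_gesture(crop) -> (label, confidence); a placeholder for the
    hand landmark + gesture recognition model."""
    for person_idx, wrist_id, xy, _conf in wrists_for_frame(frame):
        gesture, score = classify_gesture(crop_hand(frame, xy))
        key = (person_idx, wrist_id)
        if key not in best_gestures or score > best_gestures[key][1]:
            best_gestures[key] = (gesture, score)
```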
The issue
When body parts (especially arms) are close together or occluded, our current YOLOv8 pose-estimation model may draw a person's pose using someone else's arms.
The same arm may be assigned to two people in one frame, so that arm's wrist ends up being processed twice.
The gesture confidence scores on these incorrect limbs may end up higher than the scores of correctly identified limbs in other frames, so the video processing saves the wrong gestures, even if the video has the correct poses at some point.
The result may contain more gestures than expected, or the same gesture processed more than once as separate gestures, or both.
To solve this, we either need to train our YOLOv8 model for much longer, so that it draws poses correctly even when body parts are close together, overlap, or are obstructed, or we need to use a different model/method entirely.
One idea is to separate the human object detection and pose estimation models. Currently our YOLOv8 pose estimation model does both.
This would allow us to crop each person before processing their pose landmarks, similar to how the system crops hands for gesture classification. This MIGHT solve the issue of attaching the wrong arms to the wrong people (see the sketch below).
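A rough sketch of that two-stage idea, again assuming the Ultralytics API; the model weights and names here are placeholders, not a tested implementation:

```python
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")         # stage 1: person detection only
pose_model = YOLO("yolov8n-pose.pt")  # stage 2: pose, run on one-person crops

PERSON_CLASS = 0  # "person" in the COCO class list

def pose_per_person(frame):
    """Detect people first, then estimate each person's pose on their crop."""
    people = []
    for box in detector(frame)[0].boxes:
        if int(box.cls) != PERSON_CLASS:
            continue
        x0, y0, x1, y1 = map(int, box.xyxy[0].tolist())
        crop = frame[y0:y1, x0:x1]
        keypoints = pose_model(crop)[0].keypoints
        # Keypoints are in crop coordinates; add (x0, y0) back before
        # comparing wrists across people in the full frame.
        people.append(((x0, y0), keypoints))
    return people
```

Since the pose model would only ever see one person per crop, it could not borrow an arm from a neighbor, though overlapping boxes would still contain parts of other people.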
TODO