This code is a simple attempt to stabilize (smooth) a video using only traditional computer vision techniques in OpenCV. It is not state of the art, but it is a good way to learn several small concepts in computer vision, such as feature detection, optical flow, transformation estimation and warping.
Input video: input-video8.mp4
Here, I am using 'goodFeaturesToTrack' from OpenCV to detect feature points (Shi-Tomasi corners) in each frame.
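A minimal sketch of this step; the parameter values below are illustrative assumptions, not necessarily the ones used in this repo.

```python
import cv2

# Read the first frame of the input video and detect Shi-Tomasi corners on it.
cap = cv2.VideoCapture("input-video8.mp4")
ok, prev_frame = cap.read()

prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray,
                                   maxCorners=200,
                                   qualityLevel=0.01,
                                   minDistance=30,
                                   blockSize=3)
# prev_pts has shape (N, 1, 2): one (x, y) location per detected corner.
```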
I am using 'calcOpticalFlowPyrLK' from OpenCV to calculate the optical flow between consecutive frames, starting from the feature points detected in the previous frame. It uses the pyramidal Lucas-Kanade method to estimate the new pixel positions.
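Continuing the sketch above, the tracking step could look like this:

```python
# Track the detected corners into the next frame with pyramidal Lucas-Kanade flow.
ok, curr_frame = cap.read()
curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)

# Keep only the points that were tracked successfully in both frames.
idx = status.flatten() == 1
prev_good = prev_pts[idx]
curr_good = curr_pts[idx]
```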
With the help of the 'estimateRigidTransform' function, I calculate the transformation values [x, y, theta] between frames.
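A sketch of extracting [dx, dy, dtheta] from the estimated 2x3 matrix. Note that 'estimateRigidTransform' exists only up to OpenCV 3.x; the call below assumes OpenCV 4, where 'estimateAffinePartial2D' is the usual replacement for a partial affine (rigid) fit.

```python
import numpy as np

transforms = []  # one [dx, dy, da] entry per consecutive frame pair

# Fit a partial affine transform between the matched point sets.
m, _ = cv2.estimateAffinePartial2D(prev_good, curr_good)  # 2x3 matrix

dx = m[0, 2]                       # translation in x
dy = m[1, 2]                       # translation in y
da = np.arctan2(m[1, 0], m[0, 0])  # rotation angle (radians)
transforms.append([dx, dy, da])
```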
To get an idea of how the trajectory curve is smoothed, here is an illustration.
First, I use 'numpy.cumsum' to get the trajectory over the entire video, which is later smoothed by filtering. I am using a 'MovingAverageFilter'; the logic is outlined below.
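A sketch of what such a moving-average (box) filter typically looks like; the actual MovingAverageFilter in this repo may differ in naming and details.

```python
import numpy as np

def moving_average(curve, radius):
    """Box-filter a 1-D curve; the output has the same length as the input."""
    window_size = 2 * radius + 1
    kernel = np.ones(window_size) / window_size
    # Pad the ends so the filter does not shrink the curve.
    padded = np.pad(curve, (radius, radius), mode='edge')
    smoothed = np.convolve(padded, kernel, mode='same')
    return smoothed[radius:-radius]
```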
The filter is applied to the trajectory matrix, smoothing the translation along x and y and the rotation angle.
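A sketch of how the cumulative trajectory and the smoothing fit together; the smoothing radius of 30 frames is an assumed value.

```python
# Integrate the per-frame transforms into a trajectory, smooth each column,
# then shift the transforms by the difference so they follow the smooth path.
transforms = np.asarray(transforms)           # shape (num_frames - 1, 3)
trajectory = np.cumsum(transforms, axis=0)    # cumulative [x, y, angle]

smoothed_trajectory = np.copy(trajectory)
for i in range(3):                            # x, y and angle columns
    smoothed_trajectory[:, i] = moving_average(trajectory[:, i], radius=30)

transforms_smooth = transforms + (smoothed_trajectory - trajectory)
```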
I then use 'cv2.warpAffine' to warp each frame according to the filtered trajectory matrix.
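A sketch of rebuilding the stabilized frames from the smoothed transforms (video writing is omitted here):

```python
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)           # rewind to the first frame
n_frames = len(transforms_smooth)

for i in range(n_frames):
    ok, frame = cap.read()
    if not ok:
        break
    dx, dy, da = transforms_smooth[i]
    # Rebuild the 2x3 rigid transform from the smoothed parameters.
    m = np.array([[np.cos(da), -np.sin(da), dx],
                  [np.sin(da),  np.cos(da), dy]])
    h, w = frame.shape[:2]
    frame_stabilized = cv2.warpAffine(frame, m, (w, h))
```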
Since we are warping the image while keeping the frame size fixed, there will be some dead pixels along the border, which are visible in the output video.
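One common workaround, not necessarily applied in this repo, is to zoom each frame slightly so the dead border falls outside the visible area; a sketch with an assumed 4% zoom:

```python
def fix_border(frame, zoom=1.04):
    """Scale the frame up slightly about its centre so the dead border
    pixels introduced by warping fall outside the visible area."""
    h, w = frame.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 0, zoom)
    return cv2.warpAffine(frame, rot, (w, h))

frame_stabilized = fix_border(frame_stabilized)
```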
- Video Stabilization Using Point Feature Matching in OpenCV - Abhishek Singh Thakur: https://learnopencv.com/video-stabilization-using-point-feature-matching-in-opencv/
- Optical Flow in OpenCV (C++/Python) - Maxim Kuklin (Xperience.AI): https://learnopencv.com/optical-flow-in-opencv/
- CS231M · Mobile Computer Vision - Stanford University: https://web.stanford.edu/class/cs231m/lectures/lecture-7-optical-flow.pdf
The method is primitive and does not work well when objects in the video move quickly. Other approaches would be to find where the optical flow is largest and compensate for it analytically, or to use a deep learning model.