- Iterate over the bounding boxes of the previous frame.
- Iterate over the bounding boxes of the current frame.
- Iterate over the keypoint matches.
- Check whether the two keypoints of a match fall inside a bounding box in each frame; if so, increment the counter for that box pair.
- Fill bbBestMatches with the box pairs that have the maximum count (see the sketch below).
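A minimal sketch of this matching step, under a few assumptions: the BoundingBox and DataFrame structs are simplified stand-ins for the project's data structures, and queryIdx is taken to index the previous frame while trainIdx indexes the current frame.

```cpp
#include <map>
#include <vector>
#include <opencv2/core.hpp>

// Simplified stand-ins for the project's data structures.
struct BoundingBox { int boxID; cv::Rect roi; };
struct DataFrame
{
    std::vector<cv::KeyPoint> keypoints;
    std::vector<BoundingBox> boundingBoxes;
};

void matchBoundingBoxes(const std::vector<cv::DMatch> &matches,
                        std::map<int, int> &bbBestMatches,
                        const DataFrame &prevFrame, const DataFrame &currFrame)
{
    // counter[prevBoxID][currBoxID] = number of matches whose keypoints fall inside both boxes
    std::map<int, std::map<int, int>> counter;

    for (const auto &match : matches)
    {
        // assumption: queryIdx indexes the previous frame, trainIdx the current frame
        const cv::Point2f prevPt = prevFrame.keypoints[match.queryIdx].pt;
        const cv::Point2f currPt = currFrame.keypoints[match.trainIdx].pt;

        for (const auto &prevBox : prevFrame.boundingBoxes)
        {
            if (!prevBox.roi.contains(prevPt))
                continue;
            for (const auto &currBox : currFrame.boundingBoxes)
                if (currBox.roi.contains(currPt))
                    counter[prevBox.boxID][currBox.boxID]++;
        }
    }

    // for every previous box, keep the current box with the maximum count
    for (const auto &row : counter)
    {
        int bestCurrID = -1, bestCount = 0;
        for (const auto &cand : row.second)
            if (cand.second > bestCount)
            {
                bestCount = cand.second;
                bestCurrID = cand.first;
            }
        if (bestCurrID >= 0)
            bbBestMatches[row.first] = bestCurrID;
    }
}
```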
- Find the minimum X in the previous and the current point cloud within the ROI (outliers removed with the IQR rule).
- Calculate TTC = minXCurr * dT / (minXPrev - minXCurr), as sketched below.
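A hedged sketch of the lidar TTC under a constant-velocity model, with the IQR filter applied to the X coordinates; the LidarPoint struct is a simplified stand-in for the project's data structure.

```cpp
#include <algorithm>
#include <vector>

// Simplified stand-in; x is the forward (driving) direction.
struct LidarPoint { double x, y, z, r; };

// Keep only x values inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]; returned sorted ascending.
static std::vector<double> filterXwithIQR(const std::vector<LidarPoint> &points)
{
    std::vector<double> xs;
    for (const auto &p : points)
        xs.push_back(p.x);
    std::sort(xs.begin(), xs.end());

    const double q1 = xs[xs.size() / 4];
    const double q3 = xs[(3 * xs.size()) / 4];
    const double iqr = q3 - q1;

    std::vector<double> inliers;
    for (double x : xs)
        if (x >= q1 - 1.5 * iqr && x <= q3 + 1.5 * iqr)
            inliers.push_back(x);
    return inliers;
}

void computeTTCLidar(const std::vector<LidarPoint> &lidarPointsPrev,
                     const std::vector<LidarPoint> &lidarPointsCurr,
                     double frameRate, double &TTC)
{
    const double dT = 1.0 / frameRate;  // time between the two frames

    // front() is the smallest inlier x because the filter returns a sorted vector
    const double minXPrev = filterXwithIQR(lidarPointsPrev).front();
    const double minXCurr = filterXwithIQR(lidarPointsCurr).front();

    // constant-velocity model
    TTC = minXCurr * dT / (minXPrev - minXCurr);
}
```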
- Calculate the Euclidean distance of each match (between its previous and current keypoints).
- Iterate over the keypoint matches.
- Check whether the current keypoint lies inside the bounding box (outliers removed with the IQR rule).
- If so, add the keypoint and the match to the bounding box data structure (see the sketch below).
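A sketch of this association step, assuming the same simplified BoundingBox struct and match-index convention as above; the IQR bounds are computed over the per-match Euclidean distances.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// Simplified stand-in for the project's bounding box structure.
struct BoundingBox
{
    cv::Rect roi;
    std::vector<cv::KeyPoint> keypoints;
    std::vector<cv::DMatch> kptMatches;
};

void clusterKptMatchesWithROI(BoundingBox &boundingBox,
                              const std::vector<cv::KeyPoint> &kptsPrev,
                              const std::vector<cv::KeyPoint> &kptsCurr,
                              const std::vector<cv::DMatch> &kptMatches)
{
    // Euclidean distance between the two keypoints of each match
    std::vector<double> dists;
    for (const auto &m : kptMatches)
    {
        const cv::Point2f d = kptsCurr[m.trainIdx].pt - kptsPrev[m.queryIdx].pt;
        dists.push_back(std::hypot(d.x, d.y));
    }

    // IQR bounds over all match distances
    std::vector<double> sorted(dists);
    std::sort(sorted.begin(), sorted.end());
    const double q1 = sorted[sorted.size() / 4];
    const double q3 = sorted[(3 * sorted.size()) / 4];
    const double iqr = q3 - q1;
    const double lo = q1 - 1.5 * iqr, hi = q3 + 1.5 * iqr;

    for (size_t i = 0; i < kptMatches.size(); ++i)
    {
        const cv::KeyPoint &currKpt = kptsCurr[kptMatches[i].trainIdx];
        // keep the match only if the current keypoint lies in the ROI and its distance is not an outlier
        if (boundingBox.roi.contains(currKpt.pt) && dists[i] >= lo && dists[i] <= hi)
        {
            boundingBox.keypoints.push_back(currKpt);
            boundingBox.kptMatches.push_back(kptMatches[i]);
        }
    }
}
```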
- Iterate over the keypoint matches.
- Compute the ratio of keypoint distances between frames (distCurr / distPrev).
- Calculate the TTC using the median of the ratios, as sketched below.
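A sketch of the camera-based TTC using the median distance ratio; the minimum-distance threshold of 100 pixels is an assumption to suppress ratios from keypoint pairs that are too close together.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>
#include <opencv2/core.hpp>

void computeTTCCamera(const std::vector<cv::KeyPoint> &kptsPrev,
                      const std::vector<cv::KeyPoint> &kptsCurr,
                      const std::vector<cv::DMatch> &kptMatches,
                      double frameRate, double &TTC)
{
    if (kptMatches.size() < 2) { TTC = NAN; return; }

    // distance ratios between all pairs of matched keypoints (distCurr / distPrev)
    std::vector<double> distRatios;
    for (auto it1 = kptMatches.begin(); it1 != kptMatches.end() - 1; ++it1)
    {
        const cv::Point2f outerCurr = kptsCurr[it1->trainIdx].pt;
        const cv::Point2f outerPrev = kptsPrev[it1->queryIdx].pt;

        for (auto it2 = it1 + 1; it2 != kptMatches.end(); ++it2)
        {
            const cv::Point2f innerCurr = kptsCurr[it2->trainIdx].pt;
            const cv::Point2f innerPrev = kptsPrev[it2->queryIdx].pt;

            const double distCurr = std::hypot(outerCurr.x - innerCurr.x, outerCurr.y - innerCurr.y);
            const double distPrev = std::hypot(outerPrev.x - innerPrev.x, outerPrev.y - innerPrev.y);

            const double minDist = 100.0;  // assumed threshold: skip keypoint pairs that are too close
            if (distPrev > std::numeric_limits<double>::epsilon() && distCurr >= minDist)
                distRatios.push_back(distCurr / distPrev);
        }
    }

    if (distRatios.empty()) { TTC = NAN; return; }

    // the median ratio is robust against outlier matches
    std::sort(distRatios.begin(), distRatios.end());
    const double medianRatio = distRatios[distRatios.size() / 2];

    const double dT = 1.0 / frameRate;
    TTC = -dT / (1.0 - medianRatio);
}
```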
Good Case
The point cloud has low variance and no outliers.
Bad Case
High Variance
The point cloud has high variance, which can introduce errors in the filtering process.
Outlier
An outlier at the maximum X position is not a problem, but an outlier at the minimum X position can corrupt the TTC calculated from the lidar data. To be robust against this case, I applied a filter that ignores points far from the median of the point cloud.
- Build all detector/descriptor combinations and calculate the TTC for each (inf or nan values are replaced with 100 so the statistics stay computable).
- Calculate the TTC difference between LiDAR and camera.
- Find the best combination for the camera-based TTC (see the sketch below).
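The evaluation loop itself is project-specific, but the ranking idea can be sketched as follows; TtcSample, sanitize, and rankCombinations are hypothetical names, and the sentinel value 100 follows the note above.

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Hypothetical record of one frame's TTC estimates for a given detector/descriptor pair.
struct TtcSample { double ttcLidar; double ttcCamera; };

// Replace inf/nan with the sentinel value 100 so the statistics stay computable.
static double sanitize(double ttc)
{
    return (std::isnan(ttc) || std::isinf(ttc)) ? 100.0 : ttc;
}

// Mean absolute camera-vs-lidar TTC difference per combination; the combination
// with the smallest value is taken as the best camera TTC estimator.
std::map<std::string, double> rankCombinations(
    const std::map<std::string, std::vector<TtcSample>> &resultsPerCombination)
{
    std::map<std::string, double> meanAbsDiff;
    for (const auto &entry : resultsPerCombination)
    {
        if (entry.second.empty())
            continue;
        double sum = 0.0;
        for (const auto &s : entry.second)
            sum += std::fabs(sanitize(s.ttcCamera) - sanitize(s.ttcLidar));
        meanAbsDiff[entry.first] = sum / entry.second.size();
    }
    return meanAbsDiff;
}
```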
OS : Ubuntu 16.04
- cmake >= 2.8
- make >= 4.1
- Git LFS
- OpenCV >= 4.1
  - This must be compiled from source using the -D OPENCV_ENABLE_NONFREE=ON cmake flag for testing the SIFT and SURF detectors.
- gcc/g++ >= 5.4
Or build a docker container from my docker image with the command below:
docker run -p 6080:80 -v /dev/shm:/dev/shm kimjw7981/sfnd
- Clone this repo.
- Make a build directory in the top level project directory:
mkdir build && cd build
- Compile:
cmake .. && make
- Run it:
./3D_object_tracking