Tracking I added #7
Initialises and Captures Data: The code begins by setting up a connection to the Intel RealSense camera. It then enters a loop where it continuously captures pairs of time-synchronised colour and depth video frames.
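A minimal sketch of that setup using the standard `pyrealsense2` API (the stream resolutions, frame rate, and variable names here are illustrative, not taken from this PR):

```python
import numpy as np
import pyrealsense2 as rs

# Configure matching colour and depth streams (resolution/FPS are illustrative).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

# Align each depth frame to the colour frame so pixels correspond one-to-one.
align = rs.align(rs.stream.color)

# One iteration of the capture loop.
frames = pipeline.wait_for_frames()
aligned = align.process(frames)
color_frame = aligned.get_color_frame()
depth_frame = aligned.get_depth_frame()
color_image = np.asanyarray(color_frame.get_data())
```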
Detects and Describes Visual Features: In each colour frame, it uses the ORB algorithm to find hundreds of distinct, trackable points, such as corners and textured patches. Each feature is then assigned a compact binary "fingerprint," called a descriptor, that lets it be recognised in other frames.
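One way this step is commonly written with OpenCV (a sketch; `color_image` carries over from the capture snippet above):

```python
import cv2

# ORB detector; nfeatures caps how many keypoints are returned per frame.
orb = cv2.ORB_create(nfeatures=1000)

gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
# keypoints hold the 2D locations; descriptors is one 32-byte binary
# "fingerprint" per keypoint.
keypoints, descriptors = orb.detectAndCompute(gray, None)
```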
Matches Features Between Frames: The program then compares the descriptors from the current frame to those from the previous one to find corresponding points. It filters these matches, discarding ambiguous or incorrect ones, to produce a reliable list of features visible in both frames.
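The text doesn't name the exact filter, so this sketch uses a brute-force Hamming matcher with Lowe's ratio test, one common way to discard ambiguous ORB matches (`prev_descriptors` is assumed to be saved from the previous frame):

```python
# Hamming distance suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

# For each previous-frame descriptor, fetch its two best candidates in the
# current frame and keep the match only when the best is clearly better
# than the runner-up (Lowe's ratio test).
good_matches = []
for pair in matcher.knnMatch(prev_descriptors, descriptors, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good_matches.append(pair[0])
```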
Creates 3D-2D Correspondences: For each reliable match, it takes the 2D pixel location from the previous frame and looks up its distance in the corresponding depth map. Using the camera's intrinsic parameters, it deprojects that pixel and its depth into a full 3D point in space.
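A sketch of the deprojection, assuming the aligned frames from earlier and `prev_keypoints`/`prev_depth_frame` kept from the previous iteration:

```python
# Intrinsics of the aligned depth stream, read from the frame's profile.
intrinsics = depth_frame.profile.as_video_stream_profile().intrinsics

points_3d, points_2d = [], []
for m in good_matches:
    u, v = prev_keypoints[m.queryIdx].pt                   # pixel in previous frame
    depth = prev_depth_frame.get_distance(int(u), int(v))  # metres
    if depth <= 0:
        continue  # skip pixels with no valid depth reading
    # Back-project pixel + depth into a 3D point in the previous camera frame.
    points_3d.append(rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth))
    points_2d.append(keypoints[m.trainIdx].pt)             # matching pixel, current frame
```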
Estimates Relative Motion: It feeds the set of 3D points (from the previous frame) and their corresponding 2D pixel locations (in the current frame) into the Perspective-n-Point (PnP) algorithm, which estimates the six-degrees-of-freedom motion (rotation and translation) that the camera underwent between the two frames.
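A sketch of the PnP call; OpenCV's RANSAC variant is used here to tolerate residual outliers (an assumption on my part, plain `cv2.solvePnP` would also fit the description):

```python
# Camera matrix assembled from the stream intrinsics obtained above.
K = np.array([[intrinsics.fx, 0.0, intrinsics.ppx],
              [0.0, intrinsics.fy, intrinsics.ppy],
              [0.0, 0.0, 1.0]])

object_points = np.asarray(points_3d, dtype=np.float64)
image_points = np.asarray(points_2d, dtype=np.float64)

# rvec/tvec map previous-frame coordinates into the current camera frame.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
```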
Accumulates Pose to Track Path: The estimated relative motion is converted into a 4x4 transformation matrix. This matrix is then composed with the camera's previous global pose, continuously updating the total estimated position and orientation relative to the starting point. The resulting pose is printed to the console.
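The accumulation step might look like this, assuming `global_pose` is initialised to `np.eye(4)` before the loop:

```python
# Axis-angle rotation -> 3x3 matrix, then a 4x4 rigid transform T
# taking previous-frame coordinates to current-frame coordinates.
R, _ = cv2.Rodrigues(rvec)
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = tvec.ravel()

# The camera's own motion is the inverse of T, so compose that
# onto the running global pose.
global_pose = global_pose @ np.linalg.inv(T)

x, y, z = global_pose[:3, 3]
print(f"position (m): x={x:.3f}, y={y:.3f}, z={z:.3f}")
```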