
update the camera Frame #2

@redto0

Description


The Issue

Currently the cameras publish raw data with a message header. We need to change that. Specifically, we need to record what "pixel space" the data was generated in and where on the robot the data was created.

What does that mean? Basically, the planner works on the original frame (resolution, pixel ratio, etc.), so any data the perception team publishes should be in the same "frame", i.e. with respect to the same pixel space and resolution.

Keep in mind that the data should come from the camera topic in order to be the most up to date.
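As a rough sketch of what that could look like (node, topic, and message names here are placeholders, not the actual perception code), the perception publisher could copy the header from the incoming camera Image, so the frame_id and timestamp identify the pixel space and mounting frame the data came from:

```python
# Hypothetical sketch: reuse the camera Image header on whatever the
# perception node publishes, so downstream consumers (e.g. the planner)
# know which frame / pixel space the data belongs to.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray  # placeholder output type


class DetectionPublisher(Node):
    def __init__(self):
        super().__init__('detection_publisher')
        # '/camera/image_raw' and '/detections' are assumed topic names.
        self.sub = self.create_subscription(
            Image, '/camera/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(Detection2DArray, '/detections', 10)

    def on_image(self, msg: Image):
        out = Detection2DArray()
        # Same frame_id (pixel space / mounting frame) and same stamp as
        # the camera message, so the data stays traceable and up to date.
        out.header = msg.header
        # ... fill in detections computed from msg here ...
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(DetectionPublisher())
```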

Currently the CV team already publishes data in the full camera frame, but at a different resolution. This will be fixed by simply not downscaling the image.

What we need

  1. Information and documentation on how the AI polynomials are generated, and on what transformations were applied when making them.
  2. In addition, we need to undo the changes made to them so that they are in the original frame (see the sketch below).
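Once the crop offset and scale factor used by the AI pipeline are documented, undoing them could look roughly like the following. The offsets and scale here are made-up placeholders; the real values have to come from the documentation requested in item 1. The idea is to sample the polynomial in the processed image's pixel space, map the samples back to original pixels, and refit:

```python
import numpy as np

# Placeholder values for illustration only.
CROP_X, CROP_Y = 320, 180   # top-left corner of the crop, in original pixels
SCALE = 0.5                 # downscale factor applied after cropping


def poly_to_original_frame(coeffs, crop_width_scaled):
    """Refit a polynomial y = f(x) (fitted in cropped + downscaled pixel
    coordinates) so its coefficients are valid in the original frame."""
    x_small = np.linspace(0, crop_width_scaled - 1, 200)
    y_small = np.polyval(coeffs, x_small)
    # Undo the downscale, then undo the crop offset.
    x_orig = x_small / SCALE + CROP_X
    y_orig = y_small / SCALE + CROP_Y
    return np.polyfit(x_orig, y_orig, deg=len(coeffs) - 1)
```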

I believe the best solution is to adjust how the binary mask is made, since that determines the frame the polynomials end up in.

binary_mask = np.zeros(cropped_frame.shape[:2], dtype=np.uint8)
(line 148 of road_detectors/obj_detector_ai/obj_detector_ai/yolo_subscriber_node_new.py)

Making this mask fit the original frame, possibly by shifting the data into the original frame's coordinates, would be the ideal solution.
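One possible way to do that (a sketch only; the crop offsets are assumptions that would need to come from the actual cropping code in yolo_subscriber_node_new.py) is to allocate the mask at the original frame's resolution and write the detector's cropped-region mask into it at the crop's offset:

```python
import numpy as np

def full_frame_mask(frame, crop_mask, crop_x, crop_y):
    """Place a mask computed on a cropped region back into the original
    frame's pixel space. `frame` is the full-resolution camera image,
    `crop_mask` is the detector's mask for the cropped region, and
    (crop_x, crop_y) is the crop's top-left corner in original pixels."""
    # Allocate the mask in the ORIGINAL pixel space instead of the crop's.
    binary_mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    # Paste the cropped-region mask at its offset, so every nonzero pixel
    # is expressed in original-frame coordinates.
    h, w = crop_mask.shape[:2]
    binary_mask[crop_y:crop_y + h, crop_x:crop_x + w] = crop_mask
    return binary_mask
```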

Although this

Useful Links!

ROS2 Docs on tf2
simple python tf2 frame

Due Date

Prior to on-the-ground testing for either system.
