SteamGearAutoDeliver

David Matthews edited this page Jan 20, 2017 · 9 revisions

Gear delivery can be broken down into two steps:

Target detection

See our RustyCommsVision for details about our implementation so far.

Target detection can be done a few different ways; we will use contour-based analysis. To begin, we do some pre-processing on the source image to mostly isolate the target, remove small noise, and tolerate imperfect tape placement.
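As a rough sketch of the pre-processing idea (in practice we would use OpenCV's `inRange`/`erode`/`dilate` on the camera frame, as in RustyCommsVision; the cutoff value below is a placeholder to tune):

```python
def threshold(image, cutoff):
    """Keep only bright (retro-reflective) pixels of a grayscale image,
    given as a list of rows of 0-255 intensities."""
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

def erode(binary):
    """Drop any lit pixel without all four lit neighbours, removing
    single-pixel speck noise before contour analysis."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x] and all(
                0 <= y + dy < h and 0 <= x + dx < w and binary[y + dy][x + dx]
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[y][x] = 1
    return out
```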

We can now identify contours in our image and analyze them at a more abstract level. The gear lift target consists of two pieces of retro-reflective tape, shown below.

Steamworks Gear Lift Target

Each of the target's pieces should look roughly rectangular, with an aspect ratio of roughly 2:5 (width / height = 2/5). I say roughly in both cases because of distortion from the camera lens and because of the positioning of the robot.
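A minimal filter for this check might look like the following; the 2:5 ratio comes from the target spec above, while the tolerance is a guess to tune on real footage:

```python
def is_tape_candidate(width, height, tolerance=0.35):
    """Check a contour's bounding box against the expected 2:5 tape shape.

    tolerance is the allowed relative error; lens distortion and off-angle
    viewing stretch the apparent ratio, so it should be fairly generous.
    """
    if height == 0:
        return False
    expected = 2 / 5
    return abs(width / height - expected) <= tolerance * expected
```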

From a set of potential targets, we are looking for two that are about the same size and about the correct distance apart given their heights (being further away makes them appear closer together and smaller, but the aspect ratio stays the same).
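One way to express "same size, correct spacing" is a pairwise score over bounding boxes. The `spacing_ratio` below is an assumed centre-to-centre gap divided by tape height; check it against the official field drawings before trusting it:

```python
def pair_score(a, b, spacing_ratio=1.65):
    """Score how well two bounding boxes (x, y, w, h) look like the two
    tape strips. Lower is better; 0 is a perfect match."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    size = abs(ah - bh) / max(ah, bh)        # similar heights
    level = abs(ay - by) / max(ah, bh)       # roughly level vertically
    gap = abs((ax + aw / 2) - (bx + bw / 2)) # centre-to-centre distance
    spacing = abs(gap / max(ah, bh) - spacing_ratio)
    return size + level + spacing
```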

It is possible that our program finds more than one target. In this case, we would need to be able to rank each potential full target and pick one to treat as a real target.
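Ranking could be as simple as scoring every candidate pair and sorting. The score here only compares heights, as a stand-in; a real scorer would also weigh spacing and vertical offset:

```python
from itertools import combinations

def rank_full_targets(boxes):
    """Return candidate (x, y, w, h) box pairs, best-first."""
    def score(a, b):
        return abs(a[3] - b[3]) / max(a[3], b[3])
    return sorted(combinations(boxes, 2), key=lambda p: score(*p))
```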

We will send the target that we find over UDP broadcast to the roboRIO, which will use this data along with data from other sensors to determine how to move the robot.

Data to broadcast: all values should be in the range 0 to 1, where for position (0, 0) is the upper left. This ensures that changes to the camera stream size will not affect the motion software unless we also change the camera.

  • X, Y position of the target
  • Width and height of the target
  • Possibly, a probability that the detection is correct
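A sketch of the broadcast step using only the standard library. The port number and JSON field names are placeholders; whatever listens on the roboRIO defines the real ones:

```python
import json
import socket

VISION_PORT = 5810  # placeholder port

def encode_target(x_px, y_px, w_px, h_px, frame_w, frame_h, confidence):
    """Normalise pixel values to 0..1 (origin at the upper left) so the
    motion software is independent of the camera stream size."""
    return json.dumps({
        "x": x_px / frame_w,
        "y": y_px / frame_h,
        "w": w_px / frame_w,
        "h": h_px / frame_h,
        "p": confidence,
    }).encode()

def broadcast(payload, port=VISION_PORT):
    """Send one datagram to the whole subnet; the roboRIO picks it up."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))
```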

Robot motion

Automated delivery of gears into the ship can be completed during the autonomous period (the first 15 seconds of the match) or during the remaining teleop period.

If in auto, we can immediately execute the delivery routine.
If we are in teleop, we should indicate to the driver that we have detected the gear lift and wait for their instruction to deliver the gear.
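The gating logic is small enough to state directly (the mode strings and driver-confirmation flag here are hypothetical names, not anything from our robot code):

```python
def should_deliver(mode, target_visible, driver_confirmed):
    """In auto we act as soon as we see the target; in teleop we wait
    for the driver's go-ahead."""
    if not target_visible:
        return False
    return mode == "auto" or driver_confirmed
```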

Routine

If we are far from the target but it is detected by the camera, we will want to move the robot so that it is normal to the target and aimed at it.

We can then move the robot forward until it is the correct distance from the target. This distance can be determined through the camera feed (although distortion will be more of an issue at close range) or through a rangefinder such as IR, ultrasonic, or LIDAR.
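For the camera-feed option, a simple pinhole-camera estimate relates the tape's apparent height to range. The field-of-view and frame-height numbers below are placeholders to measure on our actual camera, and the linear pixel-to-angle mapping is a simplification that lens distortion makes worse up close:

```python
import math

TAPE_HEIGHT_IN = 5.0      # real tape height (from the 2in x 5in spec)
VERTICAL_FOV_DEG = 45.0   # placeholder: measure for our camera
FRAME_HEIGHT_PX = 240     # placeholder: our stream height

def distance_inches(target_height_px):
    """Estimate range from the target's apparent height in pixels."""
    angle = math.radians(VERTICAL_FOV_DEG) * target_height_px / FRAME_HEIGHT_PX
    return TAPE_HEIGHT_IN / (2 * math.tan(angle / 2))
```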

We can then deposit the gear onto the peg; when a sensor determines that we no longer have a gear, we can move backwards and end the routine. We could possibly confirm that we correctly delivered the gear by using the camera to detect it (although the gear will be pulled up into the ship very quickly).

Vision

My idea for placing a gear on the hook during autonomous is similar to what you see in video games, where a circle helps you aim at your target so you can shoot it. Instead of a circle, we could have two digital red boxes that are the same size and distance apart as the two pieces of reflective tape would appear at the correct delivery distance.

These digital red boxes can help the robot figure out how far it needs to go to reach the hook. If the robot is too far away, it will see that the tape isn't the same size as the red boxes, which means it needs to move forward. As the robot moves towards the tape, the reflective tape will get bigger and closer to the size of the red boxes. When the two pieces of tape and the two boxes are the same size and line up, the robot will know it is at the hook and can place the gear. We could also use something else, like a sensor, to tell the robot that it is near the hook.
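The "line up with the red boxes" test could be checked numerically like this; the box values and tolerance are made-up numbers to tune on the field, using the same normalised 0..1 coordinates we broadcast:

```python
def boxes_aligned(detected, reference, tol=0.05):
    """Compare a detected tape box (x, y, w, h, each 0..1 of the frame)
    with a fixed 'digital red box'. Matching within tol on every value
    means we are at the hook; too small a width/height means drive forward."""
    return all(abs(d - r) <= tol for d, r in zip(detected, reference))
```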

(I will add a photo to describe what I am talking about later)