Building a Modular Robot
- Develop a robust detection and localization solution based on supplied demo parts
- Develop a web-based interface that integrates into the robot's existing UI
- Include mapping from camera to robot coordinates (illustrated with a small sketch below)
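The exact mapping depends on the calibration results; purely as an illustration, a point in camera coordinates can be taken into the robot base frame with a 4x4 homogeneous transform (the matrix values below are placeholders, not the real calibration):

```python
import numpy as np

# Placeholder camera-to-robot transform; the real rotation/translation comes
# from the extrinsic (hand-eye) calibration, not from this example.
T_robot_cam = np.array([
    [0.0, -1.0, 0.0, 0.25],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.40],
    [0.0,  0.0, 0.0, 1.00],
])

def camera_to_robot(p_cam):
    """Map a 3D point [x, y, z] from camera coordinates to robot coordinates."""
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coordinates
    return (T_robot_cam @ p_h)[:3]

print(camera_to_robot([0.05, 0.02, 0.60]))
```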
-
It is recommended to work with Anaconda, as it manages Python environments more cleanly.
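For example (the environment name and Python version are arbitrary choices, not project requirements):
$ conda create -n modular-robot python=3.9
$ conda activate modular-robot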
-
OpenCV - $ sudo apt-get install python3-opencv
-
Flask - $ pip install flask
-
- Run the calibration scripts available under ./cam_calibration/
-
Once the camera is successfully calibrated, the calibration data is stored under ./camera_data/.
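The stored calibration can then be reused from Python, for example to undistort captured images. The file names below (cam_mtx.npy, dist.npy) are assumptions about what the calibration scripts write, so check ./camera_data/ for the actual output:

```python
import numpy as np
import cv2

# Assumed file names; verify against the actual contents of ./camera_data/.
camera_matrix = np.load("./camera_data/cam_mtx.npy")  # 3x3 intrinsic matrix
dist_coeffs = np.load("./camera_data/dist.npy")       # lens distortion coefficients

img = cv2.imread("sample.png")
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("sample_undistorted.png", undistorted)
```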
-
To capture images from the Basler camera, run $ python camera_control.py. The script loads its camera configuration from config-a2A3840-13gmPRO_40137700.pfs.
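camera_control.py is the supplied capture script; only as an illustration of how a .pfs feature file can be applied through Basler's pypylon API, a minimal capture could look like this (assuming pypylon is installed):

```python
from pypylon import pylon
import cv2

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Load the saved feature set (exposure, gain, pixel format, ...) from the .pfs file.
pylon.FeaturePersistence.Load("config-a2A3840-13gmPRO_40137700.pfs",
                              camera.GetNodeMap(), True)

grab = camera.GrabOne(5000)  # grab a single frame, timeout in milliseconds
if grab.GrabSucceeded():
    cv2.imwrite("capture.png", grab.Array)
grab.Release()
camera.Close()
```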
-
To test the object recognition:
- Two images are needed: an image of the scene with no objects (the background image) and an image of the scene with objects (the sample image).
- Change the background image parameter (img_bg) and the sample image parameter (img_example) in the script to the desired image paths.
- Run $ python object_recognition.py (a minimal sketch of the background-subtraction idea follows this list).
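object_recognition.py contains the actual implementation; the sketch below only illustrates the background-subtraction idea behind comparing img_bg and img_example (file names and thresholds are illustrative):

```python
import cv2

img_bg = cv2.imread("background.png")       # scene without objects
img_example = cv2.imread("sample.png")      # scene with objects

# The difference between background and sample highlights the added objects.
diff = cv2.absdiff(cv2.cvtColor(img_bg, cv2.COLOR_BGR2GRAY),
                   cv2.cvtColor(img_example, cv2.COLOR_BGR2GRAY))
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:            # skip small noise blobs
        x, y, w, h = cv2.boundingRect(c)
        print("object found at pixel region:", x, y, w, h)
```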
-
Assuming that the camera has been calibrated by running all the calibration scripts, test the CV2D code by running $ python test.py.
- Run $ python server.py (a minimal sketch of such an endpoint follows this list).
- Import flows.json into the NodeRED interface.
- Open the imported flow named CV2D.
- Change the URL parameter in the http node to the URL reported when the server.py script starts.
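server.py is the supplied script; the following is only a rough sketch of how a Flask endpoint could expose detection results to the NodeRED http node (the route name and JSON fields are placeholders, not the actual API):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/detect", methods=["GET"])
def detect():
    # Placeholder response; the real script would run the detection pipeline
    # and convert pixel coordinates into robot coordinates first.
    return jsonify([{"label": "demo_part", "x": 0.25, "y": 0.10, "z": 0.05}])

if __name__ == "__main__":
    # host=0.0.0.0 makes the URL reachable from the NodeRED host.
    app.run(host="0.0.0.0", port=5000)
```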
- The CAD models of the robot end-of-arm tooling and the camera, along with their assembly, can be found here
- Object Detection Ref: https://github.com/pacogarcia3/hta0-horizontal-robot-arm/blob/master/README.md
- NodeRED
- Graphical programming: https://docs.robco.de/sections/node_programming.html
- Text-based programming: https://docs.robco.de/sections/text_based_programming.html
Additionally, we tried different state-of-the-art object detection and matching algorithms; the results are documented here.