
nyu_finger_tutorial


You can find the supplementary code for this tutorial here.

Set up the PC

Before starting this tutorial, make sure you have set up your PC according to the following tutorials.

Fast prototyping

We provide a simple wrapper around the hardware communication APIs for the sole purpose of fast prototyping. This wrapper is not real-time safe and can have undefined behaviors in the face of packet loss, which may damage the robot. We thus strongly discourage you from using it for any serious experiments. However, it is an easy way to check if the robots are correctly connected and functioning. To initialize the robot, run

from nyu_finger import NYUFingerReal
robot = NYUFingerReal()
robot.initialize('eth3', [1,2,0]) # ethernet port and motor numbers

Note that you need to create a separate wrapper for each NYU Finger robot. To initialize a robot, you need to know the Ethernet port it is connected to and its motor numbers. The motor numbers depend on the wiring, but they are usually written on the robot if it was built by us. Finding the Ethernet port is a bit trickier and may require some trial and error: first, open a terminal window and type

netstat -i

This will list all of your network interfaces. The Iface column shows the interface name, and the RX-OK/ERR/DRP/OVR columns give statistics about the packets received on each interface (OK counts correctly received packets). The interface the robot is connected to should behave as follows: 1) when the robot is turned off, the number of received packets does not change; 2) when the robot is switched on, the number of received packets increases. If you are not sure, simply try each port sequentially: if the robot is successfully initialized, its LED should turn from red to green. This way, you can find the Ethernet port connected to the robot. We recommend doing this for one robot at a time.
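
For example, with two fingers connected to two different interfaces, each robot gets its own wrapper. The interface names below are placeholders for the ones you found; the motor numbers are the ones written on your robots.

from nyu_finger import NYUFingerReal

# One wrapper per robot; replace the interface names and motor
# numbers with the values for your own setup.
finger0 = NYUFingerReal()
finger0.initialize('eth3', [1, 2, 0])

finger1 = NYUFingerReal()
finger1.initialize('eth4', [1, 2, 0])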

Once the robot is initialized, you can perform some basic operations such as reading joint measurements and sending control torques

q, dq = robot.get_state()
robot.send_joint_torque(tau)
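
For a quick sanity check, you can combine these two calls into a minimal torque-control loop on the robot object created above. The sketch below uses placeholder PD gains, target posture, and loop rate that are not tuned for the hardware; remember that this wrapper is not real-time safe.

import time
import numpy as np

Kp, Kd = 3.0, 0.05      # placeholder gains, not tuned for the hardware
q_des = np.zeros(3)     # placeholder target joint positions

for _ in range(1000):
    q, dq = robot.get_state()           # joint positions and velocities
    tau = Kp * (q_des - q) - Kd * dq    # simple PD law
    robot.send_joint_torque(tau)        # send the torque command
    time.sleep(0.001)                   # roughly 1 kHz, not real-time safe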

For more information, refer to this Jupyter notebook.

Dynamic Graph Manager (DGM) and Dynamic Graph Head (DGH)

To properly communicate with the robot and use advanced functionalities, we use DGM and DGH. A general tutorial for DGH can be found here. We provide a gentle introduction using the NYU TriFinger robot as an example. First, let us git clone the supplementary code for this tutorial

cd ~/devel/workspace/src
git clone https://github.com/machines-in-motion/nyu_finger_tutorial.git

As before, you need to know the Ethernet port and the motor numbers for each robot, but this time you save this information in the configuration YAML files. They are located in the robot_properties_nyu_finger package; when cloned with treep, they should be at ~/devel/workspace/src/robot_properties_nyu_finger/src/robot_properties_nyu_finger/resources/dynamic_graph_manager. Open the respective YAML file (for the NYU TriFinger, dgm_parameters_nyu_finger_triple[0-2].yaml) and update the fields network_id and motor_numbers with the information you found.

Once you have updated the YAML files, you will have to recompile your workspace for the changes to take effect.

cd ~/devel/workspace
source /opt/ros/foxy/setup.bash
source /opt/openrobots/setup.bash
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release -DPYBIND11_TEST=OFF -DBUILD_TESTING=OFF

Calibration

DGM implements a calibration procedure for the robot that finds the encoder index for each joint. However, to use it you first need to find the index offsets, that is, the distance between each encoder index position and the joint zero position we defined in the URDFs. When the joints are at the zero position, the NYU Finger robot should point vertically down.

Once you find the index offsets, you need to update the configuration YAML files accordingly (you only need to do this once). We have prepared scripts for this procedure in the supplementary code. Note that you will need to run the scripts from a root shell:

sudo -s
source /opt/ros/foxy/setup.bash
source /opt/openrobots/setup.bash
source /$path_to_your_workspace/install/setup.bash

From the root shell, launch the hardware process (HWP) for NYU TriFinger by running the following commands:

cd /$path_to_your_workspace/src/nyu_finger_tutorial
ipython
# From within ipython execute the following command.
run hwp_zerooffset.py

This creates a hardware process for each NYU Finger robot and, once you press the Enter key at startup, runs the calibration procedure with zero index offsets. The idea is the following: if we assume the index offsets to be zero during calibration, the hardware process treats the encoder index position as the joint zero position. When we then move the robot to its actual zero position, the negated measured joint positions are exactly the index offsets. This is implemented by the script read_index_offset.py. Again, you need root privileges to run it.

Keep the previous ipython window running and create a new root shell (don't forget to source the bash files). Now, in the new root shell, run

cd /$path_to_your_workspace/src/nyu_finger_tutorial
ipython
# From within ipython execute the following command.
run read_index_offset.py

This script applies a high-damping controller so that once you move each joint to its zero position, it stays there. The shoulder link is a bit heavy, so you might still want to hold it at the zero position. The script continuously prints the index offsets; take a screenshot when you think all the joints are at their zero positions, as you will need these numbers. For the time being, we do not have calibration tools for the NYU Finger robot, so you will have to eyeball it.
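
Conceptually, what the script does can be sketched with the fast-prototyping wrapper from above. This is not the actual implementation (the real script talks to the hardware process instead), and the damping gain is a placeholder.

import time

Kd = 0.1                                 # placeholder damping gain

while True:
    q, dq = robot.get_state()            # positions measured relative to the encoder index
    robot.send_joint_torque(-Kd * dq)    # pure damping: joints stay where you place them
    print('index offsets:', -q)          # at the zero pose, the offsets are the negated positions
    time.sleep(0.001)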

Once you have obtained the index offsets, update them in the same YAML files that store the Ethernet port and the motor numbers. The respective field is index_to_zero_angle. Again, do not forget to recompile your workspace for the changes to take effect.

Simple PD control

The procedure for finding the encoder index position already shows how the software controls the robot. DGM creates a hardware process to handle the communication with the robot; DGH reads sensor information and writes control commands by accessing the shared memory managed by the hardware process. Simply put, to control the robot you need to run two scripts sequentially in two different root shells: 1) one that creates the hardware process and calibrates the robot, and 2) one that implements your controller.

In the script pd_basic.py, we implement simple PD controllers. To run it, first launch the hardware process, this time using the index offsets you stored in the YAML files.

cd /$path_to_your_workspace/src/nyu_finger_tutorial
ipython
# From within ipython execute the following command.
run hwp_calibrated.py

Press Enter to let the robot find the encoder indices and calibrate; the fingers should move up and down to positions close to zero. Then, in another root shell, run the PD controllers

cd /$path_to_your_workspace/src/nyu_finger_tutorial
ipython
# From within ipython execute the following command.
run pd_basic.py

This script should stabilize your fingers at the zero position. It is a good time to double-check whether the index offsets you found are correct: all fingers should point vertically down!
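
For reference, a controller that talks to the hardware process through the DGM head typically follows the read / compute / write pattern sketched below. This is a sketch rather than the contents of pd_basic.py; the DGMHead methods and the signal names (joint_positions, joint_velocities, ctrl_joint_torques) follow the DGH tutorial conventions and should be checked against your installation, and the gains and YAML path are placeholders.

import time
import numpy as np
import dynamic_graph_manager_cpp_bindings

# Attach to the shared memory of the running hardware process.
# Use the same YAML file that was used to launch the HWP (placeholder path).
head = dynamic_graph_manager_cpp_bindings.DGMHead('dgm_parameters_nyu_finger_triple_0.yaml')

Kp, Kd = 3.0, 0.05      # placeholder gains
q_des = np.zeros(3)     # zero posture

while True:
    head.read()                                   # fetch the latest sensor values
    q = head.get_sensor('joint_positions')
    dq = head.get_sensor('joint_velocities')
    head.set_control('ctrl_joint_torques', Kp * (q_des - q) - Kd * dq)
    head.write()                                  # push the command to the hardware process
    time.sleep(0.001)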

Multithreading with ThreadHead

One problem with the implementation above is that the controllers run in an infinite loop, which prevents us from interacting with the ipython window to retrieve and analyze the sensor data. DGH provides a wrapper, ThreadHead, that puts each controller in a separate thread. We illustrate the usage of ThreadHead in the script pd_multithread.py. Note that ThreadHead requires you to provide a safety controller that takes over when, for example, the joint limits are violated. The HoldPDController we use here simply holds the joint positions at which the safety controller was triggered; this may not be an ideal safety controller, though.
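
A minimal sketch of the ThreadHead wiring might look as follows. The constructor arguments, the warmup/run controller interface, and the HoldPDController signature follow the dynamic_graph_head demos and should be treated as assumptions to verify against your installation; the gains and YAML path are placeholders.

import dynamic_graph_manager_cpp_bindings
from dynamic_graph_head import ThreadHead, HoldPDController

class PDController:
    def __init__(self, head, Kp, Kd):
        self.head = head
        self.Kp, self.Kd = Kp, Kd
        # Views into the sensor memory, refreshed by ThreadHead on every cycle.
        self.q = head.get_sensor('joint_positions')
        self.dq = head.get_sensor('joint_velocities')

    def warmup(self, thread_head):
        pass                                      # called once when the controller is activated

    def run(self, thread_head):
        tau = -self.Kp * self.q - self.Kd * self.dq
        self.head.set_control('ctrl_joint_torques', tau)

head = dynamic_graph_manager_cpp_bindings.DGMHead('dgm_parameters_nyu_finger_triple_0.yaml')
safety = HoldPDController(head, 3.0, 0.05)        # fallback when a limit is violated
thread_head = ThreadHead(0.001, safety, head, [])  # dt, safety controller, head(s), utils
thread_head.switch_controllers(PDController(head, 3.0, 0.05))
thread_head.start()                               # the controller now runs in its own thread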

Seamless simulation with SimHead

Finally, another advantage of using ThreadHead is that it enables a seamless transition from simulation to hardware experiments. By simply replacing the DGMHead object with a SimHead and adding it to a simulation environment, you can use virtually the same code in simulation and on the real robot. This is illustrated in the script pd_simulation.py.
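
Concretely, the switch amounts to something like the sketch below, with PDController being the controller class sketched above. The environment and robot wrapper construction, the SimHead arguments, and sim_run are assumptions based on the dynamic_graph_head demos; the actual class and argument names may differ in your installation.

from bullet_utils.env import BulletEnvWithGround
from dynamic_graph_head import ThreadHead, SimHead, HoldPDController
from robot_properties_nyu_finger.wrapper import NYUFingerRobot  # hypothetical import; check your package version

# Build a PyBullet environment and add a simulated finger to it.
env = BulletEnvWithGround()
robot = env.add_robot(NYUFingerRobot())

# Only the head changes: SimHead instead of DGMHead.
head = SimHead(robot)
thread_head = ThreadHead(0.001, HoldPDController(head, 3.0, 0.05), head, [], env)
thread_head.switch_controllers(PDController(head, 3.0, 0.05))
thread_head.sim_run(10000)                        # step the simulation for 10000 control cycles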

There are many more advanced features built into DGH. Again, for a more detailed introduction to DGH, refer to this tutorial.
