
550 How to Setup ARL Unity and Send A Navigation Goal to Warty Robot in Simulator


Setup ARL Docker Container

This is the ROS 1.0 release of the Army Research Laboratory's Autonomy Stack (Phoenix). This document is designed to get someone up and running via Docker. Before that, however, you need access to the repository in order to set up and run all of this.


Prerequisites

These are prerequisites for the host system, which is assumed to be an LTS Ubuntu distribution (18.04 or 20.04) with a properly configured Nvidia GPU. The latter can be checked via nvidia-smi.

  1. Docker
  2. Nvidia-Docker2
  3. Git LFS
  4. Pip
  5. Catkin-Docker

To check if prerequisites are satisfied, you can run the suite of commands below. If you recently installed any of the prerequisites, restart your computer before going through any checks.

Click for instructions on installing prerequisites

1. Docker

#To install basic Ubuntu utilities
sudo apt update
sudo apt install curl gnupg lsb-release apt-transport-https ca-certificates uuid-runtime

#To install Docker onto your host machine
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update &&  sudo apt install docker-ce docker-ce-cli containerd.io

#To use Docker without needing sudo every time
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

Note that creating a docker group to avoid using sudo for docker commands can have unwanted security implications: all members of the docker group effectively gain root-level access to the filesystem through the Docker daemon. For additional information and alternative installation instructions, view this wiki page.
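
If you skip the docker group setup, one simple alternative is to prefix Docker commands with sudo instead:

#To run a Docker command without being in the docker group
sudo docker run hello-world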

2. Nvidia-Docker2

Nvidia-docker allows users to build and run GPU-accelerated Docker containers. Without an Nvidia GPU and nvidia-docker, the GPU-accelerated packages in the Phoenix stack will either not function or run significantly slower.

#To install nvidia-docker2
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)  # see following note
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update && sudo apt install nvidia-docker2

Note that if the linux distribution is an Ubuntu variant, such as KDE neon or Linux Mint, the ID value in /etc/os-release is likely not ubuntu, so you will probably need to set distribution to match the underlying Ubuntu distribution, such as:

distribution="ubuntu20.04"

3. Git-LFS

We rely on Git Large File Storage for handling binary files that are required for building and using Phoenix, e.g., pre-compiled system dependencies that aren't available from public servers and trained neural network weights.

#To install Git-LFS
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo os=ubuntu dist=`lsb_release -sc` bash
sudo apt install git-lfs
git lfs install

4. Pip

Pip is a tool for installing Python packages. With pip, you can search, download, and install packages from the Python Package Index (PyPI) and other package indexes. To install pip and the Python utilities, execute only one of the following (depending on your Python distribution).

#For systems running Python3 (recommended)
sudo apt update && sudo apt install python3-pip
pip3 install distro pyyaml

#For systems running Python2
sudo apt update && sudo apt install python-pip
pip install distro pyyaml

Note that the Phoenix stack is compatible with virtual Python environments. If you are using virtual environments, install pip and pyyaml using the method appropriate to your virtual environment system.
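
For example, a minimal sketch of installing the same dependencies inside a venv (the environment name phoenix-venv is purely illustrative):

#Create and activate a virtual environment, then install the Python dependencies into it
python3 -m venv ~/phoenix-venv
source ~/phoenix-venv/bin/activate
pip install distro pyyaml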

5. Catkin-Docker

catkin-docker is a custom script that makes it easy to develop a catkin workspace that is stored on your host system while building and executing inside a Docker container.

#To install catkin-docker
cd ~
git clone https://gitlab.sitcore.net/aimm/catkin-docker.git ~/catkin-docker
echo 'export PATH=~/catkin-docker:$PATH' >> ~/.bashrc
cp ~/catkin-docker/home_example.catkin_docker ~/.catkin_docker

#Update terminal environment variables
source ~/.bashrc

Click for instructions on checking for prerequisites

1. Check for Docker

docker run hello-world

Output should look similar to this: Docker check

2. Check for Nvidia-Docker2

Check that you have GPU access within Docker. You may need to replace '11.0' with your CUDA version.

docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

Output should look similar to this: Nvidia-Docker2 check

3. Check for Git-LFS

git lfs env

Output should look similar to this. Your version may be different and there should be additional lines of information below this portion. Git-LFS check

4. Check for Pip

Only execute one of the following (depending on your Python distribution).

# For systems running Python3 (recommended)
pip3 --version

# For systems running Python2
pip2 --version

The version number may vary, but the output should look something like this: Pip check

5. Check for Catkin-Docker

which catkin-docker

Output should be the path to the catkin-docker script. Your path may be different. Catkin-Docker check


Installation

Note: This installation needs around 20GB of free space in Docker root directory (/var/lib/docker/ for a global installation or ~/.local/share/docker/ for a rootless installation).
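
One way to confirm where your Docker root directory is and how much free space its filesystem has (the docker info output format may vary slightly between Docker versions):

#Show the Docker root directory and the free space on its filesystem
docker info --format '{{ .DockerRootDir }}'
df -h "$(docker info --format '{{ .DockerRootDir }}')"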

# Clone the repository
git clone https://gitlab.sitcore.net/aimm/phoenix-r1.git
cd phoenix-r1

# Create the phoenix catkin profile based on your Ubuntu distribution.
# Only execute one of the following sets of commands

## For systems running Python2
python build-tools/catkin_config/catkin_profile_build

## For systems running Python3
python3 build-tools/catkin_config/catkin_profile_build

# mark this as the active profile
echo "active: phoenix" > .catkin_tools/profiles/profiles.yaml

Finally, we can download the latest docker image and use catkin-docker to work in the image. Running graphical programs that require GPU (like RVIZ) is possible in this container.

These are the commands you would run regularly. Note that the xhost command must be re-run after every logout / login to your computer.

# Login and pull docker image. Depending on your internet connection this may take a while.
docker login registry.gitlab.sitcore.net:443
docker pull registry.gitlab.sitcore.net:443/aimm/phoenix-r1/noetic/devel:master

# Allow access to X11 (potentially unsafe, use with caution)
xhost +si:localuser:root

# The following will put you in a catkin-docker container (run from within `phoenix-r1/`)
catkin-docker run

# From within docker (NOTE: May take a long time to compile. Took over 1 hour on a 4-core 2.60GHz CPU):
catkin build

# Exit docker
exit

Using the System

Before running the system, let's check if everything is working properly by following these steps.

# Outside docker
cd phoenix-r1
mkdir bags
(cd bags && curl -O https://arl-aimm-data-public.s3.amazonaws.com/rcta_t2_marketplace_th05_short.bag)
# The download runs in a subshell, so we are still inside phoenix-r1

# Enter a catkin-docker container
catkin-docker run

# Set up the phxlaunch command
source docker-build/install/setup.bash

# Generate a fully resolved launch file of the given bag
rosrun phoenix_bag_launch generate_bag_args $(catkin locate)/bags/rcta_t2_marketplace_th05_short.bag -o mybag.launch
roslaunch mybag.launch

# When desired, exit docker
exit

It should open an RViz window that looks like the image below - RVIZ Window


Opt-In To Unity Based Simulator Capability

Opt-in capabilities can be layered into the phoenix workspace. Note that some of the opt-ins may expect the user to have catkin-tools installed. This is not strictly necessary, since all calls to catkin locate can be replaced with hardcoded paths to packages. For our work, we will opt in to the Unity capability.

We use vcstool to manage these opt-ins. Before proceeding, install it via the following:

# Install to host machine (outside of docker)
curl -s https://packagecloud.io/install/repositories/dirk-thomas/vcstool/script.deb.sh | sudo bash
sudo apt update
sudo apt install python3-vcstool

Note that if you are using a local Python environment, such as venv or conda, vcstool can instead be installed using pip:

# Install to host machine (outside of docker)
pip3 install vcstool

To check whether your opt-in repositories are up-to-date run:

# Be sure to exit the docker shell and navigate into the phoenix directory before running this command.
./build-tools/check_optins.sh

Installing Unity-based simulator

Note that although this script relies on catkin locate to find package paths, catkin-tools is not a required dependency. You can replace all calls to $(catkin locate PACKAGE) with the full path to the package.

Installation

# Clone opt-in repositories

vcs import -w 1 --repos --input $(catkin locate phoenix_unity_optin)/unity_packages.yaml $(catkin locate phoenix_unity_optin)/..
# The cloning process may take some time
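
If you are not using catkin-tools, the same import can be written with a hardcoded package path. A minimal sketch, where the path below is only an assumption and should be replaced with wherever phoenix_unity_optin actually lives in your checkout:

#Hypothetical hardcoded-path equivalent of the command above (adjust OPTIN_DIR to your workspace)
OPTIN_DIR=~/phoenix-r1/phoenix_unity_optin
vcs import -w 1 --repos --input "$OPTIN_DIR/unity_packages.yaml" "$OPTIN_DIR/.."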

SSH Installation

If you have git SSH keys set up you can use an alternative yaml package.

vcs import --repos --input $(catkin locate phoenix_unity_optin)/ssh_unity_packages.yaml $(catkin locate phoenix_unity_optin)/..

Build Packages and Run the System

Build the packages in your environment. If you are running Phoenix in Docker, make sure to launch the Docker container (catkin-docker run) before proceeding.

# Ignore packages unused in Phoenix
touch $(catkin locate arl_unity_ros_air)/CATKIN_IGNORE
touch $(catkin locate arl_unity_ros_px4)/CATKIN_IGNORE

# Build necessary packages
catkin build phoenix_unity_launch perception_unity_launch arl_unity_ros_ground arl_unity_simulator

# Source rebuilt workspace and run
source $(catkin config | grep "Install Space:" | awk '{print $4;}')/setup.bash

# Run the ARL stack in Unity
phxlaunch phoenix_unity_launch experiment.xlaunch

If everything works correctly, it will run the Unity simulation and should look like this - ARL with Unity Simulator

Send navigation goal

For this, you first need to download the necessary scripts from the arl_object_search branch of the zktony/sloop_object_search repository.
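
A minimal sketch of fetching those scripts, assuming the repository is hosted on GitHub under the name given above (adjust the URL if it lives elsewhere):

#Clone only the arl_object_search branch (the repository URL is an assumption)
git clone -b arl_object_search https://github.com/zktony/sloop_object_search.git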

Additional Information (Things not necessarily needed)

Repository for connections between Unity and Phoenix

The default rcta_unity.launch will launch the Unity simulator with a Husky robot in it and connect it to rviz.

Warthog

You can launch the simulator with a Warthog using:

phxlaunch phoenix_unity_launch experiment.xlaunch launch_unity:=true environment:=lejeune_emout

Adjustable Arguments

Environment

The ARL Unity simulator package comes with the following included environments (an example of selecting one appears after the list).

  • flooded_grounds: This is the default environment that will load on launch.
  • overpasscity: This environment consists of a small road network with cars, buildings, and obstructions. It is located at $(find arl_unity_ros)/config/overpasscity.yaml
  • lejeune_emout: EMOUT site at Lejeune
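
For example, following the same argument pattern as the Warthog launch above, the overpasscity environment could be selected like this (the exact arguments accepted by experiment.xlaunch are an assumption):

#Launch the example experiment with the overpasscity environment (argument pattern assumed from the Warthog example)
phxlaunch phoenix_unity_launch experiment.xlaunch environment:=overpasscity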

Conversions

Currently, only the Object Detection sensor requires conversions between Unity and Phoenix. The required node is already included when running the main sim.xlaunch of phoenix_unity_launch (which the example experiment.xlaunch builds on) if object detection is enabled. If you are running a custom setup, it can be launched with this roslaunch snippet:

  <group ns="$(arg name)">
    <node pkg="phoenix_unity_launch" type="object_detections_conversion_node" name="object_detection_conversion"/>
  </group>

or by manually launching with

rosrun phoenix_unity_launch object_detections_conversion_node

Git Hooks (Personally Didn't Need)

Phoenix repository Git Hooks can be installed to check for updates in the Changelog. They are installed using:

./build-tools/phoenix_git_hooks/setup_git_hooks.sh
#This will install the hooks to the .git/hooks/ directory to run post-merge

This should not interfere with your existing git hooks if they are standard scripts in the .git/hooks/ directory.