Users' guide to Software

AutoMec's Code

This page presents instructions for running our code.

Table of Contents

  1. Bringup
  2. Deep Learning Driving
  3. Signal recognition
  4. Semantic Segmentation

Bringup

This section focuses on how to bring up an environment for both simulation and real situations.

Simulation

To start Gazebo, a physics simulator, use:

roslaunch prometheus_gazebo arena.launch

And use the following command to start the signal panel at the crosswalk:

roslaunch prometheus_gazebo signal_panel.launch

To spawn the car in simulation, please use:

roslaunch prometheus_bringup bringup.launch sim:=true

If you want to use the car with a controller, set the controller flag to true, as in the example below. More information is available in the Common commands section.
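
For example, to spawn the car in simulation with controller support enabled:

roslaunch prometheus_bringup bringup.launch sim:=true controller:=true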

Real

To launch the car in a real environment, please use:

roslaunch prometheus_bringup bringup.launch

If the computer can't connect to the ESP board on the PCB due to a lack of permissions, use the following command to grant access to the USB port:

sudo chmod a+rw /dev/ttyUSB0

This first command is a temporary solution; it must be run again every time the computer is rebooted. To grant permanent permission, use the following commands:

sudo adduser $USER dialout
sudo reboot

In most shells, $USER already expands to your username, so the command can be run as-is; otherwise, replace $USER with your username.
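
For example, for a hypothetical user named automec:

sudo adduser automec dialout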

Common commands

Optionally, you can also add visualization and control the car with a game console controller:

roslaunch prometheus_bringup bringup.launch sim:=*true|false* visualize:=true controller:=true

If more than one controller is connected, you can select which one to use by adding the joy_number flag:

roslaunch prometheus_bringup bringup.launch sim:=*true|false* controller:=true joy_number:=*controller number*

You can also change the linear velocity, in m/s:

roslaunch prometheus_bringup bringup.launch sim:=*true|false* linear_velocity:=0.5

All these parameters are independent and can be used simultaneously.
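
For example, a simulation bringup combining all of the above flags could look like this (the joy number and velocity values are illustrative):

roslaunch prometheus_bringup bringup.launch sim:=true visualize:=true controller:=true joy_number:=1 linear_velocity:=0.5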

Deep Learning Driving

Our approach uses a Convolutional Neural Network (CNN) to control the car. Two steps are needed before running a model:

  • Dataset Writing
  • Model Training

Dataset Writing

To write the dataset, please run:

roslaunch prometheus_driving dataset_writing.launch

A window will appear showing the view from the top camera. The dataset is recorded whenever the car is moving.

To save the dataset, press "s" in the window. A prompt then appears in the command line where you can optionally add a comment.

To quit without saving, press "q".

After saving the dataset, it's necessary to process it and compute the normalization std and mean parameters. To do so, please run:

rosrun prometheus_driving create_statistics.py -d *dataset name*
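
For example, for a dataset saved under the hypothetical name dataset1:

rosrun prometheus_driving create_statistics.py -d dataset1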

Model Training

The networks are trained using the dataset written in the previous step. To train the model, please run:

rosrun prometheus_driving model_train.py -d *dataset name* -fn *model name* -m '*Network architecture*' -cs 300 -n_epochs 30 -batch_size 150 -loss_f 'MSELoss()' -nw 4 -lr 0.001 -lr_step_size 10 -c 0

To start the training, it's necessary to specify the dataset name, the model name, and the network architecture. The remaining hyperparameters can be tweaked to your liking. To choose which GPU is used, change the -c flag, 0 being the default; if the -c flag is not used, training runs on the CPU. A filled-in example follows the architecture list below.

Currently, the following network architectures are available:

  • Nvidia_model()
  • Rota_model()
  • LSTM() (LSTM with the Nvidia model) (recommended)
  • MyViT() (Vision Transformer)

Other architectures are available, but the results are not as good as the ones presented above.

If you want to visualize the training loss and the accuracy, please add the flag -v.
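
As a filled-in example, a complete training invocation using the recommended LSTM architecture could look like this (dataset1 and lstm_v1 are hypothetical names):

rosrun prometheus_driving model_train.py -d dataset1 -fn lstm_v1 -m 'LSTM()' -cs 300 -n_epochs 30 -batch_size 150 -loss_f 'MSELoss()' -nw 4 -lr 0.001 -lr_step_size 10 -c 0 -v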

Running the model

To run the model, please use:

roslaunch prometheus_driving ml_driving.launch model_name:=*insert model name*
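
For example, using the hypothetical model name from the training step above:

roslaunch prometheus_driving ml_driving.launch model_name:=lstm_v1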

Signal recognition

To launch the signal detection, please use:

roslaunch prometheus_signal_recognition signal_recognition.launch 

If you have already defined the color mask, run this command instead:

roslaunch prometheus_signal_recognition signal_recognition.launch mask_mode:=false

If you want to run the driving model without the signal detection, publish the following message manually to start driving:

rostopic pub /signal_detected std_msgs/String "pForward"
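
To check which signal is currently being published, assuming the detection node uses the same /signal_detected topic, you can inspect it with the standard ROS tooling:

rostopic echo /signal_detected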

Complete Driving

If you want to run the complete driving experience, please use:

roslaunch prometheus_bringup main_driving.launch model_name:=*insert model name* 
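
For example, again with the hypothetical model name lstm_v1:

roslaunch prometheus_bringup main_driving.launch model_name:=lstm_v1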

Semantic Segmentation

To run the semantic segmentation, two steps are needed:

  • Dataset Writing/Labeling
  • Model Training

Dataset Writing/Labeling

To obtain the dataset, follow the instructions in the Dataset Writing section above. Afterwards, the dataset needs to be labeled; RoboFlow can be used for this. Upload the dataset to RoboFlow, create the labels there, then download the labeled dataset and use it to train the model.

Currently, the following classes are available:

  • driveable (right lane)
  • driveable_alt (left lane)
  • parking
  • crosswalk

Model Training

In order to train the model, please run:

rosrun prometheus_semantic_segmentation model_train_semantic.py -d *dataset name* -fn *model name* -n_epochs 100 -batch_size 50 -m '*Network architecture*' -cs 112 -loss_f 'CrossEntropyLoss()' -nw 4 -lr 0.001 -lr_step_size 50 -c 0 

To start the training, it's necessary to specify the dataset name, the model name, and the network architecture. The remaining hyperparameters can be tweaked to your liking. To choose which GPU is used, change the -c flag, 0 being the default; if the -c flag is not used, training runs on the CPU. A filled-in example follows the architecture list below.

Currently, the following network architectures are available:

  • SegNetV2() (recommended)
  • UNet()
  • DeepLabV3()
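
As a filled-in example, a complete training invocation using the recommended SegNetV2 architecture could look like this (seg_dataset1 and segnet_v1 are hypothetical names):

rosrun prometheus_semantic_segmentation model_train_semantic.py -d seg_dataset1 -fn segnet_v1 -n_epochs 100 -batch_size 50 -m 'SegNetV2()' -cs 112 -loss_f 'CrossEntropyLoss()' -nw 4 -lr 0.001 -lr_step_size 50 -c 0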

Running the model

To run the model, please use:

roslaunch prometheus_semantic_segmentation semantic_segmentation.launch model_name:=*insert model name*
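
For example, with the hypothetical model name segnet_v1 from the training step above:

roslaunch prometheus_semantic_segmentation semantic_segmentation.launch model_name:=segnet_v1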