From 4bd313c2e937f235fe1d76a235a8241795eeeb56 Mon Sep 17 00:00:00 2001 From: Kenji Miyake <31987104+kenji-miyake@users.noreply.github.com> Date: Wed, 19 May 2021 17:25:09 +0900 Subject: [PATCH] Sync upstream (#13) --- README.md | 159 +++++++++--------- ansible/roles/ros2/tasks/main.yml | 14 ++ design/ForDevelopers.md | 33 ++++ design/ForDeveloppers.md | 39 ----- design/Messages.md | 38 ++--- design/NamingConvention.md | 32 ++-- design/Overview.md | 87 ++++------ design/TF.md | 83 +++++---- docs/Credits.md | 32 ++-- docs/SimulationTutorial.md | 61 ++++--- ...dd_aw_ros2_use_sim_time_into_launch_xml.pl | 31 ---- ...dd_aw_ros2_use_sim_time_into_launch_xml.sh | 21 --- scripts/get_use_sim_time_all.sh | 8 - 13 files changed, 277 insertions(+), 361 deletions(-) create mode 100644 design/ForDevelopers.md delete mode 100644 design/ForDeveloppers.md delete mode 100755 scripts/add_aw_ros2_use_sim_time_into_launch_xml.pl delete mode 100755 scripts/add_aw_ros2_use_sim_time_into_launch_xml.sh delete mode 100755 scripts/get_use_sim_time_all.sh diff --git a/README.md b/README.md index 9d790b6c1dc1f..03edc3befa1b0 100644 --- a/README.md +++ b/README.md @@ -1,158 +1,153 @@ # Autoware (Architecture Proposal) -meta-repository for Autoware architecture proposal version - ![autoware](https://user-images.githubusercontent.com/8327598/69472442-cca50b00-0ded-11ea-9da0-9e2302aa1061.png) -# What's this - -This is the source code of the feasibility study for Autoware architecture proposal. - -> **WARNING**: This source is solely for demonstrating an architecture proposal. It should not be used to drive cars. - -> **NOTE**: The features in [autoware.iv.universe](https://github.com/tier4/autoware.iv.universe) will be merged into Autoware.Auto. +A meta-repository for the new Autoware architecture feasibility study created by Tier IV. For more details about the architecture itself, please read this [overview](/design/Overview.md). -Architecture overview is [here](/design/Overview.md). +> **WARNING**: All source code relating to this meta-repository is intended solely to demonstrate a potential new architecture for Autoware, and should not be used to autonomously drive a real car! +> +> **NOTE**: Some, but not all of the features within the [AutowareArchitectureProposal.iv repository](https://github.com/tier4/AutowareArchitectureProposal.iv) are planned to be merged into [Autoware.Auto](https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto) (the reason being that Autoware.Auto has its own scope and ODD which it needs to achieve, and so not all the features in this architecture proposal will be required). -# How to setup +# Installation Guide -## Requirements +## Minimum Requirements ### Hardware - - x86 CPU (8 or more cores) - - 16 GB or more of memory - - NVIDIA GPU (4GB or more of memory) +- x86 CPU (8 cores) +- 16GB RAM +- [Optional] NVIDIA GPU (4GB RAM) + - Although not required to run basic functionality, a GPU is mandatory in order to run the following components: + - lidar_apollo_instance_segmentation + - traffic_light_ssd_fine_detector + - cnn_classifier + +> Performance will be improved with more cores, RAM and a higher-spec graphics card. ### Software - Ubuntu 20.04 - NVIDIA driver -If CUDA or TensorRT is already installed, it is recommended to remove it. +## Review licenses +The following software will be installed during the installation process, so please confirm their licenses first before proceeding. 
-## How to setup +- [CUDA 11.1](https://docs.nvidia.com/cuda/eula/index.html) +- [cuDNN 8](https://docs.nvidia.com/deeplearning/sdk/cudnn-sla/index.html) +- [OSQP](https://github.com/oxfordcontrol/osqp/blob/master/LICENSE) +- [ROS 2 Foxy](https://index.ros.org/doc/ros2/Releases/Release-Foxy-Fitzroy/) +- [TensorRT 7](https://docs.nvidia.com/deeplearning/sdk/tensorrt-sla/index.html) + +## Installation steps -1. Set up the repository +> If the CUDA or TensorRT frameworks have already been installed, we strongly recommend uninstalling them first. + +1. Set up the Autoware repository ```sh +mkdir -p ~/workspace +cd ~/workspace git clone git@github.com:tier4/AutowareArchitectureProposal.git cd AutowareArchitectureProposal git checkout ros2 ``` -In this step, [osqp](https://github.com/oxfordcontrol/osqp/blob/master/LICENSE) is installed. -Please check that the use is in agreement with its license before proceeding. - -2. Run the setup script +2. Run the setup script to install CUDA, cuDNN 8, OSQP, ROS 2 and TensorRT 7, entering 'y' when prompted (this step will take around 45 minutes) ```sh ./setup_ubuntu20.04.sh ``` -In this step, the following software is installed. -Please confirm their licenses before using them. - -- [ROS 2 Foxy](https://index.ros.org/doc/ros2/Releases/Release-Foxy-Fitzroy/) -- [CUDA 11.1](https://docs.nvidia.com/cuda/eula/index.html) -- [cuDNN 8](https://docs.nvidia.com/deeplearning/sdk/cudnn-sla/index.html) -- [TensorRT 7](https://docs.nvidia.com/deeplearning/sdk/tensorrt-sla/index.html) - -3. Build the source +3. Build the source code (this will take around 15 minutes) ```sh source ~/.bashrc colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release --catkin-skip-building-tests ``` -## How to configure +> Several modules will report stderror output, but these are just warnings and can be safely ignored. -### Set hardware configuration +## Sensor hardware configuration -Prepare launch files and vehicle_description according to the sensor configuration of your hardware. -The following are the samples. +Prepare launch and vehicle description files according to the sensor configuration of your hardware. +The following files are provided as samples: - [sensing.launch](https://github.com/tier4/autoware_launcher.universe/blob/master/sensing_launch/launch/sensing.launch) - [lexus_description](https://github.com/tier4/lexus_description.iv.universe) -## How to run - -### Supported Simulations - -![sim](https://user-images.githubusercontent.com/8327598/79709776-0bd47b00-82fe-11ea-872e-d94ef25bc3bf.png) +# Running Autoware -### Quick Start +## Quick Start -#### Rosbag +### Rosbag simulation \* Currently this feature is not available for ROS 2. -1. Download sample map from [here](https://drive.google.com/open?id=1ovrJcFS5CZ2H51D8xVWNtEvj_oiXW-zk). - -2. Download sample rosbag from [here](https://drive.google.com/open?id=1BFcNjIBUVKwupPByATYczv2X4qZtdAeD). -3. Launch Autoware +1. [Download the sample pointcloud and vector maps](https://drive.google.com/open?id=1ovrJcFS5CZ2H51D8xVWNtEvj_oiXW-zk), unpack the zip archive and copy the two map files to the same folder. +2. [Download the sample rosbag](https://drive.google.com/open?id=1BFcNjIBUVKwupPByATYczv2X4qZtdAeD). +3. 
Open a terminal and launch Autoware ```sh -cd AutowareArchitectureProposal +cd ~/workspace/AutowareArchitectureProposal source install/setup.bash -roslaunch autoware_launch logging_simulator.launch map_path:=[path] vehicle_model:=lexus sensor_model:=aip_xx1 rosbag:=true +roslaunch autoware_launch logging_simulator.launch map_path:=/path/to/map_folder vehicle_model:=lexus sensor_model:=aip_xx1 rosbag:=true ``` -\* Absolute path is required for map_path. - -4. Play rosbag +4. Open a second terminal and play the sample rosbag file ```sh -rosbag play --clock [rosbag file] -r 0.2 +cd ~/workspace/AutowareArchitectureProposal +source install/setup.bash +rosbag play --clock -r 0.2 /path/to/sample.bag ``` -##### Note +5. Focus the view on the ego vehicle by changing the `Target Frame` in the RViz Views panel from `viewer` to `base_link`. -- sample map : © 2020 TierIV inc. -- rosbag : © 2020 TierIV inc. - - Image data are removed due to privacy concerns. - - Cannot run traffic light recognition - - Decreased accuracy of object detection +#### Note -#### Planning Simulator +- Sample map and rosbag: © 2020 Tier IV, Inc. + - Due to privacy concerns, the rosbag does not contain image data, and so traffic light recognition functionality cannot be tested with this sample rosbag. As a further consequence, object detection accuracy is decreased. -1. Download sample map from [here](https://drive.google.com/open?id=197kgRfSomZzaSbRrjWTx614le2qN-oxx). +### Planning Simulator -2. Launch Autoware +1. [Download the sample pointcloud and vector maps](https://drive.google.com/open?id=197kgRfSomZzaSbRrjWTx614le2qN-oxx), unpack the zip archive and copy the two map files to the same folder. +2. Open a terminal and launch Autoware ```sh -cd AutowareArchitectureProposal +cd ~/workspace/AutowareArchitectureProposal source install/setup.bash -ros2 launch autoware_launch planning_simulator.launch.xml map_path:=[path] vehicle_model:=lexus sensor_model:=aip_xx1 +ros2 launch autoware_launch planning_simulator.launch.xml map_path:=/path/to/map_folder vehicle_model:=lexus sensor_model:=aip_xx1 ``` -\* Absolute path is required for map_path. - -3. Set initial pose -4. Set goal pose -5. Push engage button. - [autoware_web_controller](http://localhost:8085/autoware_web_controller/index.html) +3. Set an initial pose for the ego vehicle + - a) Click the `2D Pose estimate` button in the toolbar, or hit the `P` key + - b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the initial pose. +4. Set a goal pose for the ego vehicle + - a) Click the `2D Nav Goal` button in the toolbar, or hit the `G` key + - b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the goal pose. +5. Engage the ego vehicle. + - a) Open the [autoware_web_controller](http://localhost:8085/autoware_web_controller/index.html) in a browser. + - b) Click the `Engage` button. -##### Note +#### Note -- sample map : © 2020 TierIV inc. +- Sample map: © 2020 Tier IV, Inc. -#### Running With AutowareAuto -We are planning propose the architecture and reference implementation to AutowareAuto. -For the time being, use ros_bridge if you wish to use this repository with AutowareAuto modules. -You would have to do the message type conversions in order to communicate between AutowareAuto and AutowareArchitectureProposal modules until the architecture is aligned. 
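As a rough sketch of the message-bridging step described above (assuming ROS 1 Noetic and ROS 2 Foxy on Ubuntu 20.04, the distributions this guide already targets), the dynamic bridge could be started as shown below. Note that bridging Autoware's custom message types additionally requires building ros1_bridge from source with those message definitions available in both the ROS 1 and ROS 2 environments.

```sh
# Terminal 1: start a ROS 1 master (ROS 1 Noetic assumed to be installed)
source /opt/ros/noetic/setup.bash
roscore
```

```sh
# Terminal 2: source both ROS 1 and ROS 2, then run the dynamic bridge
source /opt/ros/noetic/setup.bash
source /opt/ros/foxy/setup.bash
ros2 run ros1_bridge dynamic_bridge --bridge-all-topics
```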
+## Detailed tutorial instructions -For setting up AutowareAuto, please follow the instruction in: https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto +Please refer to the [Simulation tutorial](./docs/SimulationTutorial.md) for more details about supported simulations, along with more verbose instructions including screenshots. -For setting up ros_bridge, please follow the instruction in: https://github.com/ros2/ros1_bridge +## Running the AutowareArchitectureProposal source code with Autoware.Auto +For anyone who would like to use the features of this architecture proposal with existing [Autoware.Auto](https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto) modules right now, [ros_bridge](https://github.com/ros2/ros1_bridge) can be used. +> Until the two architectures become more aligned, message type conversions are required to enable communication between the Autoware.Auto and AutowareArchitectureProposal modules and these will need to be added manually. -#### Tutorial in detail +- To set up Autoware.Auto, please refer to the [Autoware.Auto installation guide](https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/installation.html). +- To set up ros_bridge, please follow the [installation instructions on the ros_bridge GitHub repository](https://github.com/ros2/ros1_bridge#prerequisites). -See [here](./docs/SimulationTutorial.md) for more information. -## References +# References -### Videos +## Autoware.IV demonstration videos - [Scenario demo](https://youtu.be/kn2bIU_g0oY) - [Obstacle avoidance in the same lane](https://youtu.be/s_4fBDixFJc) @@ -162,6 +157,6 @@ See [here](./docs/SimulationTutorial.md) for more information. - [360° FOV perception (Camera Lidar Fusion)](https://youtu.be/whzx-2RkVBA) - [Robustness of localization](https://youtu.be/ydPxWB2jVnM) -### Credits +## Credits - [Neural Network Weight Files](./docs/Credits.md) diff --git a/ansible/roles/ros2/tasks/main.yml b/ansible/roles/ros2/tasks/main.yml index 527d26ec2c81f..8daff4bf89978 100644 --- a/ansible/roles/ros2/tasks/main.yml +++ b/ansible/roles/ros2/tasks/main.yml @@ -76,3 +76,17 @@ state: latest update_cache: yes become: yes + +- name: ROS2 (install rmw-cyclonedds-cpp) + apt: + name: "ros-{{ rosdistro }}-rmw-cyclonedds-cpp" + state: latest + update_cache: yes + become: yes + +- name: ROS2 (Add settings to .bashrc) + lineinfile: + dest: ~/.bashrc + line: "export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" + state: present + become: no diff --git a/design/ForDevelopers.md b/design/ForDevelopers.md new file mode 100644 index 0000000000000..9001c701cf0d6 --- /dev/null +++ b/design/ForDevelopers.md @@ -0,0 +1,33 @@ +# How to setup for the specific hardware configuration + +In order to test Autoware in a real vehicle, it is necessary to setup Autoware for each specific combination of vehicle, drive-by-wire system and sensors as follows: + +## 1. Sensor TF + +* The sensor TF describes the positional relationship of each sensor to the vehicle's base link (defined as the center of the vehicle's rear axle) and has to be created for each configuration of sensors. +* Please setup following the [TF design document](https://github.com/tier4/AutowareArchitectureProposal.proj/blob/master/design/TF.md). + +## 2. 
Vehicle interface +* The [vehicle interface](https://github.com/tier4/AutowareArchitectureProposal.proj/blob/master/design/Vehicle/Vehicle.md#vehicle-interface) is the Autoware module that communicates with the vehicle's DBW (drive-by-wire) system, and must be created for each specific combination of vehicle and DBW. +* Please create an appropriate vehicle interface following the ["How to design a new vehicle interface"](https://github.com/tier4/AutowareArchitectureProposal.proj/blob/master/design/Vehicle/Vehicle.md#how-to-design-a-new-vehicle-interface) section of the [Vehicle stack design document](https://github.com/tier4/AutowareArchitectureProposal.proj/blob/master/design/Vehicle/Vehicle.md). +* [Sample vehicle interface file](https://github.com/tier4/lexus_description.iv.universe/blob/master/launch/vehicle_interface.launch) (for the Lexus RX 450H vehicle using [AutonomouStuff's PacMod system](https://autonomoustuff.com/products/pacmod)) + +## 3. Vehicle info + +* The `vehicle_info` YAML configuration file contains global parameters for the vehicle's physical configuration (e.g. wheel radius) that are read by Autoware in [rosparam format](http://wiki.ros.org/rosparam) and published to the ROS Parameter Server. +* The required parameters are as follows: +``` +/vehicle_info/wheel_radius # wheel radius +/vehicle_info/wheel_width # wheel width +/vehicle_info/wheel_base # between front wheel center and rear wheel center +/vehicle_info/wheel_tread # between left wheel center and right wheel center +/vehicle_info/front_overhang # between front wheel center and vehicle front +/vehicle_info/rear_overhang # between rear wheel center and vehicle rear +/vehicle_info/vehicle_height # from the ground point to the highest point +``` +* [Sample vehicle info file](https://github.com/tier4/lexus_description.iv.universe/blob/master/config/vehicle_info.yaml) (for the Lexus RX 450H) + +## 4. Sensor launch file + +* The `sensor.launch` file defines which sensor driver nodes are launched when running Autoware, and is dependent on the specific sensors (type, OEM and model) that are to be used. +* [Sample sensor.launch file](https://github.com/tier4/autoware_launcher.iv.universe/blob/master/sensing_launch/launch/sensing.launch) diff --git a/design/ForDeveloppers.md b/design/ForDeveloppers.md deleted file mode 100644 index b487140443b4c..0000000000000 --- a/design/ForDeveloppers.md +++ /dev/null @@ -1,39 +0,0 @@ -# How to setup for the specific hardware configuration - -In order to test the autoware in the vehicle, you need to setup for the specific hardware configuration. Please make the following settings. - -## 1. Sensor TF setting - -The sensor tf describes the positional relationship of the sensors and has to be created with the specific sensor hardware configuration. Please setup following [TF.md](https://github.com/tier4/AutowareArchitectureProposal/blob/master/design/TF.md). - -## 2. Vehicle interface setting - -The vehicle interface is the module that communicates with the vehicle and has to be created with the specific vehicle configuration. Please setup the vehicle interface following [Vehicle.md](https://github.com/tier4/AutowareArchitectureProposal/blob/master/design/Vehicle/Vehicle.md). - -## 3. Vehicle info setting - -The `vehicle_info` is a global parameter for the vehicle configurations. that is read by the Autoware modules. These parameters are read by the Autoware modules and has to be published as rosparam format. 
The sample is [here](https://github.com/tier4/AutowareArchitectureProposal/blob/master/src/vehicle/vehicle_description/vehicle_body_description/lexus_description/config/vehicle_info.param.yaml). - -Required parameters are as follows. - -``` -/vehicle_info/wheel_radius # wheel radius -/vehicle_info/wheel_width # wheel width -/vehicle_info/wheel_base # between front wheel center and rear wheel center -/vehicle_info/wheel_tread # between left wheel center and right wheel center -/vehicle_info/front_overhang # between front wheel center and vehicle front -/vehicle_info/rear_overhang # between rear wheel center and vehicle rear -/vehicle_info/vehicle_height # from the ground point to the highest point -``` - -## 4. Launch files setting - -The following launch files has to be modified for the specific configuration. - -**sensor.launch** - -The `sensor.launch` defines what or which sensor driver nodes are launched. It is necessary to modify it according to the sensor configuration. The default setting is [here](https://github.com/tier4/AutowareArchitectureProposal/blob/master/src/launcher/sensing_launch/launch/sensing.launch). - - diff --git a/design/Messages.md b/design/Messages.md index b30754436b077..acc23810bc11a 100644 --- a/design/Messages.md +++ b/design/Messages.md @@ -3,20 +3,18 @@ Messages # Overview -In this section, it is described that 8 categories of messages in the architecture: +This page describes the eight categories of message in the new architecture, along with definitions for each message. -- autoware control messages -- autoware lanelet2 messages -- autoware perception messages -- autoware planning messages -- autoware system messages -- autoware traffic light messages -- autoware vector map messages -- autoware vehicle messages +- [Autoware control messages](#autoware-control-messages) +- [Autoware lanelet2 messages](#autoware-lanelet2-messages) +- [Autoware perception messages](#autoware-perception-messages) +- [Autoware planning messages](#autoware-planning-messages) +- [Autoware system messages](#autoware-system-messages) +- [Autoware traffic light messages](#autoware-traffic-light-messages) +- [Autoware vector map messages](#autoware-vector-map-messages) +- [Autoware vehicle messages](#autoware-vehicle-messages) -the definition of each message is shown in following subsection. - -## autoware control messages +## Autoware control messages ### ControlCommand.msg @@ -30,7 +28,7 @@ the definition of each message is shown in following subsection. `Header header` `autoware_control_msgs/ControlCommand control` -## autoware lanelet2 messages +## Autoware lanelet2 messages ### MapBin.msg @@ -39,7 +37,7 @@ the definition of each message is shown in following subsection. `string map_version` `int8[] data` -## autoware perception messages +## Autoware perception messages ### DynamicObject.msg @@ -105,7 +103,7 @@ the definition of each message is shown in following subsection. `bool acceleration_reliable` `PredictedPath[] predicted_paths` -## autoware planning messages +## Autoware planning messages ### LaneSequence.msg @@ -167,7 +165,7 @@ the definition of each message is shown in following subsection. `geometry_msgs/Twist twist` `geometry_msgs/Accel accel` -## autoware system messages +## Autoware system messages ### AutowareState.msg @@ -182,7 +180,7 @@ the definition of each message is shown in following subsection. 
`string state` `string msg` -## autoware traffic light messages +## Autoware traffic light messages ### LampState.msg @@ -217,7 +215,7 @@ the definition of each message is shown in following subsection. `std_msgs/Header header` `autoware_traffic_light_msgs/TrafficLightState[] states` -## autoware vector map messages +## Autoware vector map messages ### BinaryGpkgMap.msg @@ -226,7 +224,7 @@ the definition of each message is shown in following subsection. `string map_version` `int8[] data` -## autoware vehicle messages +## Autoware vehicle messages ### ControlMode.msg @@ -236,7 +234,7 @@ the definition of each message is shown in following subsection. `uint8 AUTO_STEER_ONLY = 2` `uint8 AUTO_PEDAL_ONLY = 3` `int32 data` - + ### Pedal.msg `std_msgs/Header header` diff --git a/design/NamingConvention.md b/design/NamingConvention.md index 1561f56eeae58..64a89ea0f72a3 100644 --- a/design/NamingConvention.md +++ b/design/NamingConvention.md @@ -1,35 +1,31 @@ # Naming Conventions ## Package Names -Autoware does not set any particular naming convention except that it follows [REP-144](https://www.ros.org/reps/rep-0144.html). -Therefore, package name must: -* only consist of lowercase alphanumerics and _ separators and start with an alphabetic character -* not use multiple _ separators consecutively -* be at least two characters long - -See [REP-144](https://www.ros.org/reps/rep-0144.html) for the details. +Although Autoware does not have its own explicit naming convention, it does adhere to the guidance given in [REP-144](https://www.ros.org/reps/rep-0144.html). Thus an Autoware package name must: +>* only consist of lowercase alphanumerics and _ separators, and start with an alphabetic character +>* not use multiple _ separators consecutively +>* be at least two characters long ## Topic Names ### Default topic names -In Autoware, all topic name should be following this [wiki](http://wiki.ros.org/Names). -Also, it is strongly recommended that the default topic names specified in source code must follow these conventions: -* All topics must be set under private namespace. Any global topics must have an explained in documentation. -* All topics must be specified under following namespace in the node's private namespace. This allows users to easily understand which topics are inputs and outputs when they look at remapping in launch files for example. +In Autoware, all topics should be named according to the guidelines in the [ROS wiki](http://wiki.ros.org/Names). +Additionally, it is strongly recommended that the default topic names specified in source code should follow these conventions: +* All topics must be set under private namespaces. Any global topics must have a documented explanation. +* All topics must be specified under one of the following namespaces within the node's private namespace. Doing so allows users to easily understand which topics are inputs and which are outputs when they look at remapping in launch files for example. * `input`: subscribed topics - * `output`: published topics + * `output`: published topics * `debug`: published topics that are meant for debugging (e.g. for visualization) -For example, if there is a node that subscribes pointcloud and filter it will voxel grid filter, the topics should be: +Consider, for example, a node that subscribes to pointcloud data, applies a voxel grid filter and then publishes the filtered data. 
In this case, the topics should be named as follows: * ~input/points_original * ~output/points_filtered ### Remapped Topics -The default topics of each node may be remapped to other topic names in launch file. -In general, the topics should be published under namespaces of belonging modules in layered architecture for encapsulation. -This allows the developers and users to easily understand where in the architecture topic is used. +The default topics of each node can be remapped to other topic names using a launch file. +For encapsulation purposes and ease of understanding, remapped topics should be published under the namespaces of the appropriate modules as per Autoware's layered architecture. Doing so allows both developers and users to see at a glance where the topic is used in the architecture. -Some other key topics are listed below: +Some key topics are listed below: ``` /control/vehicle_cmd /perception/object_recognition/detection/objects @@ -43,4 +39,4 @@ Some other key topics are listed below: /planning/scenario_planning/scenario /planning/scenario_planning/scenario_selector/trajectory /planning/scenario_planning/trajectory -``` \ No newline at end of file +``` diff --git a/design/Overview.md b/design/Overview.md index 72783f1033fa0..a3612cd7b702d 100644 --- a/design/Overview.md +++ b/design/Overview.md @@ -1,78 +1,58 @@ Architecture Overview -====================== +===================== # Introduction -This architecture is a proposal by Tier IV. We thought a new Autoware architecture is required to accelerate the development of Autoware. +Currently it is difficult to improve Autoware.AI's capabilities due to a lack of concrete architecture design and a lot of technical debt, such as the tight coupling between modules as well as unclear responsibilities for each module. At Tier IV, we thought that a new architecture was needed to help accelerate the development of Autoware. -(Please also refer [this presentation](https://discourse.ros.org/uploads/short-url/woUU7TGLPXFCTJLtht11rJ0SqCL.pdf) shared at AWF TSC in March 2020.) +The purpose of this proposal is to define a layered architecture that clarifies each module's role and simplifies the interface between them. By doing so: +- Autoware's internal processing becomes more transparent +- Collaborative development is made easier because of the reduced interdependency between modules +- Users can easily replace an existing module (e.g. localization) with their own software component by simply wrapping their software to fit in with Autoware's interface -We thought now it is difficult to improve Autoware.AI capabilities because of: -- No concrete architecture designed -- A lot of technical debt - - Tight coupling between modules - - Unclear responsibility of modules - -The purpose of this proposal is to: -- Define a layered architecture -- Clarify the role of each module -- Simplify the interface between modules - -By defining simplified interface between modules: -- Internal processing in Autoware becomes more transparent -- Joint development of developers becomes easier due to less interdependency between modules -- User's can easily replace a module with their own software component (e.g. 
localization) just by "wrapping" their software to adjust to Autoware interface - -Note that the initial focus of this architecture design was solely on function of driving capability, and the following features are left as future work: -* Real-time processing -* HMI +Note that the initial focus of this architecture design was solely on driving capability, and so the following features were left as future work: * Fail safe +* HMI +* Real-time processing * Redundant system * State monitoring system # Use Cases -When we designed the architecture, we have set the use case of Autoware to be last-one-mile travel. - -An example would be the following: +When designing the architecture, the use case of last-mile travel was chosen. For example: -**Description:** Travelling from to grocery store in the same city -**Actors:** User, Vehicle with Autoware installed (Autoware) -**Assumption:** -The environment is assumed to be -- urban or suburban area that is less than 1 km^2. -- fine weather -- Accurate HD map for the environment is available +**Description:** Travelling to/from a grocery store in the same city +**Actors:** User, Vehicle with Autoware installed (hence referred to as "Autoware") +**Assumptions:** +- Environment is an urban or suburban area less than 1 km^2. +- Weather conditions are fine +- Accurate HD map of the environment is available **Basic Flow:** -1. **User:** starts a browser and access Autoware page from phone. Press "Summon", and the app sends user’s GPS location to Autoware -2. **Autoware:** plans the route to the user’s location, and show it on the user’s phone -3. **User:** confirms the route and press “Engage” -4. **Autoware:** starts driving autonomously to the requested location and pulls over to the side of the road -5. **User:** rides on to the vehicle and press "Go Home" +1. **User:** Starts a browser on their phone and accesses the Autoware web app. Presses "Summon", and the app sends the user’s GPS location to Autoware +2. **Autoware:** Plans a route to the user’s location, and shows it on the user’s phone +3. **User:** Confirms the route and presses “Engage” +4. **Autoware:** Starts driving autonomously to the requested location and pulls over to the side of the road on arrival +5. **User:** Gets in the vehicle and presses "Go Home" 6. **Autoware:** Plans the route to the user’s location -7. **User:** confirms the route and press “Engage” -8. **Autoware:** Drives autonomously to user's home +7. **User:** Confirms the route and presses “Engage” +8. **Autoware:** Drives autonomously to the user's home # Requirements -To achieve the above use case, we set the functional requirement of the Autoware as following: -- Autoware can plan the route to the specified goal in the specified environment. -- Autoware can drive along the planned route without violation of traffic rules. -- (Nice to have) Autoware drives smooth driving for a comfortable ride with a limited jerk and acceleration. - -The above requirements are broken down into detailed requirements, which are explained in each stack page. 
- -Since Autoware is open source and is meant to be used/developed by anyone around the world, we also set some non-functional requirements for the architecture: -- Architecture is extensible for new algorithms without changing the interface -- Architecture is extensible to adapt to new traffic rules for different countries +To achieve this last-mile use case, the following functional requirements for Autoware were set: +- Autoware can plan a route to the specified goal within the type of environment described above. +- Autoware can drive along the planned route without violating any traffic rules. +- (Nice to have) Autoware provides a comfortable ride with smooth acceleration and limited jerk. + +Since Autoware is open source and is meant to be used/developed by people around the world, we also set some non-functional requirements for the architecture: +- Architecture can be extended for use with new algorithms without having to change the interface +- Architecture can be extended to follow traffic rules for different countries - The role and interface of a module must be clearly defined # High-level Architecture Design -Here is an overview of this architecture. - ![Overview](/design/img/Overview2.svg) -This architecture consists of 6 stacks: +This new architecture consists of the following six stacks. Each of these design pages contains a more detailed set of requirements and use cases specific to that stack: - [Sensing](Sensing/Sensing.md) - [Localization](Localization/Localization.md) - [Perception](Perception/Perception.md) @@ -80,4 +60,5 @@ This architecture consists of 6 stacks: - [Control](Control/Control.md) - [Map](Map/Map.md) -The details are explained in each page. +# References +- [New architecture presentation given to the AWF Technical Steering Committee, March 2020](https://discourse.ros.org/uploads/short-url/woUU7TGLPXFCTJLtht11rJ0SqCL.pdf) diff --git a/design/TF.md b/design/TF.md index 2e8a7bfcb1270..f498c6e0af8fa 100644 --- a/design/TF.md +++ b/design/TF.md @@ -1,36 +1,32 @@ # TF tree in Autoware -Autoware uses TF library for transforming coordinates, and TF tree can be accessed from any modules in Autoware. +Autoware uses the ROS TF library for transforming coordinates. The Autoware TF tree can be accessed from any module within Autoware, and is illustrated below. -The TF tree of Autoware is illustrated in the image below. ![TF](/design/img/TF.svg) ## Frames -Here is the description of each frame -* earth: the origin of ECEF coordinate(i.e. center of the earth). Currently, this frame is not used by any modules, but this is set for future support of larger maps and multiple vehicles. -* map: Local ENU coordinate. This keeps xy-plane to be relatively parallel to the ground surface. All map information should be provided in this frame, or it should be provided in a frame that can be statically transformed into this frame. Therefore, most of the planning calculations will be done on this frame as well. -* base_link: Frame rigidly attached to the vehicle. Currently, it is the midpoint of rear wheels projected to ground. -* sensor_frames: This represents the position of sensors. The actual name of this frame will be the name of the sensor, such as `camera`, `gnss`, `lidar`. Note that a camera should provide both camera frame and camera optical frame as suggested in [REP-103](https://www.ros.org/reps/rep-0103.html). +* earth: the origin of the [ECEF coordinate system](https://en.wikipedia.org/wiki/ECEF) (i.e. the center of the Earth). 
Although this frame is not currently used by any modules (and so is not shown in the diagram above), it was added to support the use of larger maps and multiple vehicles in the future. +* map: Local [ENU](http://www.dirsig.org/docs/new/coordinates.html) coordinate. This keeps the xy-plane relatively parallel to the ground surface (thus the z-axis points upwards). All map information should be provided in this frame, or provided in a frame that can be statically transformed into this frame, since most planning calculations are done in this frame. +* base_link: A frame that is rigidly attached to the vehicle. Currently, this is defined as the midpoint of the rear wheels projected to the ground. +* sensor_frame(s): One or more frames that represent the position of individual sensors. The actual name of each frame should be a combination of the name of the sensor and its position relative to the vehicle, such as `camera_front`, `gnss_back` and `lidar_top`. Note that a camera should provide both camera frame and camera optical frame as suggested in [REP-103](https://www.ros.org/reps/rep-0103.html) (eg: 'camera_front' and 'camera_front_optical'). ## Transforms -Transforms between each frame are explained in the following: - |TF|Type|Providing Module|Description| |-|-|-|-| -|earth->map|static|Map|Map modules will provide this TF according to origin information parameter file.| -|map->base_link|dynamic|Localization|Localization module calculates vehicles position relative to maps specified in `map` frame.| -|base_link->sensor|static|Vehicle|Vehicle module provide sensor position relative to base_link using URDF. There may be multiple static transforms between base_link and a sensor frame. For example, if a camera is calibrated against a lidar, then the camera's TF can be expressed by base_link->lidar->camera.| +|earth->map|static|Map|Map modules will provide this TF according to an origin information parameter file.| +|map->base_link|dynamic|Localization|The Localization module calculates the vehicle's position relative to maps specified in the `map` frame.| +|base_link->sensor|static|Vehicle|The Vehicle module provide sensor positions relative to the base_link using URDF. There may be multiple static transforms between the base_link and a sensor frame. For example, if a camera is calibrated against a LiDAR, then the camera's TF can be expressed by base_link->lidar->camera.| Remarks: -* Static frame does not mean it does not change. It may be remapped at times, but no interpolation will be done between the change. (i.e. only newest information is used) -* base_link->sensor is assumed to be static. This is not true due to suspension, but we assume that this displacement is small enough that it shouldn't affect control of the vehicle. There is a [discussion](https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/issues/292) about nav_base which resolves this issue, which may be integrated. -* Above specification is not meant to restrict the addition of other frames. Developers may add any additional frames but are not allowed to change the meaning of the above frames. +* A static frame does not mean it does not change. It may be remapped at times, but no interpolation will be done between the change (i.e. only the newest information is used). +* base_link->sensor is assumed to be static. In reality, this is not true due to vehicle suspension, but we assume that the displacement is small enough that it doesn't affect control of the vehicle. 
There is a [discussion](https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/issues/292) about a new nav_base frame that resolves this issue, and this new frame may be integrated at some later point. +* The specification above is not meant to restrict the addition of other frames. Developers may add any additional frames as required, but the meaning of existing frames as described above must not be changed. ## Regarding REP105 -ROS set [REP-105](https://www.ros.org/reps/rep-0105.html -) regarding TF, and the above explanation follows REP-105, but with the significant change that we removed the odom frame. +For TF, ROS follows the naming conventions and semantic meanings in [REP-105](https://www.ros.org/reps/rep-0105.html +). The explanation given above also follows REP-105, but with the significant change of removing the odom frame. -### What is Odom frame? -Odom frame is defined as following in the REP: +### What is the odom frame? +The odom frame is defined as follows in REP-105: ``` The coordinate frame called odom is a world-fixed frame. The pose of a mobile platform in the odom frame can drift over time, without any bounds. This drift makes the odom frame useless as a long-term global reference. However, the pose of a robot in the odom frame is guaranteed to be continuous, meaning that the pose of a mobile platform in the odom frame always evolves in a smooth way, without discrete jumps. @@ -38,42 +34,39 @@ In a typical setup the odom frame is computed based on an odometry source, such The odom frame is useful as an accurate, short-term local reference, but drift makes it a poor frame for long-term reference. ``` -There are also discussions regarding the existence of odom frame in the following discussions: -* https://discourse.ros.org/t/localization-architecture/8602/28 -+ https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/issues/292 - -Within the above discussions, the main reason for using odom frame is that: +There have been [some](https://discourse.ros.org/t/localization-architecture/8602/28) [discussions](https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/issues/292) about the purpose of the odom frame, and the main reasons for using it are as follows: * odom->base_link is high-frequency and therefore suitable for control * odom->base_link is continuous and keeps control from "jerks" -* odom->base_link is independent of localization algorithm and therefore it is safe at the failure of localization as long as control is done in odom frame. +* odom->base_link is independent of localization, and so it is still safe to use in the event of localization failure so long as control is done in the odom frame. ### Why we removed the odom frame -However, our view is that it doesn't mean that control is not affected by localization as long as trajectory following is done in the odom frame. For example, if a trajectory is calculated from the shape of the lane specified in an HD map, the localization result will be indirectly used when projecting trajectory into odom frame, and thus will be "blown off" when localization fails. Also, if any other map information is before trajectory calculation, such as shape optimization using map drivable area and velocity optimization using predicted trajectory of other vehicles which is derived from lane shape information, then localization failure will still affect the trajectory. 
Therefore, to make control independent of localization failure, we have to require all preceding calculation to not use map->odom transform. However, since trajectory following comes after planning in Autoware, it is almost impossible that map->odom isn't involved in trajectory calculation. Although it might be possible if Autoware plans like a human, who uses only route graph information from the map and obtain geometry information from perception, it is very unlikely that autonomous driving stack is capable of not using geometry information with safety ensured. +Tier IV's view is that control *is* affected by localization, even if trajectory following is done only in the odom frame. For example, if a trajectory is calculated from the shape of the lane specified in an HD map, the localization result will be indirectly used when projecting the trajectory into the odom frame, and thus the trajectory calculation will be thrown off if localization fails. Also, any map-based calculations that are done before trajectory calculation, such as shape optimization using the map's drivable areas or velocity optimization using the predicted trajectory of other vehicles (derived from lane shape information) will also be affected by localization failure. + +In order to ensure that control is unaffected by localization failure, we require that all preceding calculations do not use the map->odom transform. However, since trajectory following comes after planning in Autoware, it is almost impossible to prevent map->odom from being involved in trajectory calculations. Although this might be possible if Autoware planned like a human (who only uses route graph information from the map and can obtain geometry information from perception), it is very unlikely that an autonomous driving stack is capable of ensuring safety without using geometry information. -Therefore, regardless of any frame that control is done, it will still have effects of localization results. To make control not to jerk or do sudden steering at localization failure, we set the following requirements to Localization module: -* localization result must be continuous -* localization must detect localization failure and should not use the result to update vehicle pose. +Therefore, regardless of the frame in which it is done, control will still be affected by localization. To control a vehicle without jerking or sudden steering in the event of localization failure, we set the following requirements for the Localization module: +* Localization results must be continuous +* Localization failures must be detected and the vehicle's pose should not be updated with any failed localization results -Additionally, Localization architecture assumes sequential Bayesian Filter, such as EKF and particle filter, to be used to integrate twist and pose. Since it can also integrate odometry information, it can update TF at high frequency. -As a result, all the merits of odom->base_link stated in REP-105 can be satisfied by map->base_link, and thus there is no need of setting odom frame. +This new Localization architecture assumes that twist and pose will be integrated with a sequential Bayesian Filter such as [EKF](https://en.wikipedia.org/wiki/Extended_Kalman_filter) or a [particle filter](https://en.wikipedia.org/wiki/Particle_filter). 
Additionally, the new architecture is able to integrate odometry information directly from sensor data (currently IMU and vehicle speed sensors only, but GNSS and doppler sensor data may be added in future) and is able to update TF smoothly and continuously at high frequency. As a result, all of the merits of odom->base_link stated in REP-105 can be satisfied by map->base_link in this new architecture and so there is no need to set the odom frame. -As a conclusion, we removed odom frame from this architecture proposal due to the following reasons: -1. It is almost impossible to not use map information in Planning module, and trajectory will have a dependency on map->odom(or map->base_link) -2. Therefore, to keep control safe even at localization failure, the following conditions must be satisfied: - 1. It must be able to detect localization failure - 2. When the failure is detected, map->odom should not be updated -3. If Localization module can satisfy the above conditions, there is no merit of using odom->base_link, and all modules should use map->base_link whenever they need a world-fixed frame. +To conclude, we removed the odom frame from this architecture proposal for the following reasons: +1. It is almost impossible to avoid using map information in the Planning module, and thus the trajectory will have a dependency on map->odom (or map->base_link) +2. Therefore, to maintain safe control even after a localization failure, the following conditions must be satisfied by the Localization module: + 1. It must be able to detect localization failures + 2. When a failure is detected, map->odom should not be updated +3. If the Localization module can satisfy the above conditions, then there is no benefit in using odom->base_link, and all modules should use map->base_link instead whenever they need a world-fixed frame. ### Possible Concerns -* Above argument focuses on replacing map->odom->base_link with map->base_link, but doesn't prove that map->base_link is better. If we set odom->base_link, wouldn't we have more options of frames? - * Once we split map->base_link into map->odom and odom->base_link, we loose velocity information and uncertainty(covariance) information between them. We can expect more robustness if we integrate all information(odometry and output of multiple localization) at once. - * If it is split into map->odom and odom->base_link, we have to wait for both transforms to obtain map->base_link transform. It is easier to estimate delay time if it is combined into one TF. - * We think that creating odom frame using current architecture is possible. However, we should first discuss if we have any component that wants to use odom->base_link. We anticipate that it might be needed when we design safety architecture, which is out-of-scope in this proposal, but it must be added after all safety analysis is done. As stated above, using odom frame in Control module is not enough for safety. -* Most of ROS localization nodes assumes odom->base_link to calculate map->odom. Wouldn't it be impossible to utilize such nodes without odom frame? - * It is very unlikely that a single algorithm supports all use cases in different environments. We need a module that integrates output from different localization algorithms, but most of 3rd party localization nodes are not made with consideration of integration modules. Therefore, we still need changes on 3rd party software anyways. 
Technically, REP-105 is integrating the odometry and localization results by sequentially connecting TFs, but this is not realistic when we add more algorithms. We should be thinking of a way to integrate localization methods in parallel, and current architecture made to support such use cases. +* The argument above focuses on replacing map->odom->base_link with map->base_link, but doesn't prove that map->base_link is better. If we set odom->base_link, wouldn't we have more frame options available? + * Once we split map->base_link into map->odom and odom->base_link, we lose velocity information and uncertainty (covariance) information between them. We can expect more robustness if we integrate all information (odometry and output of multiple localization) at once. + * We have to wait for both transforms in order to obtain the map->base_link transform. However, it is easier to estimate delay time if map->base_link is combined into one TF. + * Creating the odom frame within this new architecture should be possible, but first one has to consider if there are any components that actually need to use odom->base_link. For example, we anticipate that odom->base_link might be needed when designing a safety architecture (out-of-scope for this proposal). However, it should only be added after a comprehensive safety analysis has been completed since [we already know that using the odom frame in the Control module does not ensure safety](#why-we-removed-the-odom-frame). +* Most ROS localization nodes assume odom->base_link to calculate map->odom. Wouldn't it be impossible to utilize such nodes without the odom frame? + * It is very unlikely that a single algorithm supports all use cases in all environments. We need a module that integrates output from different localization algorithms, but most third-party localization nodes are not made with consideration for integration modules. Therefore, we still need to make changes when using third-party software anyway. Technically, REP-105 is integrating the odometry and localization results by sequentially connecting TFs, but this is not realistic when we add more algorithms. We should be thinking of a way to integrate localization methods in parallel, and this new architecture was made to support such a use case. ## Reference * REP105: https://www.ros.org/reps/rep-0105.html * REP103: https://www.ros.org/reps/rep-0103.html -* Discourse Discussion: https://discourse.ros.org/t/localization-architecture/8602/28 -* TF Discussion in Autoware.Auto: https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/issues/292 +* TF discussion on ROS Discourse: https://discourse.ros.org/t/localization-architecture/8602/28 +* TF discussion in Autoware.Auto (GitLab): https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/issues/292 diff --git a/docs/Credits.md b/docs/Credits.md index a9312d86f3b2a..b1f191071530b 100644 --- a/docs/Credits.md +++ b/docs/Credits.md @@ -1,21 +1,17 @@ -Some pre-trained models provided by other repositories are used in some packages. +Certain AutowareArchitectureProposal packages rely on pre-trained CNN models provided by other open source repositories. +- tensorrt_yolo3 + - The pre-trained models originate from [TRTForYolov3](https://github.com/lewes6369/TensorRT-Yolov3). + - [Weights for the trained model](https://drive.google.com/drive/folders/18OxNcRrDrCUmoAMgngJlhEglQ1Hqk_NJ) (416 folder) are automatically downloaded during the build process. - - tensorrt_yolo3
-The pre-trained models are provided in the following repository. The trained file is automatically downloaded when you build.
-https://github.com/lewes6369/TensorRT-Yolov3
-\[Original URL]
-Trained file (416) : https://drive.google.com/drive/folders/18OxNcRrDrCUmoAMgngJlhEglQ1Hqk_NJ -- traffic_light_fine_detector
-A trained model in this package is based on the following .weights file and was fine-tuned with darknet by Tier IV.
-\[Original URL]
-https://pjreddie.com/media/files/yolov3.weights
-After fine-tuning, the trained model is converted to ONNX file with the following script.
-https://github.com/tier4/Pilot.Auto/blob/master/src/perception/traffic_light_recognition/traffic_light_fine_detector/scripts/yolov3_to_onnx.py
+- traffic_light_fine_detector + - The trained model in this package is based on the [pjreddie's YOLO .weights file](https://pjreddie.com/media/files/yolov3.weights), with additional fine-tuning by Tier IV using [Darknet](https://github.com/pjreddie/darknet). + - After fine-tuning, the new weights for the trained model are converted into an ONNX file using [Python](https://github.com/tier4/AutowareArchitectureProposal.iv/blob/master/src/perception/traffic_light_recognition/traffic_light_fine_detector/scripts/yolov3_to_onnx.py). -- lidar_apollo_instance_segmentation
-This package makes use of three pre-trained models provided by Apollo. These files are automatically downloaded when you build.
-\[Original URL]
-VLP-16 : https://github.com/ApolloAuto/apollo/raw/88bfa5a1acbd20092963d6057f3a922f3939a183/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne16/deploy.caffemodel
-HDL-64 : https://github.com/ApolloAuto/apollo/raw/88bfa5a1acbd20092963d6057f3a922f3939a183/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne64/deploy.caffemodel
-VLS-128 : https://github.com/ApolloAuto/apollo/raw/91844c80ee4bd0cc838b4de4c625852363c258b5/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne128/deploy.caffemodel
+ +- lidar_apollo_instance_segmentation + - This package makes use of three pre-trained models provided by [Apollo Auto](https://github.com/ApolloAuto). + - The following files are automatically downloaded during the build process: + - [VLP-16](https://github.com/ApolloAuto/apollo/raw/88bfa5a1acbd20092963d6057f3a922f3939a183/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne16/deploy.caffemodel) + - [HDL-64](https://github.com/ApolloAuto/apollo/raw/88bfa5a1acbd20092963d6057f3a922f3939a183/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne64/deploy.caffemodel) + - [VLS-128](https://github.com/ApolloAuto/apollo/raw/91844c80ee4bd0cc838b4de4c625852363c258b5/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne128/deploy.caffemodel) diff --git a/docs/SimulationTutorial.md b/docs/SimulationTutorial.md index 9a932c69d3b69..951c012f85dfa 100644 --- a/docs/SimulationTutorial.md +++ b/docs/SimulationTutorial.md @@ -1,16 +1,18 @@ # Simulation in Autoware -Autoware provides 2 types of simulations. Rosbag is used for testing/validation for `Sensing`, `Localization` and `Perception` stacks. Planning Simulator is mainly used for testing/validation for `Planning` stack by simulating traffic rules, interactions with dynamic objects and control command to vehicle. +Autoware provides two types of simulation: +- rosbag-based simulation that can be used for testing/validation of the `Sensing`, `Localization` and `Perception` stacks. +- The Planning Simulator tool which is mainly used for testing/validation of `Planning` stack by simulating traffic rules, interactions with dynamic objects and control commands to the ego vehicle. ![sim](https://user-images.githubusercontent.com/8327598/79709776-0bd47b00-82fe-11ea-872e-d94ef25bc3bf.png) -## How to use rosbag for simulation +## How to use a pre-recorded rosbag file for simulation \* Currently this feature is not available for ROS 2. -Assuming already completed [Autoware setup](https://github.com/tier4/AutowareArchitectureProposal#autoware-setup). +> Assumes that [installation and setup of AutowareArchitectureProposal](../README.md#installation-steps) has already been completed. -1. Download sample map from [here](https://drive.google.com/open?id=1ovrJcFS5CZ2H51D8xVWNtEvj_oiXW-zk). -2. Download sample rosbag from [here](https://drive.google.com/open?id=1BFcNjIBUVKwupPByATYczv2X4qZtdAeD). +1. Download the sample pointcloud and vector maps from [here](https://drive.google.com/open?id=197kgRfSomZzaSbRrjWTx614le2qN-oxx), unpack the zip archive and copy the two map files to the same folder. +2. Download the sample rosbag from [here](https://drive.google.com/open?id=1BFcNjIBUVKwupPByATYczv2X4qZtdAeD). | Sensor | Topic name | | --------------------- | ---------------------------------------- | @@ -27,67 +29,74 @@ Assuming already completed [Autoware setup](https://github.com/tier4/AutowareArc | | /vehicle/status/twist | | ~~Camera x 7~~ | ~~/sensing/camera/camera[]/image_raw~~ | -Note: Image data are removed due to privacy concerns. +> Note: Due to privacy concerns, image data has been removed from the rosbag file. -3. Launch Autoware with rosbag mode. +3. Open a terminal and launch Autoware in "rosbag mode". ```sh +cd ~/workspace/AutowareArchitectureProposal source install/setup.bash -roslaunch autoware_launch logging_simulator.launch map_path:=[path] +roslaunch autoware_launch logging_simulator.launch map_path:=/path/to/map_folder ``` -4. Play sample rosbag. +4. 
Open a second terminal and play the sample rosbag file ```sh -rosbag play --clock -r 0.2 sample.bag +cd ~/workspace/AutowareArchitectureProposal +source install/setup.bash +rosbag play --clock -r 0.2 /path/to/sample.bag ``` ![rosbag_sim](https://user-images.githubusercontent.com/10920881/79726334-9381b000-8325-11ea-9ac6-ebbb29b11f14.png) ##### Note -- sample map : © 2020 TierIV inc. -- rosbag : © 2020 TierIV inc. +- Sample map and rosbag: © 2020 Tier IV, Inc. -## How to use Planning Simulator +## How to use the Planning Simulator -Assuming already completed [Autoware setup](https://github.com/tier4/AutowareArchitectureProposal#autoware-setup). +> Assumes that [installation and setup of AutowareArchitectureProposal](../README.md#installation-steps) has already been completed. -1. Download sample map from [here](https://drive.google.com/open?id=197kgRfSomZzaSbRrjWTx614le2qN-oxx) and extract the zip file. +1. Download the sample pointcloud and vector maps from [here](https://drive.google.com/open?id=197kgRfSomZzaSbRrjWTx614le2qN-oxx), unpack the zip archive and copy the two map files to the same folder. 2. Launch Autoware with Planning Simulator ```sh +cd ~/workspace/AutowareArchitectureProposal source install/setup.bash -roslaunch autoware_launch planning_simulator.launch map_path:=[path] +ros2 launch autoware_launch planning_simulator.launch.xml map_path:=/path/to/map_folder vehicle_model:=lexus sensor_model:=aip_xx1 ``` ![initial](https://user-images.githubusercontent.com/10920881/79816587-8b298380-83be-11ea-967c-8c45772e30f4.png) -3. Set initial position by using `2D Pose Estimate` in rviz. +3. Set an initial pose for the ego vehicle + - a) Click the `2D Pose estimate` button in the toolbar, or hit the `P` key + - b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the initial pose. ![start](https://user-images.githubusercontent.com/10920881/79816595-8e247400-83be-11ea-857a-32cf096ac3dc.png) -4. Set goal position by using `2D Nav Goal` in rviz. +4. Set a goal pose for the ego vehicle + - a) Click the "2D Nav Goal" button in the toolbar, or hit the `G` key + - b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the goal pose. ![goal](https://user-images.githubusercontent.com/10920881/79816596-8fee3780-83be-11ea-9ee4-caabbef3a385.png) -5. Engage vehicle. - - a. Go to [autoware_web_controller](http://localhost:8085/autoware_web_controller/index.html). - - b. Push `Engage` button. +5. Engage the ego vehicle. + - a. Open the [autoware_web_controller](http://localhost:8085/autoware_web_controller/index.html) in a browser. + - b. Click the `Engage` button. ![engage](https://user-images.githubusercontent.com/10920881/79714298-4db7ee00-830b-11ea-9ac4-11e126d7a7c4.png) ### Simulate dummy obstacles -- Set obstacles' position by using `2D Dummy Pedestrian` or `2D Dummy Car` in rviz. - - Shortcut keys `l` and `k` are assigned respectively. - - Can adjust obstacles' information including velocity, position/orientation error and etc, via `Tool Properties` in rviz. - - Can delete all the objects by using `Delete All Objects` in rviz. +- Set the position of dummy obstacle by clicking the `2D Dummy Pedestrian` or `2D Dummy Car` buttons in Rviz. + - These two buttons correspond to the shortcut keys `L` and `K` respectively. + - The properties of an object (including velocity, position/orientation error etc) can be adjusted via the `Tool Properties` panel in Rviz. 
+ - Objects placed in the 3D View can be deleted by clicking the `Delete All Objects` button in Rviz and then clicking inside the 3D View pane. ![dummy](https://user-images.githubusercontent.com/10920881/79742437-c9cb2980-833d-11ea-8ad7-7c3ed1a96540.png) ### Simulate parking maneuver -Set goal in parking area. +- Set an initial pose for the ego vehicle first, and then set the goal pose in a parking area. ![parking](https://user-images.githubusercontent.com/10920881/79817389-56b6c700-83c0-11ea-873b-6ec73c8a5c38.png) diff --git a/scripts/add_aw_ros2_use_sim_time_into_launch_xml.pl b/scripts/add_aw_ros2_use_sim_time_into_launch_xml.pl deleted file mode 100755 index d37320ec806ff..0000000000000 --- a/scripts/add_aw_ros2_use_sim_time_into_launch_xml.pl +++ /dev/null @@ -1,31 +0,0 @@ -#!/usr/bin/perl - -$use_sim_time_str='' . "\n"; - -$out_line=""; -while(<>){ - if(/^(\s*)<\s*node\s/){ - $indent=$1; - $out_line = $indent . " " . $use_sim_time_str; - if(/\/\s*>\s*$/){ - $re_str = quotemeta($&); - ~s/$re_str/ >\n/; - print; - $out_line = $out_line . $indent . "\n"; - print $out_line; - $out_line = ""; - }else{ - print; - } - }elsif($out_line ne ""){ - if(/^\s*<\s*param\s*name.*use_sim_time.*AW_ROS2_USE_SIM_TIME/){ - $out_line = ""; - }elsif(/^\s*<\s*\/\s*node\s*>\s*$/){ - print $out_line; - $out_line = ""; - } - print; - }else{ - print; - } -} diff --git a/scripts/add_aw_ros2_use_sim_time_into_launch_xml.sh b/scripts/add_aw_ros2_use_sim_time_into_launch_xml.sh deleted file mode 100755 index d259567c2b54c..0000000000000 --- a/scripts/add_aw_ros2_use_sim_time_into_launch_xml.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -if [ -n "$1" ]; then - find_dir=$1 -else - echo "Usage: $0 " - exit 0 -fi - -patch_script=$(dirname $(realpath $0))/$(basename $0 .sh).pl - -for xml in $(find $find_dir -type f | grep launch.xml$); do - echo "mv $xml ${xml}.org_tmp" - eval "mv $xml ${xml}.org_tmp" - echo "$patch_script ${xml}.org_tmp > $xml" - eval "$patch_script ${xml}.org_tmp > $xml" - echo "rm ${xml}.org_tmp" - eval "rm ${xml}.org_tmp" - echo "" -done - diff --git a/scripts/get_use_sim_time_all.sh b/scripts/get_use_sim_time_all.sh deleted file mode 100755 index 2b829656b1d7d..0000000000000 --- a/scripts/get_use_sim_time_all.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash -( for node in $(ros2 run topic_tools node_list |& grep \/ | perl -pe "~s/^[^\/]\//\//") -#for node in $(ros2 param list|grep :) -do - #n=$(dirname $node)/$(basename $node :) - #echo "$n: use_sim_time: $(ros2 param get $n use_sim_time)" - echo "$node: use_sim_time: $(ros2 param get $node use_sim_time)" -done) |& grep \/