feat(image_projection_based_fusion, roi_cluster_fusion): roi and obstacle fusion method #548

Merged
Changes from 19 commits

Commits
25 commits
7734c8c
feat: init image_projection_based_fusion package
Mar 23, 2022
ba98b09
feat: debugger
Mar 24, 2022
dc335f1
feat: port roi_cluster_fusion to image_projection_based_fusion
Mar 24, 2022
382ee32
feat: project detected_objects onto image
Mar 25, 2022
d66b322
feat: update detected_object.classification
Mar 25, 2022
9c78540
fix: add reset_cluster_semantic_type of roi_cluster_fusion
Mar 25, 2022
8a80209
refactor: organize code
Mar 25, 2022
733f09d
feat: add cylinderToVertices
Mar 28, 2022
6ec3f39
feat: get transform_stamped at the fixed stamp
Mar 28, 2022
3d08682
feat: not miss outside points of object on the image
Mar 28, 2022
a244579
chore: change the name of Copyright
Mar 28, 2022
7463caa
kfeat: use image_projection_based_fusion instead of roi_cluster_fusion
Mar 28, 2022
5bfed9e
docs: add roi_cluster_fusion and roi_detected_object_fusion
Mar 28, 2022
34bcf35
ci(pre-commit): autofix
pre-commit-ci[bot] Mar 28, 2022
2a8ffe1
docs: fix typo
Mar 29, 2022
ad2e762
refactor: rename function
Mar 29, 2022
2b0e0d2
refactor: delete member variables of input/output msg
Mar 29, 2022
946a705
fix: change when to clear a debugger
Mar 29, 2022
ff518eb
ci(pre-commit): autofix
pre-commit-ci[bot] Apr 6, 2022
f53a818
refactor: use pre-increment
Apr 10, 2022
a33f315
refactor: use range-based for loop
Apr 11, 2022
6f52e3f
Merge branch 'main' into feature/object-level-late-fusion
Apr 11, 2022
d18f43b
chore: add maintainer
yukke42 Apr 12, 2022
765e663
feat: change the output in perception_launch
Apr 14, 2022
40527ba
Merge branch 'main' into feature/object-level-late-fusion
yukke42 Apr 14, 2022
@@ -53,7 +53,7 @@
<arg name="output_clusters" value="clusters"/>
<arg name="use_pointcloud_map" value="false"/>
</include>
<include file="$(find-pkg-share roi_cluster_fusion)/launch/roi_cluster_fusion.launch.xml">
<include file="$(find-pkg-share image_projection_based_fusion)/launch/roi_cluster_fusion.launch.xml">
<arg name="input/camera_info0" value="$(var camera_info0)"/>
<arg name="input/rois0" value="/perception/object_recognition/detection/rois0"/>
<arg name="input/camera_info1" value="$(var camera_info1)"/>
2 changes: 1 addition & 1 deletion launch/tier4_perception_launch/package.xml
@@ -13,6 +13,7 @@
<exec_depend>compare_map_segmentation</exec_depend>
<exec_depend>euclidean_cluster</exec_depend>
<exec_depend>ground_segmentation</exec_depend>
<exec_depend>image_projection_based_fusion</exec_depend>
<exec_depend>image_transport_decompressor</exec_depend>
<exec_depend>lidar_apollo_instance_segmentation</exec_depend>
<exec_depend>map_based_prediction</exec_depend>
@@ -22,7 +23,6 @@
<exec_depend>occupancy_grid_map_outlier_filter</exec_depend>
<exec_depend>pointcloud_preprocessor</exec_depend>
<exec_depend>pointcloud_to_laserscan</exec_depend>
<exec_depend>roi_cluster_fusion</exec_depend>
<exec_depend>shape_estimation</exec_depend>
<exec_depend>tensorrt_yolo</exec_depend>
<exec_depend>traffic_light_classifier</exec_depend>
54 changes: 54 additions & 0 deletions perception/image_projection_based_fusion/CMakeLists.txt
@@ -0,0 +1,54 @@
cmake_minimum_required(VERSION 3.8)
project(image_projection_based_fusion)

if (NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()

# find dependencies
find_package(ament_lint_auto REQUIRED)
find_package(ament_cmake_auto REQUIRED)
find_package(OpenCV REQUIRED)
find_package(Eigen3 REQUIRED)

ament_auto_find_build_dependencies()

include_directories(
include
${OpenCV_INCLUDE_DIRS}
${EIGEN3_INCLUDE_DIRS}
)

ament_auto_add_library(${PROJECT_NAME} SHARED
src/fusion_node.cpp
src/debugger.cpp
src/utils/geometry.cpp
src/utils/utils.cpp
src/roi_cluster_fusion/node.cpp
src/roi_detected_object_fusion/node.cpp
)
target_link_libraries(${PROJECT_NAME}
${OpenCV_LIBRARIES}
${EIGEN3_LIBRARIES}
)

rclcpp_components_register_node(${PROJECT_NAME}
PLUGIN "image_projection_based_fusion::RoiClusterFusionNode"
EXECUTABLE roi_cluster_fusion_node
)

rclcpp_components_register_node(${PROJECT_NAME}
PLUGIN "image_projection_based_fusion::RoiDetectedObjectFusionNode"
EXECUTABLE roi_detected_object_fusion_node
)

if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
ament_lint_auto_find_test_dependencies()
endif()

ament_auto_package(INSTALL_TO_SHARE
launch
)
14 changes: 14 additions & 0 deletions perception/image_projection_based_fusion/README.md
@@ -0,0 +1,14 @@
# image_projection_based_fusion

## Purpose

The `image_projection_based_fusion` package fuses obstacles detected in images (bounding boxes or segmentation) with 3D point clouds or obstacles (bounding boxes, clusters, or segmentation).

## Inner-workings / Algorithms

A detailed description of each fusion algorithm can be found at the following links.

| Fusion Name | Description | Detail |
| -------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------------- |
| roi_cluster_fusion | Overwrites the classification labels of clusters with those of ROIs from a 2D object detector. | [link](./docs/roi-cluster-fusion.md) |
| roi_detected_object_fusion | Overwrites the classification labels of detected objects with those of ROIs from a 2D object detector. | [link](./docs/roi-detected-object-fusion.md) |
@@ -2,7 +2,7 @@

## Purpose

roi_cluster_fusion is a package for filtering clusters that are less likely to be objects and overwriting labels of clusters with that of Region Of Interests (ROIs) by a 2D object detector.
The `roi_cluster_fusion` package filters clusters that are less likely to be objects and overwrites the labels of clusters with those of Regions Of Interest (ROIs) from a 2D object detector.

## Inner-workings / Algorithms

@@ -16,16 +16,17 @@ The clusters are projected onto image planes, and then if the ROIs of clusters a

| Name | Type | Description |
| --------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| `clusters` | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | clustered pointcloud |
| `input` | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | clustered pointcloud |
| `input/camera_infoID` | `sensor_msgs::msg::CameraInfo` | camera information to project 3d points onto image planes, `ID` is between 0 and 7 |
| `input/roisID` | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | ROIs from each image, `ID` is between 0 and 7 |
| `input/image_rawID` | `sensor_msgs::msg::Image` | images for visualization, `ID` is between 0 and 7 |

### Output

| Name | Type | Description |
| -------------------- | ------------------------- | ------------------------------------------------- |
| `output/image_rawID` | `sensor_msgs::msg::Image` | images for visualization, `ID` is between 0 and 7 |
| Name | Type | Description |
| -------------------- | -------------------------------------------------------- | ------------------------------------------------- |
| `output` | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | labeled cluster pointcloud |
| `output/image_rawID` | `sensor_msgs::msg::Image` | images for visualization, `ID` is between 0 and 7 |

## Parameters

@@ -39,6 +40,7 @@ The clusters are projected onto image planes, and then if the ROIs of clusters a
| `use_cluster_semantic_type` | bool | if `false`, the labels of clusters are overwritten by `UNKNOWN` before fusion |
| `iou_threshold` | float | the IoU threshold to overwrite a label of clusters with a label of roi |
| `rois_number` | int | the number of input rois |
| `debug_mode` | bool | If `true`, subscribe and publish images for visualization. |
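
As a rough illustration of the overlap test these parameters control, the 2D IoU between a projected cluster ROI and a detector ROI could be computed as in the sketch below. The helper name `calcIoU` and its direct use of `sensor_msgs::msg::RegionOfInterest` are assumptions for illustration, not necessarily this node's exact implementation.

```cpp
#include <algorithm>

#include <sensor_msgs/msg/region_of_interest.hpp>

// Sketch only: 2D IoU of two image-plane ROIs (helper name assumed).
double calcIoU(
  const sensor_msgs::msg::RegionOfInterest & a, const sensor_msgs::msg::RegionOfInterest & b)
{
  const double a_x1 = a.x_offset, a_y1 = a.y_offset;
  const double a_x2 = a.x_offset + a.width, a_y2 = a.y_offset + a.height;
  const double b_x1 = b.x_offset, b_y1 = b.y_offset;
  const double b_x2 = b.x_offset + b.width, b_y2 = b.y_offset + b.height;
  const double inter_w = std::max(0.0, std::min(a_x2, b_x2) - std::max(a_x1, b_x1));
  const double inter_h = std::max(0.0, std::min(a_y2, b_y2) - std::max(a_y1, b_y1));
  const double intersection = inter_w * inter_h;
  const double union_area = a.width * a.height + b.width * b.height - intersection;
  return union_area > 0.0 ? intersection / union_area : 0.0;
}
```

Under this sketch, a cluster's label would only be overwritten when the returned value exceeds `iou_threshold`.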

## Assumptions / Known limits

@@ -62,6 +64,7 @@ Example:
<!-- Write performance information like complexity. If it wouldn't be the bottleneck, not necessary.

Example:

### Complexity

This algorithm is O(N).
@@ -0,0 +1,50 @@
# roi_detected_object_fusion

## Purpose

The `roi_detected_object_fusion` package overwrites the labels of detected objects with those of Regions Of Interest (ROIs) from a 2D object detector.

## Inner-workings / Algorithms

The detected objects are projected onto the image planes, and if the ROIs of the detected objects (3D ROIs) overlap with those from a 2D detector (2D ROIs), the labels of the detected objects are overwritten with those of the 2D ROIs. Intersection over Union (IoU) is used to determine whether they overlap.

A `DetectedObject` has one of three shapes, and the polygon vertices of an object are derived as follows:

- `BOUNDING_BOX`: The 8 corners of a bounding box.
- `CYLINDER`: The circle is approximated by a hexagon.
- `POLYGON`: Not implemented yet.
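
For example, a `CYLINDER` could be expanded into projectable 3D vertices roughly as in the sketch below. The function name `cylinderToVertices` follows the commit message, but the body and the assumption that `shape.dimensions.x` holds the diameter are illustrative only.

```cpp
#include <cmath>
#include <vector>

#include <Eigen/Core>

#include <autoware_auto_perception_msgs/msg/detected_object.hpp>

// Sketch only: approximate a CYLINDER by 6 vertices on its top and bottom circles.
std::vector<Eigen::Vector3d> cylinderToVertices(
  const autoware_auto_perception_msgs::msg::DetectedObject & object)
{
  const auto & pose = object.kinematics.pose_with_covariance.pose;
  const double radius = object.shape.dimensions.x * 0.5;       // assumed: x is the diameter
  const double half_height = object.shape.dimensions.z * 0.5;  // assumed: z is the height
  const double pi = std::acos(-1.0);

  std::vector<Eigen::Vector3d> vertices;
  for (int i = 0; i < 6; ++i) {
    const double angle = 2.0 * pi * static_cast<double>(i) / 6.0;
    const double x = pose.position.x + radius * std::cos(angle);
    const double y = pose.position.y + radius * std::sin(angle);
    vertices.emplace_back(x, y, pose.position.z + half_height);  // top circle
    vertices.emplace_back(x, y, pose.position.z - half_height);  // bottom circle
  }
  return vertices;
}
```

The resulting vertices would then be projected with the camera intrinsics to form the 3D ROI used for the IoU check described above.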

## Inputs / Outputs

### Input

| Name | Type | Description |
| --------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| `input` | `autoware_auto_perception_msgs::msg::DetectedObjects` | detected objects |
| `input/camera_infoID` | `sensor_msgs::msg::CameraInfo` | camera information to project 3d points onto image planes, `ID` is between 0 and 7 |
| `input/roisID` | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | ROIs from each image, `ID` is between 0 and 7 |
| `input/image_rawID` | `sensor_msgs::msg::Image` | images for visualization, `ID` is between 0 and 7 |

### Output

| Name | Type | Description |
| -------------------- | ----------------------------------------------------- | ------------------------------------------------- |
| `output` | `autoware_auto_perception_msgs::msg::DetectedObjects` | detected objects |
| `output/image_rawID` | `sensor_msgs::msg::Image` | images for visualization, `ID` is between 0 and 7 |

## Parameters

### Core Parameters

| Name | Type | Description |
| --------------- | ----- | ------------------------------------------------------------------------------ |
| `use_iou_x` | bool | calculate IoU only along x-axis |
| `use_iou_y` | bool | calculate IoU only along y-axis |
| `use_iou` | bool | calculate IoU both along x-axis and y-axis |
| `iou_threshold` | float | the IoU threshold to overwrite a label of a detected object with that of a roi |
| `rois_number` | int | the number of input rois |
| `debug_mode` | bool | If `true`, subscribe and publish images for visualization. |
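
The axis-wise flags can be read as selecting 1D IoU terms over the projected ROI intervals. The sketch below shows the x-axis variant, assuming both ROIs are `sensor_msgs::msg::RegionOfInterest`; the helper name is hypothetical, and `use_iou_y` would use `y_offset`/`height` in the same way.

```cpp
#include <algorithm>

#include <sensor_msgs/msg/region_of_interest.hpp>

// Sketch only: 1D IoU along the image x-axis, as selected by `use_iou_x`.
double calcIoUX(
  const sensor_msgs::msg::RegionOfInterest & a, const sensor_msgs::msg::RegionOfInterest & b)
{
  const double a_min = a.x_offset;
  const double a_max = a.x_offset + a.width;
  const double b_min = b.x_offset;
  const double b_max = b.x_offset + b.width;
  const double intersection = std::max(0.0, std::min(a_max, b_max) - std::max(a_min, b_min));
  const double union_len = (a_max - a_min) + (b_max - b_min) - intersection;
  return union_len > 0.0 ? intersection / union_len : 0.0;
}
```

How the enabled terms are combined against `iou_threshold` is left to the node's implementation.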

## Assumptions / Known limits

A detected object with the `POLYGON` shape isn't supported yet.
@@ -0,0 +1,65 @@
// Copyright 2022 TIER IV, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifndef IMAGE_PROJECTION_BASED_FUSION__DEBUGGER_HPP_
#define IMAGE_PROJECTION_BASED_FUSION__DEBUGGER_HPP_

#define EIGEN_MPL2_ONLY

#include <Eigen/Core>
#include <Eigen/Geometry>
#include <image_transport/image_transport.hpp>
#include <rclcpp/rclcpp.hpp>

#include <sensor_msgs/msg/image.hpp>
#include <sensor_msgs/msg/region_of_interest.hpp>

#include <boost/circular_buffer.hpp>

#include <memory>
#include <vector>

namespace image_projection_based_fusion
{

using sensor_msgs::msg::RegionOfInterest;

class Debugger
{
public:
explicit Debugger(rclcpp::Node * node_ptr, const int image_num);

void publishImage(const int image_id, const rclcpp::Time & stamp);

void clear();

std::vector<RegionOfInterest> image_rois_;
std::vector<RegionOfInterest> obstacle_rois_;
std::vector<Eigen::Vector2d> obstacle_points_;

private:
void imageCallback(
const sensor_msgs::msg::Image::ConstSharedPtr input_image_msg, const int image_id);

rclcpp::Node * node_ptr_;
std::shared_ptr<image_transport::ImageTransport> image_transport_;
std::vector<image_transport::Subscriber> image_subs_;
std::vector<image_transport::Publisher> image_pubs_;
std::vector<boost::circular_buffer<sensor_msgs::msg::Image::ConstSharedPtr>> image_buffers_;
std::size_t image_buffer_size{5};
};

} // namespace image_projection_based_fusion

#endif // IMAGE_PROJECTION_BASED_FUSION__DEBUGGER_HPP_
@@ -0,0 +1,108 @@
// Copyright 2021 TIER IV, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifndef IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
#define IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_

#include <image_projection_based_fusion/debugger.hpp>
#include <rclcpp/rclcpp.hpp>

#include <autoware_auto_perception_msgs/msg/detected_objects.hpp>
#include <sensor_msgs/msg/camera_info.hpp>
#include <tier4_perception_msgs/msg/detected_objects_with_feature.hpp>

#include <message_filters/pass_through.h>
#include <message_filters/subscriber.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <message_filters/synchronizer.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

#include <map>
#include <memory>
#include <string>
#include <vector>

namespace image_projection_based_fusion
{
using autoware_auto_perception_msgs::msg::DetectedObject;
using autoware_auto_perception_msgs::msg::DetectedObjects;
using tier4_perception_msgs::msg::DetectedObjectsWithFeature;
using tier4_perception_msgs::msg::DetectedObjectWithFeature;

template <class Msg>
class FusionNode : public rclcpp::Node
{
public:
explicit FusionNode(const std::string & node_name, const rclcpp::NodeOptions & options);

protected:
void cameraInfoCallback(
const sensor_msgs::msg::CameraInfo::ConstSharedPtr input_camera_info_msg, const int camera_id);

void fusionCallback(
typename Msg::ConstSharedPtr input_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi0_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi1_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi2_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi3_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi4_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi5_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi6_msg,
DetectedObjectsWithFeature::ConstSharedPtr input_roi7_msg);

virtual void preprocess(Msg & output_msg);

virtual void fuseOnSingleImage(
const Msg & input_msg, const int image_id, const DetectedObjectsWithFeature & input_roi_msg,
const sensor_msgs::msg::CameraInfo & camera_info, Msg & output_msg) = 0;

// set args if you need
virtual void postprocess();

void publish(const Msg & output_msg);

int rois_number_{0};
tf2_ros::Buffer tf_buffer_;
tf2_ros::TransformListener tf_listener_;

// camera_info
std::map<int, sensor_msgs::msg::CameraInfo> camera_info_map_;
std::vector<rclcpp::Subscription<sensor_msgs::msg::CameraInfo>::SharedPtr> camera_info_subs_;

// fusion
typename message_filters::Subscriber<Msg> sub_;
message_filters::PassThrough<DetectedObjectsWithFeature> passthrough_;
std::vector<std::shared_ptr<message_filters::Subscriber<DetectedObjectsWithFeature>>> rois_subs_;
inline void dummyCallback(DetectedObjectsWithFeature::ConstSharedPtr input)
{
passthrough_.add(input);
}
using SyncPolicy = message_filters::sync_policies::ApproximateTime<
Msg, DetectedObjectsWithFeature, DetectedObjectsWithFeature, DetectedObjectsWithFeature,
DetectedObjectsWithFeature, DetectedObjectsWithFeature, DetectedObjectsWithFeature,
DetectedObjectsWithFeature, DetectedObjectsWithFeature>;
using Sync = message_filters::Synchronizer<SyncPolicy>;
typename std::shared_ptr<Sync> sync_ptr_;

// output
typename rclcpp::Publisher<Msg>::SharedPtr pub_ptr_;

// debugger
std::shared_ptr<Debugger> debugger_;
};

} // namespace image_projection_based_fusion

#endif // IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
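
For orientation, `FusionNode` is meant to be specialized by the concrete fusion nodes registered in the CMakeLists above. The snippet below is an assumed sketch of such a subclass; the class name and body are illustrative, not the actual `roi_cluster_fusion` node.

```cpp
#include <image_projection_based_fusion/fusion_node.hpp>

namespace image_projection_based_fusion
{

// Sketch only: a concrete node fusing clustered point clouds with 2D ROIs,
// one camera image at a time.
class ExampleClusterFusionNode : public FusionNode<DetectedObjectsWithFeature>
{
public:
  explicit ExampleClusterFusionNode(const rclcpp::NodeOptions & options)
  : FusionNode<DetectedObjectsWithFeature>("example_cluster_fusion", options)
  {
  }

protected:
  void fuseOnSingleImage(
    const DetectedObjectsWithFeature & input_msg, const int image_id,
    const DetectedObjectsWithFeature & input_roi_msg,
    const sensor_msgs::msg::CameraInfo & camera_info,
    DetectedObjectsWithFeature & output_msg) override
  {
    (void)input_msg;
    (void)image_id;
    (void)input_roi_msg;
    (void)camera_info;
    // Project each cluster into this camera, compute the IoU against the 2D
    // ROIs, and overwrite output_msg.feature_objects[i].object.classification.
  }
};

}  // namespace image_projection_based_fusion
```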