Commit de4335f (parent c167904)
author: yukke42
docs: add roi_cluster_fusion and roi_detected_object_fusion
Signed-off-by: yukke42 <yusuke.muramatsu@tier4.jp>

4 files changed: +157 -0 lines changed

# image_projection_based_fusion

## Purpose

The `image_projection_based_fusion` package fuses obstacles detected in images (bounding boxes or segmentation) with 3D point clouds or obstacles (bounding boxes, clusters, or segmentation).

## Inner-workings / Algorithms

Detailed descriptions of each fusion algorithm are given in the following links.

| Fusion Name                | Description                                                                                           | Detail                                       |
| -------------------------- | ----------------------------------------------------------------------------------------------------- | -------------------------------------------- |
| roi_cluster_fusion         | Overwrite the classification labels of clusters with those of ROIs from a 2D object detector.          | [link](./docs/roi-cluster-fusion.md)         |
| roi_detected_object_fusion | Overwrite the classification labels of detected objects with those of ROIs from a 2D object detector.  | [link](./docs/roi-detected-object-fusion.md) |
# roi_cluster_fusion

## Purpose

The `roi_cluster_fusion` package filters out clusters that are unlikely to be objects and overwrites the labels of clusters with those of Regions of Interest (ROIs) from a 2D object detector.

## Inner-workings / Algorithms

The clusters are projected onto the image planes; if the ROI of a cluster and an ROI from the 2D detector overlap, the cluster's label is overwritten with that of the detector's ROI. Intersection over Union (IoU) is used to determine whether the ROIs overlap.

![roi_cluster_fusion_image](./images/roi_cluster_fusion.png)
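The overlap test described above can be sketched in plain C++. This is a minimal illustration, not the node's actual API: the `Roi` struct and `calcIoU` function are hypothetical names, and ROIs are assumed to be axis-aligned pixel rectangles.

```cpp
#include <algorithm>

// Axis-aligned ROI on the image plane, in pixels
// (illustrative type, not the ROS message definition).
struct Roi
{
  double x_min, y_min, x_max, y_max;
};

// Intersection over Union of two axis-aligned ROIs; 0.0 when disjoint.
double calcIoU(const Roi & a, const Roi & b)
{
  // Overlap lengths along each axis, clamped to zero when disjoint.
  const double ix = std::max(0.0, std::min(a.x_max, b.x_max) - std::max(a.x_min, b.x_min));
  const double iy = std::max(0.0, std::min(a.y_max, b.y_max) - std::max(a.y_min, b.y_min));
  const double intersection = ix * iy;
  const double union_area =
    (a.x_max - a.x_min) * (a.y_max - a.y_min) +
    (b.x_max - b.x_min) * (b.y_max - b.y_min) - intersection;
  return union_area > 0.0 ? intersection / union_area : 0.0;
}
```

Conceptually, when this IoU exceeds `iou_threshold`, the cluster's label would be replaced by the label of the matching detector ROI.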
## Inputs / Outputs

### Input

| Name                  | Type                                                     | Description                                                                             |
| --------------------- | -------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| `input`               | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | clustered point cloud                                                                    |
| `input/camera_infoID` | `sensor_msgs::msg::CameraInfo`                           | camera information used to project 3D points onto image planes; `ID` is between 0 and 7 |
| `input/roisID`        | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | ROIs from each image; `ID` is between 0 and 7                                           |
| `input/image_rawID`   | `sensor_msgs::msg::Image`                                | images for visualization; `ID` is between 0 and 7                                       |

### Output

| Name                 | Type                                                     | Description                                       |
| -------------------- | -------------------------------------------------------- | ------------------------------------------------- |
| `output`             | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | labeled cluster point cloud                       |
| `output/image_rawID` | `sensor_msgs::msg::Image`                                | images for visualization; `ID` is between 0 and 7 |

## Parameters

### Core Parameters

| Name                        | Type  | Description                                                                        |
| --------------------------- | ----- | ---------------------------------------------------------------------------------- |
| `use_iou_x`                 | bool  | calculate IoU only along the x-axis                                                |
| `use_iou_y`                 | bool  | calculate IoU only along the y-axis                                                |
| `use_iou`                   | bool  | calculate IoU along both the x-axis and the y-axis                                 |
| `use_cluster_semantic_type` | bool  | if `false`, the labels of clusters are overwritten with `UNKNOWN` before fusion    |
| `iou_threshold`             | float | the IoU threshold above which a cluster's label is overwritten with that of an ROI |
| `rois_number`               | int   | the number of input ROIs                                                           |
| `debug_mode`                | bool  | if `true`, subscribe and publish images for visualization                          |
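When only `use_iou_x` or `use_iou_y` is enabled, the overlap test degenerates to a one-dimensional IoU along that single axis. A minimal sketch (the function name is illustrative, not the package's actual API):

```cpp
#include <algorithm>

// 1D IoU along one image axis: overlap length divided by union length.
// This mirrors what an x-only or y-only IoU check computes conceptually.
double calcIoU1D(double a_min, double a_max, double b_min, double b_max)
{
  const double inter = std::max(0.0, std::min(a_max, b_max) - std::max(a_min, b_min));
  const double uni = (a_max - a_min) + (b_max - b_min) - inter;
  return uni > 0.0 ? inter / uni : 0.0;
}
```

Checking only one axis can be useful when the projected extent along the other axis is unreliable, e.g. for tall or partially occluded clusters.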
## Assumptions / Known limits

<!-- Write assumptions and limitations of your implementation.

Example:
This algorithm assumes obstacles are not moving, so if they rapidly move after the vehicle started to avoid them, it might collide with them.
Also, this algorithm doesn't care about blind spots. In general, since too close obstacles aren't visible due to the sensing performance limit, please take enough margin to obstacles.
-->

## (Optional) Error detection and handling

<!-- Write how to detect errors and how to recover from them.

Example:
This package can handle up to 20 obstacles. If more obstacles found, this node will give up and raise diagnostic errors.
-->

## (Optional) Performance characterization

<!-- Write performance information like complexity. If it wouldn't be the bottleneck, not necessary.

Example:

### Complexity

This algorithm is O(N).

### Processing time

...
-->

## (Optional) References/External links

<!-- Write links you referred to when you implemented.

Example:
[1] {link_to_a_thesis}
[2] {link_to_an_issue}
-->

## (Optional) Future extensions / Unimplemented parts

<!-- Write future extensions of this package.

Example:
Currently, this package can't handle the chattering obstacles well. We plan to add some probabilistic filters in the perception layer to improve it.
Also, there are some parameters that should be global(e.g. vehicle size, max steering, etc.). These will be refactored and defined as global parameters so that we can share the same parameters between different nodes.
-->
# roi_detected_object_fusion

## Purpose

The `roi_detected_object_fusion` package overwrites the labels of detected objects with those of Regions of Interest (ROIs) from a 2D object detector.

## Inner-workings / Algorithms

The detected objects are projected onto the image planes; if the ROIs of detected objects (3D ROIs) and those from the 2D detector (2D ROIs) overlap, the labels of the detected objects are overwritten with those of the 2D ROIs. Intersection over Union (IoU) is used to determine whether the ROIs overlap.

`DetectedObject` has three shape types, and the polygon vertices of an object are determined as follows:

- `BOUNDING_BOX`: the 8 corners of the bounding box.
- `CYLINDER`: the circle is approximated by a hexagon.
- `POLYGON`: not implemented yet.
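The projection step can be illustrated with a plain pinhole camera model. This is a hedged sketch, not the node's implementation: the `Point3`/`Roi2D` types and `projectToRoi` name are hypothetical, points are assumed to already be expressed in the camera optical frame, and `fx`, `fy`, `cx`, `cy` stand for the intrinsics carried by `sensor_msgs::msg::CameraInfo`.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// 3D point in the camera optical frame (x right, y down, z forward);
// illustrative type, not the actual message definition.
struct Point3
{
  double x, y, z;
};

// Axis-aligned rectangle on the image plane, in pixels.
struct Roi2D
{
  double x_min, y_min, x_max, y_max;
};

// Bounding rectangle of the projected shape vertices (e.g. the 8 corners
// of a BOUNDING_BOX, or the hexagon approximating a CYLINDER), using a
// pinhole model with intrinsics fx, fy, cx, cy.
Roi2D projectToRoi(
  const std::vector<Point3> & vertices, double fx, double fy, double cx, double cy)
{
  Roi2D roi{
    std::numeric_limits<double>::max(), std::numeric_limits<double>::max(),
    std::numeric_limits<double>::lowest(), std::numeric_limits<double>::lowest()};
  for (const auto & p : vertices) {
    if (p.z <= 0.0) continue;  // skip vertices behind the camera
    const double u = fx * p.x / p.z + cx;
    const double v = fy * p.y / p.z + cy;
    roi.x_min = std::min(roi.x_min, u);
    roi.y_min = std::min(roi.y_min, v);
    roi.x_max = std::max(roi.x_max, u);
    roi.y_max = std::max(roi.y_max, v);
  }
  return roi;
}
```

The rectangle obtained this way plays the role of the 3D ROI, which is then compared against each 2D ROI with IoU.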
## Inputs / Outputs

### Input

| Name                  | Type                                                     | Description                                                                             |
| --------------------- | -------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| `input`               | `autoware_auto_perception_msgs::msg::DetectedObjects`    | detected objects                                                                         |
| `input/camera_infoID` | `sensor_msgs::msg::CameraInfo`                           | camera information used to project 3D points onto image planes; `ID` is between 0 and 7 |
| `input/roisID`        | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | ROIs from each image; `ID` is between 0 and 7                                           |
| `input/image_rawID`   | `sensor_msgs::msg::Image`                                | images for visualization; `ID` is between 0 and 7                                       |

### Output

| Name                 | Type                                                  | Description                                       |
| -------------------- | ----------------------------------------------------- | ------------------------------------------------- |
| `output`             | `autoware_auto_perception_msgs::msg::DetectedObjects` | detected objects with overwritten labels          |
| `output/image_rawID` | `sensor_msgs::msg::Image`                             | images for visualization; `ID` is between 0 and 7 |

## Parameters

### Core Parameters

| Name            | Type  | Description                                                                                |
| --------------- | ----- | ------------------------------------------------------------------------------------------ |
| `use_iou_x`     | bool  | calculate IoU only along the x-axis                                                        |
| `use_iou_y`     | bool  | calculate IoU only along the y-axis                                                        |
| `use_iou`       | bool  | calculate IoU along both the x-axis and the y-axis                                         |
| `iou_threshold` | float | the IoU threshold above which a detected object's label is overwritten with that of an ROI |
| `rois_number`   | int   | the number of input ROIs                                                                   |
| `debug_mode`    | bool  | if `true`, subscribe and publish images for visualization                                  |

## Assumptions / Known limits

`POLYGON`, which is one of the shape types of a detected object, isn't supported yet.
