
Commit 4cf499e

SatjaSivcev authored and csu-bot-zivid committed
Samples: Stitching, 50/60 Hz Adaption, and GUI fixes
New samples:

- stitch_using_robot_mounted_camera.py
  This sample simulates scanning an object using a robot-mounted camera. It captures multiple point clouds at different robot poses and stitches them using the robot poses and the hand-eye calibration. It supports both small and large objects and includes optional voxel downsampling.
- adapt_settings_for_flickering_ambient_light.py
  This sample is a tool that adjusts settings from the provided YAML files to compensate for known 50 Hz or 60 Hz flickering frequencies in the scene (see the exposure-time sketch below).

Fixes:

- robodk_verify_hand_eye_calibration.py
  The load_and_assert_affine_matrix() function was made stricter, which made this sample fail; this commit makes the sample work again. It also corrects a typo (--id instead of --ids).
- HandEyeGUI
  - In both Eye-In-Hand and Eye-To-Hand, we calculate marker poses in the Robot Base Frame. This also makes the touch pose calculation consistent.
  - While refactoring the functions that get resource paths, the function used in pose_conversion_gui.py was not updated; it is now.
  - In hand_eye_verification_gui.py the projection workaround is removed. Updating the projection on the fly was not possible before, so we used a workaround to ensure the projector image was not monochrome. This reduced the projection strength but avoided glitches when changing the projection on the fly. The workaround is no longer needed, as the projection can now be updated on the fly without issues.
  - Improves the verification pattern in HandEyeGUI. We now use the checker centers we get from the API. This is not as powerful as projecting on the corners, because calculating the corners requires a transformation into the checkerboard coordinate system followed by a return to the camera coordinate system. Instead, we now project the longest diagonals across the white checkers.
  - Fixes the pose confirmation logic in HandEyeGUI. When fixing verification with projection for manual operation, I broke the logic for the robot; this commit fixes it.
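The diff for adapt_settings_for_flickering_ambient_light.py is not shown on this page. As a rough illustration of the idea only (not the shipped sample's code): light powered from a 50 Hz or 60 Hz grid flickers at twice the grid frequency, so exposure times can be snapped to multiples of roughly 10000 µs or 8333 µs. The helper name and rounding policy below are assumptions.

import math


def adapt_exposure_time_us(exposure_time_us: float, grid_frequency_hz: int) -> int:
    """Round an exposure time up to the nearest whole flicker period (hypothetical helper)."""
    if grid_frequency_hz not in (50, 60):
        raise ValueError("Only 50 Hz and 60 Hz grid frequencies are supported")
    # Lights flicker at twice the grid frequency: 10000 us period at 50 Hz, ~8333 us at 60 Hz.
    period_us = 10000 if grid_frequency_hz == 50 else 8333
    return max(period_us, math.ceil(exposure_time_us / period_us) * period_us)


print(adapt_exposure_time_us(6500, 50))   # -> 10000
print(adapt_exposure_time_us(12000, 60))  # -> 16666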
1 parent fb16911 commit 4cf499e

File tree

8 files changed: +558 -15 lines changed

README.md

Lines changed: 4 additions & 0 deletions
@@ -55,6 +55,8 @@ from the camera can be used.
    - [capture\_hdr\_print\_normals](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/advanced/capture_hdr_print_normals.py) - Capture Zivid point clouds, compute normals and print a
      subset.
  - **info\_util\_other**
+    - [adapt\_settings\_for\_flickering\_ambient\_light](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/adapt_settings_for_flickering_ambient_light.py) - Adapt camera acquisition settings based on known ambient
+      light conditions.
    - [automatic\_network\_configuration\_for\_cameras](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/automatic_network_configuration_for_cameras.py) - Automatically configure the IP addresses of connected
      cameras to match the network of the user's PC.
    - [camera\_info](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/camera_info.py) - Print version information for Python, zivid-python and
@@ -126,6 +128,8 @@ from the camera can be used.
    - [stitch\_continuously\_rotating\_object](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/stitch_continuously_rotating_object.py) - Stitch point clouds from a continuously rotating object
      without pre-alignment using Local Point Cloud Registration
      and apply Voxel Downsample.
+    - [stitch\_using\_robot\_mounted\_camera](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/stitch_using_robot_mounted_camera.py) - Stitch multiple point clouds captured with a robot mounted
+      camera.
    - [stitch\_via\_local\_point\_cloud\_registration](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/stitch_via_local_point_cloud_registration.py) - Stitch two point clouds using a transformation estimated
      by Local Point Cloud Registration and apply Voxel
      Downsample.

modules/zividsamples/gui/hand_eye_verification_gui.py

Lines changed: 14 additions & 3 deletions
@@ -162,7 +162,9 @@ def connect_signals(self):

    def update_instructions(self, has_set_object_poses_in_robot_frame: bool, robot_pose_confirmed: bool):
        self.has_confirmed_robot_pose = robot_pose_confirmed
-        self.has_set_object_poses_in_robot_frame = has_set_object_poses_in_robot_frame and self.has_confirmed_robot_pose
+        self.has_set_object_poses_in_robot_frame = has_set_object_poses_in_robot_frame and (
+            self.has_confirmed_robot_pose or self.use_robot
+        )
        self.instruction_steps = {}
        if self.use_robot:
            self.instruction_steps[
@@ -305,7 +307,7 @@ def on_target_pose_updated(self, robot_target: RobotTarget):
        self.update_projection.emit(True)

    def has_features_to_project(self) -> bool:
-        return self.has_confirmed_robot_pose and self.has_set_object_poses_in_robot_frame
+        return (self.has_confirmed_robot_pose or self.use_robot) and self.has_set_object_poses_in_robot_frame

    def process_capture(self, frame: zivid.Frame, rgba: NDArray[Shape["N, M, 4"], UInt8], settings: SettingsPixelMappingIntrinsics):  # type: ignore
        self.detected_markers = {}
@@ -367,11 +369,20 @@ def generate_projector_image(self, camera: zivid.Camera):
        color = (0, 255, 0, 255) if self.has_confirmed_robot_pose else (0, 255, 255, 255)
        if self.hand_eye_configuration.calibration_object == CalibrationObject.Checkerboard:
            self.cv2_handler.draw_circles(projector_image, non_nan_projector_image_indices, color)
+            # Add diagonal lines across white checkers
+            rows, cols = (5, 6) if projector_pixels.shape[0] == 30 else (3, 4)
+            organized_projector_pixel_indices = non_nan_projector_image_indices.reshape(rows, cols, -1)
+            diagonal_lines = np.array(
+                [
+                    [organized_projector_pixel_indices[0, 1], organized_projector_pixel_indices[-1, -1]],
+                    [organized_projector_pixel_indices[-1, 1], organized_projector_pixel_indices[0, -1]],
+                ]
+            )
+            self.cv2_handler.draw_polygons(projector_image, diagonal_lines, color=color)
        else:
            self.cv2_handler.draw_polygons(
                projector_image, non_nan_projector_image_indices.reshape((-1, 4, 2)), color=color
            )
-        # projector_image[0, 0] = [1, 1, 1, 255]  # TODO(ZIVID-8760): Remove workaround
        return projector_image

    def calculate_calibration_object_in_camera_frame_pose(self):
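To make the new checkerboard projection logic easier to follow, here is a standalone sketch of the diagonal selection above using dummy checker centers. The 5x6 and 3x4 grid sizes mirror the diff; the dummy data and names are illustrative only.

import numpy as np

# 30 dummy (x, y) checker centers laid out row by row, standing in for the centers
# returned by the checkerboard detection.
centers = np.stack(np.meshgrid(np.arange(6), np.arange(5)), axis=-1).reshape(-1, 2)

rows, cols = (5, 6) if centers.shape[0] == 30 else (3, 4)
grid = centers.reshape(rows, cols, -1)

# Two lines spanning the longest diagonals across the white checkers.
diagonal_lines = np.array(
    [
        [grid[0, 1], grid[-1, -1]],
        [grid[-1, 1], grid[0, -1]],
    ]
)
print(diagonal_lines.shape)  # (2, 2, 2): two lines, each with two (x, y) endpoints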

modules/zividsamples/gui/pose_widget.py

Lines changed: 1 addition & 1 deletion
@@ -775,7 +775,7 @@ def MarkersInRobotFrame(
            "hand-eye-robot-and-calibration-board-ee-object-pose-low-res.png"
        )
        return cls(
-            title=f"Marker Poses In Robot {('Base' if eye_in_hand else 'Tool')} Frame",
+            title="Marker Poses In Robot Base Frame",
            initial_rotation_information=initial_rotation_information,
            eye_in_hand=eye_in_hand,
            display_mode=display_mode,

modules/zividsamples/gui/touch_gui.py

Lines changed: 1 addition & 3 deletions
@@ -172,9 +172,7 @@ def process_capture(self, frame: zivid.Frame, rgba: NDArray[Shape["N, M, 4"], UI
        hand_eye_transform = self.hand_eye_pose_widget.transformation_matrix
        robot_transform = self.robot_pose_widget.transformation_matrix
        robot_frame_transform = (
-            robot_transform * hand_eye_transform
-            if self.hand_eye_configuration.eye_in_hand
-            else robot_transform.inv() * hand_eye_transform
+            robot_transform * hand_eye_transform if self.hand_eye_configuration.eye_in_hand else hand_eye_transform
        )
        detected_marker_poses_in_robot_frame = {
            key: robot_frame_transform * value for key, value in detected_marker_poses_in_camera_frame.items()
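The intent of this fix, written out in plain NumPy (a minimal sketch; the GUI uses its own pose and transform classes, so the names and types here are illustrative): with eye-to-hand, the hand-eye transform already maps the camera frame to the robot base frame, so the robot pose must not be applied.

import numpy as np


def marker_poses_in_robot_base_frame(
    marker_poses_in_camera_frame: dict,  # {marker_id: 4x4 homogeneous pose}
    robot_base_to_flange: np.ndarray,  # 4x4 robot pose
    hand_eye_transform: np.ndarray,  # 4x4: flange-to-camera (eye-in-hand) or base-to-camera (eye-to-hand)
    eye_in_hand: bool,
) -> dict:
    # Eye-in-hand: camera-to-base = base-to-flange @ flange-to-camera.
    # Eye-to-hand: the hand-eye transform is already camera-to-base.
    camera_to_base = robot_base_to_flange @ hand_eye_transform if eye_in_hand else hand_eye_transform
    return {marker_id: camera_to_base @ pose for marker_id, pose in marker_poses_in_camera_frame.items()}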

source/applications/advanced/hand_eye_calibration/pose_conversion_gui.py

Lines changed: 3 additions & 3 deletions
@@ -14,7 +14,7 @@
from zividsamples.gui.pose_widget import PoseWidget, PoseWidgetDisplayMode
from zividsamples.gui.qt_application import ZividQtApplication
from zividsamples.gui.rotation_format_configuration import RotationInformation
-from zividsamples.paths import get_file_path
+from zividsamples.paths import get_image_file_path


class PoseConverter(QMainWindow):
@@ -38,10 +38,10 @@ def __init__(self):

        self.create_toolbar()

-        robot_ee_pose_img_path = get_file_path(
+        robot_ee_pose_img_path = get_image_file_path(
            "hand-eye-robot-and-calibration-board-camera-on-robot-robot-ee-pose-low-res.png"
        )
-        rob_ee_pose_img_path = get_file_path("hand-eye-robot-and-calibration-board-rob-ee-pose-low-res.png")
+        rob_ee_pose_img_path = get_image_file_path("hand-eye-robot-and-calibration-board-rob-ee-pose-low-res.png")
        self.input_pose_widget = PoseWidget(
            title="Input Pose",
            initial_rotation_information=input_format,

source/applications/advanced/hand_eye_calibration/robodk_hand_eye_calibration/robodk_verify_hand_eye_calibration.py

Lines changed: 6 additions & 5 deletions
@@ -20,6 +20,7 @@

import argparse
import datetime
+from pathlib import Path
from typing import Dict, Tuple

import cv2
@@ -72,7 +73,7 @@ def _estimate_calibration_object_pose(frame: zivid.Frame, user_options: argparse
        print("Detecting the ArUco marker pose (center of the marker)")

        calibration_object_pose = (
-            zivid.calibration.detect_markers(frame, user_options.ids, user_options.dictionary)
+            zivid.calibration.detect_markers(frame, user_options.id, user_options.dictionary)
            .detected_markers()[0]
            .pose.to_matrix()
        )
@@ -102,13 +103,13 @@ def _get_base_to_calibration_object_transform(
    if user_options.eih:
        print("Loading current robot pose")
        base_to_flange_transform = np.array(robot.Pose()).T
-        flange_to_camera_transform = load_and_assert_affine_matrix(user_options.hand_eye_yaml)
+        flange_to_camera_transform = load_and_assert_affine_matrix(Path(user_options.hand_eye_yaml))

        base_to_calibration_object_transform = (
            base_to_flange_transform @ flange_to_camera_transform @ camera_to_calibration_object_transform
        )
    elif user_options.eth:
-        base_to_camera_transform = load_and_assert_affine_matrix(user_options.hand_eye_yaml)
+        base_to_camera_transform = load_and_assert_affine_matrix(Path(user_options.hand_eye_yaml))

        base_to_calibration_object_transform = base_to_camera_transform @ camera_to_calibration_object_transform
    else:
@@ -336,11 +337,11 @@ def _main() -> None:
    set_robot_speed_and_acceleration(robot, speed=100, joint_speed=100, acceleration=50, joint_acceleration=50)

    print("Loading the Pointed Hand-Eye Verification Tool transformation matrix from a YAML file")
-    tool_base_to_tool_tip_transform = load_and_assert_affine_matrix(user_options.tool_yaml)
+    tool_base_to_tool_tip_transform = load_and_assert_affine_matrix(Path(user_options.tool_yaml))

    if user_options.mounts_yaml:
        print("Loading the on-arm mounts transformation matrix from a YAML file")
-        flange_to_tool_base_transform = load_and_assert_affine_matrix(user_options.mounts_yaml)
+        flange_to_tool_base_transform = load_and_assert_affine_matrix(Path(user_options.mounts_yaml))

        flange_to_tcp_transform = flange_to_tool_base_transform @ tool_base_to_tool_tip_transform
    else:
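The Path(...) wrappers are needed because the stricter load_and_assert_affine_matrix() now expects a pathlib.Path instead of a plain string. The sketch below only approximates what such a stricter loader might check; the real helper lives in zividsamples.save_load_matrix, and the YAML node name and exact validations here are assumptions.

from pathlib import Path

import cv2
import numpy as np


def load_and_assert_affine_matrix_sketch(yaml_path: Path) -> np.ndarray:
    """Hypothetical stricter loader: Path-only input, 4x4 affine matrix required."""
    if not isinstance(yaml_path, Path):
        raise TypeError(f"Expected pathlib.Path, got {type(yaml_path).__name__}")
    file_storage = cv2.FileStorage(str(yaml_path), cv2.FILE_STORAGE_READ)
    matrix = file_storage.getNode("PoseState").mat()  # node name assumed
    file_storage.release()
    if matrix is None or matrix.shape != (4, 4):
        raise ValueError(f"{yaml_path} does not contain a 4x4 matrix")
    if not np.allclose(matrix[3, :], [0.0, 0.0, 0.0, 1.0]):
        raise ValueError(f"{yaml_path} does not contain an affine transformation")
    return matrix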
source/applications/advanced/stitch_using_robot_mounted_camera.py

Lines changed: 203 additions & 0 deletions
@@ -0,0 +1,203 @@
+"""
+Stitch multiple point clouds captured with a robot mounted camera.
+
+The sample simulates a camera capturing point clouds at different robot poses.
+The point clouds are pre-aligned, i.e., transformed into the robot base frame using the robot's pose and the hand-eye
+calibration transform. The resulting stitched point cloud is displayed and saved to a PLY file.
+
+The sample demonstrates stitching of a small and a big object.
+
+For the small object that fully fits within the camera's field of view in every capture, the result of stitching multiple
+captures is a complete and detailed point cloud of the object. Since the object remains fully visible in each frame from
+different perspectives (front, back, left, and right), the stitching produces data for all sides of the object.
+
+For the large object that does not fit within the camera's field of view, stitching is used to expand the effective
+field of view. By capturing different sections of the object and combining them, the result is a complete view of the
+object, but from a more limited number of perspectives (e.g., mostly front-facing), depending on the capture setup.
+
+The resulting stitched point cloud is voxel downsampled if the `--full-resolution` flag is not set.
+
+The dataset with ZDF and YML files (StitchingPointClouds.zip) for this sample can be found at
+https://support.zivid.com/en/latest/academy/applications/stitching.html.
+
+Extract the content into %ProgramData%/Zivid:
+StitchingPointClouds/
+├── SmallObject/
+└── BigObject/
+
+Each of these folders must contain point cloud in ZDF and robot poses and a hand-eye transform in YML format.
+
+Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
+"""
+
+import argparse
+from pathlib import Path
+
+import numpy as np
+import zivid
+from zivid.experimental.point_cloud_export import export_unorganized_point_cloud
+from zivid.experimental.point_cloud_export.file_format import PLY
+from zivid.experimental.toolbox.point_cloud_registration import (
+    LocalPointCloudRegistrationParameters,
+    local_point_cloud_registration,
+)
+from zividsamples.display import display_pointcloud
+from zividsamples.paths import get_sample_data_path
+from zividsamples.save_load_matrix import load_and_assert_affine_matrix
+
+
+def _options() -> argparse.Namespace:
+    """Parse command-line arguments.
+
+    Returns:
+        argparse.Namespace: Parsed command-line arguments.
+
+    """
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "--full-resolution",
+        action="store_true",
+        help="Use full resolution for stitching. If not set, downsampling is applied.",
+    )
+
+    return parser.parse_args()
+
+
+def _stitch_point_clouds(directory: Path, full_resolution: bool) -> zivid.UnorganizedPointCloud:
+    """Stitch multiple point clouds captured at different robot poses.
+
+    Args:
+        directory (Path): Path to directory containing point clouds in ZDF and robot poses and a hand-eye transform in YML format.
+        full_resolution (bool): If True, use full resolution (no downsampling). Otherwise, voxel downsample.
+
+    Returns:
+        zivid.UnorganizedPointCloud: The stitched point cloud.
+
+    Raises:
+        FileNotFoundError: If required files (ZDFs, robot poses, or hand-eye calibration) are missing in the directory.
+        ValueError: If the number of ZDF files and robot pose files do not match.
+
+    """
+    zdf_files = list(directory.rglob("capture_*.zdf"))
+    pose_files = list(directory.rglob("robot_pose_*.yaml"))
+
+    if not zdf_files or not pose_files or not (directory / "hand_eye_transform.yaml").exists():
+        raise FileNotFoundError("Required files are missing in the directory.")
+
+    if len(zdf_files) != len(pose_files):
+        raise ValueError("Number of ZDF files and robot pose files do not match.")
+
+    hand_eye_transform = load_and_assert_affine_matrix(directory / "hand_eye_transform.yaml")
+
+    stitched_point_clouds_to_current_point_cloud_transform = np.eye(4)
+    unorganized_stitched_point_cloud_bucket = zivid.UnorganizedPointCloud()
+    registration_params = LocalPointCloudRegistrationParameters(max_correspondence_distance=2)
+
+    list_of_transforms = []
+    stitched_count = 1
+
+    for index, zdf in enumerate(zdf_files):
+        robot_pose = load_and_assert_affine_matrix(pose_files[index])
+        frame = zivid.Frame(zdf)
+
+        base_to_camera_transform = np.matmul(robot_pose, hand_eye_transform)
+        unorganized_point_cloud_in_base_frame = (
+            frame.point_cloud()
+            .to_unorganized_point_cloud()
+            .voxel_downsampled(voxel_size=1.0, min_points_per_voxel=2)
+            .transformed(base_to_camera_transform)
+        )
+
+        if index != 0:
+            local_point_cloud_registration_result = local_point_cloud_registration(
+                target=unorganized_stitched_point_cloud_bucket,
+                source=unorganized_point_cloud_in_base_frame,
+                parameters=registration_params,
+            )
+
+            # Aligning the new point cloud to the existing stitched result using local point cloud registration
+            if not local_point_cloud_registration_result.converged():
+                print("Registration did not converge...")
+                continue
+            stitched_point_clouds_to_current_point_cloud_transform = (
+                local_point_cloud_registration_result.transform().to_matrix()
+            )
+            unorganized_stitched_point_cloud_bucket.transform(
+                np.linalg.inv(stitched_point_clouds_to_current_point_cloud_transform)
+            )
+            stitched_count += 1
+
+            if full_resolution:
+                print(f"{stitched_count} out of {len(zdf_files)} Point clouds aligned.")
+            else:
+                print(f"{stitched_count} out of {len(zdf_files)} Point clouds stitched.")
+
+        list_of_transforms.append((base_to_camera_transform, stitched_point_clouds_to_current_point_cloud_transform))
+
+        unorganized_stitched_point_cloud_bucket.extend(unorganized_point_cloud_in_base_frame)
+
+    final_point_cloud = zivid.UnorganizedPointCloud()
+
+    if full_resolution:
+        for index, transform in enumerate(list_of_transforms):
+            frame = zivid.Frame(zdf_files[index])
+            frame.point_cloud().transform(transform[0])
+            final_point_cloud.transform(np.linalg.inv(transform[1]))
+            final_point_cloud.extend(frame.point_cloud().to_unorganized_point_cloud())
+
+            if index > 0:
+                print(f"{index + 1} out of {len(list_of_transforms)} point clouds stitched.")
+    else:
+        # Downsampling the final result for efficiency
+        final_point_cloud = unorganized_stitched_point_cloud_bucket.voxel_downsampled(
+            voxel_size=1.0, min_points_per_voxel=2
+        )
+
+    return final_point_cloud
+
+
+def _main() -> None:
+    args = _options()
+    app = zivid.Application()  # noqa: F841  # pylint: disable=unused-variable
+
+    # Ensure the dataset is extracted to %ProgramData%\Zivid:
+    # The folder must contain:
+    # StitchingPointClouds/
+    # ├── SmallObject/
+    # └── BigObject/
+    # Each folder must include:
+    #   - capture_*.zdf
+    #   - robot_pose_*.yaml
+    #   - hand_eye_transform.yaml
+    small_object_dir = get_sample_data_path() / "StitchingPointClouds" / "SmallObject"
+    big_object_dir = get_sample_data_path() / "StitchingPointClouds" / "BigObject"
+
+    if not small_object_dir.exists() or not big_object_dir.exists():
+        raise FileNotFoundError(
+            f"Missing dataset folders.\n"
+            f"Make sure 'StitchingPointClouds/SmallObject' and 'StitchingPointClouds/BigObject' exist at {get_sample_data_path()}.\n\n"
+            f"You can download the dataset (StitchingPointClouds.zip) from:\n"
+            f"https://support.zivid.com/en/latest/academy/applications/stitching.html"
+        )
+
+    print("Stitching small object...")
+    final_point_cloud_small = _stitch_point_clouds(small_object_dir, args.full_resolution)
+    display_pointcloud(
+        xyz=final_point_cloud_small.copy_data("xyz"),
+        rgb=final_point_cloud_small.copy_data("rgba_srgb")[:, 0:3],
+    )
+    file_name_small = Path(__file__).parent / "StitchedPointCloudSmallObject.ply"
+    export_unorganized_point_cloud(final_point_cloud_small, PLY(str(file_name_small), layout=PLY.Layout.unordered))
+
+    print("Stitching big object...")
+    final_point_cloud_big = _stitch_point_clouds(big_object_dir, args.full_resolution)
+    display_pointcloud(
+        xyz=final_point_cloud_big.copy_data("xyz"),
+        rgb=final_point_cloud_big.copy_data("rgba_srgb")[:, 0:3],
+    )
+    file_name_big = Path(__file__).parent / "StitchedPointCloudBigObject.ply"
+    export_unorganized_point_cloud(final_point_cloud_big, PLY(str(file_name_big), layout=PLY.Layout.unordered))
+
+
+if __name__ == "__main__":
+    _main()
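The new sample can also be driven programmatically. A minimal reuse sketch, assuming the dataset is extracted as described in the docstring and the sample module is importable from the working directory:

from pathlib import Path

import zivid
from zivid.experimental.point_cloud_export import export_unorganized_point_cloud
from zivid.experimental.point_cloud_export.file_format import PLY
from zividsamples.paths import get_sample_data_path

from stitch_using_robot_mounted_camera import _stitch_point_clouds

app = zivid.Application()  # keep the SDK alive while stitching
stitched = _stitch_point_clouds(
    get_sample_data_path() / "StitchingPointClouds" / "SmallObject", full_resolution=False
)
export_unorganized_point_cloud(stitched, PLY(str(Path("SmallObject.ply")), layout=PLY.Layout.unordered))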

0 commit comments
