Support set_local_pose in StereoDepthSensor (#112)
* Support local pose for StereoDepthSensor
* support set_local_pose in StereoDepthSensor
* minor fix
---------
Co-authored-by: Jiayuan-Gu <jigu@eng.ucsd.edu>
sensor = StereoDepthSensor('sensor', scene, sensor_config, mount=actor, pose=Pose()) # pose is relative to mount
- After mounting to an actor, the sensor will move along with it. Calling ``set_pose`` from either the sensor or the actor will move the two entities together.
+ After mounting to an actor, the sensor will move along with it. Calling ``sensor.set_local_pose`` adjusts the sensor's pose relative to the mounted actor.
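The mounted-sensor workflow above can be sketched as follows. This is a sketch only: it assumes an existing SAPIEN ``scene`` and ``actor``, and that ``StereoDepthSensor`` and its config are importable from ``sapien.sensor``; the example poses are arbitrary.

```python
import sapien.core as sapien
from sapien.sensor import StereoDepthSensor, StereoDepthSensorConfig

# Assumed to exist already: a SAPIEN scene and an actor to mount on.
sensor_config = StereoDepthSensorConfig()
sensor = StereoDepthSensor('sensor', scene, sensor_config,
                           mount=actor, pose=sapien.Pose())  # pose is relative to mount

actor.set_pose(sapien.Pose([0.5, 0.0, 0.2]))       # the mounted sensor follows the actor
sensor.set_local_pose(sapien.Pose([0.0, 0.0, 0.1]))  # adjust only the pose relative to the actor
```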
- ``sensor`` behaves very similar to a camera. You can ``set_pose`` and ``take_picture`` just like working with a camera. What's more, you can ``compute_depth`` and
- ``get_pointcloud`` on the sensor:
+ ``sensor`` behaves very similarly to a camera. You can ``set_pose`` (or ``set_local_pose`` when mounted) and ``take_picture`` just as you would with a camera.
+ In addition to these basic functions, you can ``compute_depth`` and ``get_pointcloud`` on the sensor:

- One important differences between camera and ``sensor`` is that while camera will only take picture of an RGB image, ``sensor`` will take another pair
+ One important difference between a camera and ``sensor`` is that while a camera captures only an RGB image, ``sensor`` also captures a pair
  of infrared images, which will be used to compute depth. After calling ``take_picture``, the RGB image and infrared images will be saved within ``sensor``.
  Calling ``sensor.get_rgb`` and ``sensor.get_ir`` will return the pictures in ndarray form. Let's take a look at them:
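The capture pipeline described above might look like this. A sketch, assuming the ``sensor`` from the earlier snippet already exists; the exact return shapes of the getters are not specified here.

```python
sensor.take_picture()                 # captures the RGB image and the infrared pair
rgb = sensor.get_rgb()                # RGB image as an ndarray
ir_images = sensor.get_ir()           # infrared images as ndarrays
sensor.compute_depth()                # stereo matching on the infrared pair
pointcloud = sensor.get_pointcloud()  # point cloud derived from the computed depth
```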
@@ -144,13 +144,14 @@ If depth is the only needed data and RGB data is not needed, you can specify ``i
  This can save the time for rendering RGB image.
- As mentioned above, the final depth map generated by ``StereoDepthSensor`` will be transformed into RGB camera frame. It will be of the same resolution and frame as of the RGB
- camera. This feature can be used to achieve fast downsampling/upsampling. All you need is to specify the ``rgb_resolution`` of the ``StereoDepthSensorConfig`` as the final
+ As mentioned above, the final depth map generated by ``StereoDepthSensor`` will be transformed into the RGB camera frame. It will have the same resolution and frame as the
+ RGB camera. This feature can be used to achieve fast downsampling/upsampling. All you need to do is specify the ``rgb_resolution`` of the ``StereoDepthSensorConfig`` as the final
  resolution you want to sample on, and ``StereoDepthSensor`` will do that for you on the GPU. In this way, you don't need to attach any slow CPU resampling function to it.
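The GPU resampling trick above amounts to a one-line config change. A sketch only: the ``(width, height)`` tuple order and the example resolution are assumptions, not confirmed by this text.

```python
sensor_config = StereoDepthSensorConfig()
# Request a smaller final resolution; the depth map is produced directly at
# this resolution in the RGB camera frame, resampled on the GPU.
sensor_config.rgb_resolution = (424, 240)  # assumed (width, height) order
```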
- In general, lowering ``rt_samples_per_pixel`` for renderer, ``ir_resolution`` of sensor config and ``max_disp`` of sensor config are all good ways to enhance computation
- speed. However, note that these changes might also have effect on the output depth map. You can freely adjust the parameters until you find the settings that satisfies
- your need.
+ In general, lowering ``rt_samples_per_pixel`` of the renderer, as well as ``ir_resolution`` and ``max_disp`` of the sensor config, are all good ways to speed up computation.
+ For example, try changing ``rt_samples_per_pixel`` in our script from ``32`` to ``8``. You might find the results are almost as good as before while
+ ``sensor.take_picture`` now runs much faster. However, note that these changes generally trade some quality for speed in the output depth map. You can freely
+ adjust the parameters until you find the settings that satisfy your needs.
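The speed/quality trade-offs above, as a hedged config sketch. The exact attribute path for setting ``rt_samples_per_pixel`` on the renderer is an assumption and may differ across SAPIEN versions; the example values are illustrative only.

```python
# Renderer: fewer ray-tracing samples per pixel (e.g. 32 -> 8) for faster renders.
renderer.rt_samples_per_pixel = 8         # hypothetical attribute path

# Sensor config: smaller IR images and a narrower disparity search range.
sensor_config.ir_resolution = (640, 360)  # assumed (width, height) order
sensor_config.max_disp = 96               # smaller disparity range -> faster matching
```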