Merge pull request nutonomy#11 from nutonomy/dev
Visualization scripts and minor refactoring
oscar-nutonomy authored Nov 7, 2018
2 parents 6f62e7b + 8278b84 commit 56487d3
Showing 11 changed files with 280 additions and 236 deletions.
9 changes: 6 additions & 3 deletions README.md
@@ -8,6 +8,7 @@ Welcome to the devkit of the [nuScenes](https://www.nuscenes.org) dataset.
- [Dataset download](#dataset-download)
- [Devkit setup](#devkit-setup)
- [Getting started](#getting-started)
+ - [Frequently asked questions](#frequently-asked-questions)
- [Setting up a new virtual environment](#setting-up-a-new-virtual-environment)

## Changelog
@@ -57,11 +58,13 @@ In case you want to avoid downloading and setting up the data, you can also take
Github](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorial.ipynb). To learn more about the dataset, go to [nuScenes.org](https://www.nuscenes.org) or take a look at the [database schema](https://github.com/nutonomy/nuscenes-devkit/blob/master/schema.md) and [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md).

## Frequently asked questions
- - *How come some objects visible in the camera images are not annotated?* In the [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md) we specify that an object should only be annotated if it is covered by at least one LIDAR point. This is done to obtain precise location annotations, speed up the annotation process and remove faraway objects.
+ 1) *How come some objects visible in the camera images are not annotated?* In the [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md) we specify that an object should only be annotated if it is covered by at least one LIDAR point. This is done to obtain precise location annotations, speed up the annotation process and remove faraway objects.

- - *I have found an incorrect annotation. Can you correct it?* Please make sure that the annotation is indeed incorrect according to the [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md). Then send an email to nuScenes@nutonomy.com.
+ 2) *I have found an incorrect annotation. Can you correct it?* Please make sure that the annotation is indeed incorrect according to the [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md). Then send an email to nuScenes@nutonomy.com.

- - *How can I use the RADAR data?* We recently [added features to parse and visualize RADAR point-clouds](https://github.com/nutonomy/nuscenes-devkit/pull/6). More visualization tools will follow.
+ 3) *How can I use the RADAR data?* We recently [added features to parse and visualize RADAR point-clouds](https://github.com/nutonomy/nuscenes-devkit/pull/6). More visualization tools will follow.

+ 4) *Why are there fewer sample point clouds than samples?* See [this issue](https://github.com/nutonomy/nuscenes-devkit/issues/8). Scenes 169 and 170 overlap, and going forward we will remove scene 169.
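Item 3 above mentions the new RADAR features; to make it concrete, here is a minimal sketch of pulling up a radar point-cloud with the devkit (not part of this diff). The `render_sample_data` method and the `'RADAR_FRONT'` channel name are assumptions about the devkit API and sensor layout of this era; the record-traversal calls are the same ones the export scripts below use.

    from nuscenes_utils.nuscenes import NuScenes

    nusc = NuScenes()

    # Grab the first sample of the first scene.
    scene = nusc.scene[0]
    sample = nusc.get('sample', scene['first_sample_token'])

    # 'RADAR_FRONT' is an assumed channel name; this is a sample_data token.
    radar_token = sample['data']['RADAR_FRONT']

    # Assumed devkit call: renders the radar point-cloud for this sample_data record.
    nusc.render_sample_data(radar_token)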

## Setting up a new virtual environment

24 changes: 24 additions & 0 deletions python-sdk/export/export_egoposes_on_map.py
@@ -0,0 +1,24 @@
"""
Exports an image for each map location with all the ego poses drawn on the map.
"""
import os

import matplotlib.pyplot as plt
import numpy as np

from nuscenes_utils.nuscenes import NuScenes

# Load NuScenes class
nusc = NuScenes()
locations = np.unique([l['location'] for l in nusc.log])

# Create output directory
out_dir = os.path.expanduser('~/nuscenes-visualization/map-poses')
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

for location in locations:
    nusc.render_egoposes_on_map(log_location=location)
    out_path = os.path.join(out_dir, 'egoposes-%s.png' % location)
    plt.tight_layout()
    plt.savefig(out_path)
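A hedged single-location variant of the script above, with its implicit assumption spelled out: `render_egoposes_on_map` must draw into the current pyplot figure for the later `plt.savefig` to capture it. The location name is a hypothetical example; real values come from `nusc.log` as above.

    import os

    import matplotlib.pyplot as plt

    from nuscenes_utils.nuscenes import NuScenes

    nusc = NuScenes()

    # Hypothetical example value; take real names from [l['location'] for l in nusc.log].
    location = 'singapore-onenorth'

    out_dir = os.path.expanduser('~/nuscenes-visualization/map-poses')
    os.makedirs(out_dir, exist_ok=True)

    # Assumes the render call draws on the current pyplot figure.
    nusc.render_egoposes_on_map(log_location=location)
    plt.savefig(os.path.join(out_dir, 'egoposes-%s.png' % location))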
@@ -1,3 +1,8 @@
"""
Export fused point clouds of a scene to a Wavefront OBJ file.
This point-cloud can be viewed in your favorite 3D rendering tool, e.g. Meshlab or Maya.
"""

import os
import os.path as osp
import argparse
@@ -10,22 +15,19 @@

from nuscenes_utils.data_classes import PointCloud
from nuscenes_utils.geometry_utils import view_points
- from nuscenes_utils.nuscenes import NuScenes, NuScenesExplorer
+ from nuscenes_utils.nuscenes import NuScenes


- def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_token: str, channel: str='LIDAR_TOP',
+ def export_scene_pointcloud(nusc: NuScenes, out_path: str, scene_token: str, channel: str='LIDAR_TOP',
min_dist: float=3.0, max_dist: float=30.0, verbose: bool=True) -> None:
"""
- Export fused point clouds of a scene to a Wavefront OBJ file.
- This point-cloud can be viewed in your favorite 3D rendering tool, e.g. Meshlab or Maya.
- :param explorer: NuScenesExplorer instance.
+ :param nusc: NuScenes instance.
:param out_path: Output path to write the point-cloud to.
:param scene_token: Unique identifier of scene to render.
:param channel: Channel to render.
:param min_dist: Minimum distance to ego vehicle below which points are dropped.
:param max_dist: Maximum distance to ego vehicle above which points are dropped.
:param verbose: Whether to print messages to stdout.
- :return: <None>
"""

# Check inputs.
@@ -35,15 +37,15 @@ def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_tok
assert channel in valid_channels, 'Input channel {} not valid.'.format(channel)

# Get records from DB.
- scene_rec = explorer.nusc.get('scene', scene_token)
- start_sample_rec = explorer.nusc.get('sample', scene_rec['first_sample_token'])
- sd_rec = explorer.nusc.get('sample_data', start_sample_rec['data'][channel])
+ scene_rec = nusc.get('scene', scene_token)
+ start_sample_rec = nusc.get('sample', scene_rec['first_sample_token'])
+ sd_rec = nusc.get('sample_data', start_sample_rec['data'][channel])

# Make list of frames
cur_sd_rec = sd_rec
sd_tokens = []
while cur_sd_rec['next'] != '':
- cur_sd_rec = explorer.nusc.get('sample_data', cur_sd_rec['next'])
+ cur_sd_rec = nusc.get('sample_data', cur_sd_rec['next'])
sd_tokens.append(cur_sd_rec['token'])

# Write point-cloud.
@@ -53,11 +55,11 @@ def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_tok
for sd_token in tqdm(sd_tokens):
if verbose:
print('Processing {}'.format(sd_rec['filename']))
- sc_rec = explorer.nusc.get('sample_data', sd_token)
- sample_rec = explorer.nusc.get('sample', sc_rec['sample_token'])
+ sc_rec = nusc.get('sample_data', sd_token)
+ sample_rec = nusc.get('sample', sc_rec['sample_token'])
lidar_token = sd_rec['token']
- lidar_rec = explorer.nusc.get('sample_data', lidar_token)
- pc = PointCloud.from_file(osp.join(explorer.nusc.dataroot, lidar_rec['filename']))
+ lidar_rec = nusc.get('sample_data', lidar_token)
+ pc = PointCloud.from_file(osp.join(nusc.dataroot, lidar_rec['filename']))

# Get point cloud colors.
coloring = np.ones((3, pc.points.shape[1])) * -1
@@ -68,7 +70,7 @@ def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_tok

# Points live in their own reference frame. So they need to be transformed via global to the image plane.
# First step: transform the point cloud to the ego vehicle frame for the timestamp of the sweep.
- cs_record = explorer.nusc.get('calibrated_sensor', lidar_rec['calibrated_sensor_token'])
+ cs_record = nusc.get('calibrated_sensor', lidar_rec['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))

@@ -81,7 +83,7 @@ def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_tok
print('Distance filter: Keeping %d of %d points...' % (keep.sum(), len(keep)))

# Second step: transform to the global frame.
- poserecord = explorer.nusc.get('ego_pose', lidar_rec['ego_pose_token'])
+ poserecord = nusc.get('ego_pose', lidar_rec['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))

@@ -94,13 +96,14 @@ def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_tok
f.write("v {v[0]:.8f} {v[1]:.8f} {v[2]:.8f} {c[0]:.4f} {c[1]:.4f} {c[2]:.4f}\n".format(v=v, c=c/255.0))

if not sd_rec['next'] == "":
- sd_rec = explorer.nusc.get('sample_data', sd_rec['next'])
+ sd_rec = nusc.get('sample_data', sd_rec['next'])


- def pointcloud_color_from_image(nusc, pointsensor_token: str, camera_token: str) -> Tuple[np.array, np.array]:
+ def pointcloud_color_from_image(nusc: NuScenes, pointsensor_token: str, camera_token: str) -> Tuple[np.array, np.array]:
"""
Given a point sensor (lidar/radar) token and camera sample_data token, load point-cloud and map it to the image
plane, then retrieve the colors of the closest image pixels.
+ :param nusc: NuScenes instance.
:param pointsensor_token: Lidar/radar sample_data token.
:param camera_token: Camera sample data token.
:return (coloring <np.float: 3, n>, mask <np.bool: m>). Returns the colors for n points that reproject into the
@@ -165,10 +168,10 @@ def pointcloud_color_from_image(nusc, pointsensor_token: str, camera_token: str)
parser = argparse.ArgumentParser(description='Export a scene in Wavefront point cloud format.',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--scene', default='scene-0061', type=str, help='Name of a scene, e.g. scene-0061')
- parser.add_argument('--out_dir', default='', type=str, help='Output folder')
+ parser.add_argument('--out_dir', default='~/nuscenes-visualization/pointclouds', type=str, help='Output folder')
parser.add_argument('--verbose', default=0, type=int, help='Whether to print outputs to stdout')
args = parser.parse_args()
- out_dir = args.out_dir
+ out_dir = os.path.expanduser(args.out_dir)
scene_name = args.scene
verbose = bool(args.verbose)

@@ -188,4 +191,4 @@ def pointcloud_color_from_image(nusc, pointsensor_token: str, camera_token: str)
scene_tokens = [s['token'] for s in nusc.scene if s['name'] == scene_name]
assert len(scene_tokens) == 1, 'Error: Invalid scene %s' % scene_name

- export_scene_pointcloud(nusc.explorer, out_path, scene_tokens[0], channel='LIDAR_TOP', verbose=verbose)
+ export_scene_pointcloud(nusc, out_path, scene_tokens[0], channel='LIDAR_TOP', verbose=verbose)
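The two frame changes in `export_scene_pointcloud` (sensor to ego at the sweep timestamp, then ego to global) are the same rotate-then-translate pattern applied twice. Below is a self-contained sketch of that pattern; the records here are dummies standing in for real `calibrated_sensor` and `ego_pose` records, which carry the same `rotation` (w, x, y, z quaternion) and `translation` keys.

    import numpy as np
    from pyquaternion import Quaternion


    def rotate_translate(points, rotation, translation):
        """Change the frame of points <np.float: 3, n>: rotate, then translate."""
        points = np.dot(Quaternion(rotation).rotation_matrix, points)
        return points + np.array(translation).reshape((3, 1))


    # Dummy stand-ins for nusc.get('calibrated_sensor', ...) and nusc.get('ego_pose', ...).
    cs_record = {'rotation': [1.0, 0.0, 0.0, 0.0], 'translation': [0.9, 0.0, 1.8]}
    pose_record = {'rotation': [1.0, 0.0, 0.0, 0.0], 'translation': [400.0, 1100.0, 0.0]}

    points = np.zeros((3, 5))  # stand-in for pc.points[:3, :]

    # First step: sensor frame -> ego vehicle frame at the sweep timestamp.
    points = rotate_translate(points, cs_record['rotation'], cs_record['translation'])

    # Second step: ego vehicle frame -> global frame.
    points = rotate_translate(points, pose_record['rotation'], pose_record['translation'])

Each vertex the script finally writes is an OBJ `v x y z r g b` line, a vertex-color extension of the Wavefront format that Meshlab reads directly.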
23 changes: 23 additions & 0 deletions python-sdk/export/export_scene_videos.py
@@ -0,0 +1,23 @@
"""
Exports a video of each scene (with annotations) to disk.
"""
import os

from nuscenes_utils.nuscenes import NuScenes

# Load NuScenes class
nusc = NuScenes()
scene_tokens = [s['token'] for s in nusc.scene]

# Create output directory
out_dir = os.path.expanduser('~/nuscenes-visualization/scene-videos')
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

# Write videos to disk
for scene_token in scene_tokens:
    scene = nusc.get('scene', scene_token)
    print('Writing scene %s' % scene['name'])
    out_path = os.path.join(out_dir, scene['name']) + '.avi'
    if not os.path.exists(out_path):
        nusc.render_scene(scene['token'], out_path=out_path)
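Rendering every scene takes a while, so a hedged one-scene smoke test can save time; `scene-0061` here is just the example name the OBJ export script in this same commit defaults to.

    import os

    from nuscenes_utils.nuscenes import NuScenes

    nusc = NuScenes()

    # 'scene-0061' is the default example used by the OBJ export script above.
    scene = next(s for s in nusc.scene if s['name'] == 'scene-0061')

    out_dir = os.path.expanduser('~/nuscenes-visualization/scene-videos')
    os.makedirs(out_dir, exist_ok=True)

    # Same call the loop above makes, for a single scene.
    nusc.render_scene(scene['token'], out_path=os.path.join(out_dir, scene['name'] + '.avi'))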