From 08e52c0ae89e4749cbb208e7a85f15d0658290cf Mon Sep 17 00:00:00 2001
From: holger-nutonomy <39502217+holger-nutonomy@users.noreply.github.com>
Date: Fri, 3 May 2019 09:50:19 +0800
Subject: [PATCH] Documentation overhaul (#138)

* Commented out statements that are likely to crash in notebooks
* Limited the length of the location string
* Added units to translations and sizes, minor reformatting
* Refined the schema introduction and pointed to the tutorial
* Added a check that result_path exists
* Reworded the results file instructions
* Clarified that the color corresponds to the depth, not the height
* Added FAQs
* Added a "What's next for nuScenes?" section
* Restructured the readmes
* Overhauled the installation instructions
---
 README.md                                    |  25 ++--
 faqs.md                                      |  43 +++++++
 python-sdk/nuscenes/eval/detection/README.md |   7 +-
 .../nuscenes/eval/detection/evaluate.py      |   3 +
 python-sdk/nuscenes/nuscenes.py              |   6 +-
 python-sdk/tutorial.ipynb                    |  26 ++--
 schema.md                                    |  42 ++++---
 setup/installation.md                        | 116 +++++++++---------
 8 files changed, 165 insertions(+), 103 deletions(-)
 create mode 100644 faqs.md

diff --git a/README.md b/README.md
index d9d4cbae..5eb92f51 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,5 @@
 # nuScenes devkit
 Welcome to the devkit of the [nuScenes](https://www.nuscenes.org) dataset.
-
 ![](https://www.nuscenes.org/public/images/road.jpg)
 
 ## Overview
@@ -8,13 +7,14 @@ Welcome to the devkit of the [nuScenes](https://www.nuscenes.org) dataset.
 - [Dataset download](#dataset-download)
 - [Devkit setup](#devkit-setup)
 - [Tutorial](#tutorial)
+- [Frequently asked questions](#frequently-asked-questions)
 - [Object detection task](#object-detection-task)
-- [Backward compatibility](#backward-compatibility)
 - [Citation](#citation)
 
 ## Changelog
+- Apr. 30, 2019: Devkit v1.0.1: loosened pip requirements, refined the detection challenge, added a script to export 2d annotations.
 - Mar. 26, 2019: Full dataset, paper, & devkit v1.0.0 released. Support dropped for teaser data.
-- Dec. 20, 2018: Initial evaluation code released. Devkit folders restructured.
+- Dec. 20, 2018: Initial evaluation code released. Devkit folders restructured, which breaks backward compatibility.
 - Nov. 21, 2018: RADAR filtering and multi-sweep aggregation.
 - Oct. 4, 2018: Code to parse RADAR data released.
 - Sep. 12, 2018: Devkit for teaser dataset released.
 
 Eventually you should have the following folder structure:
 If you want to use another folder, specify the `dataroot` parameter of the NuScenes class (see tutorial).
 
 ## Devkit setup
-The devkit is tested for Python 3.6 and Python 3.7. To install python, please check [here](https://github.com/nutonomy/nuscenes-devkit/blob/master/installation.md#install-python).
+The devkit is tested for Python 3.6 and Python 3.7.
+To install Python, please check [here](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md#install-python).
 
-Our devkit is available and can be installed via pip:
+Our devkit is available and can be installed via [pip](https://pip.pypa.io/en/stable/installing/):
 ```
 pip install nuscenes-devkit
 ```
-If you don't have pip, please check [here](https://pip.pypa.io/en/stable/installing/) to install pip.
-
 For an advanced installation, see [installation](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md) for detailed instructions.
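+
+As a quick sanity check of the installation, a minimal sketch like the following should run without errors (it assumes you extracted the v1.0-mini split to `/data/sets/nuscenes`; adjust `version` and `dataroot` to your setup):
+```
+from nuscenes.nuscenes import NuScenes
+
+# Assumption: the v1.0-mini split lives at /data/sets/nuscenes.
+nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)
+nusc.list_scenes()
+```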
 ## Tutorial
@@ -52,17 +51,15 @@ To get started with the nuScenes devkit, please run the tutorial as an IPython n
 jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorial.ipynb
 ```
 In case you want to avoid downloading and setting up the data, you can also take a look at the [rendered notebook on nuScenes.org](https://www.nuscenes.org/tutorial).
-To learn more about the dataset, go to [nuScenes.org](https://www.nuscenes.org) or take a look at the [database schema](https://github.com/nutonomy/nuscenes-devkit/blob/master/schema.md) and [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md). The [nuScenes paper](https://arxiv.org/abs/1903.11027) provides detailed analysis of the dataset.
+To learn more about the dataset, go to [nuScenes.org](https://www.nuscenes.org) or take a look at the [database schema](https://github.com/nutonomy/nuscenes-devkit/blob/master/schema.md) and [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md).
+The [nuScenes paper](https://arxiv.org/abs/1903.11027) provides a detailed analysis of the dataset.
+
+## Frequently asked questions
+See [FAQs](https://github.com/nutonomy/nuscenes-devkit/blob/master/faqs.md).
 
 ## Object detection task
 For instructions related to the object detection task (results format, classes and evaluation metrics), please refer to [this readme](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/eval/detection/README.md).
 
-## Backward compatibility
-- Mar. 26, 2019: With the full dataset release we drop support for the code and data of the teaser release. Several changes to the map table and map files break backward compatibility.
-- Dec. 20, 2018: We restructured the nuscenes-devkit code, which breaks backward compatibility.
-  The new structure has a top-level package `nuscenes` which contains packages `eval`, `export` and `utils`.
-  Therefore, existing imports from `nuscenes_utils` should be replaced by `nuscenes.nuscenes`.
-
 ## Citation
 Please use the following citation when referencing [nuScenes](https://arxiv.org/abs/1903.11027):
 ```
diff --git a/faqs.md b/faqs.md
new file mode 100644
index 00000000..e87d25e2
--- /dev/null
+++ b/faqs.md
@@ -0,0 +1,43 @@
+# Frequently asked questions
+
+On this page we try to answer questions frequently asked by our users.
+
+- How can I get in contact?
+  - For questions about commercialization, collaboration and marketing, please contact [nuScenes@nuTonomy.com](mailto:nuScenes@nuTonomy.com).
+  - For issues and bugs *with the devkit*, file an issue on [GitHub](https://github.com/nutonomy/nuscenes-devkit/issues).
+  - For any other questions, please post in the [nuScenes user forum](https://forum.nuscenes.org/).
+
+- How can I get started?
+  - Read the [dataset description](https://www.nuscenes.org/overview).
+  - [Explore](https://www.nuscenes.org/explore/scene-0011/0) the lidar viewer and videos.
+  - Read the [tutorial](https://www.nuscenes.org/tutorial).
+  - Read our [publications](https://www.nuscenes.org/publications).
+  - [Download](https://www.nuscenes.org/download) the dataset.
+  - Get the [nuscenes-devkit code](https://github.com/nutonomy/nuscenes-devkit).
+  - Take a look at the [experimental scripts](https://github.com/nutonomy/nuscenes-devkit/tree/master/python-sdk/nuscenes/scripts).
+
+- Can I use nuScenes for free?
+  - For non-commercial use [nuScenes is free](https://www.nuscenes.org/terms-of-use), e.g. for educational use and some research use.
+  - For commercial use, please contact [nuScenes@nuTonomy.com](mailto:nuScenes@nuTonomy.com). To allow startups to use our dataset, we adjust the pricing terms to the use case and company size.
+
+- How can I participate in the nuScenes challenges?
+  - See the overview site for the [object detection challenge](https://www.nuscenes.org/object-detection).
+  - See the overview site for the [tracking challenge](https://www.nuscenes.org/tracking).
+
+- What's next for nuScenes?
+  - A map expansion kit with 20+ different semantic layers (e.g. lanes, stop lines, traffic lights).
+  - Raw IMU & GPS data.
+  - Object detection, tracking and other challenges (see above).
+
+- How can I get more information on the sensors used?
+  - Read the [Data collection](https://www.nuscenes.org/data-collection) page.
+  - Note that we do not disclose the vendor name and model to avoid endorsing a particular vendor. All sensors are publicly available from third-party vendors.
+  - For more information, please contact [nuScenes@nuTonomy.com](mailto:nuScenes@nuTonomy.com).
+
+- Can I use nuScenes for 2d object detection?
+  - Objects in nuScenes are annotated in 3d.
+  - You can use [this script](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/scripts/export_2d_annotations_as_json.py) to project them to 2d, but note that such 2d boxes are generally not tight.
+
+- How can I share my new dataset / paper on autonomous driving?
+  - Please contact [nuScenes@nuTonomy.com](mailto:nuScenes@nuTonomy.com) to discuss possible collaborations and listing your work on the [Publications](https://www.nuscenes.org/publications) page.
+  - To discuss it with the community, please post in the [nuScenes user forum](https://forum.nuscenes.org/).
\ No newline at end of file
diff --git a/python-sdk/nuscenes/eval/detection/README.md b/python-sdk/nuscenes/eval/detection/README.md
index 779ba5aa..81300428 100644
--- a/python-sdk/nuscenes/eval/detection/README.md
+++ b/python-sdk/nuscenes/eval/detection/README.md
@@ -19,13 +19,13 @@ as well as estimating a set of attributes and the current velocity vector.
 ## Challenges
 ### Workshop on Autonomous Driving, CVPR 2019
 The first nuScenes detection challenge will be held at CVPR 2019.
-Submission window opens in April 2019 and closes June 15th.
+Submission window opens in May 2019 and closes June 15th.
 Results and winners will be announced at the Workshop on Autonomous Driving ([WAD](https://sites.google.com/view/wad2019)) at [CVPR 2019](http://cvpr2019.thecvf.com/).
 
 ## Submission rules
 * We release annotations for the train and val sets, but not for the test set.
 * We release sensor data for the train, val and test sets.
-* Users make predictions on the test set and submit the results to our eval. server, which returns the metrics listed below.
+* Users make predictions on the test set and submit the results to our evaluation server, which returns the metrics listed below.
 * We do not use strata. Instead, we filter annotations and predictions beyond class-specific distances.
 * The maximum time window of past sensor data and ego poses that may be used at inference time is approximately 0.5s (at most 6 camera images, 6 radar sweeps and 10 lidar sweeps). At training time there are no restrictions.
 * Users must limit the number of submitted boxes per sample to 500.
@@ -36,8 +36,7 @@ Results and winners will be announced at the Workshop on Autonomous Driving ([WA
 
 We define a standardized detection result format that serves as an input to the evaluation code.
 The detection results for a particular evaluation set (train/val/test) are stored in a single JSON file.
 For the train and val sets the evaluation can be performed by the user on their local machine.
-For the test set the user needs to zip the JSON results file and submit it to the official evaluation server.
-The ZIP file and the JSON file must have the exact same name, except for the file extension.
+For the test set the user needs to zip the single JSON result file and submit it to the official evaluation server.
 The JSON file includes metadata `meta` on the type of inputs used for this method.
 Furthermore, it includes a dictionary `results` that maps each sample_token to a list of `sample_result` entries.
 Each `sample_token` from the current evaluation set must be included in `results`, although the list of predictions may be empty if no object is detected.
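+
+As a rough sketch of this layout, a results file could be written as follows. The `meta` flags and the `sample_result` fields shown here are illustrative assumptions (see the format definition in this readme for the authoritative field list), and `<sample_token>` is a placeholder for a real token from the evaluation set:
+```
+import json
+
+submission = {
+    'meta': {
+        'use_camera': False,  # assumed input-type flags
+        'use_lidar': True,
+        'use_radar': False,
+        'use_map': False,
+        'use_external': False,
+    },
+    'results': {
+        '<sample_token>': [{
+            'sample_token': '<sample_token>',
+            'translation': [971.8, 1713.7, -1.0],  # box center in meters (global frame)
+            'size': [0.6, 0.7, 1.7],               # width, length, height in meters
+            'rotation': [1.0, 0.0, 0.0, 0.0],      # quaternion w, x, y, z
+            'velocity': [0.1, 0.2],                # velocity in m/s
+            'detection_name': 'pedestrian',
+            'detection_score': 0.8,
+            'attribute_name': 'pedestrian.moving',
+        }],
+    },
+}
+
+with open('results.json', 'w') as f:
+    json.dump(submission, f)
+```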
diff --git a/python-sdk/nuscenes/eval/detection/evaluate.py b/python-sdk/nuscenes/eval/detection/evaluate.py
index cdb0c942..b8a03eb8 100644
--- a/python-sdk/nuscenes/eval/detection/evaluate.py
+++ b/python-sdk/nuscenes/eval/detection/evaluate.py
@@ -63,6 +63,9 @@ def __init__(self,
         self.verbose = verbose
         self.cfg = config
 
+        # Check that the result file exists.
+        assert os.path.exists(result_path), 'Error: The result file does not exist!'
+
         # Make dirs.
         self.plot_dir = os.path.join(self.output_dir, 'plots')
         if not os.path.isdir(self.output_dir):
diff --git a/python-sdk/nuscenes/nuscenes.py b/python-sdk/nuscenes/nuscenes.py
index 29e8b21c..74eeea02 100644
--- a/python-sdk/nuscenes/nuscenes.py
+++ b/python-sdk/nuscenes/nuscenes.py
@@ -488,6 +488,8 @@ def ann_count(record):
             desc = record['name'] + ', ' + record['description']
             if len(desc) > 55:
                 desc = desc[:51] + "..."
+            if len(location) > 18:
+                location = location[:18]
 
             print('{:16} [{}] {:4.0f}s, {}, #anns:{}'.format(
                 desc,
                 datetime.utcfromtimestamp(start_time).strftime('%y-%m-%d %H:%M:%S'),
@@ -550,8 +552,8 @@ def map_pointcloud_to_image(self, pointsensor_token: str, camera_token: str) ->
         # Grab the depths (camera frame z axis points away from the camera).
         depths = pc.points[2, :]
 
-        # Set the height to be the coloring.
-        coloring = pc.points[2, :]
+        # Use the depth as the coloring.
+        coloring = depths
 
         # Take the actual picture (matrix multiplication with camera-matrix + renormalization).
         points = view_points(pc.points[:3, :], np.array(cs_record['camera_intrinsic']), normalize=True)
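
The depth-based coloring above is what the devkit's image rendering helper displays. A usage sketch (assuming a `NuScenes` instance `nusc` on the mini split, as in the tutorial):
```
# nusc = NuScenes(...) as instantiated in the tutorial.
# Project the top lidar points into the front camera image; colors encode depth.
my_sample = nusc.sample[10]
nusc.render_pointcloud_in_image(my_sample['token'],
                                pointsensor_channel='LIDAR_TOP',
                                camera_channel='CAM_FRONT')
```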
diff --git a/python-sdk/tutorial.ipynb b/python-sdk/tutorial.ipynb
index 78a1845d..553aadc9 100644
--- a/python-sdk/tutorial.ipynb
+++ b/python-sdk/tutorial.ipynb
@@ -530,7 +530,9 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {},
+   "metadata": {
+    "scrolled": true
+   },
    "outputs": [],
    "source": [
     "nusc.sample_data[10]"
@@ -1107,7 +1109,7 @@
    "source": [
     "my_sample = nusc.sample[20]\n",
     "\n",
-    "# The rendering command below is commented out because it tends to crash in notebooks\n",
+    "# The rendering command below is commented out because it may crash in notebooks\n",
     "# nusc.render_sample(my_sample['token'])"
    ]
   },
@@ -1170,7 +1172,16 @@
    "\n",
    "NOTE: These methods use OpenCV for rendering, which doesn't always play nice with IPython Notebooks. If you experience any issues please run these lines from the command line. \n",
    "\n",
-    "Let's grab scene 0043, it is nice and dense."
+    "Let's grab scene 0061; it is nice and dense."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "my_scene_token = nusc.field2token('scene', 'name', 'scene-0061')[0]"
    ]
   },
   {
@@ -1179,8 +1190,8 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "my_scene_token = nusc.field2token('scene', 'name', 'scene-0061')[0]\n",
-    "nusc.render_scene_channel(my_scene_token, 'CAM_FRONT')"
+    "# The rendering command below is commented out because it may crash in notebooks\n",
+    "# nusc.render_scene_channel(my_scene_token, 'CAM_FRONT')"
    ]
   },
   {
@@ -1197,7 +1208,8 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "nusc.render_scene(my_scene_token)"
+    "# The rendering command below is commented out because it may crash in notebooks\n",
+    "# nusc.render_scene(my_scene_token)"
    ]
   },
   {
@@ -1233,7 +1245,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.2"
+   "version": "3.7.3"
   }
  },
 "nbformat": 4,
diff --git a/schema.md b/schema.md
index 5b485d3a..30c147df 100644
--- a/schema.md
+++ b/schema.md
@@ -1,5 +1,13 @@
 Database schema
 ==========
+This document describes the database schema used in nuScenes.
+All annotations and metadata (including calibration, maps, vehicle coordinates etc.) are stored in a relational database.
+The database tables are listed below.
+Every row can be identified by its unique primary key `token`.
+Foreign keys such as `sample_token` may be used to link to the `token` of the table `sample`.
+Please refer to the [tutorial](https://www.nuscenes.org/tutorial) for an introduction to the most important database tables.
+
+
 category
 ---------
@@ -15,7 +23,7 @@ category {
 attribute
 ---------
-An attribute is a property of an instance that can change while the category remains the same.
+An attribute is a property of an instance that can change while the category remains the same.
+Example: a vehicle being parked/stopped/moving, and whether or not a bicycle has a rider.
 ```
 attribute {
@@ -38,8 +46,9 @@ visibility {
 instance
 ---------
-An object instance, e.g. particular vehicle. This table is an enumeration of all object
-instances we observed. Note that instances are not tracked across scenes.
+An object instance, e.g. a particular vehicle.
+This table is an enumeration of all object instances we observed.
+Note that instances are not tracked across scenes.
 ```
 instance {
    "token": -- Unique record identifier.
@@ -63,15 +72,16 @@ sensor {
 calibrated_sensor
 ---------
-Definition of a particular sensor (lidar/radar/camera) as calibrated on a particular vehicle. All extrinsic parameters are
-given with respect to the ego vehicle body frame.
+Definition of a particular sensor (lidar/radar/camera) as calibrated on a particular vehicle.
+All extrinsic parameters are given with respect to the ego vehicle body frame.
+All camera images come undistorted and rectified.
 ```
 calibrated_sensor {
    "token": -- Unique record identifier.
    "sensor_token": -- Foreign key pointing to the sensor type.
-   "translation": [3] -- Coordinate system origin: x, y, z.
+   "translation": [3] -- Coordinate system origin in meters: x, y, z.
    "rotation": [4] -- Coordinate system orientation as quaternion: w, x, y, z.
-   "camera_intrinsic": [3, 3] -- Intrinsic camera calibration + rectification matrix. Empty for sensors that are not cameras.
+   "camera_intrinsic": [3, 3] -- Intrinsic camera calibration. Empty for sensors that are not cameras.
 }
 ```
@@ -81,7 +91,7 @@
 ego_pose
 ---------
 Ego vehicle pose at a particular timestamp.
 Given with respect to the global coordinate system.
 ```
 ego_pose {
    "token": -- Unique record identifier.
-   "translation": [3] -- Coordinate system origin: x, y, z.
+   "translation": [3] -- Coordinate system origin in meters: x, y, z.
    "rotation": [4] -- Coordinate system orientation as quaternion: w, x, y, z.
    "timestamp": -- Unix timestamp.
 }
@@ -132,9 +142,9 @@ sample {
 sample_data
 ---------
-A sensor data e.g. image, point cloud or radar return. For sample_data with is_key_frame=True, the time-stamps
-should be very close to the sample it points to. For non key-frames the sample_data points to the
-sample that follows closest in time.
+Sensor data, e.g. an image, point cloud or radar return.
+For sample_data with is_key_frame=True, the timestamps should be very close to the sample it points to.
+For non-key-frames the sample_data points to the sample that follows closest in time.
 ```
 sample_data {
    "token": -- Unique record identifier.
@@ -154,8 +164,8 @@ sample_data {
 sample_annotation
 ---------
-A bounding box defining the position of an object seen in a sample. All location data is given with respect
-to the global coordinate system.
+A bounding box defining the position of an object seen in a sample.
+All location data is given with respect to the global coordinate system.
 ```
 sample_annotation {
    "token": -- Unique record identifier.
@@ -163,8 +173,8 @@ sample_annotation {
    "instance_token": -- Foreign key. Which object instance is this annotating. An instance can have multiple annotations over time.
    "attribute_tokens": [n] -- Foreign keys. List of attributes for this annotation. Attributes can change over time, so they belong here, not in the object table.
    "visibility_token": -- Foreign key. Visibility may also change over time. If no visibility is annotated, the token is an empty string.
-   "translation": [3] -- Bounding box location as center_x, center_y, center_z.
-   "size": [3] -- Bounding box size as width, length, height.
+   "translation": [3] -- Bounding box location in meters as center_x, center_y, center_z.
+   "size": [3] -- Bounding box size in meters as width, length, height.
    "rotation": [4] -- Bounding box orientation as quaternion: w, x, y, z.
    "num_lidar_pts": -- Number of lidar points in this box. Points are counted during the lidar sweep identified with this sample.
    "num_radar_pts": -- Number of radar points in this box. Points are counted during the radar sweep identified with this sample. This number is summed across all radar sensors without any invalid point filtering.
@@ -180,7 +190,7 @@
 Map data that is stored as binary semantic masks from a top-down view.
 map {
    "token": -- Unique record identifier.
    "log_tokens": [n] -- Foreign keys.
-   "category": -- Map category, currently only semantic_prior for drivable surface and sidewalk
+   "category": -- Map category, currently only semantic_prior for drivable surface and sidewalk.
    "filename": -- Relative path to the file with the map mask.
 }
 ```
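+
+As a sketch of how the foreign keys in the tables above can be traversed with the devkit (assuming a `NuScenes` instance `nusc`, as in the tutorial):
+```
+# nusc = NuScenes(...) as instantiated in the tutorial.
+# Resolve the foreign keys of an annotation via nusc.get(table_name, token).
+ann = nusc.sample_annotation[0]
+sample = nusc.get('sample', ann['sample_token'])
+instance = nusc.get('instance', ann['instance_token'])
+category = nusc.get('category', instance['category_token'])
+print(category['name'])
+```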
diff --git a/setup/installation.md b/setup/installation.md
index ca67ba7d..ae8daaa9 100644
--- a/setup/installation.md
+++ b/setup/installation.md
@@ -1,24 +1,24 @@
-## Advanced Installation
+# Advanced Installation
 We provide step-by-step instructions to install our devkit.
 
-- [Source Download](#source-download)
+- [Download](#download)
 - [Install Python](#install-python)
-- [Setup a CONDA environment](#setup-a-conda-environment)
+- [Setup a Conda environment](#setup-a-conda-environment)
+- [Setup a virtualenvwrapper environment](#setup-a-virtualenvwrapper-environment)
 - [Setup PYTHONPATH](#setup-pythonpath)
 - [Install required packages](#install-required-packages)
-- [Verify Install](#verify-install)
-- [Setup NUSCENES env variable](#setup-nuscens-env-variable)
-- [(Alternative) Setup a virtualenv environment](#(Alternative)-Setup-a-virtualenv-environment)
+- [Verify install](#verify-install)
+- [Setup NUSCENES environment variable](#setup-nuscenes-environment-variable)
 
-### Source Download
+## Download
 Download the devkit to your home directory using:
 ```
 cd && git clone https://github.com/nutonomy/nuscenes-devkit.git
 ```
-### Install Python
+## Install Python
 
-The devkit is tested for Python 3.6 onwards, but we recommend to use Python 3.7. For Ubuntu: If the right Python version isn't already installed on your system, install it by doing
+The devkit is tested for Python 3.6 onwards, but we recommend using Python 3.7.
+For Ubuntu: if the right Python version is not already installed on your system, install it by running:
 ```
 sudo apt install python-pip
 sudo add-apt-repository ppa:deadsnakes/ppa
 sudo apt-get update
 sudo apt-get install python3.7
 sudo apt-get install python3.7-dev
 ```
-For Mac OS: download from `https://www.python.org/downloads/mac-osx/` and install.
+For Mac OS, download and install Python from `https://www.python.org/downloads/mac-osx/`.
 
-### Setup a CONDA environment
-It is then recommended to install the devkit in a new virtual environment, follow instructions below to setup your dev environment. Here we include instructions for [CONDA](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html). If you don't want to install conda, an alternative would be to use virtualenvwrapper, if you prefer this, you can look at [these instructions](#alternative-setting-up-a-new-virtual-environment).
+## Setup a Conda environment
+Next we set up a Conda environment.
+An alternative to Conda is to use virtualenvwrapper, as described [below](#setup-a-virtualenvwrapper-environment).
 
-##### Install miniconda
-https://conda.io/en/latest/miniconda.html
+#### Install Miniconda
+See the [official Miniconda page](https://conda.io/en/latest/miniconda.html).
 
-##### Setup a CONDA environment
-We create a new vitrual environment named `nuscenes`.
+#### Setup a Conda environment
+We create a new Conda environment named `nuscenes`.
 ```
 conda create --name nuscenes python=3.7
 ```
-##### Activating the virtual environment
+#### Activate the environment
 If you are inside the virtual environment, your shell prompt should look like: `(nuscenes) user@computer:~$`
 If that is not the case, you can enable the virtual environment using:
 ```
@@ -50,64 +51,33 @@ To deactivate the virtual environment, use:
 ```
 source deactivate
 ```
-### Setup PYTHONPATH
-Add the `python-sdk` directory to your `PYTHONPATH` environmental variable, by adding the
-following to your `~/.bashrc` (For virtualenvwrapper, you could add it in `~/.virtualenvs/nuscenes/bin/postactivate`):
-```
-export PYTHONPATH="${PYTHONPATH}:$HOME/nuscenes-devkit/python-sdk"
-```
-
-### Install required packages
-
-To install the required packages, run the following command in your favourite virtual environment:
-```
-pip install -r nuscenes-devkit/setup/requirements.txt
-```
-
-### Verify install
-To verify your environment run `python -m unittest` in the `python-sdk` folder.
-You can also run `assert_download.py` in the `nuscenes/scripts` folder.
-
-### Setup NUSCENES environment variable
-
-Finally, set NUSCENES env. variable that points to your data folder. In the script below, our data is stored in `/data/sets/nuscenes`.
-```
-export NUSCENES="/data/sets/nuscenes"
-```
-
-That's it you should be good to go!
-----
-### (Alternative) Setup a virtualenv environment
-Another option for setting up a new virtual environment is to use virtualenvwrapper. Follow instructions below to setup your dev environment.
+## Setup a virtualenvwrapper environment
+Another option for setting up a new virtual environment is to use virtualenvwrapper.
+**Skip these steps if you have already set up a Conda environment**.
+Follow these instructions to set up your environment.
 
-##### Install virtualenvwrapper
+#### Install virtualenvwrapper
+To install virtualenvwrapper, run:
 ```
 pip install virtualenvwrapper
 ```
-Add these two lines to `~/.bashrc` (`~/.bash_profile` on MAC OS) to set the location where the virtual environments
-should live and the location of the script installed with this package:
+Add the following two lines to `~/.bashrc` (`~/.bash_profile` on Mac OS) to set the location where the virtual environments should live and the location of the script installed with this package:
 ```
 export WORKON_HOME=$HOME/.virtualenvs
 source [VIRTUAL_ENV_LOCATION]
 ```
-Replace `[VIRTUAL_ENV_LOCATION]` with either `/usr/local/bin/virtualenvwrapper.sh` or `~/.local/bin/virtualenvwrapper.sh`
-depending on where it is installed on your system.
-
+Replace `[VIRTUAL_ENV_LOCATION]` with either `/usr/local/bin/virtualenvwrapper.sh` or `~/.local/bin/virtualenvwrapper.sh` depending on where it is installed on your system.
+After editing it, reload the shell startup file by running e.g. `source ~/.bashrc`.
 
-##### Create the virtual environment
+#### Create the virtual environment
 We create a new virtual environment named `nuscenes`.
 ```
 mkvirtualenv nuscenes --python=python3.7
 ```
-or
-```
-mkvirtualenv nuscenes --python [PYTHON_BINARIES]
-```
-PYTHON_BINARIES are typically at either `/usr/local/bin/python3.7` or `/usr/bin/python3.7`.
-##### Activating the virtual environment
+#### Activate the virtual environment
 If you are inside the virtual environment, your shell prompt should look like: `(nuscenes) user@computer:~$`
 If that is not the case, you can enable the virtual environment using:
 ```
@@ -116,4 +86,30 @@ workon nuscenes
 To deactivate the virtual environment, use:
 ```
 deactivate
-```
\ No newline at end of file
+```
+
+## Setup PYTHONPATH
+Add the `python-sdk` directory to your `PYTHONPATH` environment variable by adding the following to your `~/.bashrc` (for virtualenvwrapper, you could alternatively add it in `~/.virtualenvs/nuscenes/bin/postactivate`):
+```
+export PYTHONPATH="${PYTHONPATH}:$HOME/nuscenes-devkit/python-sdk"
+```
+
+## Install required packages
+
+To install the required packages, run the following command in your favourite virtual environment from the `nuscenes-devkit` folder:
+```
+pip install -r setup/requirements.txt
+```
+
+## Verify install
+To verify your environment, run `python -m unittest` in the `python-sdk` folder.
+You can also run `assert_download.py` in the `nuscenes/scripts` folder.
+
+## Setup NUSCENES environment variable
+Finally, if you want to run the unit tests, you need to point the devkit to the `nuscenes` folder on your disk.
+Set the NUSCENES environment variable to point to your data folder, e.g. `/data/sets/nuscenes`:
+```
+export NUSCENES="/data/sets/nuscenes"
+```
+
+That's it, you should be good to go!
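+
+As a final sanity check (a sketch, assuming the v1.0-mini split lives in the folder above and the required packages are installed), the devkit should now be importable and able to load the data:
+```
+import os
+from nuscenes.nuscenes import NuScenes
+
+# Assumption: NUSCENES was exported as above and contains the v1.0-mini split.
+nusc = NuScenes(version='v1.0-mini', dataroot=os.environ['NUSCENES'], verbose=True)
+print(len(nusc.sample), 'samples loaded')
+```
\ No newline at end of file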