Commit 9d75b41: updated docs
dusty-nv committed Jun 15, 2023 (1 parent: b74fcf3)
Showing 1 changed file with 2 additions and 2 deletions: docs/aux-docker.md

@@ -24,7 +24,7 @@ Below are the currently available container tags:

> **note:** the version of JetPack-L4T that you have installed on your Jetson needs to be compatible with one of the tags above. If you have a different version of JetPack installed, either upgrade to the latest JetPack or [Build the Project from Source](docs/building-repo-2.md) to compile the project directly.
- These containers use the [`l4t-pytorch`](https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) base container, so support for transfer learning / re-training is already included.
+ These containers use the [`l4t-pytorch`](https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) base container, so support for training models and transfer learning is already included.
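
As a quick sanity check that the PyTorch stack from the base image is usable, a minimal sketch (assuming the container is started from the repo root with `docker/run.sh`, as described below) might look like:

```bash
# Start the container from the jetson-inference repo root (pulls the matching tag).
docker/run.sh

# Inside the container: confirm PyTorch is installed and can see the GPU.
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```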

## Launching the Container

@@ -54,7 +54,7 @@ The container will source the ROS environment and packages when started.

In addition to being supported on the Jetson ARM-based architectures, the jetson-inference container can be [built](#building-the-container) and run on x86_64 systems with NVIDIA GPU(s). This can be used to run the Hello AI World tutorial and accompanying apps/libraries from it, or for faster training on the PC/server. To do this, first install the [NVIDIA drivers](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#pre-requisites) and [NVIDIA Container Runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/nvidia-docker.html) to enable GPU support in Docker.
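
Once the drivers and container runtime are installed, a quick hedged check that Docker can reach the GPU is shown below (the CUDA image tag is only an example; use any tag compatible with your installed driver):

```bash
# Confirm the NVIDIA runtime can expose the GPU to containers.
# The image tag is illustrative; substitute one that matches your driver version.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```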

- To run the latest pre-built jetson-inference x86 container, use the same commands as above ([`docker/run.sh`](#launching-the-container). If you want to use a newer/older version of the [`nvcr.io/nvidia/pytorch`](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) base container, edit [this line](https://github.com/dusty-nv/jetson-inference/blob/master/docker/tag.sh#L40) with the desired tag and then run [`docker/build.sh`](#building-the-container)
+ To run the latest pre-built jetson-inference x86 container, use the same commands as above (`docker/run.sh`). If you want to use a newer/older version of the [`nvcr.io/nvidia/pytorch`](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) base container, edit [this line](https://github.com/dusty-nv/jetson-inference/blob/master/docker/tag.sh#L40) with the desired tag and then run [`docker/build.sh`](#building-the-container).
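
As a rough sketch of that workflow (the exact contents of `docker/tag.sh` may differ, so edit the referenced line by hand rather than scripting it):

```bash
# Clone the repo with its submodules.
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference

# Edit docker/tag.sh so the base image points at the desired nvcr.io/nvidia/pytorch tag
# (e.g. nvcr.io/nvidia/pytorch:23.05-py3 -- this tag is only an example), then rebuild:
docker/build.sh
```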

Although the jetson-inference container is built for Linux, it can be run on Windows under WSL 2 by following the [CUDA on WSL User Guide](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#ch02-sub03-installing-wsl2) and then installing Docker and the NVIDIA Container Runtime as above. If you need to use USB webcams and V4L2 under WSL 2, you'll also need to recompile your WSL kernel with these [config changes](https://github.com/PINTO0309/wsl2_linux_kernel_usbcam_enable_conf).
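
Before installing Docker and the NVIDIA Container Runtime inside the WSL 2 distro, a quick sanity check that WSL 2 and the CUDA-on-WSL driver are in place might be:

```bash
# Run inside the WSL 2 distro.
uname -r      # should report a *-microsoft-standard-WSL2 kernel
nvidia-smi    # provided by the Windows NVIDIA driver; should list the GPU
```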
