# Installation
## CUDA Toolkit
Habitat depends on the CUDA toolkit, including the CUPTI samples. You can find a list of installers in [NVIDIA's CUDA Toolkit archive](https://developer.nvidia.com/cuda-toolkit-archive).

After installation, verify that the folder `/usr/local/cuda/extras/CUPTI/samples` exists. On other distributions such as Arch Linux, it may instead be located at `/opt/cuda/extras/CUPTI/samples`.
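As a quick sanity check (assuming a standard install layout), you can confirm from the shell that the toolkit and the CUPTI samples are in place:
```sh
# Report the installed toolkit version (the CUDA compiler should be on the PATH)
nvcc --version

# At least one of these CUPTI sample locations should exist
ls -d /usr/local/cuda/extras/CUPTI/samples 2>/dev/null \
  || ls -d /opt/cuda/extras/CUPTI/samples
```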

## CMake
Habitat requires CMake 3.17 or newer. To install it from source, consult `docker/Dockerfile` or run the following commands:
```sh
wget "https://github.com/Kitware/CMake/releases/download/v3.17.0-rc1/cmake-3.17.0-rc1.tar.gz" -O cmake-3.17.0-rc1.tar.gz
tar xzf cmake-3.17.0-rc1.tar.gz

cd cmake-3.17.0-rc1 && \
  ./bootstrap && \
  make -j && \
  sudo make install
```
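Once the build above has been installed, you can confirm the version that ends up on your `PATH`:
```sh
# Should report CMake 3.17.0 (or newer)
cmake --version
```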

## Building Habitat
Change directory to `analyzer` and ensure that:
* the Python version in `SO_PATH` is set correctly (e.g. `habitat_cuda.cpython-39-x86_64-linux-gnu.so` for Python 3.9)
* the `CUPTI_PATH` variable points to the CUPTI directory for your distribution (both settings are sketched below)
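
As a rough illustration only (the exact lines depend on your Python version and distribution; the paths below are assumptions, not requirements), the two settings might look like:
```sh
# Illustrative values only; adjust for your environment
SO_PATH="habitat_cuda.cpython-39-x86_64-linux-gnu.so"  # built for Python 3.9
CUPTI_PATH="/usr/local/cuda/extras/CUPTI"              # or /opt/cuda/extras/CUPTI on Arch Linux
```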

Then, to begin building, run `./install-dev.sh`.

## Download pretrained models
The MLP component of Habitat requires pretrained models that are not included in the main repository. To download them, run:
```sh
wget "https://zenodo.org/record/4876277/files/habitat-models.tar.gz?download=1" -O habitat-models.tar.gz
./extract-models.sh habitat-models.tar.gz
```
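
If the extraction step fails, a quick way to check whether the archive downloaded intact is to list its contents (the filenames shown are simply whatever the archive contains):
```sh
# A corrupt or truncated download will make this fail
tar tzf habitat-models.tar.gz | head
```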

## Verify installation
You can verify your Habitat installation by running the simple usage example:
```py
import habitat
import torch
import torchvision.models as models

# Define the model and sample inputs
model = models.resnet50().cuda()
image = torch.rand(8, 3, 224, 224).cuda()

# Measure a single inference on the source device
tracker = habitat.OperationTracker(device=habitat.Device.RTX2080Ti)
with tracker.track():
    out = model(image)

trace = tracker.get_tracked_trace()
print("Run time on source:", trace.run_time_ms)

# Predict the run time on a single target device
pred = trace.to_device(habitat.Device.V100)
print("Predicted time on V100:", pred.run_time_ms)
```
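
One way to run this check, assuming you save the snippet above as `verify_habitat.py` (a filename chosen here purely for illustration), is:
```sh
# Requires a machine with a CUDA-capable GPU; it should print the measured
# run time on the source device and the predicted run time on the V100
python3 verify_habitat.py
```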