Update README file #3


Merged: 1 commit, May 26, 2022

82 changes: 34 additions & 48 deletions README.md
# Habitat: A Runtime-Based Computational Performance Predictor for Deep Neural Network Training

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4885489.svg)](https://doi.org/10.5281/zenodo.4885489)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4876277.svg)](https://doi.org/10.5281/zenodo.4876277)

Habitat is a tool that predicts a deep neural network's training iteration
execution time on a given GPU. It currently supports PyTorch. To learn more
about how Habitat works, please see our [research
paper](https://arxiv.org/abs/2102.00527).
## Installation
You can install Habitat using the prebuilt wheel files. Download the `.whl` files from the [releases page](https://github.com/CentML/habitat/releases), then run:
```sh
pip install habitat*.whl
```

## Running From Source

You can also build and run Habitat from source. Use the Docker image provided
in this repository to make sure that you can compile the code. The numbered
steps below walk through the process, and a consolidated shell sketch follows
the list.

1. Download the [Habitat pre-trained
models](https://doi.org/10.5281/zenodo.4876277).
2. Run `extract-models.sh` under `analyzer` to extract and install the
pre-trained models.
3. Run `setup.sh` under `docker/` to build the Habitat container image.
4. Run `start.sh` to start a new container. By default, your home directory
will be mounted inside the container under `~/home`.
5. Once inside the container, run `install-dev.sh` under `analyzer/` to build
and install the Habitat package.
6. In your scripts, `import habitat` to get access to Habitat. See
`experiments/run_experiment.py` for an example showing how to use Habitat.
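
The commands below are a minimal shell sketch of steps 1–5, assuming you start from the repository root; the model archive file name and the in-container repository path are illustrative placeholders, not exact names.

```sh
# Sketch of the build-from-source steps above; names marked as placeholders are assumptions.

# Step 1: download the pre-trained models (archive file name is a placeholder).
wget https://zenodo.org/record/4876277/files/habitat-models.tar.gz

# Step 2: extract and install the pre-trained models.
(cd analyzer && ./extract-models.sh)

# Steps 3 and 4: build the container image, then start a new container
# (start.sh is assumed to live under docker/ alongside setup.sh).
(cd docker && ./setup.sh && ./start.sh)

# Step 5 (inside the container): build and install the Habitat package.
# Your home directory is mounted at ~/home; the repository path is a placeholder.
cd ~/home/path/to/habitat/analyzer && ./install-dev.sh
```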

**Note:** Habitat needs access to your GPU's performance counters, which
requires special permissions if you are running with a recent driver (418.43 or
later). If you encounter a `CUPTI_ERROR_INSUFFICIENT_PRIVILEGES` error when
running Habitat, please follow the instructions
[here](https://developer.nvidia.com/ERR_NVGPUCTRPERM)
and in [issue #5](https://github.com/geoffxy/habitat/issues/5).
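
As a sketch of the fix described on the linked NVIDIA page: granting non-admin users access to the GPU performance counters typically means setting a kernel module option and reloading the driver. The `.conf` file name below is arbitrary, and root access plus a reboot are assumed.

```sh
# Allow non-admin users to access GPU performance counters (CUPTI profiling).
# The file name is arbitrary; a reboot (or reloading the nvidia kernel module)
# is required for the option to take effect.
echo 'options nvidia NVreg_RestrictProfilingToAdminUsers=0' | \
  sudo tee /etc/modprobe.d/nvidia-profiling.conf
sudo reboot
```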


## Usage example

You can verify your Habitat installation by running the simple usage example. This example measures a single inference iteration of Resnet50 on the RTX2080Ti and extrapolates the runtime to the V100.

```py
import habitat
import torch
import torchvision.models as models

# Define model and sample inputs
model = models.resnet50().cuda()
image = torch.rand(8, 3, 224, 224).cuda()

# Measure a single inference
tracker = habitat.OperationTracker(device=habitat.Device.RTX2080Ti)
with tracker.track():
    out = model(image)

trace = tracker.get_tracked_trace()
print("Run time on source:", trace.run_time_ms)

# Perform prediction to a single target device
pred = trace.to_device(habitat.Device.V100)
print("Predicted time on V100:", pred.run_time_ms)
```

## License

The code in this repository is licensed under the Apache 2.0 license (see
`LICENSE` and `NOTICE`), with the exception of the files mentioned below.

This software contains source code provided by NVIDIA Corporation. These files
are:

- The code under `cpp/external/cupti_profilerhost_util/` (CUPTI sample code)
- `cpp/src/cuda/cuda_occupancy.h`

The code mentioned above is licensed under the [NVIDIA Software Development
Kit End User License Agreement](https://docs.nvidia.com/cuda/eula/index.html).

We include the implementations of several deep neural networks under
`experiments/` for our evaluation. These implementations are copyrighted by
their original authors and carry their original licenses. Please see the
corresponding `README` files and license files inside the subdirectories for
more information.

## Development Environment Setup
Habitat requires both the native component and the Python binding to function. For detailed installation instructions, see [INSTALL.md](INSTALL.md).

## Release process
Run the `Build Habitat` GitHub action. This will build the wheel files for each platform.
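
As a sketch, the action can also be triggered from the command line with the GitHub CLI, assuming the workflow is named `Build Habitat` and is configured for manual dispatch:

```sh
# Trigger the "Build Habitat" workflow with the GitHub CLI
# (assumes the workflow accepts manual dispatch via workflow_dispatch).
gh workflow run "Build Habitat"

# Check the progress of the triggered run.
gh run list --workflow="Build Habitat"
```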

## Release history

See [Releases](https://github.com/CentML/habitat/releases).

## Meta

Habitat began as a research project in the [EcoSystem Group](https://www.cs.toronto.edu/ecosystem) at the University of Toronto. […]

If you use Habitat in your research, please consider citing our paper:

```
...
  year = {2021},
}
```

## Contributing
- Guidelines on how to contribute to the project