update docs
Summary: Pull Request resolved: fairinternal/detectron2#373

Differential Revision: D19553213

Pulled By: ppwwyyxx

fbshipit-source-id: 285396f9344c10758048a1de3101017931b0f98f
ppwwyyxx authored and facebook-github-bot committed Jan 24, 2020
1 parent 85cfd95 commit 5e04cff
Showing 7 changed files with 23 additions and 11 deletions.
4 changes: 2 additions & 2 deletions .circleci/config.yml
@@ -81,14 +81,14 @@ jobs:
      # Cache the venv directory that contains dependencies
      - restore_cache:
          keys:
-            - cache-key-{{ .Branch }}-ID-0
+            - cache-key-{{ .Branch }}-ID-20200124

      - <<: *install_dep

      - save_cache:
          paths:
            - ~/venv
-          key: cache-key-{{ .Branch }}-ID-0
+          key: cache-key-{{ .Branch }}-ID-20200124

- <<: *install_detectron2

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/unexpected-problems-bugs.md
@@ -35,7 +35,7 @@ Only in one of the two conditions we will help with it:

## Environment:

-Please paste the output of `python -m detectron2.utils.collect_env`.
+Run `python -m detectron2.utils.collect_env` in the environment where you observed the issue, and paste the output.
If detectron2 hasn't been successfully installed, use `python detectron2/utils/collect_env.py`.
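
As a side note from the editor (not part of this diff), the same environment report can also be produced from Python; a minimal sketch, assuming a working detectron2 install that exposes `collect_env_info` in `detectron2.utils.collect_env`:

```python
# Hedged sketch: print the environment report requested in bug reports.
# If detectron2 is not importable, fall back to running
# `python detectron2/utils/collect_env.py` as described above.
from detectron2.utils.collect_env import collect_env_info

print(collect_env_info())
```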

If your issue looks like an installation issue / environment issue,
13 changes: 8 additions & 5 deletions INSTALL.md
@@ -97,15 +97,18 @@ Two possibilities:
you need to either install a different build of PyTorch (or build by yourself)
to match your local CUDA installation, or install a different version of CUDA to match PyTorch.

-* Detectron2 or PyTorch/torchvision is not built with the correct compute compatibility for the GPU model.
+* Detectron2 or PyTorch/torchvision is not built for the correct GPU architecture (compute capability).

-The compute compatibility for PyTorch is available in `python -m detectron2.utils.collect_env`.
+The GPU architecture for PyTorch/detectron2/torchvision is shown in the "architecture flags" field of the output of
+`python -m detectron2.utils.collect_env`.

-The compute compatibility of detectron2/torchvision defaults to match the GPU found on the machine
-during building, and can be controlled by `TORCH_CUDA_ARCH_LIST` environment variable during building.
+The GPU architecture flags of detectron2/torchvision by default match the GPU model detected
+during building. This means the compiled code may not work on a different GPU model.
+To override the GPU architecture for detectron2/torchvision, set the `TORCH_CUDA_ARCH_LIST` environment variable during building.

For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it work for both P100s and V100s.
Visit [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus) to find out
-the correct compute compatibility for your device.
+the correct compute capability of your device.

</details>
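
To illustrate the architecture-flag discussion above (an editor's sketch, not part of this commit), one way to compare the local GPU with the architectures PyTorch was built for; this assumes a PyTorch build recent enough to expose `torch.cuda.get_arch_list()`:

```python
# Hedged sketch: compare the GPU's compute capability with the architectures
# compiled into the installed PyTorch build.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)  # e.g. (7, 0) for a V100
    print(f"GPU compute capability: {major}.{minor}")
    # Architectures this PyTorch build was compiled for, e.g. ['sm_60', 'sm_70'].
    # Note: get_arch_list() may not exist in older PyTorch releases.
    print("PyTorch built for:", torch.cuda.get_arch_list())
else:
    print("No CUDA-capable GPU detected.")
```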

4 changes: 2 additions & 2 deletions MODEL_ZOO.md
@@ -4,8 +4,8 @@

This file documents a large collection of baselines trained
with detectron2 in Sep-Oct, 2019.
-All models were trained on [Big Basin](https://engineering.fb.com/data-center-engineering/introducing-big-basin-our-next-generation-ai-hardware/)
-servers with 8 NVIDIA V100 GPUs, with data-parallel sync SGD. The softwares in use were PyTorch 1.3, CUDA 9.2, cuDNN 7.4.2 or 7.6.3.
+All numbers were obtained on [Big Basin](https://engineering.fb.com/data-center-engineering/introducing-big-basin-our-next-generation-ai-hardware/)
+servers with 8 NVIDIA V100 GPUs & NVLink. The software in use was PyTorch 1.3, CUDA 9.2, cuDNN 7.4.2 or 7.6.3.
You can programmatically access these models using [detectron2.model_zoo](https://detectron2.readthedocs.io/modules/model_zoo.html) APIs.
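
For reference (not part of this commit), a minimal sketch of the programmatic access mentioned above, assuming the `detectron2.model_zoo` module from the linked documentation and one of the baseline config paths listed in this file:

```python
# Hedged sketch: fetch a baseline's config file and pretrained checkpoint URL.
from detectron2 import model_zoo

cfg_path = "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"  # any entry from the tables below
print(model_zoo.get_config_file(cfg_path))     # local path to the bundled config file
print(model_zoo.get_checkpoint_url(cfg_path))  # URL of the pretrained weights
```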

#### How to Read the Tables
4 changes: 4 additions & 0 deletions datasets/README.md
@@ -81,5 +81,9 @@ They are not needed for instance segmentation.
 VOC20{07,12}/
   Annotations/
   ImageSets/
+    Main/
+      trainval.txt
+      test.txt
+      # train.txt or val.txt, if you use these splits
   JPEGImages/
```
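
As a side note (an editor's illustration, not part of this diff), a hedged sketch of how such a directory could be registered, assuming the `register_pascal_voc` helper in `detectron2.data.datasets` and the on-disk layout shown above under `datasets/`:

```python
# Hedged sketch: register a custom PASCAL VOC split laid out as above.
# register_pascal_voc is assumed to take (name, dirname, split, year).
from detectron2.data.datasets.pascal_voc import register_pascal_voc

register_pascal_voc(
    name="voc_2007_custom_trainval",  # hypothetical dataset name
    dirname="datasets/VOC2007",       # contains Annotations/, ImageSets/, JPEGImages/
    split="trainval",                 # must match a file in ImageSets/Main/, e.g. trainval.txt
    year=2007,
)
```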
3 changes: 2 additions & 1 deletion docs/requirements.txt
@@ -11,7 +11,8 @@ termcolor
yacs
tabulate
cloudpickle
-Pillow
+Pillow==6.2.2
+protobuf
git+git://github.com/facebookresearch/fvcore.git
https://download.pytorch.org/whl/nightly/cpu/torch-1.3.0.dev20191010%2Bcpu-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/nightly/cpu/torchvision-0.5.0.dev20191008%2Bcpu-cp37-cp37m-linux_x86_64.whl
4 changes: 4 additions & 0 deletions docs/tutorials/models.md
@@ -16,6 +16,10 @@ in our model zoo.
You can use a model by just `outputs = model(inputs)`.
Next, we explain the inputs/outputs format used by the builtin models in detectron2.

+[DefaultPredictor](../modules/engine.html#detectron2.engine.defaults.DefaultPredictor)
+is a wrapper around the model that provides the default behavior for regular inference. It includes model loading as
+well as preprocessing, and operates on a single image rather than batches.
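
A short usage sketch (an editor's illustration, not text from this commit), assuming a model-zoo config and a local image file `input.jpg`:

```python
# Hedged sketch: run regular inference on one image with DefaultPredictor.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

predictor = DefaultPredictor(cfg)             # loads the model and its weights
outputs = predictor(cv2.imread("input.jpg"))  # BGR image in, dict of predictions out
print(outputs["instances"].pred_classes)
```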


### Model Input Format
