fix markdown lint and some docs (#68)
* fix typos
OceanPang authored Jan 5, 2021
1 parent d868e7e commit 3ceb8e4
Showing 12 changed files with 33 additions and 30 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -1,6 +1,6 @@
name: build

-on: [pull_request]
+on: [push, pull_request]

jobs:
  lint:
5 changes: 1 addition & 4 deletions README.md
@@ -15,7 +15,7 @@ Documentation: https://mmtracking.readthedocs.io/
MMTracking is an open source video perception toolbox based on PyTorch.
It is a part of the OpenMMLab project.

-The master branch works with PyTorch 1.3 to 1.6.
+The master branch works with PyTorch 1.3 to 1.7.

<div align="left">
<img src="https://user-images.githubusercontent.com/24663779/103343312-c724f480-4ac6-11eb-9c22-b56f1902584e.gif" width="800"/>
@@ -39,12 +39,10 @@ The master branch works with PyTorch 1.3 to 1.6.

**Strong**: We reproduce state-of-the-art models, and some of them even outperform the official implementations.

-
## License

This project is released under the [Apache 2.0 license](LICENSE).

-
## Changelog

v0.5.0 was released on 04/01/2021.
@@ -78,7 +76,6 @@ Please refer to [install.md](docs/install.md) for install instructions.
Please see [dataset.md](docs/dataset.md) and [quick_run.md](docs/quick_run.md) for the basic usage of MMTracking.
We also provide usage [tutorials](docs/tutorials/).

-
## Contributing

We appreciate all contributions to improve MMTracking. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
4 changes: 4 additions & 0 deletions configs/det/README.md
@@ -11,6 +11,7 @@ Please NOTE that there are some differences between the base config in MMTracking
1. `detector` is only a submodule of the `model`.

For example, the config of Faster R-CNN in MMDetection follows
+
```python
model = dict(
    type='FasterRCNN',
@@ -19,6 +20,7 @@ Please NOTE that there are some differences between the base config in MMTracking
```

But in MMTracking, the config follows
+
```python
model = dict(
    detector=dict(
@@ -31,13 +33,15 @@ Please NOTE that there are some differences between the base config in MMTracking
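
For orientation, here is a hedged sketch of the wrapped form in full (the `DFF` wrapper type and the abbreviated fields are illustrative, not an exact config):

```python
model = dict(
    type='DFF',  # illustrative wrapper; any MMTracking video method type fits here
    detector=dict(
        type='FasterRCNN',
        backbone=dict(type='ResNet', depth=50)))
```
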
2. `train_cfg` and `test_cfg` are merged into `model` / `detector`.

In MMDetection, the config follows
+
```python
model = dict()
train_cfg = dict()
test_cfg = dict()
```

While in MMTracking, the config follows
+
```python
model = dict(
    detector=dict(
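
A minimal sketch of the merged layout (the empty dicts are placeholders, not real defaults):

```python
model = dict(
    detector=dict(
        type='FasterRCNN',
        train_cfg=dict(),  # was the top-level train_cfg in MMDetection
        test_cfg=dict()))  # was the top-level test_cfg in MMDetection
```
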
1 change: 0 additions & 1 deletion configs/mot/deepsort/README.md
@@ -29,7 +29,6 @@ The configs in this folder are basically for inference.
Currently we do not support training ReID models.
We directly use the ReID model from [Tracktor](https://github.com/phil-bergmann/tracking_wo_bnw). These missing features will be supported in the future.

-
| Detector | ReID | Train Set | Test Set | Public | Inf time (fps) | MOTA | IDF1 | FP | FN | IDSw. | Config | Download |
| :-------------: | :----: | :-------: | :------: | :----: | :------------: | :--: | :--: |:--:|:--:| :---: | :----: | :------: |
| R50-FasterRCNN-FPN | - | half-train | half-val | Y | 28.3 | 46.0 | 46.6 | 289 | 82451 | 4581 | [config](sort_faster-rcnn_fpn_4e_mot17-public-half.py) | [detector](https://download.openmmlab.com/mmtracking/v0.5/mot/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth) |
3 changes: 3 additions & 0 deletions docs/changelog.md
@@ -1,10 +1,13 @@
## Changelog

### v0.5.0 (04/01/2021)
+
#### Highlights
+
- MMTracking is released!
+
#### New Features

- Support video object detection methods: [DFF](https://arxiv.org/abs/1611.07715), [FGFA](https://arxiv.org/abs/1703.10025), [SELSA](https://arxiv.org/abs/1907.06390)
- Support multi object tracking methods: [SORT](https://arxiv.org/abs/1602.00763)/[DeepSORT](https://arxiv.org/abs/1703.07402), [Tracktor](https://arxiv.org/abs/1903.05625)
- Support single object tracking methods: [SiameseRPN++](https://arxiv.org/abs/1812.11703)
9 changes: 3 additions & 6 deletions docs/dataset.md
@@ -1,14 +1,13 @@
## Dataset Preparation

-
This page provides instructions for dataset preparation on existing benchmarks, including:

- Video Object Detection
-    - [ILSVRC](http://image-net.org/challenges/LSVRC/2017/)
+  - [ILSVRC](http://image-net.org/challenges/LSVRC/2017/)
- Multiple Object Tracking
-    - [MOT Challenge](https://motchallenge.net/)
+  - [MOT Challenge](https://motchallenge.net/)
- Single Object Tracking
-    - [LaSOT](http://vision.cs.stonybrook.edu/~lasot/)
+  - [LaSOT](http://vision.cs.stonybrook.edu/~lasot/)

### 1. Download Datasets

@@ -24,7 +23,6 @@ Notes:

- For the training and testing of the single object tracking task, the MSCOCO, ILSVRC and LaSOT datasets are needed.

-
```
mmtracking
├── mmtrack
@@ -64,7 +62,6 @@ mmtracking
│   │   ├── annotations
```

-
### 2. Convert Annotations

We use [CocoVID](../mmtrack/datasets/parsers/coco_video_parser.py) to maintain all datasets in this codebase.
4 changes: 2 additions & 2 deletions docs/install.md
@@ -115,10 +115,10 @@ conda activate open-mmlab
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch -y
# install the latest mmcv
-pip install mmcv-full==latest+torch1.6.0+cu101 -f https://download.openmmlab.com/mmcv/dist/index.html
+pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html
# install mmdetection
-pip install git+https://github.com/open-mmlab/mmdetection.git
+pip install mmdet
# install mmtracking
git clone https://github.com/open-mmlab/mmtracking.git
12 changes: 8 additions & 4 deletions docs/model_zoo.md
@@ -4,10 +4,8 @@

- We use distributed training.
- All pytorch-style pretrained backbones on ImageNet are from PyTorch model zoo.
-- For fair comparison with other codebases, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` for all 8 GPUs.
-Note that this value is usually less than what `nvidia-smi` shows.
-- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time.
-Results are obtained with the script `tools/benchmark.py` which computes the average time on 2000 images.
+- For fair comparison with other codebases, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` for all 8 GPUs. Note that this value is usually less than what `nvidia-smi` shows.
+- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script `tools/benchmark.py` which computes the average time on 2000 images.
- Speed benchmark environments

Hardware
@@ -24,23 +22,29 @@ Results are obtained with the script `tools/benchmark.py` which computes the average time on 2000 images.
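
For reference, a minimal sketch of how the reported memory figure relates to the PyTorch API (the harness below is hypothetical, not `tools/benchmark.py` itself):

```python
import torch

# ... run a forward pass on the GPU here ...

# peak memory actually allocated by tensors on the current device;
# nvidia-smi also counts what the caching allocator has reserved,
# so it usually reports a larger number
peak_mb = torch.cuda.max_memory_allocated() / 1024**2
print(f'peak allocated: {peak_mb:.0f} MB')
```
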
## Baselines of video object detection

### DFF
+
Please refer to [DFF](../configs/vid/dff/README.md) for details.

### FGFA
+
Please refer to [FGFA](../configs/vid/fgfa/README.md) for details.

### SELSA
+
Please refer to [SELSA](../configs/vid/selsa/README.md) for details.

## Baselines of multiple object tracking

### SORT/DeepSORT
+
Please refer to [SORT/DeepSORT](../configs/mot/deepsort/README.md) for details.

### Tracktor
+
Please refer to [Tracktor](../configs/mot/tracktor/README.md) for details.

## Baselines of single object tracking

### SiameseRPN++
+
Please refer to [SiameseRPN++](../configs/sot/siamese_rpn/README.md) for details.
5 changes: 1 addition & 4 deletions docs/tutorials/customize_dataset.md
@@ -3,8 +3,8 @@
To customize a new dataset, you can convert it into the existing CocoVID style or implement a totally new dataset.
In MMTracking, we recommend converting the data into CocoVID style offline, so that you can use `CocoVideoDataset` directly. In this case, you only need to modify the config's data annotation paths and the `classes`.

-
### Convert the dataset into CocoVID style
+
#### The CocoVID annotation file

The annotation json files in CocoVID style have the following necessary keys:
@@ -18,7 +18,6 @@ A simple example is presented [here](../../tests/assets/demo_cocovid_data/ann

Examples of converting existing datasets are presented [here](../../tools/convert_datasets/).
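
For orientation, a hedged sketch of a CocoVID-style annotation written out as a Python dict (the exact field names are assumptions, not the parser's schema):

```python
cocovid = {
    'categories': [{'id': 1, 'name': 'pedestrian'}],
    'videos': [{'id': 1, 'name': 'video_0'}],
    'images': [
        # each image records the video it belongs to and its frame index
        {'id': 1, 'video_id': 1, 'frame_id': 0,
         'file_name': 'video_0/000000.jpg', 'width': 1920, 'height': 1080},
    ],
    'annotations': [
        # instance_id links the same object across frames of a video
        {'id': 1, 'image_id': 1, 'category_id': 1, 'instance_id': 1,
         'bbox': [100, 100, 40, 100], 'area': 4000},
    ],
}
```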


#### Modify the config

After the data pre-processing, users need to further modify the config files to use the dataset.
@@ -207,8 +206,6 @@ data = dict(

```

-
-
### Subset of existing datasets

With existing dataset types, we can modify their class names to train on a subset of the annotations.
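
For example, a hedged config snippet (the class names and paths are hypothetical):

```python
data = dict(
    train=dict(
        type='CocoVideoDataset',
        classes=('pedestrian',),  # keep only this class from the annotations
        ann_file='annotations/train_cocoformat.json',
        img_prefix='train/'))
```
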
4 changes: 1 addition & 3 deletions docs/tutorials/customize_mot_model.md
@@ -8,10 +8,8 @@ We basically categorize model components into 5 types.
- reid: usually an independent ReID model that extracts feature embeddings from the cropped image, e.g., BaseReID.
- track_head: the component that extracts tracking cues while sharing the backbone with the detector, e.g., an embedding head or a regression head.

-
### Add a new tracker

-
#### 1. Define a tracker (e.g. MyTracker)

Create a new file `mmtrack/models/mot/trackers/my_tracker.py`.
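
A plausible skeleton for such a tracker, following the usual OpenMMLab registry pattern (the registry import and all names here are assumptions):

```python
from mmtrack.models import TRACKERS


@TRACKERS.register_module()
class MyTracker(object):
    """Hypothetical tracker; a real one implements the association logic."""

    def __init__(self, match_iou_thr=0.7):
        self.match_iou_thr = match_iou_thr

    def track(self, bboxes, labels, **kwargs):
        # match the current detections against the existing tracklets here
        raise NotImplementedError
```
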
@@ -172,6 +170,7 @@ reid=dict(
```

### Add a new track head
+
#### 1. Define a head (e.g. MyHead)

Create a new file `mmtrack/models/track_heads/my_head.py`.
@@ -219,7 +218,6 @@ track_head=dict(
    arg2=xxx)
```

-
### Add a new loss

Assume you want to add a new loss `MyLoss` for bounding box regression.
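
A hedged sketch of the usual pattern (the `LOSSES` registry import from MMDetection and the loss body are assumptions):

```python
import torch.nn as nn
from mmdet.models import LOSSES


@LOSSES.register_module()
class MyLoss(nn.Module):
    """Hypothetical loss; substitute the real regression penalty."""

    def __init__(self, loss_weight=1.0):
        super().__init__()
        self.loss_weight = loss_weight

    def forward(self, pred, target, **kwargs):
        # placeholder L1-style penalty on the box parameters
        return self.loss_weight * (pred - target).abs().mean()
```
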
10 changes: 5 additions & 5 deletions docs/tutorials/data_pipeline.md
@@ -5,14 +5,13 @@ There are two types of data pipelines in MMTracking:
- Single image, which is consistent with MMDetection in most cases.
- Pair-wise / multiple images.


### Data pipeline for a single image

For a single image, you may refer to the [tutorial in MMDetection](https://mmdetection.readthedocs.io/en/latest/tutorials/data_pipeline.html).

There are several differences in MMTracking:
-- We implement `VideoCollect` which is similar to `Collect` in MMDetection but is more comptabile with the video perception tasks.
-For example, the meta keys `frame_id` and `is_video_data` are collected by default.
+
+- We implement `VideoCollect`, which is similar to `Collect` in MMDetection but is more compatible with video perception tasks. For example, the meta keys `frame_id` and `is_video_data` are collected by default.

### Data pipeline for multiple images

@@ -50,6 +49,7 @@ class CocoVideoDataset(CocoDataset):
        img_infos = self.ref_img_sampling(img_info, **self.ref_img_sampler)
        ...
```
+
In this case, the loaded annotations are no longer a `dict` but a `list[dict]` that contains the annotations for the key and reference images.
The first item of the list indicates the annotations of the key image.
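
For illustration, a hedged example of configuring the reference-image sampling (the sampler field names are assumptions based on the snippet above):

```python
dataset = dict(
    type='CocoVideoDataset',
    # hypothetical values: draw 1 reference image uniformly from frames
    # within +/-9 of the key frame
    ref_img_sampler=dict(num_ref_imgs=1, frame_range=9, method='uniform'))
```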

@@ -78,8 +78,8 @@ class LoadMultiImagesFromFile(LoadImageFromFile):
            outs.append(_results)
        return outs
```
-Sometimes you may need to add a parameter `share_params` to decide whether share the random seed of the transformation on these images.

+Sometimes you may need to add a parameter `share_params` to decide whether to share the random seed of the transformation across these images.
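
A minimal sketch of what `share_params` typically controls (a hypothetical transform, not the real MMTracking implementation):

```python
import random


class SeqRandomFlip(object):
    """Hypothetical sequence transform: one flip decision for all images."""

    def __init__(self, share_params=True, flip_ratio=0.5):
        self.share_params = share_params
        self.flip_ratio = flip_ratio

    def __call__(self, results):
        # `results` is a list[dict], one dict per image (key + references)
        if self.share_params:
            flip = random.random() < self.flip_ratio
            flips = [flip] * len(results)  # one shared decision
        else:
            flips = [random.random() < self.flip_ratio for _ in results]
        for _results, flip in zip(results, flips):
            _results['flip'] = flip  # actual flipping omitted for brevity
        return results
```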

#### 3. Concat the reference images (if applicable)

@@ -90,8 +90,8 @@ The length of the list is 2 after the process.

In the end, we implement `SeqDefaultFormatBundle` to convert the list into a dictionary as the input of the model forward.

-
Here is an example of the data pipeline:
+
```python
train_pipeline = [
    dict(type='LoadMultiImagesFromFile'),
4 changes: 4 additions & 0 deletions docs/useful_tools.md
@@ -6,16 +6,19 @@ We provide lots of useful tools under the `tools/` directory.
It is used in the same manner as `tools/test.py` but differs in the configs.

Here is an example that shows how to modify the configs:

1. Define the desired evaluation metrics to record.

For example, you can define the search metrics as
+
```python
search_metrics = ['MOTA', 'IDF1', 'FN', 'FP', 'IDs', 'MT', 'ML']
```
+
2. Define the parameters and the values to search.

Assume you have a tracker like
+
```python
model = dict(
    tracker=dict(
@@ -27,6 +30,7 @@ Here is an example that shows how to modify the configs:
```

If you want to search the parameters of the tracker, just change the value to a list as follows
+
```python
model = dict(
    tracker=dict(
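
A hedged illustration of the searched form (the parameter names are hypothetical):

```python
model = dict(
    tracker=dict(
        type='MyTracker',               # hypothetical tracker type
        obj_score_thr=[0.4, 0.5, 0.6],  # every value in the list is tried
        match_iou_thr=[0.3, 0.5]))
```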
