CodeCamp #139 [Feature] Support REFUGE dataset. #2420

Closed
wants to merge 71 commits
Commits (71)
f2b8d2f
doc
BLUE-coconut Nov 30, 2022
67b51b7
[Doc]Translate the 1_config.md and modify a wrong statement in 1_conf…
pofengdenihong Dec 2, 2022
4287fd2
Translate the 1_config.md and modify a wrong statement in 1_config.md
pofengdenihong Dec 2, 2022
5668182
modify part of content
BLUE-coconut Dec 3, 2022
ad470a3
Modify some expressions
pofengdenihong Dec 5, 2022
104a24d
changed parts of content
BLUE-coconut Dec 5, 2022
cc6a175
add code for convert refuge datasets
liuruiqiang Dec 5, 2022
b0e57c6
Apply suggestions from code review
MeowZheng Dec 6, 2022
9750c97
Merge pull request #2371 from pofengdenihong/dev-1.x
MeowZheng Dec 6, 2022
8a3d6cd
[Doc] Add ZN datasets.md in dev-1.x
MengzhangLI Dec 6, 2022
dc2c986
add refuge dataset in datasets
liuruiqiang Dec 6, 2022
3d1f8b7
modified
BLUE-coconut Dec 7, 2022
bdf85d8
add REFUGEdATASET in __init__.py
liuruiqiang Dec 7, 2022
addd38d
fix typo
MengzhangLI Dec 7, 2022
372a6ce
Update docs/zh_cn/user_guides/4_train_test.md
BLUE-coconut Dec 7, 2022
07462c3
delete redundant code
liuruiqiang Dec 7, 2022
dc8aa35
CodeCamp #1562 [Doc] update `overview.md`
tianleiSHI Dec 10, 2022
dddf093
Update overview.md
tianleiSHI Dec 10, 2022
44577d4
add refuge config
liuruiqiang Dec 10, 2022
5a96573
add new change for training
liuruiqiang Dec 11, 2022
ed1195a
fix bugs in config
liuruiqiang Dec 11, 2022
d179b87
fix config
liuruiqiang Dec 11, 2022
5383ea6
add init code for transforms
liuruiqiang Dec 11, 2022
e11c76f
debug
liuruiqiang Dec 12, 2022
7edb141
Merge pull request #2355 from BLUE-coconut/master
MeowZheng Dec 12, 2022
b770884
Add torch1.13 in CI
xiexinch Dec 12, 2022
02030b1
use mim install mm packages
xiexinch Dec 12, 2022
d755707
Update docs/zh_cn/overview.md
tianleiSHI Dec 12, 2022
c2042f3
Update docs/zh_cn/overview.md
tianleiSHI Dec 12, 2022
fdd1b95
install all requirements
xiexinch Dec 12, 2022
7537987
Merge pull request #2397 from tianleiSHI/dev-1.x
MeowZheng Dec 12, 2022
e2e70c3
add flip transforms
liuruiqiang Dec 12, 2022
492313b
install wheel
xiexinch Dec 13, 2022
815b24e
add ref
xiexinch Dec 13, 2022
d7b85aa
Merge pull request #2402 from xiexinch/add-torch1.13-in-ci-1.x
MeowZheng Dec 13, 2022
164fd3f
fix
MengzhangLI Dec 13, 2022
e88489e
fix
MengzhangLI Dec 13, 2022
eeee12e
Merge pull request #2387 from MengzhangLI/zn_datasets_1.x
MeowZheng Dec 13, 2022
f5b4c12
add example project
xiexinch Dec 15, 2022
81da7aa
add ci ignore
xiexinch Dec 15, 2022
69048c2
fix transforms
liuruiqiang Dec 15, 2022
adceec3
fix transforms
liuruiqiang Dec 15, 2022
532ff5b
DOC
tianleiSHI Dec 17, 2022
88945e3
train successfully
liuruiqiang Dec 18, 2022
0d59440
train successfully
liuruiqiang Dec 19, 2022
0631053
check code
liuruiqiang Dec 19, 2022
107e964
add revised code
liuruiqiang Dec 19, 2022
ab690bd
Update docs/zh_cn/get_started.md
tianleiSHI Dec 20, 2022
ca68b00
Merge pull request #2417 from tianleiSHI/get_started_doc
MeowZheng Dec 20, 2022
0529df3
fix confilcts in transforms
liuruiqiang Dec 20, 2022
534b27b
add version limits
xiexinch Dec 20, 2022
8539e22
Merge pull request #2412 from xiexinch/mmseg_projects
MeowZheng Dec 20, 2022
d6bf98c
XMerge branch 'dev-1.x' of github.com:liuruiqiang/mmsegmentation into…
MengzhangLI Dec 20, 2022
b667b60
fix code for converting data again and add notes in dataset_converter…
liuruiqiang Dec 21, 2022
6f90a93
fix code for converting data again and add notes in dataset_converter…
liuruiqiang Dec 21, 2022
2846e5a
Merge branch 'dev-1.x' of github.com:liuruiqiang/mmsegmentation into …
MengzhangLI Dec 23, 2022
df45c11
delete background in REFUGEDataset for training
liuruiqiang Dec 27, 2022
680d5e7
delete unnecessary code
liuruiqiang Dec 27, 2022
f1e74dd
Merge branch 'dev-1.x' of github.com:liuruiqiang/mmsegmentation into …
MengzhangLI Jan 5, 2023
882ceae
resolve conflicts & fix pre-commit problem
MengzhangLI Jan 5, 2023
5af7bd9
resolve conflicts
liuruiqiang Jan 6, 2023
d968cbc
Merge branch 'dev-1.x' of github.com:open-mmlab/mmsegmentation into r…
liuruiqiang Jan 6, 2023
62258f4
resolve conflicts
liuruiqiang Jan 6, 2023
b368bdd
refine refuge pr
MengzhangLI Jan 9, 2023
87c8a8d
add ut
MengzhangLI Jan 9, 2023
385a481
Merge branch 'dev-1.x' of github.com:open-mmlab/mmsegmentation into r…
liuruiqiang Jan 9, 2023
6790131
Merge branch 'dev-1.x' of github.com:liuruiqiang/mmsegmentation into …
liuruiqiang Jan 12, 2023
1cc2209
resolve conflicts
liuruiqiang Jan 12, 2023
45a8e34
update upstream dev-1.x & use smaller dataset example in ut
MengzhangLI Jan 16, 2023
fc584cd
update upstream dev-1.x & use smaller dataset example in ut
MengzhangLI Jan 16, 2023
1c16532
fix doc
MengzhangLI Feb 1, 2023
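
Taken together, the commits above add a `REFUGEDataset` class, register it in `__init__.py`, provide a data-conversion script and config, and adjust the transforms. For orientation only, the sketch below shows how such a dataset is typically registered in MMSegmentation 1.x; it is a hypothetical minimal example, not the code added by this PR, and the class order, palette and file suffixes are assumptions.

```python
# Hypothetical sketch of an MMSegmentation 1.x dataset registration; NOT the
# actual code from this PR. Class order, palette and suffixes are assumed.
from mmseg.datasets import BaseSegDataset
from mmseg.registry import DATASETS


@DATASETS.register_module()
class REFUGEDataset(BaseSegDataset):
    """REFUGE fundus dataset with optic-disc and optic-cup annotations."""

    METAINFO = dict(
        classes=('background', 'optic_disc', 'optic_cup'),  # assumed order
        palette=[[0, 0, 0], [128, 0, 0], [0, 128, 0]])      # assumed colors

    def __init__(self, **kwargs) -> None:
        # Assumed suffixes; the conversion script decides the real ones.
        super().__init__(img_suffix='.png', seg_map_suffix='.png', **kwargs)
```
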
8 changes: 3 additions & 5 deletions docs/en/user_guides/2_dataset_prepare.md
@@ -339,7 +339,7 @@ For Potsdam dataset, please run the following command to download and re-organiz
python tools/dataset_converters/potsdam.py /path/to/potsdam
```

In our default setting, it will generate 3,456 images for training and 2,016 images for validation.
In our default setting, it will generate 3456 images for training and 2016 images for validation.
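
If you want to verify the converted output, a small sanity check along the following lines can be used; it assumes the converter wrote the usual `img_dir/{train,val}` and `ann_dir/{train,val}` layout under the Potsdam root, so adjust the paths if yours differs.

```python
# Count converted Potsdam patches; the img_dir/ann_dir layout is an assumption
# based on this guide -- adjust the paths to your actual output directory.
from pathlib import Path

root = Path('data/potsdam')
for split in ('train', 'val'):
    n_img = len(list((root / 'img_dir' / split).glob('*.png')))
    n_ann = len(list((root / 'ann_dir' / split).glob('*.png')))
    print(f'{split}: {n_img} images, {n_ann} annotations')
# With the default setting this should report 3456 train and 2016 val images.
```
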

### ISPRS Vaihingen

@@ -392,7 +392,7 @@ You may need to follow the following structure for dataset preparation after dow
python tools/dataset_converters/isaid.py /path/to/iSAID
```

In our default setting (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate 33,978 images for training and 11,644 images for validation.
In our default setting (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate 33978 images for training and 11644 images for validation.
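
Those counts follow from sliding-window tiling with stride = patch size − overlap = 512. A back-of-the-envelope sketch is shown below; it is illustrative only, and the converter's exact padding and boundary handling may differ.

```python
# Illustrative tiling arithmetic only; the converter's boundary handling may differ.
import math

patch, overlap = 896, 384
stride = patch - overlap  # 512


def n_windows(length: int) -> int:
    """Sliding-window positions needed to cover `length` pixels."""
    return 1 if length <= patch else math.ceil((length - patch) / stride) + 1


# Example: a hypothetical 4000 x 4000 iSAID image yields 8 x 8 = 64 patches.
print(n_windows(4000) * n_windows(4000))
```
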

## LIP(Look Into Person) dataset

@@ -445,7 +445,7 @@ cd ./RawData/Training

Then create `train.txt` and `val.txt` to split dataset.

According to TransUNet, the following is the data set division.
According to TransUnet, the following is the data set division.

train.txt

@@ -509,8 +509,6 @@ Then, use this command to convert synapse dataset.
python tools/dataset_converters/synapse.py --dataset-path /path/to/synapse
```

In our default setting, it will generate 2,211 2D images for training and 1,568 2D images for validation.

Note that the default MMSegmentation evaluation metrics (such as mean Dice) are calculated on 2D slice images,
which is not comparable to the 3D-scan results reported in some papers such as [TransUNet](https://arxiv.org/abs/2102.04306).
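
The `train.txt` / `val.txt` split mentioned above can be written with a few lines of Python. The sketch below uses placeholder case IDs and an assumed output directory; substitute the division listed in this document (taken from TransUNet) and point `root` at your Synapse data.

```python
# Write the train/val split files; case IDs and the root path are placeholders.
from pathlib import Path

train_cases = ['case0005', 'case0006', 'case0007']  # placeholders
val_cases = ['case0001', 'case0002']                # placeholders

root = Path('data/synapse')
(root / 'train.txt').write_text('\n'.join(train_cases) + '\n')
(root / 'val.txt').write_text('\n'.join(val_cases) + '\n')
```
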

94 changes: 47 additions & 47 deletions docs/zh_cn/user_guides/2_dataset_prepare.md
@@ -1,6 +1,6 @@
## Prepare Datasets (To Be Updated)

It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`. If your folder structure is different, you may need to change the corresponding paths in the config files.

```none
mmsegmentation
@@ -139,112 +139,112 @@ mmsegmentation

### Cityscapes

After registration, the dataset can be downloaded [here](https://www.cityscapes-dataset.com/downloads/).

Usually, `**labelTrainIds.png` is used to train Cityscapes.
Based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts),
we provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py)
to generate `**labelTrainIds.png`.

```shell
# --nproc 8 means 8 processes are used for conversion; it can be omitted.
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
```
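
For intuition, the relabelling the script performs for each `*_gtFine_labelIds.png` boils down to mapping raw label ids to train ids. A rough single-image sketch is shown below; it assumes `cityscapesscripts`, NumPy and Pillow are installed, and the filename is a placeholder — the bundled script above does this for the whole dataset, in parallel.

```python
# Single-image illustration of the labelIds -> labelTrainIds mapping; the
# filename is a placeholder and the real script handles the whole dataset.
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import labels

id_to_trainid = {label.id: label.trainId for label in labels}

label_ids = np.array(Image.open('aachen_000000_000019_gtFine_labelIds.png'))
train_ids = np.full_like(label_ids, 255)  # 255 = ignore index
for raw_id, train_id in id_to_trainid.items():
    train_ids[label_ids == raw_id] = train_id if 0 <= train_id < 255 else 255

Image.fromarray(train_ids).save('aachen_000000_000019_gtFine_labelTrainIds.png')
```
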

### Pascal VOC

Pascal VOC 2012 can be downloaded [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
In addition, most recent work on the Pascal VOC dataset uses augmented data, which can be found [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).

If you want to use the augmented VOC dataset, please run the following command to convert the augmentation annotations into the proper format.

```shell
# --nproc 8 means 8 processes are used for conversion; it can be omitted.
python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
```

For details about how to concatenate datasets and train them together, please refer to [concatenating datasets](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/tutorials/customize_datasets.md#%E6%8B%BC%E6%8E%A5%E6%95%B0%E6%8D%AE%E9%9B%86).

### ADE20K

The training and validation sets of ADE20K can be downloaded [here](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip).
You may also download the test set [here](http://data.csail.mit.edu/places/ADEchallenge/release_test.zip).

### Pascal Context

The training and validation sets of Pascal Context can be downloaded [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar).
After registration, you may also download the test set [here](http://host.robots.ox.ac.uk:8080/eval/downloads/VOC2010test.tar).

To split the training and validation sets from the original dataset, you may download trainval_merged.json [here](https://codalabuser.blob.core.windows.net/public/trainval_merged.json).

If you want to use the Pascal Context dataset,
please install the [Detail API](https://github.com/zhanghang1989/detail-api) and then run the following command to convert the annotations into the proper format.

```shell
python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VOC2010/trainval_merged.json
```

### CHASE DB1

The training and validation sets of CHASE DB1 can be downloaded [here](https://staffnet.kingston.ac.uk/~ku15565/CHASE_DB1/assets/CHASEDB1.zip).

To convert the CHASE DB1 dataset into the MMSegmentation format, you need to run the following command:

```shell
python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip
```

This script will generate the correct folder structure automatically.

### DRIVE

The training and validation sets of DRIVE can be downloaded [here](https://drive.grand-challenge.org/).
Before that, you need to register an account; currently '1st_manual' is not provided by the organizers, so you need to obtain it elsewhere.

To convert the DRIVE dataset into the MMSegmentation format, you need to run the following command:

```shell
python tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```

This script will generate the correct folder structure automatically.

### HRF

First, download [healthy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy.zip), [glaucoma.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma.zip), [diabetic_retinopathy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy.zip), [healthy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy_manualsegm.zip), [glaucoma_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma_manualsegm.zip) and [diabetic_retinopathy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy_manualsegm.zip).

To convert the HRF dataset into the MMSegmentation format, you need to run the following command:

```shell
python tools/convert_datasets/hrf.py /path/to/healthy.zip /path/to/healthy_manualsegm.zip /path/to/glaucoma.zip /path/to/glaucoma_manualsegm.zip /path/to/diabetic_retinopathy.zip /path/to/diabetic_retinopathy_manualsegm.zip
```

This script will generate the correct folder structure automatically.

### STARE

First, download [stare-images.tar](http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar), [labels-ah.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar) and [labels-vk.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-vk.tar).

To convert the STARE dataset into the MMSegmentation format, you need to run the following command:

```shell
python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels-ah.tar /path/to/labels-vk.tar
```

This script will generate the correct folder structure automatically.
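
To confirm that the CHASE DB1, DRIVE, HRF and STARE scripts produced the layout these docs expect (`images`/`annotations`, each split into `training`/`validation`), a quick check along the following lines can help; the dataset root names are assumptions taken from this guide.

```python
# Quick layout check for the converted fundus datasets; the root names follow
# this guide and are assumptions -- adjust them to your own data directory.
from pathlib import Path

for name in ('CHASE_DB1', 'DRIVE', 'HRF', 'STARE'):
    for sub in ('images/training', 'images/validation',
                'annotations/training', 'annotations/validation'):
        path = Path('data') / name / sub
        count = len(list(path.glob('*'))) if path.is_dir() else 'missing'
        print(f'{path}: {count}')
```
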

### Dark Zurich

Since we only support testing models on this dataset, you only need to download the [validation set](https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_val_anon.zip).

### Nighttime Driving

Since we only support testing models on this dataset, you only need to download the [test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).

### LoveDA

The [LoveDA dataset](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing) can be downloaded from Google Drive.

Or it can be downloaded from [zenodo](https://zenodo.org/record/5706578#.YZvN7SYRXdF); you need to run the following commands:

@@ -257,46 +257,46 @@ wget https://zenodo.org/record/5706578/files/Val.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```

For the LoveDA dataset, please run the following command to download and re-organize the dataset:

```shell
python tools/convert_datasets/loveda.py /path/to/loveDA
```

Please refer to [here](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/inference.md) to use a trained model to predict on the LoveDA test set and submit the results to the official evaluation site.
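
As a rough sketch of that prediction step with the MMSegmentation 1.x inference API (`init_model` / `inference_model`): the config, checkpoint and image paths below are placeholders, and the exact submission format is described on the LoveDA page linked below.

```python
# Rough sketch only: paths are placeholders, and the result attributes follow
# the MMSegmentation 1.x SegDataSample layout (pred_sem_seg).
import numpy as np
from PIL import Image
from mmseg.apis import inference_model, init_model

model = init_model('configs/my_loveda_config.py',          # placeholder config
                   'work_dirs/loveda/latest.pth',           # placeholder weights
                   device='cuda:0')
result = inference_model(model, 'data/loveDA/img_dir/test/0000.png')
mask = result.pred_sem_seg.data.squeeze().cpu().numpy().astype(np.uint8)
Image.fromarray(mask).save('loveda_pred_0000.png')  # one PNG per test image
```
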

More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).

### ISPRS Potsdam

The [Potsdam](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-potsdam/)
dataset is an urban remote-sensing dataset annotated for 2D semantic segmentation.
The dataset can be requested from the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/).
'2_Ortho_RGB.zip' and '5_Labels_all_noBoundary.zip' are required.

For the Potsdam dataset, please run the following command to download and re-organize the dataset:

```shell
python tools/convert_datasets/potsdam.py /path/to/potsdam
```

With our default setting, it will generate 3456 images for training and 2016 images for validation.

### ISPRS Vaihingen

The [Vaihingen](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-vaihingen/)
dataset is an urban remote-sensing dataset annotated for 2D semantic segmentation.

The dataset can be requested from the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/).
'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' are required.

For the Vaihingen dataset, please run the following command to download and re-organize the dataset:

```shell
python tools/convert_datasets/vaihingen.py /path/to/vaihingen
```

With our default setting (`clip_size`=512, `stride_size`=256), it will generate 344 images for training and 398 images for validation.

### iSAID

@@ -306,7 +306,7 @@ The annotations of the iSAID dataset (train set/validation set) can be downloaded from [iSAID](https://captain-w

This dataset is a large-scale remote-sensing dataset for instance segmentation (it can also be used for semantic segmentation).

After downloading, you need to organize the dataset folders into the following structure before converting the dataset.

```
│ ├── iSAID