
Commit 0688eaf

Merge pull request #125 from SamitHuang/main
update readme
2 parents 3086ffb + 089bd85

File tree

1 file changed (+1, -22)

README.md

Lines changed: 1 addition & 22 deletions
@@ -110,22 +110,6 @@ To train the model, please run
 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
 ```
 
-To train in distributed mode, please run
-
-```shell
-# Distributed training on Ascends
-mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
-```
-
-```shell
-# Distributed training on GPUs
-export CUDA_VISIBLE_DEVICES=0,1
-# n is the number of GPUs
-mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
-```
-
-> Notes: please ensure the arg `distribute` in yaml file is set True
-
 The training result (including checkpoints, per-epoch performance and curves) will be saved in the directory parsed by the arg `ckpt_save_dir`.
 
 #### 4. Evaluation
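The removed note reminds users to set `distribute` to True in the yaml config before launching with `mpirun`. As a rough sketch of what such a flag commonly enables in a MindSpore training script (a sketch under assumptions; the `setup_distribute` helper and exact wiring below are illustrative, not MindOCR's actual `tools/train.py`):

```python
# Minimal sketch (assumption, not MindOCR's exact code) of what a
# `distribute: True` flag typically maps to in MindSpore data-parallel
# training: communication init plus the auto-parallel context.
from mindspore import context
from mindspore.communication import init, get_rank, get_group_size

def setup_distribute(distribute: bool):
    """Return (rank_id, device_num) for the current process."""
    if not distribute:
        return 0, 1  # single-device training
    init()  # HCCL on Ascend, NCCL on GPU; mpirun starts one process per device
    context.set_auto_parallel_context(
        parallel_mode=context.ParallelMode.DATA_PARALLEL,
        gradients_mean=True,          # average gradients across devices
        device_num=get_group_size(),
    )
    return get_rank(), get_group_size()
```

Under `mpirun -n 8`, eight such processes start and `init()` wires them into a single communication group, which is why the flag must be on for the distributed launch commands above to work.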
@@ -194,12 +178,7 @@ Optionally, change `num_workers` according to the cores of CPU, and change `dist
 
 #### 3. Training
 
-To train the model, please run
-
-``` shell
-# train crnn on MJ+ST dataset
-python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml
-```
+We will use distributed training for the large LMDB dataset.
 
 To train in distributed mode, please run
 
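The updated recognition section points straight to distributed training because the LMDB dataset (MJ+ST) is large. In data-parallel runs each rank typically reads a disjoint shard of the data; here is a minimal sketch using MindSpore's standard `num_shards`/`shard_id` dataset parameters (the `build_rec_dataset` helper, the record source, and the column names are hypothetical, not MindOCR's API):

```python
# Illustrative sketch (assumption): per-rank sharding of a large dataset
# in data-parallel mode. MindOCR's actual LMDB pipeline may differ.
import mindspore.dataset as ds
from mindspore.communication import get_rank, get_group_size

def build_rec_dataset(source, distribute: bool, batch_size: int = 64):
    num_shards = get_group_size() if distribute else 1
    shard_id = get_rank() if distribute else 0
    # num_shards/shard_id make each rank iterate a disjoint 1/n slice,
    # so the effective global batch size is batch_size * num_shards.
    dataset = ds.GeneratorDataset(
        source,
        column_names=["image", "label"],
        num_shards=num_shards,
        shard_id=shard_id,
        shuffle=True,
    )
    return dataset.batch(batch_size, drop_remainder=distribute)
```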