update readme #125

Merged 1 commit on Mar 29, 2023
23 changes: 1 addition & 22 deletions README.md
@@ -110,22 +110,6 @@ To train the model, please run

```shell
python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
```

To train in distributed mode, please run

```shell
# Distributed training on Ascend devices
mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
```
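Before launching, it can help to confirm the Ascend devices are visible. A hedged aside: `npu-smi` ships with the Ascend driver, not with this repo, and must be installed on the host.

```shell
# List available Ascend devices and their status;
# requires the Ascend driver/toolkit on the machine.
npu-smi info
```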

```shell
# Distributed training on GPUs
export CUDA_VISIBLE_DEVICES=0,1
# -n is the number of GPUs, matching the devices exposed above
mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
```
> Note: please ensure the arg `distribute` in the yaml file is set to `True`
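As a quick sanity check before launching (a minimal sketch; the exact position of the key inside the yaml is not shown in this diff), one can confirm the flag is present and set:

```shell
# Locate the distribute flag in the config before launching mpirun;
# the key name follows the note above.
grep -n "distribute" configs/det/dbnet/db_r50_icdar15.yaml
```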


The training result (including checkpoints, per-epoch performance, and curves) will be saved in the directory specified by the arg `ckpt_save_dir`.
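For illustration (the directory name below is an assumption, not taken from the config shown), the saved artifacts can be inspected after training:

```shell
# Inspect checkpoints and logs written during training; replace ./ckpt_out
# with the actual value of ckpt_save_dir from your yaml (path is an assumption).
ls -lh ./ckpt_out
```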

#### 4. Evaluation
@@ -194,12 +178,7 @@ Optionally, change `num_workers` according to the number of CPU cores, and change `distribute` …

#### 3. Training

To train the model, please run

```shell
# train crnn on MJ+ST dataset
python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml
```
We will use distributed training for the large LMDB dataset.

To train in distributed mode, please run
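(The command itself is cut off in this view; by analogy with the detection example above it would presumably look like the sketch below, where the device count of 8 is an assumption.)

```shell
# Hypothetical distributed launch for CRNN, mirroring the dbnet example;
# -n 8 is an assumed device count — adjust to your machine.
mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml
```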
