docs: fix typos in documents. #706

Merged 1 commit on Jul 18, 2023

6 changes: 3 additions & 3 deletions README.md
@@ -186,7 +186,7 @@ python train.py --model=resnet50 --dataset=cifar10 \
--val_while_train --val_split=test --val_interval=1
```

-The training loss and validation accuracy for each epoch will be saved in `{ckpt_save_dir}/results.log`.
+The training loss and validation accuracy for each epoch will be saved in `{ckpt_save_dir}/results.log`.

More examples about training and validation can be seen in [examples](examples/scripts).

@@ -316,7 +316,7 @@ See [RELEASE](RELEASE.md) for detailed history.

## How to Contribute

-We appreciate all kind of contributions including issues and PRs to make MindCV better.
+We appreciate all kinds of contributions including issues and PRs to make MindCV better.

Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guideline.
Please follow the [Model Template and Guideline](docs/en/how_to_guides/write_a_new_model.md) for contributing a model that fits the overall interface :)
@@ -337,7 +337,7 @@ If you find this project useful in your research, please consider citing:

```latex
@misc{MindSpore Computer Vision 2022,
-title={{MindSpore Computer Vision}:MindSpore Computer Vision Toolbox and Benchmark},
+title={{MindSpore Computer Vision}:MindSpore Computer Vision Toolbox and Benchmark},
author={MindSpore Vision Contributors},
howpublished = {\url{https://github.com/mindspore-lab/mindcv/}},
year={2022}
}
```

6 changes: 3 additions & 3 deletions README_CN.md
@@ -48,7 +48,7 @@ MindCV是一个基于 [MindSpore](https://www.mindspore.cn/) 开发的,致力
python train.py --model swin_tiny --pretrained --opt=adamw --lr=0.001 --data_dir=/path/to/dataset
```

-- **高性能** MindCV集成了大量基于CNN和和Transformer的高性能模型, 如SwinTransformer,并提供预训练权重、训练策略和性能报告,帮助用户快速选型并将其应用于视觉模型。
+- **高性能** MindCV集成了大量基于CNN和Transformer的高性能模型, 如SwinTransformer,并提供预训练权重、训练策略和性能报告,帮助用户快速选型并将其应用于视觉模型。

- **灵活高效** MindCV基于高效的深度学习框架MindSpore开发,具有自动并行和自动微分等特性,支持不同硬件平台上(CPU/GPU/Ascend),同时支持效率优化的静态图模式和调试灵活的动态图模式。

@@ -132,14 +132,14 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg'

- 超参配置和预训练策略

-您可以编写yaml文件或设置外部参数来指定配置数据、模型、优化器等组件及其超参。以下是使用预设的训练策略(yaml文件)进行模型训练的示例。
+您可以编写yaml文件或设置外部参数来指定配置数据、模型、优化器等组件及其超参数。以下是使用预设的训练策略(yaml文件)进行模型训练的示例。

```shell
mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet_1.0_gpu.yaml
```

**预定义的训练策略**
-MindCV目前提前了超过20种模型训练策略,在ImageNet取得SoTA性能。
+MindCV目前提供了超过20种模型训练策略,在ImageNet取得SoTA性能。
具体的参数配置和详细精度性能汇总请见[`configs`](configs)文件夹。
您可以便捷地将这些训练策略用于您的模型训练中以提高性能(复用或修改相应的yaml文件即可)。

16 changes: 8 additions & 8 deletions docs/en/how_to_guides/write_a_new_model.md
@@ -7,7 +7,7 @@ Next, let's take `MLP-Mixer` as an example.

## File Header

-A brief description of the document. Include model name and paper title. As follows:
+A brief description of the document. Include the model name and paper title. As follows:


```python
# (example elided in this diff view)
```

@@ -43,7 +43,7 @@ Only import necessary modules or packages to avoid importing useless packages.

## `__all__`

-> Python has no native visibility control, and its visibility is maintained by a set of "conventions" that everyone should consciously abide by `__all__` is a convention for exposing interfaces to modules, and provides a "white list" to expose the interface. If `__all__` is defined, other files use `from xxx import *` to import this file, only the members listed in `__all__` will be imported, and other members can be excluded.
+> Python has no native visibility control, its visibility is maintained by a set of "conventions" that everyone should consciously abide by `__all__` is a convention for exposing interfaces to modules and provides a "white list" to expose the interface. If `__all__` is defined, other files use `from xxx import *` to import this file, only the members listed in `__all__` will be imported, and other members can be excluded.

We agree that the exposed interfaces in the model include the main model class and functions that return models of different specifications, such as:
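
A minimal sketch of that convention, using the names visible in the hunk below (the file's actual list may differ):

```python
__all__ = [
    "MLPMixer",         # the main model class
    "mlp_mixer_s_p32",  # specification functions returning model variants
    "mlp_mixer_s_p16",
]
```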

@@ -60,9 +60,9 @@ Where `MLPMixer` is the main model class, and `mlp_mixer_s_p32` and `mlp_mixer_s

## Submodel

-We all know that a depth model is a network composed of multiple layers. Some of these layers can form sub models of the same topology, which we generally call `Layer` or `Block`, such as `ResidualBlock`. This kind of abstraction is conducive to our understanding of the whole model structure, and is also conducive to code writing.
+We all know that a depth model is a network composed of multiple layers. Some of these layers can form sub-models of the same topology, which we generally call `Layer` or `Block`, such as `ResidualBlock`. This kind of abstraction is conducive to our understanding of the whole model structure and is also conducive to code writing.

-We should briefly describe the function of the sub model through class annotations. In `MindSpore`, the model class inherits from `nn.Cell`. Generally speaking, we need to overload the following two functions:
+We should briefly describe the function of the sub-model through class annotations. In `MindSpore`, the model class inherits from `nn.Cell`. Generally speaking, we need to overload the following two functions:

- In the `__init__` function, we should define the neural network layer that needs to be used in the model (the parameters in `__init__` should be declared with parameter types, that is, type hint).
- In the `construct` function, we define the model forward logic.
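
A rough sketch of that pattern (a hypothetical `ResidualBlock`, assuming the standard MindSpore `nn` APIs):

```python
import mindspore.nn as nn

class ResidualBlock(nn.Cell):
    """Toy residual sub-model: layers are declared in __init__ and wired in construct."""

    def __init__(self, channels: int) -> None:  # parameters carry type hints
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3)
        self.relu = nn.ReLU()

    def construct(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection
```
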
@@ -102,10 +102,10 @@ In the process of compiling the `nn.Cell` class, there are two noteworthy aspect

- CellList & SequentialCell

-- CellList is just a container that contains a list of neural network layers(Cell). The Cells contained by it can be properly registered, and will be visible by all Cell methods. We must overwrite the forward calculation, that is, the construct function.
+- CellList is just a container that contains a list of neural network layers(Cell). The Cells contained by it can be properly registered and will be visible by all Cell methods. We must overwrite the forward calculation, that is, the construct function.


-- SequentialCell is a container than holds a sequential list of layers(Cell). The Cells may have a name(OrderedDict) or not(List). We don't need to implement forward computation, which is done according to the order of the sequential list.
+- SequentialCell is a container that holds a sequential list of layers(Cell). The Cells may have a name(OrderedDict) or not(List). We don't need to implement forward computation, which is done according to the order of the sequential list.

- construct
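
A small sketch contrasting the two containers and the role of `construct` (hypothetical layer sizes, assuming the standard `nn.CellList` and `nn.SequentialCell` APIs):

```python
import mindspore.nn as nn

class ListNet(nn.Cell):
    """CellList registers the layers, but construct must chain them explicitly."""

    def __init__(self):
        super().__init__()
        self.blocks = nn.CellList([nn.Dense(16, 16), nn.ReLU(), nn.Dense(16, 4)])

    def construct(self, x):
        for block in self.blocks:  # explicit forward computation
            x = block(x)
        return x

# SequentialCell infers the forward order from the list, so no construct is needed.
seq_net = nn.SequentialCell([nn.Dense(16, 16), nn.ReLU(), nn.Dense(16, 4)])
```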

@@ -115,7 +115,7 @@

## Master Model

-The main model is the network model definition proposed in the paper, which is composed of multiple sub models. It is the top-level network suitable for classification, detection and other tasks. It is basically similar to the submodel in code writing, but there are several differences.
+The main model is the network model definition proposed in the paper, which is composed of multiple sub-models. It is the top-level network suitable for classification, detection, and other tasks. It is basically similar to the submodel in code writing, but there are several differences.

- Class annotations. We should give the title and link of the paper here. In addition, since this class is exposed to the outside world, we'd better also add a description of the class initialization parameters. See code below.
- `forward_features` function. The operational definition of the characteristic network of the model in the function.
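
A schematic of that layout (a made-up `ToyNet`, not the repository's `MLPMixer`, assuming standard MindSpore `nn` APIs):

```python
import mindspore.nn as nn

class ToyNet(nn.Cell):
    """
    Schematic top-level model.

    Args:
        num_classes (int): number of classification classes. Default: 10.
        in_channels (int): number of input channels. Default: 3.
    """

    def __init__(self, num_classes: int = 10, in_channels: int = 3) -> None:
        super().__init__()
        self.features = nn.SequentialCell([
            nn.Conv2d(in_channels, 16, kernel_size=3),
            nn.ReLU(),
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.flatten = nn.Flatten()
        self.classifier = nn.Dense(16, num_classes)

    def forward_features(self, x):  # the characteristic (feature) network
        return self.features(x)

    def construct(self, x):
        x = self.forward_features(x)
        x = self.flatten(self.pool(x))
        return self.classifier(x)
```
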
@@ -197,7 +197,7 @@ class MLPMixer(nn.Cell):

## Specification Function

-The model proposed in the paper may have different specifications, such as the size of the `channel`, the size of the `depth`, and so on. The specific configuration of these variants should be reflected through the specification function. The specification interface parameters: **pretrained, num_classes, in_channels** should be named uniformly. At the same time, the pretrain loading operation should be performed in the specification function. Each specification function corresponds to a specification variant that determines the configuration. The configuration transfers the definition of the main model class through the input parameter, and returns the instantiated main model class. In addition, you need to register this specification of the model in the package by adding the decorator `@register_model`.
+The model proposed in the paper may have different specifications, such as the size of the `channel`, the size of the `depth`, and so on. The specific configuration of these variants should be reflected through the specification function. The specification interface parameters: **pretrained, num_classes, in_channels** should be named uniformly. At the same time, the pretrain loading operation should be performed in the specification function. Each specification function corresponds to a specification variant that determines the configuration. The configuration transfers the definition of the main model class through the input parameter and returns the instantiated main model class. In addition, you need to register this specification of the model in the package by adding the decorator `@register_model`.

Examples are as follows:

12 changes: 6 additions & 6 deletions docs/en/index.md
@@ -61,7 +61,7 @@ See [Installation](./installation.md) for details.

### Hands-on Tutorial

-To get started with MindCV, please see the [Quick Start](./tutorials/quick_start.md), which will give you a quick tour on each key component and the train/validate/predict pipelines.
+To get started with MindCV, please see the [Quick Start](./tutorials/quick_start.md), which will give you a quick tour of each key component and the train/validate/predict pipelines.

Below are a few code snippets for your taste.

@@ -105,7 +105,7 @@ It is easy to train your model on a standard or customized dataset using `train.
python train.py --model=resnet50 --dataset=cifar10 --dataset_download
```

-Above is an example for training ResNet50 on CIFAR10 dataset on a CPU/GPU/Ascend device
+Above is an example of training ResNet50 on CIFAR10 dataset on a CPU/GPU/Ascend device

- Distributed Training

@@ -142,8 +142,8 @@ It is easy to train your model on a standard or customized dataset using `train.

```text
1. Create a new training task on the cloud platform.
-2. Add run parameter `config` and specify the path to the yaml config file on the website UI interface.
-3. Add run parameter `enable_modelarts` and set True on the website UI interface.
+2. Add the parameter `config` and specify the path to the yaml config file on the website UI interface.
+3. Add the parameter `enable_modelarts` and set True on the website UI interface.
4. Fill in other blanks on the website and launch the training task.
```

@@ -180,7 +180,7 @@ python validate.py --model=resnet50 --dataset=imagenet --data_dir=/path/to/data
--val_while_train --val_split=test --val_interval=1
```

-The training loss and validation accuracy for each epoch will be saved in `${ckpt_save_dir}/results.log`.
+The training loss and validation accuracy for each epoch will be saved in `${ckpt_save_dir}/results.log`.

More examples about training and validation can be seen in [examples](https://github.com/mindspore-lab/mindcv/tree/main/examples).

@@ -242,7 +242,7 @@ We provide the following jupyter notebook tutorials to help users learn to use M

## How to Contribute

-We appreciate all kind of contributions including issues and PRs to make MindCV better.
+We appreciate all kinds of contributions including issues and PRs to make MindCV better.

Please refer to [CONTRIBUTING](./notes/contributing.md) for the contributing guideline.
Please follow the [Model Template and Guideline](./how_to_guides/write_a_new_model.md) for contributing a model that fits the overall interface :)
4 changes: 2 additions & 2 deletions docs/en/installation.md
@@ -49,13 +49,13 @@ This will automatically install compatible versions of dependencies:

If you don't have prior experience with Python, we recommend reading
[Using Python's pip to Manage Your Projects' Dependencies], which is a really
-good introduction on the mechanics of Python package management and helps you
+good introduction to the mechanics of Python package management and helps you
troubleshoot if you run into errors.

!!! warning

The above command will **NOT** install [MindSpore].
-We highly recommand you install [MindSpore] following the [official instructions](https://www.mindspore.cn/install).
+We highly recommend you install [MindSpore] following the [official instructions](https://www.mindspore.cn/install).

[Python package]: https://pypi.org/project/mindcv/
[virtual environment]: https://realpython.com/what-is-pip/#using-pip-in-a-python-virtual-environment
30 changes: 15 additions & 15 deletions docs/en/tutorials/configuration.md
@@ -35,7 +35,7 @@ Let's use squeezenet_1.0 model as an example to explain how to configure the cor

4. Corresponding code example

-> `args.mode` represents the parameter `mode`, `args.distribute` represents the parameter `distribute`
+> `args.mode` represents the parameter `mode`, `args.distribute` represents the parameter `distribute`.

```python
def train(args):
    ...  # (body elided in this diff view)
```

@@ -66,9 +66,9 @@

- dataset_download: whether to download the dataset.

-- batch_size: The number of rows each batch.
+- batch_size: The number of rows in each batch.

-- drop_remainder: Determines whether to drop the last block whose data row number is less than batch size.
+- drop_remainder: Determines whether to drop the last block whose data row number is less than the batch size.

- num_parallel_workers: Number of workers(threads) to process the dataset in parallel.
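
For illustration, these fields might appear in a yaml strategy file as below (values are invented; see the tuned files under `configs` for real settings):

```yaml
# dataset loading (illustrative values only)
dataset_download: False
batch_size: 32
drop_remainder: True
num_parallel_workers: 8
```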

@@ -133,7 +133,7 @@ Let's use squeezenet_1.0 model as an example to explain how to configure the cor

1. Parameter description

-- image_resize: the image size after resize for adapting to network.
+- image_resize: the image size after resizing for adapting to the network.

- scale: random resize scale.

@@ -147,7 +147,7 @@ Let's use squeezenet_1.0 model as an example to explain how to configure the cor

- color_jitter: color jitter factor.

-- re_prob: probability of performing erasing.
+- re_prob: the probability of performing erasing.

2. Sample yaml file

@@ -202,21 +202,21 @@ Let's use squeezenet_1.0 model as an example to explain how to configure the cor

1. Parameter description

-- model: model name
+- model: model name.

-- num_classes: number of label classes.
+- num_classes: number of label classes.

-- pretrained: whether load pretrained model
+- pretrained: whether load pretrained model.

-- ckpt_path: initialize model from this checkpoint.
+- ckpt_path: initialize model from this checkpoint.

-- keep_checkpoint_max: max number of checkpoint files
+- keep_checkpoint_max: max number of checkpoint files.

-- ckpt_save_dir: path of checkpoint.
+- ckpt_save_dir: the path of checkpoint.

- epoch_size: train epoch size.

-- dataset_sink_mode: the dataset sink mode
+- dataset_sink_mode: the dataset sink mode.

- amp_level: auto mixed precision level for saving memory and acceleration.
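
An illustrative yaml fragment for these fields (every value, including the model name, is invented):

```yaml
# model and checkpointing (illustrative values only)
model: "squeezenet1_0"    # hypothetical registry name
num_classes: 1000
pretrained: False
ckpt_path: ""
keep_checkpoint_max: 10
ckpt_save_dir: "./ckpt"
epoch_size: 200
dataset_sink_mode: True
amp_level: "O0"
```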

@@ -302,7 +302,7 @@ Let's use squeezenet_1.0 model as an example to explain how to configure the cor

- scheduler: name of scheduler.

-- min_lr: the minimum value of learning rate if scheduler supports.
+- min_lr: the minimum value of learning rate if the scheduler supports.

- lr: learning rate.
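
An illustrative fragment (the scheduler name and values are invented):

```yaml
# learning-rate scheduler (illustrative values only)
scheduler: "cosine_decay"
min_lr: 0.0
lr: 0.01
```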

@@ -352,13 +352,13 @@ Let's use squeezenet_1.0 model as an example to explain how to configure the cor

1. Parameter description

-- opt: name of optimizer
+- opt: name of optimizer.

- filter_bias_and_bn: filter Bias and BatchNorm.

- momentum: Hyperparameter of type float, means momentum for the moving average.

-- weight_decay: weight decay(L2 penalty)。
+- weight_decay: weight decay (L2 penalty).

- loss_scale: gradient scaling factor
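
An illustrative fragment (values are invented):

```yaml
# optimizer (illustrative values only)
opt: "momentum"
filter_bias_and_bn: True
momentum: 0.9
weight_decay: 0.00007
loss_scale: 1024
```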
