docs: set long_description to the contents of README.md as the description on PyPI
geniuspatrick committed Jun 2, 2023
1 parent 6901173 commit ef671e9
Showing 10 changed files with 311 additions and 313 deletions.
287 changes: 107 additions & 180 deletions README.md

Large diffs are not rendered by default.

219 changes: 96 additions & 123 deletions README_CN.md

Large diffs are not rendered by default.

93 changes: 92 additions & 1 deletion RELEASE.md
@@ -1,6 +1,97 @@

# Release Note

- 2023/05/30
1. New Models:
- AMP(O2) version of [VGG](configs/vgg)
- [GhostNet](configs/ghostnet)
- AMP(O3) version of [MobileNetV2](configs/mobilenetv2) and [MobileNetV3](configs/mobilenetv3)
- (x,y)_(200,400,600,800)mf of [RegNet](configs/regnet)
- b1g2, b1g4 & b2g4 of [RepVGG](configs/repvgg)
- 0.5 of [MnasNet](configs/mnasnet)
- b3 & b4 of [PVTv2](configs/pvt_v2)
2. New Features:
- 3-Augment, AugMix, TrivialAugmentWide
3. Bug Fixes:
- ViT pooling mode

- 2023/04/28
1. Add some new models, listed as follows:
- [VGG](configs/vgg)
- [DPN](configs/dpn)
- [ResNet v2](configs/resnetv2)
- [MnasNet](configs/mnasnet)
- [MixNet](configs/mixnet)
- [RepVGG](configs/repvgg)
- [ConvNeXt](configs/convnext)
- [Swin Transformer](configs/swintransformer)
- [EdgeNeXt](configs/edgenext)
- [CrossViT](configs/crossvit)
- [XCiT](configs/xcit)
- [CoAT](configs/coat)
- [PiT](configs/pit)
- [PVT v2](configs/pvt_v2)
- [MobileViT](configs/mobilevit)
2. Bug fixes:
- Setting the same random seed for each rank
- Checking if options from yaml config exist in argument parser
- Initializing flag variable as `Tensor` in Optimizer `Adan`

## 0.2.0

- 2023/03/25
1. Update checkpoints for pretrained ResNet for better accuracy
- ResNet18 (from 70.09 to 70.31 @Top1 accuracy)
- ResNet34 (from 73.69 to 74.15 @Top1 accuracy)
- ResNet50 (from 76.64 to 76.69 @Top1 accuracy)
- ResNet101 (from 77.63 to 78.24 @Top1 accuracy)
- ResNet152 (from 78.63 to 78.72 @Top1 accuracy)
2. Rename checkpoint files to follow the naming rule (`{model_scale-sha256sum.ckpt}`) and update download URLs.

- 2023/03/05
1. Add the Lion (EvoLved Sign Momentum) optimizer from the paper https://arxiv.org/abs/2302.06675
- When replacing AdamW with Lion, the learning rate is usually 3-10x smaller and the weight decay 3-10x larger than for AdamW (see the sketch after this list).
2. Add 6 new models with training recipes and pretrained weights for
- [HRNet](configs/hrnet)
- [SENet](configs/senet)
- [GoogLeNet](configs/googlenet)
- [Inception V3](configs/inception_v3)
- [Inception V4](configs/inception_v4)
- [Xception](configs/xception)
3. Support gradient clipping
4. Argument name `use_ema` changed to **`ema`**; add `ema: True` in the yaml config to enable EMA.
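
For the Lion entry above, a minimal sketch of the learning-rate/weight-decay rule of thumb when converting an existing AdamW recipe; the AdamW baseline values below are hypothetical, chosen only to show the scaling, not an official mindcv recipe:

```python
# Rough illustration of the AdamW -> Lion rule of thumb.
# The AdamW baseline values are hypothetical, not an official mindcv recipe.
adamw_lr, adamw_weight_decay = 1e-3, 0.05    # a typical AdamW setting
lion_lr = adamw_lr / 10                      # Lion LR: roughly 3-10x smaller
lion_weight_decay = adamw_weight_decay * 10  # Lion weight decay: roughly 3-10x larger
print(f"lion: lr={lion_lr:g}, weight_decay={lion_weight_decay:g}")
# lion: lr=0.0001, weight_decay=0.5
```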

## 0.1.1

- 2023/01/10
1. MindCV v0.1 released! It can now be installed from PyPI via `pip install mindcv`.
2. Add training recipes and trained weights for googlenet, inception_v3, inception_v4, and xception.

## 0.1.0

- 2022/12/09
1. Support LR warmup for all LR scheduling algorithms, in addition to cosine decay.
2. Add repeated augmentation, which can be enabled by setting `--aug_repeats` to a value larger than 1 (typically 3 or 4); see the sketch after this list.
3. Add EMA.
4. Improve BCE loss to support mixup/cutmix.
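
As referenced in item 2, a toy sketch of what repeated augmentation does at the sampler level; this is illustrative only, not mindcv's implementation behind the `--aug_repeats` option:

```python
# Toy sketch of repeated augmentation: every sample index is drawn aug_repeats
# times per epoch, so a batch contains several independently augmented views
# of the same image. Illustrative only, not mindcv's implementation.
import random

def repeated_aug_indices(num_samples, aug_repeats=3, shuffle=True):
    indices = [i for i in range(num_samples) for _ in range(aug_repeats)]
    if shuffle:
        random.shuffle(indices)
    return indices

print(repeated_aug_indices(4, aug_repeats=3))
# e.g. [2, 0, 3, 0, 1, 2, 3, 1, 0, 2, 3, 1]
```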

- 2022/11/21
1. Add visualization of loss and accuracy curves
2. Support epoch-wise LR warmup with cosine decay (previously step-wise); see the sketch below
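
For item 2, a small sketch of epoch-wise linear warmup followed by cosine decay, where the LR changes once per epoch rather than once per step; the hyperparameter values are placeholders and mindcv's scheduler options may differ:

```python
# Epoch-wise LR schedule: linear warmup, then cosine decay. The LR changes once
# per epoch rather than once per step. Values are placeholders for illustration.
import math

def warmup_cosine_lr(epoch, total_epochs, warmup_epochs=5, base_lr=0.1, min_lr=0.0):
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

print([round(warmup_cosine_lr(e, total_epochs=20), 4) for e in range(20)])
```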

- 2022/11/09
1. Add 7 pretrained ViT models.
2. Add RandAugment augmentation.
3. Fix a CutMix efficiency issue; CutMix and Mixup can now be used together.
4. Fix LR plotting and scheduling bugs.

- 2022/10/12
1. Both BCE and CE loss now support class-weight configuration, label smoothing, and auxiliary logit input (for networks like Inception); a toy sketch of the BCE variant follows below.
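
A toy numpy sketch of a BCE loss that accepts soft (mixup/cutmix-style) targets, per-class weights, and label smoothing; illustrative only, not mindcv's loss implementation, and auxiliary-logit handling is omitted:

```python
# Toy BCE with soft targets, class weights, and label smoothing.
# Not mindcv's implementation; auxiliary logits are omitted for brevity.
import numpy as np

def bce_with_soft_targets(logits, targets, class_weight=None, smoothing=0.0):
    # targets is (N, C) and may be soft, e.g. produced by mixup/cutmix
    if smoothing > 0.0:
        targets = targets * (1.0 - smoothing) + smoothing / targets.shape[1]
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    loss = -(targets * np.log(probs + 1e-12) + (1.0 - targets) * np.log(1.0 - probs + 1e-12))
    if class_weight is not None:
        loss = loss * class_weight  # broadcast per-class weights over the batch
    return loss.mean()

logits = np.array([[2.0, -1.0], [0.5, 0.5]])
targets = np.array([[0.7, 0.3], [0.3, 0.7]])  # soft labels from mixup
print(bce_with_soft_targets(logits, targets, class_weight=np.array([1.0, 2.0]), smoothing=0.1))
```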

## 0.0.1-beta

- 2022/09/13
1. Add Adan optimizer (experimental)

## MindSpore Computer Vision 0.0.1

### Models
4 changes: 2 additions & 2 deletions docs/en/index.md
@@ -85,7 +85,7 @@ Below are a few code snippets for your taste.
<img src="https://user-images.githubusercontent.com/8156835/210049681-89f68b9f-eb44-44e2-b689-4d30c93c6191.jpg" width=360 />
</p>

Classify the dowloaded image with a pretrained SoTA model:
Classify the downloaded image with a pretrained SoTA model:

```pycon
>>> !python infer.py --model=swin_tiny --image_path='./dog.jpg'
@@ -156,7 +156,7 @@ It is easy to train your model on a standard or customized dataset using `train.
[Pynative mode with ms_function](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/combine.html) is a mixed mode that combines flexibility and efficiency in MindSpore. To apply pynative mode with ms_function for training, please run `train_with_func.py`, e.g.,
``` shell
```shell
python train_with_func.py --model=resnet50 --dataset=cifar10 --dataset_download --epoch_size=10
```
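
For reference, a minimal sketch of what mixing PyNative execution with `ms_function` looks like; it assumes the MindSpore 1.x API, and the decorated function is a placeholder computation, unrelated to mindcv's `train_with_func.py`:

```python
# Minimal PyNative + ms_function sketch: the decorated function is compiled into
# a graph, while the rest of the program keeps running in PyNative mode.
# Assumes MindSpore 1.x; placeholder computation, not mindcv code.
import numpy as np
import mindspore as ms
from mindspore import ms_function

ms.set_context(mode=ms.PYNATIVE_MODE)

@ms_function
def fused_scale_add(x, y, alpha):
    return alpha * x + y

x = ms.Tensor(np.ones((2, 3)), ms.float32)
y = ms.Tensor(np.zeros((2, 3)), ms.float32)
print(fused_scale_add(x, y, 2.0))
```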
5 changes: 2 additions & 3 deletions docs/zh/index.md
@@ -135,7 +135,6 @@ MindCV is developed based on [MindSpore](https://www.mindspore.cn/), dedicated to
For the specific parameter configurations and a detailed summary of accuracy and performance, please see the [`configs`](https://github.com/mindspore-lab/mindcv/tree/main/configs) folder.
You can easily apply these training strategies to your own model training to improve performance (simply reuse or modify the corresponding yaml files).


- Training on the ModelArts/OpenI platforms

To train on the [ModelArts](https://www.huaweicloud.com/intl/en-us/product/modelarts.html) or [OpenI](https://openi.pcl.ac.cn/) cloud platforms, you need to perform the following steps:
@@ -177,7 +176,7 @@ python validate.py --model=resnet50 --dataset=imagenet --data_dir=/path/to/data

```shell
python train.py --model=resnet50 --dataset=cifar10 \
    --val_while_train --val_split=test --val_interval=1
```

The training loss and test accuracy of each epoch will be saved in `{ckpt_save_dir}/results.log`.
@@ -206,7 +205,7 @@ python validate.py --model=resnet50 --dataset=imagenet --data_dir=/path/to/data
* [Repeated Augmentation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Hoffer_Augment_Your_Batch_Improving_Generalization_Through_Instance_Repetition_CVPR_2020_paper.pdf)
* RandErasing (Cutout)
* CutMix
* Mixup
* MixUp
* RandomResizeCrop
* Color Jitter, Flip, etc
* Optimizers
File renamed without changes.
2 changes: 1 addition & 1 deletion infer.py
@@ -47,7 +47,7 @@ def main():
logits = nn.Softmax()(network(ms.Tensor(img)))[0].asnumpy()
preds = np.argsort(logits)[::-1][:5]
probs = logits[preds]
with open("./tutorials/imagenet1000_clsidx_to_labels.txt", encoding="utf-8") as f:
with open("./examples/data/imagenet1000_clsidx_to_labels.txt", encoding="utf-8") as f:
idx2label = ast.literal_eval(f.read())
# print(f"Predict result of {args.image_path}:")
cls_prob = {}
13 changes: 11 additions & 2 deletions setup.py
@@ -1,19 +1,28 @@
#!/usr/bin/env python

from pathlib import Path

from setuptools import find_packages, setup

# read the contents of README file
this_directory = Path(__file__).parent
long_description = (this_directory / "README.md").read_text()

# read the `__version__` global variable in `version.py`
exec(open("mindcv/version.py").read())

setup(
name="mindcv",
author="MindSpore Ecosystem",
author_email="mindspore-ecosystem@example.com",
author="MindSpore Lab",
author_email="mindspore-lab@example.com",
url="https://github.com/mindspore-lab/mindcv",
project_urls={
"Sources": "https://github.com/mindspore-lab/mindcv",
"Issue Tracker": "https://github.com/mindspore-lab/mindcv/issues",
},
description="A toolbox of vision models and algorithms based on MindSpore.",
long_description=long_description,
long_description_content_type="text/markdown",
license="Apache Software License 2.0",
include_package_data=True,
packages=find_packages(include=["mindcv", "mindcv.*"]),
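
As a side note on the `long_description` change above, one way to verify locally that the README renders correctly as the PyPI description is to build the distribution and run `twine check` on it; this assumes the third-party `build` and `twine` packages are installed and is not part of the commit:

```python
# Build the distribution and let `twine check` validate that long_description
# (now the README.md contents) renders as valid PyPI markup.
# Assumes `pip install build twine` has been run; not part of this commit.
import glob
import subprocess

subprocess.run(["python", "-m", "build"], check=True)
subprocess.run(["twine", "check", *glob.glob("dist/*")], check=True)
```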
1 change: 0 additions & 1 deletion tutorials/README.md

This file was deleted.

Binary file removed tutorials/data/test/dog/dog.jpg
Binary file not shown.
