diff --git a/README.md b/README.md
index 296e5f1949e..5a9c221305c 100644
--- a/README.md
+++ b/README.md
@@ -70,7 +70,7 @@ English | [简体中文](README_zh-CN.md)
 
 MMDetection is an open source object detection toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project.
 
-The main branch works with **PyTorch 1.6+**.
+The main branch works with **PyTorch 1.8+**.
@@ -118,11 +118,11 @@ We are excited to announce our latest work on real-time object recognition tasks
 
-**v3.0.0** was released in 6/4/2023:
+**v3.1.0** was released in 30/6/2023:
 
-- Release MMDetection 3.0.0 official version
-- Support Semi-automatic annotation Base [Label-Studio](projects/LabelStudio) (#10039)
-- Support [EfficientDet](projects/EfficientDet) in projects (#9810)
+- Supports tracking algorithms, including the multi-object tracking (MOT) algorithms SORT, DeepSORT, StrongSORT, OCSORT, ByteTrack, and QDTrack, and the video instance segmentation (VIS) algorithms MaskTrackRCNN and Mask2Former-VIS.
+- Supports inference and evaluation of the multimodal algorithms [GLIP](configs/glip) and [XDecoder](projects/XDecoder), along with datasets such as COCO semantic segmentation, COCO Caption, ADE20k general segmentation, and RefCOCO. GLIP fine-tuning will be supported in the future.
+- Provides a [gradio demo](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/projects/gradio_demo/README.md) for the image tasks of MMDetection, making it easy for users to try them out.
 
 ## Installation
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 4ee964f4b21..3812169f7c7 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -69,7 +69,7 @@
 
 MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [OpenMMLab](https://openmmlab.com/) 项目的一部分。
 
-主分支代码目前支持 PyTorch 1.6 以上的版本。
+主分支代码目前支持 PyTorch 1.8 及其以上的版本。
@@ -117,11 +117,11 @@ MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [Ope
 
-**v3.0.0** 版本已经在 2023.4.6 发布:
+**v3.1.0** 版本已经在 2023.6.30 发布:
 
-- 发布 MMDetection 3.0.0 正式版
-- 基于 [Label-Studio](projects/LabelStudio) 支持半自动标注流程
-- projects 中支持了 [EfficientDet](projects/EfficientDet)
+- 支持 Tracking 类算法，包括多目标跟踪 MOT 算法 SORT、DeepSORT、StrongSORT、OCSORT、ByteTrack、QDTrack 和视频实例分割 VIS 算法 MaskTrackRCNN、Mask2Former-VIS。
+- 支持多模态开放检测算法 [GLIP](configs/glip) 和 [XDecoder](projects/XDecoder) 推理和评估，并同时支持了 COCO 语义分割、COCO Caption、ADE20k 通用分割、RefCOCO 等数据集。后续将支持 GLIP 微调
+- 提供了包括 MMDetection 图片任务的 [gradio demo](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/projects/gradio_demo/README.md)，方便用户快速体验
 
 ## 安装
diff --git a/docker/serve/Dockerfile b/docker/serve/Dockerfile
index 9a6a7784a2f..711a4fc9aae 100644
--- a/docker/serve/Dockerfile
+++ b/docker/serve/Dockerfile
@@ -4,7 +4,7 @@ ARG CUDNN="8"
 
 FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
 
 ARG MMCV="2.0.0rc4"
-ARG MMDET="3.0.0"
+ARG MMDET="3.1.0"
 
 ENV PYTHONUNBUFFERED TRUE
diff --git a/docker/serve_cn/Dockerfile b/docker/serve_cn/Dockerfile
index b1dfb00b869..a1cab644a82 100644
--- a/docker/serve_cn/Dockerfile
+++ b/docker/serve_cn/Dockerfile
@@ -4,7 +4,7 @@ ARG CUDNN="8"
 
 FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
 
 ARG MMCV="2.0.0rc4"
-ARG MMDET="3.0.0"
+ARG MMDET="3.1.0"
 
 ENV PYTHONUNBUFFERED TRUE
diff --git a/docs/en/get_started.md b/docs/en/get_started.md
index dc543ac93ac..c00eb96b76c 100644
--- a/docs/en/get_started.md
+++ b/docs/en/get_started.md
@@ -4,7 +4,7 @@
 
 In this section, we demonstrate how to prepare an environment with PyTorch.
 
-MMDetection works on Linux, Windows, and macOS. It requires Python 3.7+, CUDA 9.2+, and PyTorch 1.6+.
+MMDetection works on Linux, Windows, and macOS. It requires Python 3.7+, CUDA 9.2+, and PyTorch 1.8+.
 
 ```{note}
 If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next section](#installation). Otherwise, you can follow these steps for the preparation.
diff --git a/docs/en/notes/changelog.md b/docs/en/notes/changelog.md
index ded9dc30189..9c12195c0cd 100644
--- a/docs/en/notes/changelog.md
+++ b/docs/en/notes/changelog.md
@@ -1,5 +1,44 @@
 # Changelog of v3.x
 
+## v3.1.0 (30/6/2023)
+
+### Highlights
+
+- Supports tracking algorithms, including the multi-object tracking (MOT) algorithms SORT, DeepSORT, StrongSORT, OCSORT, ByteTrack, and QDTrack, and the video instance segmentation (VIS) algorithms MaskTrackRCNN and Mask2Former-VIS.
+- Supports inference and evaluation of the multimodal algorithms [GLIP](../../../configs/glip) and [XDecoder](../../../projects/XDecoder), along with datasets such as COCO semantic segmentation, COCO Caption, ADE20k general segmentation, and RefCOCO. GLIP fine-tuning will be supported in the future.
+- Provides a [gradio demo](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/projects/gradio_demo/README.md) for the image tasks of MMDetection, making it easy for users to try them out.
+
+### New Features
+
+- Support DSDL Dataset (#9801)
+- Support iSAID dataset (#10028)
+- Support VISION dataset (#10530)
+- Release SoftTeacher checkpoints (#10119)
+- Release `centernet-update_r50-caffe_fpn_ms-1x_coco` checkpoints (#10327)
+- Support SIoULoss (#10290)
+- Support Eqlv2 loss (#10120)
+- Support CopyPaste when masks are not available (#10509)
+- Support using MIM to download ODL datasets (#10460)
+- Support new config (#10566)
+
+### Bug Fixes
+
+- Fix benchmark script errors on Windows (#10128)
+- Fix `YOLOXModeSwitchHook` not switching the mode when training resumes from a checkpoint saved after the switch (#10116)
+- Fix mismatched pred and weight dimensions in SmoothL1Loss (#10423)
+
+### Improvements
+
+- Update MMDet_Tutorial.ipynb (#10081)
+- Support hiding inference progress (#10519)
+- Replace mmcls with mmpretrain (#10545)
+
+### Contributors
+
+A total of 29 developers contributed to this release.
+
+Thanks @lovelykite, @minato-ellie, @freepoet, @wufan-tb, @yalibian, @keyakiluo, @gihanjayatilaka, @i-aki-y, @xin-li-67, @RangeKing, @JingweiZhang12, @MambaWong, @lucianovk, @tall-josh, @xiuqhou, @jamiechoi1995, @YQisme, @yechenzhi, @bjzhb666, @xiexinch, @jamiechoi1995, @yarkable, @Renzhihan, @nijkah, @amaizr, @Lum1104, @zwhus, @Czm369, @hhaAndroid
+
 ## v3.0.0 (6/4/2023)
 
 ### Highlights
diff --git a/docs/en/notes/faq.md b/docs/en/notes/faq.md
index aa473c2f3da..d8205cf555e 100644
--- a/docs/en/notes/faq.md
+++ b/docs/en/notes/faq.md
@@ -47,7 +47,8 @@ Compatible MMDetection, MMEngine, and MMCV versions are shown as below. Please c
 | MMDetection version | MMCV version | MMEngine version |
 | :-----------------: | :---------------------: | :----------------------: |
 | main | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
-| 3.x | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+| 3.1.0 | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+| 3.0.0 | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
 | 3.0.0rc6 | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
 | 3.0.0rc5 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
 | 3.0.0rc4 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
diff --git a/docs/zh_cn/get_started.md b/docs/zh_cn/get_started.md
index 72be5fc3441..52d061ef50f 100644
--- a/docs/zh_cn/get_started.md
+++ b/docs/zh_cn/get_started.md
@@ -4,7 +4,7 @@
 
 本节中，我们将演示如何用 PyTorch 准备一个环境。
 
-MMDetection 支持在 Linux，Windows 和 macOS 上运行。它需要 Python 3.7 以上，CUDA 9.2 以上和 PyTorch 1.6 以上。
+MMDetection 支持在 Linux，Windows 和 macOS 上运行。它需要 Python 3.7 以上，CUDA 9.2 以上和 PyTorch 1.8 及其以上。
 
 ```{note}
 如果你对 PyTorch 有经验并且已经安装了它，你可以直接跳转到[下一小节](#安装流程)。否则，你可以按照下述步骤进行准备。
diff --git a/docs/zh_cn/notes/faq.md b/docs/zh_cn/notes/faq.md
index 7f1333fcd1d..67e2e42968a 100644
--- a/docs/zh_cn/notes/faq.md
+++ b/docs/zh_cn/notes/faq.md
@@ -47,7 +47,8 @@ export DYNAMO_CACHE_SIZE_LIMIT = 4
  | MMDetection 版本 | MMCV 版本 | MMEngine 版本 |
  | :--------------: | :---------------------: | :----------------------: |
  | main | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
- | 3.x | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+ | 3.1.0 | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+ | 3.0.0 | mmcv>=2.0.0, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
  | 3.0.0rc6 | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
  | 3.0.0rc5 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
  | 3.0.0rc4 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
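Taken together, the bumped `ARG MMDET="3.1.0"` in the serve Dockerfiles, the PyTorch 1.8+ requirement in `get_started.md`, and the new 3.1.0 row of the FAQ compatibility table describe the environment this release expects. The sketch below shows one way such an image could be assembled; it is not the repository's actual serve Dockerfile, the `PYTORCH`/`CUDA` base-image arguments are placeholders, and the version ranges are copied from the compatibility table.

```dockerfile
# Sketch only: a minimal image consistent with MMDetection 3.1.0's documented
# requirements (PyTorch 1.8+, mmcv>=2.0.0,<2.1.0, mmengine>=0.7.1,<1.0.0).
ARG PYTORCH="1.13.1"   # placeholder; any pytorch/pytorch tag with PyTorch 1.8+ works
ARG CUDA="11.6"        # placeholder; must match the chosen PyTorch tag
ARG CUDNN="8"

FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel

ARG MMDET="3.1.0"

# openmim resolves a pre-built mmcv wheel matching the PyTorch/CUDA combination above;
# the version ranges mirror the 3.1.0 row of the FAQ compatibility table.
RUN pip install --no-cache-dir -U openmim && \
    mim install "mmengine>=0.7.1,<1.0.0" && \
    mim install "mmcv>=2.0.0,<2.1.0" && \
    pip install --no-cache-dir "mmdet==${MMDET}"
```

Building with `docker build --build-arg MMDET=3.1.0 .` would pin the released version; overriding the build argument would produce an image for a different release.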