diff --git a/README.md b/README.md
index 7c728c8e..f60e97fa 100644
--- a/README.md
+++ b/README.md
@@ -23,14 +23,16 @@
 [![hfpaper](https://img.shields.io/badge/🤗HugginngFace-Paper-yellow)](https://huggingface.co/papers/2401.17270)
 [![license](https://img.shields.io/badge/License-GPLv3.0-blue)](LICENSE)
 [![yoloworldseg](https://img.shields.io/badge/YOLOWorldxEfficientSAM-🤗Spaces-orange)](https://huggingface.co/spaces/SkalskiP/YOLO-World)
+
 ## Updates
 
+`🔥[2024-2-22]:` We sincerely thank [RoboFlow](https://roboflow.com/) and [@Skalskip92](https://twitter.com/skalskip92) for the [**Video Guide**](https://www.youtube.com/watch?v=X7gKBGVz4vs) about YOLO-World. Nice work!
 `🔥[2024-2-18]:` We thank [@Skalskip92](https://twitter.com/skalskip92) for developing the wonderful segmentation demo by connecting YOLO-World and EfficientSAM. You can try it now at the [🤗 HuggingFace Spaces](https://huggingface.co/spaces/SkalskiP/YOLO-World).
 
-`🔥[2024-2-17]:` The largest model **X** of YOLO-World is released, which achieves better zero-shot performance!
-`🔥[2024-2-17]:` We release the code & models for **YOLO-World-Seg** now! YOLO-World now supports open-vocabulary / zero-shot object segmentation!
+`[2024-2-17]:` The largest model **X** of YOLO-World is released, which achieves better zero-shot performance!
+`[2024-2-17]:` We release the code & models for **YOLO-World-Seg** now! YOLO-World now supports open-vocabulary / zero-shot object segmentation!
 
 `[2024-2-15]:` The pre-trained YOLO-World-L with CC3M-Lite is released!
 `[2024-2-14]:` We provide the [`image_demo`](demo.py) for inference on images or directories.
 `[2024-2-10]:` We provide the [fine-tuning](./docs/finetuning.md) and [data](./docs/data.md) details for fine-tuning YOLO-World on the COCO dataset or custom datasets!
 
@@ -40,7 +42,6 @@
 `[2024-1-31]:` We are excited to launch **YOLO-World**, a cutting-edge real-time open-vocabulary object detector.
 
-
 ## TODO
 
 YOLO-World is under active development, so please stay tuned ☕️!
 