This is a fork of the CVAT project, customised with some scripts to deploy it alongside a Nextcloud service.
The Nextcloud service allows image upload by ICAREWOUNDS partners. The Gradiant maintainers of the ICAREWOUNDS project then upload those images to CVAT so that the partners can annotate them.
Relevant files are:
- docker-compose.nextcloud.yml: used to deploy Nextcloud and connect it to the Traefik service deployed by CVAT.
- .nextcloud_example.env: this env file contains example configuration used by both the Nextcloud and the MySQL services. It should be renamed ".nextcloud.env" and modified with the final deployment variables.
- copy_files_to_fujin.sh: this script copies the relevant files via scp to the OVH VPS where the services are deployed.
- deploy_fujin.sh: this script sets the environment variables and runs docker compose with the relevant compose files to deploy all the services.
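The flow implied by the files above can be sketched as follows. This is a hedged outline, not the scripts' actual contents: the hostname, user, and target path are placeholders, and the exact compose files passed by deploy_fujin.sh may differ.

```shell
# 1. Prepare the environment file from the example template,
#    then edit it with the final deployment variables.
cp .nextcloud_example.env .nextcloud.env

# 2. Copy the relevant files to the OVH VPS via scp
#    (what copy_files_to_fujin.sh does; host and path are placeholders).
scp docker-compose.nextcloud.yml .nextcloud.env user@vps.example.com:~/cvat/

# 3. On the VPS, bring up CVAT together with the Nextcloud service
#    (what deploy_fujin.sh does with the relevant compose files).
docker compose -f docker-compose.yml -f docker-compose.nextcloud.yml up -d
```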
CVAT is an interactive video and image annotation tool for computer vision. It is used by tens of thousands of users and companies around the world. Our mission is to help developers, companies, and organizations around the world to solve real problems using the Data-centric AI approach.
Start using CVAT online: cvat.ai. You can use it for free, or subscribe to get unlimited data, organizations, autoannotations, and Roboflow and HuggingFace integration.
Or set CVAT up as a self-hosted solution: Self-hosted Installation Guide. We provide Enterprise support for self-hosted installations with premium features: SSO, LDAP, Roboflow and HuggingFace integrations, and advanced analytics (coming soon). We also offer training and dedicated support with a 24-hour SLA.
- Installation guide
- Manual
- Contributing
- Datumaro dataset framework
- Server API
- Python SDK
- Command line tool
- XML annotation format
- AWS Deployment Guide
- Frequently asked questions
- Where to ask questions
CVAT is used by teams all over the world. The list below includes key companies that help us support the product or are an essential part of our ecosystem. If you use us, please drop us a line at contact@cvat.ai.
- Human Protocol uses CVAT as a way of adding annotation service to the Human Protocol.
- FiftyOne is an open-source dataset curation and model analysis tool for visualizing, exploring, and improving computer vision datasets and models; it is tightly integrated with CVAT for annotation and label refinement.
- ATLANTIS, an open-source dataset for semantic segmentation of waterbody images, developed by the iWERS group in the Department of Civil and Environmental Engineering at the University of South Carolina, is using CVAT.
For developing a semantic segmentation dataset using CVAT, see:
CVAT online: cvat.ai
This is an online version of CVAT. It's free, efficient, and easy to use.
cvat.ai runs the latest version of the tool. You can create up to 10 tasks there and upload up to 500 MB of data to annotate. Your data will only be visible to you or the people you assign to it.
For now, it does not have analytics features such as managing and monitoring a data annotation team. It also does not allow exporting images, only the annotations.
We plan to enhance cvat.ai with new powerful features. Stay tuned!
Prebuilt docker images are the easiest way to start using CVAT locally. They are available on Docker Hub:
The images have been downloaded more than 1M times so far.
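A minimal local quickstart with the prebuilt images looks like the following. This is a sketch of the standard CVAT Docker Compose setup; see the Self-hosted Installation Guide for the authoritative steps.

```shell
# Clone the repository and start all services with Docker Compose.
git clone https://github.com/cvat-ai/cvat
cd cvat
docker compose up -d

# Create an admin account inside the running server container.
docker exec -it cvat_server bash -ic 'python3 ~/manage.py createsuperuser'

# CVAT is then available at http://localhost:8080
```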
Here are some screencasts showing how to use CVAT.
Computer Vision Annotation Course: we introduce our course series designed to help you annotate data faster and better using CVAT. This course is about CVAT deployment and integrations; it includes presentations and covers the following topics:
- Speeding up your data annotation process: introduction to CVAT and Datumaro. What problems do CVAT and Datumaro solve, and how they can speed up your model training process. Some resources you can use to learn more about how to use them.
- Deploying and using CVAT. Use the app online at app.cvat.ai, run a containerized local deployment with Docker Compose (for regular use), or a local cluster deployment with Kubernetes (for enterprise users). Includes a 2-minute tour of the interface, a breakdown of CVAT's internals, and a demonstration of how to deploy CVAT using Docker Compose.
Product tour: in this course, we show how to use CVAT, and help to get familiar with CVAT functionality and interfaces. This course does not cover integrations and is dedicated solely to CVAT. It covers the following topics:
- Pipeline. In this video, we show how to use app.cvat.ai: how to sign up, upload your data, annotate it, and download it.
For feedback, please see Contact us
- Install with `pip install cvat-sdk`
- PyPI package homepage
- Documentation
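A short sketch of using the Python SDK, assuming a CVAT instance is reachable; the host and credentials below are placeholders, and the exact client API may differ between SDK versions (check the SDK documentation).

```python
from cvat_sdk import make_client

# Connect to a CVAT instance (host and credentials are placeholders).
with make_client(host="http://localhost:8080",
                 credentials=("user", "password")) as client:
    # List the tasks visible to this user.
    for task in client.tasks.list():
        print(task.id, task.name)
```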
- Install with `pip install cvat-cli`
- PyPI package homepage
- Documentation
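A hedged sketch of typical CLI usage against a running instance; the host, credentials, label, and file names are placeholders, and subcommand names differ between CLI versions (newer releases group these commands, e.g. under `cvat-cli task ...`), so check `cvat-cli --help` for your version.

```shell
# Create a task from local images on a CVAT instance
# (server, credentials, label, and files are illustrative placeholders).
cvat-cli --server-host localhost --server-port 8080 --auth user:password \
    create "my-task" --labels '[{"name": "wound"}]' local image1.jpg image2.jpg

# List existing tasks.
cvat-cli --server-host localhost --server-port 8080 --auth user:password ls
```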
CVAT supports multiple annotation formats. You can select the format after clicking the Upload annotation and Dump annotation buttons. Datumaro dataset framework allows additional dataset transformations with its command line tool and Python library.
For more information about the supported formats, see: Annotation Formats.
Annotation format | Import | Export |
---|---|---|
CVAT for images | ✔️ | ✔️ |
CVAT for video | ✔️ | ✔️ |
Datumaro | ✔️ | ✔️ |
PASCAL VOC | ✔️ | ✔️ |
Segmentation masks from PASCAL VOC | ✔️ | ✔️ |
YOLO | ✔️ | ✔️ |
MS COCO Object Detection | ✔️ | ✔️ |
MS COCO Keypoints Detection | ✔️ | ✔️ |
MOT | ✔️ | ✔️ |
MOTS PNG | ✔️ | ✔️ |
LabelMe 3.0 | ✔️ | ✔️ |
ImageNet | ✔️ | ✔️ |
CamVid | ✔️ | ✔️ |
WIDER Face | ✔️ | ✔️ |
VGGFace2 | ✔️ | ✔️ |
Market-1501 | ✔️ | ✔️ |
ICDAR13/15 | ✔️ | ✔️ |
Open Images V6 | ✔️ | ✔️ |
Cityscapes | ✔️ | ✔️ |
KITTI | ✔️ | ✔️ |
Kitti Raw Format | ✔️ | ✔️ |
LFW | ✔️ | ✔️ |
Supervisely Point Cloud Format | ✔️ | ✔️ |
YOLOv8 Detection | ✔️ | ✔️ |
YOLOv8 Oriented Bounding Boxes | ✔️ | ✔️ |
YOLOv8 Segmentation | ✔️ | ✔️ |
YOLOv8 Pose | ✔️ | ✔️ |
YOLOv8 Classification | ✔️ | ✔️ |
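As an example of a Datumaro transformation outside CVAT, its CLI can convert a dataset between two of the formats above. The paths below are placeholders; check `datum convert --help` for the exact flags in your Datumaro version.

```shell
# Convert an MS COCO detection dataset to Pascal VOC with the Datumaro CLI.
datum convert -if coco -i ./coco_dataset -f voc -o ./voc_dataset
```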
CVAT supports automatic labeling. It can speed up the annotation process up to 10x. Here is a list of the algorithms we support, and the platforms they can be run on:
Name | Type | Framework | CPU | GPU |
---|---|---|---|---|
Segment Anything | interactor | PyTorch | ✔️ | ✔️ |
Deep Extreme Cut | interactor | OpenVINO | ✔️ | |
Faster RCNN | detector | OpenVINO | ✔️ | |
Mask RCNN | detector | OpenVINO | ✔️ | |
YOLO v3 | detector | OpenVINO | ✔️ | |
YOLO v7 | detector | ONNX | ✔️ | ✔️ |
Object reidentification | reid | OpenVINO | ✔️ | |
Semantic segmentation for ADAS | detector | OpenVINO | ✔️ | |
Text detection v4 | detector | OpenVINO | ✔️ | |
SiamMask | tracker | PyTorch | ✔️ | ✔️ |
TransT | tracker | PyTorch | ✔️ | ✔️ |
f-BRS | interactor | PyTorch | ✔️ | |
HRNet | interactor | PyTorch | ✔️ | |
Inside-Outside Guidance | interactor | PyTorch | ✔️ | |
Faster RCNN | detector | TensorFlow | ✔️ | ✔️ |
Mask RCNN | detector | TensorFlow | ✔️ | ✔️ |
RetinaNet | detector | PyTorch | ✔️ | ✔️ |
Face Detection | detector | OpenVINO | ✔️ | |
The code is released under the MIT License.
The code contained within the /serverless directory is released under the MIT License.
However, it may download and utilize various assets, such as source code, architectures, and weights, among others.
These assets may be distributed under different licenses, including non-commercial licenses.
It is your responsibility to ensure compliance with the terms of these licenses before using the assets.
This software uses LGPL-licensed libraries from the FFmpeg project. The exact steps on how FFmpeg was configured and compiled can be found in the Dockerfile.
FFmpeg is an open-source framework licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. CVAT.ai Corporation is not responsible for obtaining any such licenses, nor liable for any licensing fees due in connection with your use of FFmpeg.
Gitter to ask CVAT usage-related questions. Questions are typically answered quickly by the core team or the community. You can also browse other common questions there.
Discord is the place to ask questions or discuss anything else related to CVAT.
LinkedIn for the company and work-related questions.
YouTube to see screencasts and tutorials about CVAT.
GitHub issues for feature requests or bug reports. If it's a bug, please add the steps to reproduce it.
#cvat tag on StackOverflow is one more way to ask questions and get our support.
contact@cvat.ai to reach out to us if you need commercial support.
- Intel AI blog: New Computer Vision Tool Accelerates Annotation of Digital Images and Video
- Intel Software: Computer Vision Annotation Tool: A Universal Approach to Data Annotation
- VentureBeat: Intel open-sources CVAT, a toolkit for data labeling
- How to auto-label data in CVAT with one of 50,000+ models on Roboflow Universe