A professional-grade visual labeling tool for YOLO object detection models, featuring SAHI (Slicing Aided Hyper Inference) support, built with PyQt5.
The YOLO Class Labeling Tool is a comprehensive solution for creating, editing, and managing object detection datasets in YOLO format. It features an intuitive GUI, automatic detection using YOLO/ONNX models with SAHI support, and efficient batch processing capabilities.
Key Highlights:
- Modern PyQt5 interface with gradient designs
- SAHI integration for detecting small objects in large images
- Support for both PyTorch (.pt) and ONNX (.onnx) models
- Real-time visualization of bounding boxes
- Multiple box selection and batch deletion
- Automatic CUDA/CPU device detection
- Class filtering and statistics tracking
```
git clone https://github.com/ildehakale/YOLO_Labelling_Tool.git
cd YOLO_Labelling_Tool
```

Or download and extract the ZIP file.

Install the dependencies:

```
pip install -r requirements.txt
```

On Linux, install the libxcb libraries:

```
sudo apt-get install -y libxcb-xinerama0 libxcb-cursor0 libxkbcommon-x11-0
```

Then uncomment the following lines in main.py:

```python
# Linux platform settings (uncomment if needed)
os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = "/usr/lib/x86_64-linux-gnu/qt5/plugins/platforms"
os.environ["QT_QPA_PLATFORM"] = "xcb"
```

Create a models/ folder and place your YOLO model files (.pt or .onnx) in it:

```
mkdir models
```
Run the application:

```
python main.py
```

Project structure:

```
YOLO_Labelling_Tool/
├── main.py                      # Application entry point
├── application.py               # Main application class
├── requirements.txt             # Python dependencies
├── README.md                    # This file
│
├── config/
│   ├── settings.py              # Application configuration
│   └── embedded_assets.py       # Embedded logo
│
├── models/                      # Place your .pt or .onnx models here
│   └── (your models)
│
├── controllers/
│   └── labeling_controller.py   # Main business logic
│
├── repositories/
│   ├── interfaces.py            # Repository interfaces
│   └── file_repositories.py     # File-based repositories
│
├── services/
│   └── detection/
│       ├── interfaces.py        # Detection interfaces
│       ├── detectors.py         # YOLO/ONNX/SAHI detectors
│       └── detection_service.py # Detection logic
│
├── models/                      # Data models (not ML models)
│   └── base.py                  # BoundingBox, Image classes
│
└── ui/
    ├── components/
    │   ├── base.py              # UI components
    │   └── image_viewer.py      # Image display widget
    │
    └── views/
        ├── main_window.py       # Main window UI
        └── main_window_slots.py # Event handlers
```
Edit config/settings.py to customize the application:
```python
# Allowed classes for detection (COCO format)
ALLOWED_CLASSES = {0, 2}  # 0=person, 2=car

# Class names mapping
class_names = {
    0: "person",
    2: "car",
}

# IoU threshold for duplicate suppression
IOU_THRESHOLD = 0.10

# Containment ratio threshold
CONTAIN_RATIO = 0.99

# Only suppress same-class overlaps
SAME_CLASS_ONLY = True

# Default confidence threshold
default_confidence = 0.2

# Default slice dimensions for SAHI
default_slice_height = 256
default_slice_width = 256

# Window dimensions
default_window_width = 1200
default_window_height = 800

# Colors (RGB)
normal_box_color = (255, 107, 107)       # Red for manual labels
selected_box_color = (66, 153, 225)      # Blue for selected
detector_box_color = (16, 185, 129)      # Green for SAHI detections
detector_selected_color = (255, 193, 7)  # Yellow for selected detections
```
1. Launch the application:

   ```
   python main.py
   ```

2. Select folders:
   - Click "Choose image File" to select your images folder
   - Click "Choose label File" to select your labels folder (created automatically if it does not exist)

3. Load a model (optional, for AI detection):
   - Select a model from the "Class Models" list
   - The model is loaded automatically

4. Manual labeling:
   - Left-click and drag to draw a bounding box
   - Enter the class ID when prompted
   - The box is saved automatically

5. AI-assisted labeling:
   - Adjust "Confidence Threshold", "Slice Height", and "Slice Width"
   - Click "APPLY SAHI" to run detection on the current image
   - Click "APPLY SAHI (50 IMG)" for batch processing
   - Green boxes show the AI detections
   - Click "Save SAHI results" to write them to the label files

6. Navigate:
   - Use the arrow keys or the "Next"/"Prev" buttons
   - Progress is saved automatically
| Action | Description |
|---|---|
| Left Click + Drag | Draw a new bounding box |
| Right Click | Toggle selection of a bounding box (multi-select) |
| Middle Click + Drag | Pan the image |
| Ctrl + Scroll | Zoom in/out at the cursor position |

| Key | Action |
|---|---|
| → (Right Arrow) | Next image |
| ← (Left Arrow) | Previous image |
| Delete | Delete all selected bounding boxes |
- Choose image File - Select the images folder
- Choose label File - Select the labels folder
- APPLY SAHI - Run detection on the current image
- APPLY SAHI (50 IMG) - Batch process 50 images
- Save SAHI results - Save the green boxes to the label files
- Delete Selected - Delete the selected boxes
- Clean all Boxes - Delete all labels from the current image
- Fullscreen - Toggle fullscreen mode
- Prev/Next - Navigate between images
SAHI (Slicing Aided Hyper Inference) helps detect small objects in large images by slicing them into smaller patches.
- Image is divided into overlapping slices
- Each slice is processed by the YOLO model
- Results are merged with NMS (Non-Maximum Suppression)
- Overlapping predictions are filtered
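The slicing step can be sketched as a stand-alone function; this is an illustration of the idea, not the tool's actual implementation (the 20% overlap ratio is an assumed default):

```python
def compute_slices(img_w, img_h, slice_w=256, slice_h=256, overlap=0.2):
    """Return (x0, y0, x1, y1) slice windows covering the image with the given overlap ratio."""
    step_x = max(1, int(slice_w * (1 - overlap)))
    step_y = max(1, int(slice_h * (1 - overlap)))
    windows = []
    y = 0
    while True:
        y1 = min(y + slice_h, img_h)
        x = 0
        while True:
            x1 = min(x + slice_w, img_w)
            windows.append((x, y, x1, y1))
            if x1 >= img_w:
                break
            x += step_x
        if y1 >= img_h:
            break
        y += step_y
    return windows

# A 1024x768 image with the default 256px slices yields 20 overlapping windows.
# Each window is run through the model; per-slice boxes are shifted back by
# (x0, y0) before NMS merges the overlapping predictions.
wins = compute_slices(1024, 768)
```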
Confidence Threshold (0.0 - 1.0):
- Lower values: More detections (may include false positives)
- Higher values: Fewer, more confident detections
- Default: 0.2
Slice Height (100 - 4096 pixels):
- Smaller slices: Better for tiny objects, slower
- Larger slices: Faster processing, may miss small objects
- Default: 256
Slice Width (100 - 4096 pixels):
- Similar to slice height
- Default: 256
- Small objects: Use smaller slices (128-256)
- Large images: Use larger slices (512-1024)
- High accuracy: Lower confidence threshold (0.1-0.2)
- Speed: Larger slices, higher confidence (0.3-0.5)
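To see why slice size dominates processing time, a rough slice-count estimate helps (the 20% overlap ratio here is an assumption for illustration):

```python
import math

def approx_slice_count(img_w, img_h, slice_size, overlap=0.2):
    """Rough number of slices processed for a square slice size."""
    step = slice_size * (1 - overlap)
    return math.ceil(img_w / step) * math.ceil(img_h / step)

# For a 4000x3000 image:
small = approx_slice_count(4000, 3000, 256)   # 300 slices: thorough but slow
large = approx_slice_count(4000, 3000, 1024)  # 20 slices: much faster
```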
Process multiple images at once:
- Click "APPLY SAHI (50 IMG)"
- Confirm the dialog
- Wait for the progress bar to complete
- Review the detections (green boxes)
- Click "Save SAHI results" to save them all
Supported image formats:
- .jpg, .jpeg - JPEG images
- .png - PNG images
Labels are saved as .txt files with the same name as the image:

```
class_id center_x center_y width height
```

Example: labels/image001.txt

```
0 0.716797 0.395833 0.216406 0.147222
2 0.687500 0.379167 0.255469 0.158333
```

Coordinate system:
- All values are normalized (0.0 to 1.0), relative to the image dimensions
- `center_x`, `center_y`: center point of the box
- `width`, `height`: width and height of the box
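Converting one label line back to pixel coordinates follows directly from this format; a minimal sketch (the 1280x720 image size is hypothetical):

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO label line to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    parts = line.split()
    class_id = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:5])
    return (
        class_id,
        (cx - w / 2) * img_w,  # x_min
        (cy - h / 2) * img_h,  # y_min
        (cx + w / 2) * img_w,  # x_max
        (cy + h / 2) * img_h,  # y_max
    )

# First line of the example above, assuming a 1280x720 image:
box = yolo_to_pixels("0 0.716797 0.395833 0.216406 0.147222", 1280, 720)
```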
Supported Models:
- `.pt` - PyTorch YOLO models (Ultralytics)
- `.onnx` - ONNX format models
Model Requirements:
- Must be trained for object detection
- Output should match YOLO format
- Compatible with Ultralytics or standard ONNX detection
Issue: ModuleNotFoundError: No module named 'PyQt5'
Solution:

```
pip install PyQt5==5.15.11
```

Issue: CUDA not available
Solution: This is just a warning; the app will fall back to the CPU. To enable CUDA:
- Install the NVIDIA drivers
- Install CUDA Toolkit 12.x
- Reinstall PyTorch with CUDA support
Issue: Images appear blank
Solutions:
- Check the image file format (must be .jpg, .jpeg, or .png)
- Verify image is not corrupted
- Check file permissions
Issue: No detections appear
Solutions:
- Ensure a model is loaded (select one from the list)
- Lower the confidence threshold
- Check that the model file exists in the models/ folder
- Verify the model format (.pt or .onnx)

Issue: "Model state: Waiting..."
Solution: Select a model from the "Class Models" dropdown
Issue: Slow detection
Solutions:
- Increase slice size (512 or 1024)
- Use GPU if available
- Close other applications
- Use batch processing instead of single-image
Issue: High memory usage
Solutions:
- Increase slice size
- Close unnecessary applications
- Process fewer images in batch mode
- Use CPU instead of GPU (less memory)
Issue: Labels disappear after closing
Solutions:
- Check labels folder write permissions
- Ensure labels folder path is correct
- Check disk space
- Look for error messages in terminal
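A quick way to rule out the permission cause is a small check like this (illustrative helper, not part of the tool):

```python
import os

def check_labels_folder(path):
    """Report common reasons why label files might fail to save."""
    problems = []
    if not os.path.isdir(path):
        problems.append("folder does not exist")
    elif not os.access(path, os.W_OK):
        problems.append("folder is not writable")
    return problems
```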
Select multiple boxes for batch operations:
- Right-click first box (highlights in color)
- Right-click additional boxes (adds to selection)
- Right-click again to deselect
- Press Delete to remove all selected boxes
The "Chosen" counter shows how many boxes are selected.
Filter displayed classes:
- Look at the "Class Filter" panel
- Check/uncheck classes to show/hide
- Filtering is visual only (doesn't delete labels)
Zoom in for precise labeling:
- Hold Ctrl and scroll up/down to zoom
- Zoom is centered at the cursor position
- Middle-click and drag to pan
- Supports 0.5x to 5.0x zoom range
Load different models:
- Add .pt or .onnx files to the models/ folder
- Restart the application (or refresh)
- Select model from dropdown
- Model loads with current slice/confidence settings
The tool automatically prevents duplicate detections:
- SAHI detections overlapping with manual labels are suppressed
- Configurable via IOU_THRESHOLD in settings
- Helps maintain clean datasets
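The suppression logic can be sketched roughly like this, using the IOU_THRESHOLD and CONTAIN_RATIO values from config/settings.py (an illustrative reimplementation, not the tool's actual code):

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def is_duplicate(det, manual, iou_threshold=0.10, contain_ratio=0.99):
    """Suppress a detection that overlaps, or is almost fully inside, a manual box."""
    if iou(det, manual) >= iou_threshold:
        return True
    # Containment check: intersection area relative to the detection's own area
    ix0, iy0 = max(det[0], manual[0]), max(det[1], manual[1])
    ix1, iy1 = min(det[2], manual[2]), min(det[3], manual[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    det_area = (det[2] - det[0]) * (det[3] - det[1])
    return det_area > 0 and inter / det_area >= contain_ratio
```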
```
my_dataset/
├── images/
│   ├── img_0001.jpg
│   ├── img_0002.jpg
│   └── img_0003.png
│
└── labels/
    ├── img_0001.txt
    ├── img_0002.txt
    └── img_0003.txt
```
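A small script like the following (a hypothetical helper, not shipped with the tool) can verify that every image in such a layout has a matching label file:

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def unlabeled_images(images_dir, labels_dir):
    """List image files that have no matching .txt label file."""
    missing = []
    for name in sorted(os.listdir(images_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() in IMAGE_EXTS and not os.path.isfile(
            os.path.join(labels_dir, stem + ".txt")
        ):
            missing.append(name)
    return missing
```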
```
YOLO_Labelling_Tool/
├── models/
│   ├── yolov8n.pt
│   └── my_custom_model.onnx
└── (application files)
```
For issues, questions, or suggestions:
- Open an issue on GitHub
- Check existing issues for solutions
- Read this README thoroughly first
This tool uses SAHI (Slicing Aided Hyper Inference) for improved small object detection. If you use this tool in your research, please consider citing:
SAHI Paper:

```bibtex
@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}
```

SAHI Software:

```bibtex
@software{obss2021sahi,
  author = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
  title = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
  month = nov,
  year = 2021,
  publisher = {Zenodo},
  doi = {10.5281/zenodo.5718950},
  url = {https://doi.org/10.5281/zenodo.5718950}
}
```

SAHI Repository: https://github.com/obss/sahi
