A modern PyQt6 GUI for running DeepLabCut-live experiments with real-time pose estimation. The application streams frames from industrial or consumer cameras, performs DLCLive inference, and records high-quality video with synchronized pose data.
- Modern Python Stack: Python 3.10+ compatible codebase with PyQt6 interface
- Multi-Backend Camera Support: OpenCV, GenTL (Harvesters), Aravis, and Basler (pypylon)
- Real-Time Pose Estimation: Live DLCLive inference with configurable models (TensorFlow, PyTorch)
- High-Performance Recording: Hardware-accelerated video encoding via FFmpeg
- Flexible Configuration: Single JSON file for all settings with GUI editing
- Multiple Backends:
  - OpenCV - Universal webcam support
  - GenTL - Industrial cameras via Harvesters (Windows/Linux)
  - Aravis - GenICam/GigE cameras (Linux/macOS)
  - Basler - Basler cameras via pypylon
- Smart Device Detection: Automatic camera enumeration without unnecessary probing
- Camera Controls: Exposure time, gain, frame rate, and ROI cropping
- Live Preview: Real-time camera feed with rotation support (0°, 90°, 180°, 270°)
- Model Support: TensorFlow (base) and PyTorch models
- Processor System: Plugin architecture for custom pose processing
- Auto-Recording: Automatic video recording triggered by processor commands
- Performance Metrics: Real-time FPS, latency, and queue monitoring
- Pose Visualization: Optional overlay of detected keypoints on live feed
- Hardware Encoding: NVENC (NVIDIA GPU) and software codecs (libx264, libx265)
- Configurable Quality: CRF-based quality control
- Multiple Formats: MP4, AVI, MOV containers
- Timestamp Support: Frame-accurate timestamps for synchronization
- Performance Monitoring: Write FPS, buffer status, and dropped frame tracking
- Intuitive Layout: Organized control panels with clear separation of concerns
- Configuration Management: Load/save settings, support for multiple configurations
- Status Indicators: Real-time feedback on camera, inference, and recording status
- Bounding Box Tool: Visual overlay for ROI definition
```bash
pip install deeplabcut-live-gui
```
This installs the core package with OpenCV camera support.
```bash
# Install with gentl support
pip install deeplabcut-live-gui[gentl]
```
- Install camera vendor drivers and SDK
- Ensure GenTL producer (.cti) files are accessible
- Common locations:
  - `C:\Program Files\The Imaging Source Europe GmbH\IC4 GenTL Driver\bin\`
- Check vendor documentation for CTI file location
Not tested.
```bash
# Ubuntu/Debian
sudo apt-get install gir1.2-aravis-0.8 python3-gi

# Fedora
sudo dnf install aravis python3-gobject
```
Not tested.
```bash
brew install aravis
pip install pygobject
```
Not tested.
```bash
# Install Pylon SDK from Basler website
# Then install pypylon
pip install pypylon
```
For NVIDIA GPU encoding (highly recommended for high-resolution/high-FPS recording):
```bash
# Ensure NVIDIA drivers are installed
# FFmpeg with NVENC support will be used automatically
```
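To confirm that your FFmpeg build actually exposes the NVENC encoders, you can list them from a terminal (a quick diagnostic, not something the GUI requires; use `findstr` instead of `grep` on Windows):
```bash
# List NVENC-capable encoders known to FFmpeg (e.g. h264_nvenc, hevc_nvenc)
ffmpeg -hide_banner -encoders | grep -i nvenc
```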
- Launch the GUI:
  ```bash
  dlclivegui
  ```
- Select Camera Backend: Choose from the dropdown (opencv, gentl, aravis, basler)
- Configure Camera: Set FPS, exposure, gain, and other parameters
- Start Preview: Click "Start Preview" to begin camera streaming
- Optional - Load DLC Model: Browse to your exported DLCLive model directory
- Optional - Start Inference: Click "Start pose inference" for real-time tracking
- Optional - Record Video: Configure output path and click "Start recording"
The GUI uses a single JSON configuration file containing all experiment settings:
```json
{
"camera": {
"name": "Camera 0",
"index": 0,
"fps": 60.0,
"backend": "gentl",
"exposure": 10000,
"gain": 5.0,
"crop_x0": 0,
"crop_y0": 0,
"crop_x1": 0,
"crop_y1": 0,
"max_devices": 3,
"properties": {}
},
"dlc": {
"model_path": "/path/to/exported-model",
"model_type": "base",
"additional_options": {
"resize": 0.5,
"processor": "cpu"
}
},
"recording": {
"enabled": true,
"directory": "~/Videos/deeplabcut-live",
"filename": "session.mp4",
"container": "mp4",
"codec": "h264_nvenc",
"crf": 23
},
"bbox": {
"enabled": false,
"x0": 0,
"y0": 0,
"x1": 200,
"y1": 100
}
}
```
- Load: File → Load configuration… (or Ctrl+O)
- Save: File → Save configuration (or Ctrl+S)
- Save As: File → Save configuration as… (or Ctrl+Shift+S)
All GUI fields are automatically synchronized with the configuration file.
| Backend | Platform | Use Case | Auto-Detection |
|---|---|---|---|
| opencv | All | Webcams, simple USB cameras | Basic |
| gentl | Windows, Linux | Industrial cameras via CTI files | Yes |
| aravis | Linux, macOS | GenICam/GigE cameras | Yes |
| basler | All | Basler cameras specifically | Yes |
```json
{
"camera": {
"backend": "opencv",
"index": 0,
"fps": 30.0
}
}
```
Note: Exposure and gain controls are disabled for the OpenCV backend due to limited driver support.
```json
{
"camera": {
"backend": "gentl",
"index": 0,
"fps": 60.0,
"exposure": 15000,
"gain": 8.0,
"properties": {
"cti_file": "C:\\Path\\To\\Producer.cti",
"serial_number": "12345678",
"pixel_format": "Mono8"
}
}
}
```

```json
{
"camera": {
"backend": "aravis",
"index": 0,
"fps": 60.0,
"exposure": 10000,
"gain": 5.0,
"properties": {
"camera_id": "TheImagingSource-12345678",
"pixel_format": "Mono8",
"n_buffers": 10,
"timeout": 2000000
}
}
}
```
See Camera Backend Documentation for detailed setup instructions.
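The Basler backend takes the same top-level keys; a minimal sketch (using only keys already shown above, with backend-specific `properties` left to the Camera Backend documentation):
```json
{
  "camera": {
    "backend": "basler",
    "index": 0,
    "fps": 60.0,
    "exposure": 10000,
    "gain": 5.0,
    "properties": {}
  }
}
```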
The GUI supports both TensorFlow and PyTorch DLCLive models:
- Base (TensorFlow): Original DLC models exported for live inference
- PyTorch: PyTorch-based models (requires PyTorch installation)
Select the model type from the dropdown before starting inference.
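For example, a PyTorch model could be configured like the sketch below, which reuses the `dlc` block from the configuration example above; the exact string expected for `model_type` (written here as `"pytorch"`) should match the value offered in the dropdown:
```json
{
  "dlc": {
    "model_path": "/path/to/exported-pytorch-model",
    "model_type": "pytorch",
    "additional_options": {
      "resize": 0.5
    }
  }
}
```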
The GUI includes a plugin system for custom pose processing:
```python
# Example processor
class MyProcessor:
    def process(self, pose, timestamp):
        # Custom processing logic
        x, y = pose[0, :2]  # First keypoint
        print(f"Position: ({x}, {y})")

    def save(self):
        pass
```
Place processors in `dlclivegui/processors/` and refresh to load them.
See Processor Plugin Documentation for details.
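Building on the two-method interface shown above, a processor can also buffer results and write them out in `save()`. The sketch below is illustrative only: the pose layout (x, y, likelihood per keypoint) and the output path are assumptions, not part of the GUI's API.
```python
import csv

class PoseLogger:
    """Sketch: buffer incoming poses and dump them to CSV when save() is called."""

    def __init__(self, output_path="poses.csv"):
        self.output_path = output_path
        self.rows = []

    def process(self, pose, timestamp):
        # Store the timestamp followed by x, y, likelihood for each keypoint.
        row = [timestamp]
        for keypoint in pose:
            row.extend(keypoint[:3])
        self.rows.append(row)

    def save(self):
        # Flush everything collected so far to disk.
        with open(self.output_path, "w", newline="") as f:
            csv.writer(f).writerows(self.rows)
```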
Enable "Auto-record video on processor command" to automatically start/stop recording based on processor signals. Useful for event-triggered recording in behavioral experiments.
- Use Hardware Encoding: Select the `h264_nvenc` codec for NVIDIA GPUs
- Adjust Buffer Count: Increase buffers for GenTL/Aravis backends, e.g. `"properties": {"n_buffers": 20}`
- Optimize CRF: Lower CRF = higher quality but larger files (default: 23)
- Disable Visualization: Uncheck "Display pose predictions" during recording
- Crop Region: Use cropping to reduce frame size before inference
| FPS Range | Codec | CRF | Buffers | Notes |
|---|---|---|---|---|
| 30-60 | libx264 | 23 | 10 | Standard quality |
| 60-120 | h264_nvenc | 23 | 15 | GPU encoding |
| 120-200 | h264_nvenc | 28 | 20 | Higher compression |
| 200+ | h264_nvenc | 30 | 30 | Max performance |
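As a concrete sketch, a 120 fps GenTL session following the table above might combine these settings (values from the 120-200 row; keys as in the configuration example earlier, everything else left at its defaults):
```json
{
  "camera": {
    "backend": "gentl",
    "fps": 120.0,
    "properties": {"n_buffers": 20}
  },
  "recording": {
    "container": "mp4",
    "codec": "h264_nvenc",
    "crf": 28
  }
}
```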
```
dlclivegui/
├── __init__.py
├── gui.py # Main PyQt6 application
├── config.py # Configuration dataclasses
├── camera_controller.py # Camera capture thread
├── dlc_processor.py # DLCLive inference thread
├── video_recorder.py # Video encoding thread
├── cameras/ # Camera backend modules
│ ├── base.py # Abstract base class
│ ├── factory.py # Backend registry and detection
│ ├── opencv_backend.py
│ ├── gentl_backend.py
│ ├── aravis_backend.py
│ └── basler_backend.py
└── processors/ # Pose processor plugins
    ├── processor_utils.py
    └── dlc_processor_socket.py
```
```bash
# Syntax check
python -m compileall dlclivegui

# Type checking (optional)
mypy dlclivegui
```
- Create new backend inheriting from `CameraBackend`
- Implement required methods: `open()`, `read()`, `close()`
- Optional: Implement `get_device_count()` for smart detection
- Register in `cameras/factory.py`
See Camera Backend Development for detailed instructions.
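For orientation, a minimal backend might look like the sketch below. Only the class name and the method names (`open()`, `read()`, `close()`, `get_device_count()`) come from the steps above; the import path, constructor arguments, and return conventions are assumptions to verify against `cameras/base.py`, and registration still happens in `cameras/factory.py`.
```python
import cv2

from dlclivegui.cameras.base import CameraBackend  # assumed import path


class MyUsbBackend(CameraBackend):
    """Sketch of a custom backend; adapt to the real CameraBackend interface."""

    def __init__(self, index=0, fps=30.0, **properties):
        self.index = index
        self.fps = fps
        self._cap = None

    def open(self):
        # Open the device and apply basic settings.
        self._cap = cv2.VideoCapture(self.index)
        self._cap.set(cv2.CAP_PROP_FPS, self.fps)

    def read(self):
        # Return the latest frame, or None if the grab failed.
        ok, frame = self._cap.read()
        return frame if ok else None

    def close(self):
        if self._cap is not None:
            self._cap.release()
            self._cap = None

    def get_device_count(self):
        # Optional: report how many devices this backend can enumerate.
        return 1
```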
- Camera Support - All camera backends and setup
- Aravis Backend - GenICam camera setup (Linux/macOS)
- Processor Plugins - Custom pose processing
- Installation Guide - Detailed setup instructions
- Timestamp Format - Timestamp synchronization
- Python 3.10+
- 8 GB RAM
- NVIDIA GPU with CUDA support (for DLCLive inference and video encoding)
- USB 3.0 or GigE network (for industrial cameras)
- SSD storage (for high-speed recording)
- Windows 11
This project is licensed under the GNU Lesser General Public License v3.0. See the LICENSE file for more information.
Cite the original DeepLabCut-live paper:
```bibtex
@article{Kane2020,
  title={Real-time, low-latency closed-loop feedback using markerless posture tracking},
  author={Kane, Gary A and Lopes, Gonçalo and Saunders, Jonny L and Mathis, Alexander and Mathis, Mackenzie W},
  journal={eLife},
  year={2020},
  doi={10.7554/eLife.61909}
}
```