The effects of human recreation on wildlife may vary depending on the type of road and trail use that is occurring (e.g. see Naidoo and Burton, 2020). Classifying horses into finer sub-classes (e.g., packhorse, free-ranging horse, saddlehorse) facilitates the study of how different types of recreational activities impact the distribution and abundance of wildlife.
This repo hosts the training and inference code for a PyTorch model that classifies horses cropped from camera trap images (typically cropped with MegaDetector) into the following categories:
- packhorse
- horserider (despite the category name, this category refers to the horse, not the rider; a saddled horse with no rider should still be put into this category)
- horse (i.e., free-ranging horse)
Sample images are provided in the sample images section below.
This classifier is typically used in an ensemble with SpeciesNet; i.e., this classifier is typically only run on crops that SpeciesNet classifies as "domestic horse".
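The ensemble step amounts to a simple filter over SpeciesNet predictions. A minimal sketch, assuming the predictions are available as a dict mapping crop filenames to top class names (the dict shape and the `select_horse_crops` name are illustrative, not the actual SpeciesNet output format):

```python
# Sketch of the ensemble step: only run this classifier on crops that
# SpeciesNet labeled "domestic horse". The prediction-dict shape here is
# hypothetical, not the actual SpeciesNet output format.
def select_horse_crops(speciesnet_predictions, target_class="domestic horse"):
    """Return the crop filenames whose SpeciesNet top class is [target_class]."""
    return [fn for fn, pred in speciesnet_predictions.items()
            if pred == target_class]

predictions = {
    "crop_001.jpg": "domestic horse",
    "crop_002.jpg": "deer",
    "crop_003.jpg": "domestic horse",
}
horse_crops = select_horse_crops(predictions)
# horse_crops == ["crop_001.jpg", "crop_003.jpg"]
```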
The current release is fine-tuned from the timm/eva02_large_patch14_448.mim_m38m_ft_in22k_in1k base model.
The training data for this model consists of ~50k horses cropped from images collected from 47 camera locations in British Columbia. Training data was provided by Robin Naidoo, World Wildlife Fund.
Download the model zipfile from the releases page and extract locally. It contains a checkpoint file (camera-trap-horse-classifier.2025.08.02.ckpt) and the class list file (classes.txt).
The inference script in this repo assumes that you have created a folder with cropped images; I typically do that with the create_crop_folder module in the MegaDetector Python package.
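If you crop images yourself rather than using create_crop_folder, note that boxes in the MegaDetector output format are normalized `[x_min, y_min, width, height]` relative to image dimensions. A minimal sketch of the coordinate conversion (the function name is illustrative, not part of any package):

```python
def bbox_to_pixel_crop(bbox, image_width, image_height):
    """Convert a normalized MegaDetector box [x_min, y_min, width, height]
    to integer pixel coordinates (left, top, right, bottom), e.g. suitable
    for PIL's Image.crop(). Function name is illustrative only."""
    x, y, w, h = bbox
    left = round(x * image_width)
    top = round(y * image_height)
    right = round((x + w) * image_width)
    bottom = round((y + h) * image_height)
    return (left, top, right, bottom)

# A detection covering the center half of a 1000x800 image:
print(bbox_to_pixel_crop([0.25, 0.25, 0.5, 0.5], 1000, 800))
# (250, 200, 750, 600)
```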
Clone the repo, e.g. to c:\git\camera-trap-horse-classifier:
mkdir c:\git
cd c:\git
git clone https://github.com/agentmorris/camera-trap-horse-classifier
cd camera-trap-horse-classifier

Create a Python environment and install dependencies, e.g. with Anaconda:
conda create -n camera-trap-horse-classifier python=3.11 pip -y
conda activate camera-trap-horse-classifier
pip install -r requirements.txt

If you are on Windows and you have a GPU, you may have to also install the GPU version of PyTorch:
pip install torch torchvision --upgrade --force-reinstall --index-url https://download.pytorch.org/whl/cu118
python run_horse_classifier.py [checkpoint_path] [image_dir] --output [output_file] --classes [class_name_file]
...where:
- `checkpoint_path` is the path to the .ckpt file you extracted from the zipfile
- `image_dir` is your image folder (this will be processed recursively)
- `output_file` is the .json file to which you want to write results
- `class_name_file` is the location of the classes.txt file you extracted from the zipfile
PyTorch Lightning script for fine-tuning vision models for horse classification.
Supports timm and Hugging Face models.
Input data is provided as:
- A root image path
- A COCO .json file containing relative filenames within that root path, with a "split" field in each image set to either "train" or "val"
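For reference, a minimal sketch of the expected COCO .json structure with the per-image "split" field; the filenames and category IDs here are illustrative:

```python
import json

# Minimal illustrative COCO .json for train_horse_classifier.py: relative
# filenames plus a per-image "split" field. Filenames and IDs are made up.
coco = {
    "images": [
        {"id": 1, "file_name": "cam01/img_0001.jpg", "split": "train"},
        {"id": 2, "file_name": "cam02/img_0002.jpg", "split": "val"},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 0},
        {"id": 2, "image_id": 2, "category_id": 2},
    ],
    "categories": [
        {"id": 0, "name": "packhorse"},
        {"id": 1, "name": "horserider"},
        {"id": 2, "name": "horse"},
    ],
}

with open("horse_dataset.json", "w") as f:
    json.dump(coco, f, indent=1)
```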
Inference script for PyTorch Lightning models trained with train_horse_classifier.py. Runs inference on a folder, producing a .json file in the MegaDetector batch output format (https://lila.science/megadetector-output-format).
Relies on train_horse_classifier.py for core classes.
These images are included here to capture the gestalt of what this classifier is trained on. The classifier is trained only on the cropped horses, not on the entire images. These are visualizations of the classifier output on the original images. Image credit Robin Naidoo, World Wildlife Fund.