A Hybrid Approach for Tracking Individual Players in Broadcast Match Videos
- Set up a TensorFlow baseline project
- Clone the full TensorFlow object detection repository located at https://github.com/tensorflow/models:
mkdir tensorflow
cd tensorflow
git clone https://github.com/tensorflow/models.git
- Download the Faster-RCNN-Inception-V2 and the SSD-MobileNet-V1 (COCO) models into the `tensorflow/models/research/object_detection/` folder.
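The exact download links depend on the current TensorFlow detection model zoo; the commands below assume the classic 2018_01_28 releases of both models (verify the URLs against the model zoo page before use):

cd tensorflow/models/research/object_detection
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
tar -xzf ssd_mobilenet_v1_coco_2018_01_28.tar.gz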
- Configure HybridTracker
- Clone this repository and move the content of the `object_detection` folder to `tensorflow/models/research/object_detection/`:
git clone git@github.com:UDC-GAC/HybridTracker.git
mv HybridTracker/object_detection/* tensorflow/models/research/object_detection/
- Create a new Anaconda virtual environment and install the dependencies:
conda create -n tensorflow
source activate tensorflow
python -m pip install --upgrade pip
python -m pip install --ignore-installed --upgrade tensorflow-gpu
conda install -c anaconda protobuf
python -m pip install -r requirements.txt
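Since the note below pins this tutorial to TensorFlow 1.13.1, it may be safer to install that exact version instead of the latest one; either way, a quick sanity check of the installed version:

python -m pip install tensorflow-gpu==1.13.1
python -c "import tensorflow as tf; print(tf.__version__)"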
- Compile protobufs and run setup.py
cd tensorflow/models/research
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/slim
protoc object_detection/protos/*.proto --python_out=.
python setup.py build
python setup.py install
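To verify that the Object Detection API was built and is importable, TF1-era versions of the repository ship a test script that can be run from tensorflow/models/research:

python object_detection/builders/model_builder_test.py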
Note: the TensorFlow models repository is under continuous development, which can break the functionality of this brief tutorial. Check the latest updates of the TensorFlow models repository in order to adapt this setup as needed. This tutorial was originally done using TensorFlow 1.13.1 and, more specifically, this GitHub commit of the TensorFlow Object Detection API.
- Create an empty folder called `images` with two sub-folders, `train` and `test` (they can be created as shown below).
- Label your images or divide your already labeled ones into the `train` and `test` folders. A common strategy is to put 80% of them into the `train` directory and the rest into the `test` one. Your labeled data must be placed outside `train` and `test`, directly in `images`.
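For instance, assuming you are working inside tensorflow/models/research/object_detection, the layout can be created with:

mkdir -p images/train images/test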
2.1. If your labeled data is in `.xml` format, it must be translated to `.csv`. Change directory to `object_detection` and execute the following to translate the XML labels to CSV format:

python xml_to_csv.py
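The resulting CSV files contain one row per bounding box. The exact header depends on the xml_to_csv.py version, but scripts of this kind typically emit the columns below (file name and values are illustrative only):

filename,width,height,class,xmin,ymin,xmax,ymax
frame_0001.jpg,1280,720,player,420,188,465,290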
- Modify the `generate_tfrecord.py` file, specifying the classes labeled in your data set (a sketch of this edit is shown after the commands below).
- Execute:

python generate_tfrecord.py --csv_input=images/train_label.csv --image_dir=images/train --output_path=train.record
python generate_tfrecord.py --csv_input=images/test_label.csv --image_dir=images/test --output_path=test.record
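Versions of generate_tfrecord.py used in tutorials of this kind typically define a class_text_to_int function mapping label strings to integer ids; a minimal sketch assuming a single, hypothetical player class (adapt to your own labels):

def class_text_to_int(row_label):
    # Map each annotation label to the integer id declared in labelmap.pbtxt.
    if row_label == 'player':
        return 1
    else:
        return None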
- Create a file called `labelmap.pbtxt` inside the `object_detection/training` folder, specifying the classes we are going to use (a minimal sketch is shown after this list).
- Copy your model configuration files from `object_detection/samples/config` to the `training` directory. In our case, we need one configuration file for the Faster-RCNN model and another one for the SSD model.
- Adapt each model configuration file to our use case. Modify:
  - The number of classes.
  - The `fine_tune_checkpoint` parameter, indicating the path of the downloaded models.
  - The paths to the train and test data, as well as to the `labelmap.pbtxt` file.
  - The number of test images in the `eval_config` field.
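As referenced above, a minimal labelmap.pbtxt sketch assuming a single, hypothetical player class; ids start at 1 and must match the values returned by generate_tfrecord.py:

item {
  id: 1
  name: 'player'
}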
- Train the network. Execute:
python train.py --logtostderr --train_dir=training --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
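Training progress can optionally be monitored with TensorBoard, since checkpoints and event files are written to the training directory:

tensorboard --logdir=training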
- Export the inference graph. Select the newest checkpoint generated in the `object_detection/training` folder and execute:

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph
A `.pb` file is created in the `inference_graph` directory, which we will refer to at inference time.
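For reference, a frozen graph exported this way (named frozen_inference_graph.pb by default) is typically loaded with the standard TF 1.x pattern sketched below; hybrid_tracker.py presumably handles this internally:

import tensorflow as tf

# Load the exported frozen graph into a fresh TF 1.x graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('inference_graph/frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')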
- Run HybridTracker:

python hybrid_tracker.py jump crop_x crop_y faster_rcnn_model_folder ssd_model_folder crop_z path_to_results path_to_video

For example:
python hybrid_tracker.py 6 125 90 inference_graph_fasterrcnn_kross inference_graph_ssd_kross 500 paper_videos/kross_full/2_HQ2 paper_videos/kross_full/2_HQ2/2_HQ2.mp4
If you find this tool helpful, please cite:
Apache-2.0 License
-- Roberto López Castro