Human-Activity-Monitor

Install

Conda with Python 3.10 is recommended.

pip install -r requirements.txt
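
A minimal setup sketch, assuming Conda is already installed (the environment name har is arbitrary):

# Create and activate a Python 3.10 environment (the name "har" is only an example)
conda create -n har python=3.10
conda activate har

# Install the project dependencies
pip install -r requirements.txt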

Docker

Build image

docker build -t har .

Run

# CPU only
docker run -it --net=host har

# With GPU
docker run -it --net=host --gpus all har

# Show video (Linux only); add --gpus all after "docker run" for GPU support
xhost + && \
docker run -it --rm --net=host \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix har && \
xhost -

Run

All configuration options can be found in the configs/run folder.

CLI options:

python3 src/run.py --help

Example:

python3 src/run.py video.path=data/video/abc.mp4 video.speed=2 detector.model.conf=0.5 classifier=false features.heatmap=false features.track_box=false
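
Because the project is configured with Hydra (see the Note section), several values of an option can be swept in one call with Hydra's built-in --multirun flag. This is only a sketch, assuming the same override keys as the example above:

# Sweep two detector confidence thresholds over the same video (Hydra multirun)
python3 src/run.py --multirun video.path=data/video/abc.mp4 detector.model.conf=0.3,0.5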

Train

Setup

  1. Put the dataset in the data folder
  2. Configure the options in the configs/data folder
  3. Use the video_preparation.py or image_preparation.py script in the tools folder to generate the trainable data, as shown below:
# If using image
python3 tools/image_preparation.py auto.data_path=path/to/the/data

# If using video
python3 tools/video_preparation.py auto.data_path=path/to/the/data

Run

Configure the training settings in configs/train.yaml.
CLI options:

python3 src/train.py --help

Training:

python3 src/train.py
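
Since training is also driven by Hydra, settings defined in configs/train.yaml can be overridden from the command line. The key names below (epochs, lr) are hypothetical placeholders and must match the names actually used in configs/train.yaml:

# Override training settings from the CLI (key names are hypothetical examples)
python3 src/train.py epochs=50 lr=0.001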

Note

  • Most of the configurations can be found in the configs folder. train.py contains additional settings that need to be changed directly in the file.
  • See https://hydra.cc/docs/intro/ for configuration and CLI help.