This demo shows how to use model pipelining (with multiple Edge TPUs) to process a video with a larger model.
This demo supports two configurations: one for the Dev Board and one for generic platforms (including x86_64, ARM64, and ARMv7). The Dev Board configuration is optimized to run efficiently on that platform; the generic version relies on software rendering and decoding to ensure compatibility.
- Dev Board
- 1 or 2 USB Accelerators
- USB Accelerators should be connected to the Dev Board through a powered USB hub to guarantee a sufficient power supply
For the Dev Board, this demo was tested on Mendel Eagle with the Eel2 release.
- An x86_64, ARM64, or ARMv7 host with multiple Edge TPUs attached (via USB, PCIe, or both).
For generic platforms, this demo was tested on a Linux x86_64 host with USB Accelerators.
To compile the project, you first need to install Docker on your computer.
To build all CPU targets, run:
make DOCKER_TARGETS=demo docker-build
You can also specify which CPU targets to build:
make DOCKER_TARGETS=demo DOCKER_CPUS="k8 aarch64" docker-build
The binary will be in the out/$(ARCH)/demo directory.
To run the demo, copy the test_data directory and the generated multiple_edgetpu_demo
binary to your device, then run:
./multiple_edgetpu_demo --num_segments 3 \
--video_path <your video>
Run ./multiple_edgetpu_demo --help to see all flag options and their
default values.
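Conceptually, the demo splits one model into segments and streams intermediate tensors from one Edge TPU to the next, so all segments work on different frames at once. The idea can be illustrated with a minimal, CPU-only sketch (no Edge TPU or Coral API required; the stage functions here are hypothetical stand-ins for model segments):

```python
import queue
import threading

def run_stage(fn, in_q, out_q):
    """Pull items from in_q, apply this segment's work, push results to out_q.
    A None item is the end-of-stream sentinel and is forwarded downstream."""
    while True:
        item = in_q.get()
        if item is None:
            out_q.put(None)
            return
        out_q.put(fn(item))

def pipeline(frames, stage_fns):
    """Run frames through a chain of stages, one worker thread per stage,
    connected by queues (analogous to tensors flowing between segments)."""
    queues = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [
        threading.Thread(target=run_stage, args=(fn, queues[i], queues[i + 1]))
        for i, fn in enumerate(stage_fns)
    ]
    for t in threads:
        t.start()
    for frame in frames:
        queues[0].put(frame)
    queues[0].put(None)  # signal end of stream
    results = []
    while (r := queues[-1].get()) is not None:
        results.append(r)
    for t in threads:
        t.join()
    return results

# Three toy "segments" standing in for three model pieces.
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(pipeline(range(5), stages))  # [-1, 1, 3, 5, 7]
```

While one stage processes frame N, the previous stage is already working on frame N+1, which is where the throughput gain comes from.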
When analyzing a 720p, 60 FPS video with the inception_v3 model, the system
processes about 20 FPS with 1 Edge TPU, and 50+ FPS with 3 Edge TPUs using
model pipelining. Performance in the generic configuration may vary with
software decoding and rendering capability.
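The speedup follows standard pipeline arithmetic: steady-state throughput is limited by the slowest segment, so splitting a model into roughly equal segments multiplies throughput by the segment count, minus losses from imbalance and transfer overhead. A small sketch using the figures above (the `imbalance` factor is an assumption for illustration, not a measured value):

```python
def pipeline_fps(single_tpu_fps, num_segments, imbalance=1.0):
    """Estimate steady-state FPS of a pipelined model.

    Throughput is bounded by the slowest segment; imbalance >= 1.0 is the
    ratio of the slowest segment's latency to a perfectly even split.
    """
    even_segment_latency = (1.0 / single_tpu_fps) / num_segments
    return 1.0 / (even_segment_latency * imbalance)

print(pipeline_fps(20, 1))  # 20.0 FPS: one Edge TPU, no pipelining
print(pipeline_fps(20, 3))  # 60.0 FPS ideal; ~50+ FPS observed in practice
```

The gap between the ideal 60 FPS and the observed 50+ FPS reflects uneven segment latencies and tensor-transfer overhead between Edge TPUs.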