Object Detection
Similar to the previous example, detectnet-camera runs the object detection networks on a live video feed from the Jetson onboard camera. Launch it from the command line along with the type of network desired:
``` bash
$ ./detectnet-camera facenet      # run using facial recognition network
$ ./detectnet-camera multiped     # run using multi-class pedestrian/luggage detector
$ ./detectnet-camera pednet       # run using original single-class pedestrian detector
$ ./detectnet-camera coco-bottle  # detect bottles/soda cans in the camera
$ ./detectnet-camera coco-dog     # detect dogs in the camera
$ ./detectnet-camera              # by default, program will run using multiped
```
note: to achieve maximum performance while running detectnet, increase the Jetson clock limits by running the script:

``` bash
$ sudo ~/jetson_clocks.sh
```
note: by default, the Jetson's onboard CSI camera will be used as the video source. If you wish to use a USB webcam instead, change the `DEFAULT_CAMERA` define at the top of `detectnet-camera.cpp` to reflect the /dev/video V4L2 device of your USB camera and recompile. The webcam model tested is the Logitech C920.
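For example, the define near the top of `detectnet-camera.cpp` looks similar to the sketch below (the exact comment and default value may differ between versions of the source); changing `-1` to the V4L2 index selects the corresponding USB device:

``` cpp
#define DEFAULT_CAMERA -1   // -1 = onboard CSI camera
//#define DEFAULT_CAMERA 0  // 0 = /dev/video0 (USB webcam), 1 = /dev/video1, ...
```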
This is the last step of the Hello AI World tutorial, which covers inferencing on Jetson with TensorRT.
To recap, together we've covered:
- Using image recognition networks to classify images
- Coding your own image recognition program in C++
- Classifying video from a live camera stream
- Performing object detection to locate object coordinates
Next, we encourage you to follow our full Training + Inference tutorial, which also covers the re-training of these networks on custom datasets. This way, you can collect your own data and have the models recognize objects specific to your applications. The full tutorial also covers semantic segmentation, which is like image classification, but on a per-pixel level instead of predicting one class for the entire image. Good luck!
© 2016-2019 NVIDIA | Table of Contents