Description
Hey, first off: fantastic project. I have tried four other NVR projects and this one ticks the most boxes for simple "smart" cameras, so thanks for that!
TL;DR: I'm trying to use one of my cameras for ALPR, running on a Raspberry Pi 4 with a Coral USB accelerator, and I want to submit a higher-resolution image for processing.
I currently have the following config:
```yaml
ffmpeg:
  camera:
    camera1:
      name: Front PTZ
      host: 192.168.1.64
      port: 554
      path: /Streaming/Channels/101
      substream:
        path: /Streaming/Channels/103
        port: 554
        width: 1280
        height: 720
        fps: 20
        codec: mjpeg
      username: !secret camera_one_username
      password: !secret camera_one_password
      mjpeg_streams:
        objects:
          draw_objects: true
          draw_zones: true
          draw_motion: true
          draw_motion_mask: false
          draw_object_mask: false
      recorder:
        codec: h264

codeprojectai:
  host: 192.168.1.130
  port: 32168
  object_detector:
    cameras:
      camera1:
        fps: 20
        log_all_objects: true
        labels:
          - label: person
            confidence: 0.8
            trigger_recorder: true
          - label: car
            confidence: 0.75
            trigger_recorder: true
          - label: truck
            confidence: 0.75
            trigger_recorder: true
          - label: vehicle
            confidence: 0.75
            trigger_recorder: true
  face_recognition:
    save_unknown_faces: true
    cameras:
      camera1:
        labels:
          - person
  license_plate_recognition:
    cameras:
      camera1:
        labels:
          - vehicle
          - car
          - truck
    known_plates:
      - MYPLATE

mog2:
  motion_detector:
    cameras:
      camera1:
        fps: 20
        mask:
          - coordinates:
              - x: 126
                y: 708
              - x: 6
                y: 709
              - x: 6
                y: 2
              - x: 1279
                y: 4
              - x: 1277
                y: 619
              - x: 823
                y: 554
              - x: 650
                y: 354
              - x: 617
                y: 139
              - x: 499
                y: 138

nvr:
  camera1:

mqtt:
  broker: 192.168.1.130
  port: 1883
  username: !secret mqtt_username
  password: !secret mqtt_password
  home_assistant:
    discovery_prefix: homeassistant
    retain_config: true
```
It is set up this way for a reason:
- There is a finite window to catch the car and plate, hence the higher FPS. It's only about 3-5 seconds in which to get a good photo to use.
My aim is to use a lower-resolution stream for motion detection, but ideally send a higher-resolution snapshot to the object recognition, so that once cropped, the shot of the plate retains more detail. At the moment each detection gets the odd letter or two wrong because of the resolution of the image used.
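The idea boils down to rescaling the detection's bounding box from the substream resolution to the main-stream resolution before cropping the plate for OCR. A minimal sketch, assuming the 1280x720 substream from the config and a hypothetical 2560x1440 main stream (the box coordinates are made up for illustration):

```python
def scale_box(box, src_size, dst_size):
    """Map an (x1, y1, x2, y2) box from src resolution to dst resolution."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x1, y1, x2, y2 = box
    return (round(x1 * sx), round(y1 * sy), round(x2 * sx), round(y2 * sy))

# A plate box detected on the 720p substream, remapped so the crop can be
# taken from a hypothetical 2560x1440 main-stream frame instead:
hi_res_box = scale_box((100, 50, 300, 200), (1280, 720), (2560, 1440))
# -> (200, 100, 600, 400): four times the pixel area for the OCR step
```

The same remapping would apply to the face-recognition crops later on.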
This could work by either taking a frame from the main feed, or by using the snapshot URL that the camera provides for less processing (let the camera grab the single high-resolution frame and just send it on).
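The snapshot-URL route can be tested outside the NVR with a short stdlib-only script. The ISAPI picture path below is an assumption based on the Hikvision-style stream paths in the config, and digest auth is assumed as that is what such cameras typically use; check the camera's documentation for the actual endpoint:

```python
import urllib.request

def snapshot_url(host: str, channel: int = 101) -> str:
    """Build an assumed ISAPI-style snapshot URL for the given channel."""
    return f"http://{host}/ISAPI/Streaming/channels/{channel}/picture"

def fetch_snapshot(host, username, password, timeout=5):
    """Fetch one full-resolution JPEG frame from the camera."""
    # Hikvision-style cameras typically require HTTP digest auth.
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, f"http://{host}/", username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(mgr)
    )
    with opener.open(snapshot_url(host), timeout=timeout) as resp:
        return resp.read()  # JPEG bytes, ready to hand to the detector

if __name__ == "__main__":
    jpeg = fetch_snapshot("192.168.1.64", "user", "pass")
    with open("snapshot.jpg", "wb") as f:
        f.write(jpeg)
```

If the full-resolution JPEG markedly improves the plate reads, that would confirm resolution (not the OCR model) is the bottleneck.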
I'm not sure if this is possible at all, but I'm just trying to optimise that post-processing (I want to do something similar for face recognition once I have this set up).