Video processing in inference server #679
Conversation
…/video_processing_in_inference_server
I think we can treat it as a separate feature for Workflows / we don't need to port all stream management features until we need them. E.g. I'm not sure we need pause/resume explicitly (the benefit is really just not having to create / configure the stream again, which is ideally done as part of the workflow spec now anyway?)
Reasons for doing it now would be if it required a different architecture for how we process / start / manage streams, but I don't think that's the case. Another thought: could re-streaming be a separate sink / stateful block that creates a stream?
Can we do it the same way we do for inference / workflow endpoints? For dedicated deployments I think we have a check that the API key matches the owner of the deployment. For local / user-managed deployments we can allow the requests but authenticate on model access / other API calls that need API keys.
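A minimal sketch of that rule, in case it helps the discussion — every name here (`DEPLOYMENT_MODE`, `get_deployment_owner`, `get_account_for_key`) is a hypothetical stand-in, not an existing inference API:

```python
# Hypothetical auth rule for stream-management requests. Stubs stand in for
# whatever the server already knows about its deployment and API keys.
DEPLOYMENT_MODE = "dedicated"  # or "local" for user-managed servers


def get_deployment_owner() -> str:
    return "workspace-a"  # stub: owner recorded when the deployment was created


def get_account_for_key(api_key: str) -> str:
    return "workspace-a"  # stub: account resolved from the API key


def authorize_stream_request(api_key: str) -> None:
    if DEPLOYMENT_MODE == "dedicated":
        # Dedicated deployments: the key must belong to the deployment owner.
        if get_account_for_key(api_key) != get_deployment_owner():
            raise PermissionError("API key does not match deployment owner")
    # Local / user-managed: accept the request here; model-access and other
    # API-key-backed calls downstream still enforce auth.
```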
I think we need:
64f2525
Description
The goal of this feature is to bring video processing capabilities into the inference server - long story short, Workflows should run against videos without additional scripts needed.

State of the work:
🟢 Old enterprise stream management components copied and adjusted to process workflows
🟢 Basic endpoints to manage stream states enabled (initialise, list, get state, consume, pause, resume, terminate) - see the usage sketch after this list
🟢 Basic test coverage
🔴 Full support for old enterprise features (old stream management was running InferencePipeline without Workflows)
🔴 True integration tests
🔴 Functionality to start video processing on container start-up
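For reference, a hedged sketch of how a client could exercise the endpoints above — the routes and payload fields below are inferred from the listed operations and may not match the final paths in this PR:

```python
# Hypothetical client calls for the stream-management endpoints; route names
# and payload fields are assumptions inferred from the operation list above.
import requests

BASE = "http://localhost:9001"  # assumed local inference server address

# Placeholder workflow definition - the real spec comes from the Workflow.
WORKFLOW_SPEC = {"version": "1.0", "steps": [], "outputs": []}

# initialise: start processing a video source with a workflow
resp = requests.post(
    f"{BASE}/inference_pipelines/initialise",
    json={
        "video_reference": "rtsp://camera.local/stream",  # hypothetical source
        "workflow_specification": WORKFLOW_SPEC,
    },
)
pipeline_id = resp.json()["pipeline_id"]

requests.get(f"{BASE}/inference_pipelines/list")                      # list
requests.get(f"{BASE}/inference_pipelines/{pipeline_id}/status")      # get state
requests.get(f"{BASE}/inference_pipelines/{pipeline_id}/consume")     # consume results
requests.post(f"{BASE}/inference_pipelines/{pipeline_id}/pause")      # pause
requests.post(f"{BASE}/inference_pipelines/{pipeline_id}/resume")     # resume
requests.post(f"{BASE}/inference_pipelines/{pipeline_id}/terminate")  # terminate
```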
Issues spotted:
Performance
The same workflow was tested, reporting only the latency of single-frame processing inside the WorkflowRunner.run_workflow(...) function; yolov8n-640 was the model used in the test case. We also have Docker overhead - not 100% sure if it is visible on Jetson devices, but on MacBook it causes a drop from 27 fps to <10 fps 😢
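For context, a minimal sketch of how such per-frame latency numbers can be collected — the runner construction and the exact run_workflow(...) signature are specific to this PR, so the callable is left to the caller:

```python
# Generic single-frame latency benchmark; pass in a callable that wraps
# WorkflowRunner.run_workflow(...) for the workflow under test.
import statistics
import time


def benchmark(process_frame, frames) -> None:
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame)  # e.g. lambda f: runner.run_workflow(...)
        latencies.append(time.perf_counter() - start)
    median = statistics.median(latencies)
    print(f"median: {median * 1000:.1f} ms/frame (~{1 / median:.0f} fps)")
```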
Passing a localhost camera to Docker
On MacBook it is very hard (requires tons of configuration - https://medium.com/@jijupax/connect-the-webcam-to-docker-on-mac-or-windows-51d894c44468) to pass the device camera to the container, which would be required for nice demos without UI streaming into the container. @grzegorz-roboflow suggested passing frames through a Unix socket, which seems feasible - please clarify if I should allocate time to implement that.
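To make the suggestion concrete, a rough sketch of the Unix-socket idea — the socket path and the length-prefixed JPEG framing are assumptions, not anything already in this PR (the socket file would be bind-mounted into the container):

```python
# Host side captures webcam frames with OpenCV and pushes them, length-prefixed,
# over a Unix domain socket shared with the container via a volume mount.
import socket
import struct

import cv2
import numpy as np

SOCKET_PATH = "/tmp/frames.sock"  # assumed bind-mounted path


def send_frames() -> None:
    """Host side: stream JPEG-encoded camera frames into the socket."""
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(SOCKET_PATH)
    capture = cv2.VideoCapture(0)  # default MacBook camera
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            _, encoded = cv2.imencode(".jpg", frame)  # JPEG keeps payloads small
            payload = encoded.tobytes()
            # 4-byte big-endian length prefix lets the reader re-frame the stream
            client.sendall(struct.pack(">I", len(payload)) + payload)
    finally:
        capture.release()
        client.close()


def receive_frames(listener: socket.socket):
    """Container side: accept one producer and yield decoded frames."""
    conn, _ = listener.accept()
    while True:
        header = conn.recv(4, socket.MSG_WAITALL)
        if len(header) < 4:
            return  # producer disconnected
        (length,) = struct.unpack(">I", header)
        payload = conn.recv(length, socket.MSG_WAITALL)
        yield cv2.imdecode(np.frombuffer(payload, dtype=np.uint8), cv2.IMREAD_COLOR)
```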
Open questions
Type of change
How has this change been tested? Please provide a test case or example of how you tested the change.
Any specific deployment considerations
For example, documentation changes, usability, usage/costs, secrets, etc.
Docs