
Conversation

@joaoantoniocardoso (Collaborator) commented on Sep 4, 2023

update 09/29/25:

This seems very promising: so far, I have already replaced the v4l formats and the v4l device discovery.
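For context, this is roughly what the discovery side looks like with gstreamer-rs: a plain `DeviceMonitor` filtered to video sources. This is just a sketch to illustrate the idea; names are illustrative and the exact return types (e.g. of `devices()`) differ a bit between gstreamer-rs versions.

```rust
use gstreamer as gst;
use gstreamer::prelude::*;

// Enumerate video capture devices through GStreamer instead of walking
// /dev/video* with v4l ioctls ourselves.
fn list_video_devices() -> Vec<gst::Device> {
    gst::init().expect("failed to initialize GStreamer");

    let monitor = gst::DeviceMonitor::new();
    // Only video sources for now; audio would add an "Audio/Source" filter.
    monitor.add_filter(Some("Video/Source"), None);
    monitor.start().expect("failed to start the device monitor");

    let devices: Vec<gst::Device> = monitor.devices().into_iter().collect();
    for device in &devices {
        println!(
            "{} ({}): caps = {:?}",
            device.display_name(),
            device.device_class(),
            device.caps()
        );
    }

    monitor.stop();
    devices
}
```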

I'm thinking we should refactor the source/pipeline part of the application to make full use of this Device Monitor:

  1. I still need to reason about the architecture, but I think we can safely say that we should at least split our "Pipelines" (v4l2_pipeline.rs, etc.) into a source pipeline (source -> capsfilter -> parser -> proxysink) plus a transcoding pipeline (proxysrc -> video_tee -> rtp_payloader -> capsfilter -> rtp_tee, and later the optional decoder + encoder), which then connects to our many sink pipelines (proxysrc -> sink). There is a rough sketch of this split after the list.
  2. Then we create a Device enum (with VideoSource and AudioSource variants) and a DeviceProvider trait, implemented by both this GSTDeviceMonitor and OnvifDiscovery, to give us a common interface (also sketched below).
  3. Each "gst::Device" is also an element factory provider, which lets us create its source element with no effort and will make it easier to reach libcamera and audio support (see the last sketch below).

Helps #139, #372, #79
