
GStreamer


My relationship with GStreamer is not that intimate - we did share a few moments here and there, where I was desperately trying to read between the lines. But yeah, memories worth a wiki page were formed.

GStreamer is a pretty handy tool, like FFmpeg, when it comes to video/audio streaming - reading from one source and writing to another, be it a camera, file, live stream, etc. It supports a bunch of sources and sinks, plus a huge set of options that you can add to your pipeline.

The first task I had to do with GStreamer was for a robotics competition where I had to remotely control a bot from the base station by watching the live stream from the camera mounted on it, so latency and bandwidth were of concern. A few quick Google searches on how to do this landed me on GStreamer and x264 encoding.

Installation

Installation should be pretty straightforward on Linux. On Ubuntu, typing sudo apt-get install gstreamer1.0-<tab><tab> should list the available packages. You can install the base and good plugin sets; you may need the bad and ugly sets later.
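For reference, something along these lines should pull in the CLI tools and the common plugin sets (the exact package names here are from memory and may differ between Ubuntu releases, so treat this as a sketch):

sudo apt-get install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good
sudo apt-get install gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly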

Getting Started

In GStreamer there are a few core concepts:

  • Everything is a Pipeline. You add Elements to it.
  • Sources are the elements which provide data to the pipeline.
  • Sinks are the final outputs of the Pipeline.
  • A Pipeline can have multiple Sources, Sinks and other kinds of elements related to Video/Audio Encoding / Decoding, Muxing / Demuxing, Parsing, Filtering, etc.
  • You can link one element to another using Pads.

The convenience GStreamer provides is the CLI client. You can define and launch a pipeline with the tool gst-launch-1.0, and you can look up an element's capabilities using gst-inspect-1.0. For example, to look up hlssink, which is a GStreamer Sink for HTTP Live Streaming (.m3u8 playlists and .ts segments), I had to use gst-inspect-1.0 hlssink, which gave me the parameters for that element.
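Run without arguments, gst-inspect-1.0 lists every installed element, which is handy for finding a plugin by name before inspecting it (piping through grep is just my habit, not anything GStreamer-specific):

gst-inspect-1.0 | grep sink
gst-inspect-1.0 hlssink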

You can immediately construct a pipeline using the following:

gst-launch-1.0 videotestsrc ! autovideosink

This will set up a pipeline connecting the videotestsrc, a simulated video input, to the autovideosink, which displays it on screen. A bunch of nice examples are documented in this wiki.
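To get a feel for element properties and caps filters (the "options" mentioned earlier), the same pipeline can be dressed up a little; the pattern and resolution below are just illustrative values:

gst-launch-1.0 videotestsrc pattern=ball ! video/x-raw,width=640,height=480 ! autovideosink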

Building a Useful Pipeline

For a useful pipeline, we have to connect

- Source element(s) - which would usually be a file, webcam or network source - to
- a chain of filter elements - which would usually involve encoding/decoding, muxing/demuxing, parsing and filtering - and pass the result on to
- Sink element(s) - which would usually be a display, a file, or a port for a server to read from.

A few sketches of such pipelines follow the TODO list below.
  • Reading From File (TODO)
  • Reading From Webcam (TODO)
  • Reading From Network source (TODO)
  • Streaming Webcam over UDP (TODO)
  • Streaming Webcam over RTSP (TODO)
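Until those sections get written, here are rough sketches of the kind of pipelines involved. The element names are stock GStreamer ones, but the file name, device path, host address and port are made-up values you would replace:

# read a file, decode it and show it on screen
gst-launch-1.0 filesrc location=some-video.mp4 ! decodebin ! videoconvert ! autovideosink

# read a webcam (Video4Linux2) and show it on screen
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink

# stream the webcam over UDP as RTP-packetised H264 (sender side)
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=192.168.1.10 port=5000

# pick the stream up on the other machine (receiver side)
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink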
GStreamer and WebRTC

I couldn't find a reliable way to display a video stream in a web browser. There are ways where I can write JPEG frames directly from OpenCV into an HTTP server listening on a port, display that and be done with it. But bandwidth and latency become a huge problem when multiple connections need to be made to the same streaming server.

So I streamed the frames into the GStreamer pipeline, encoded them as H264, made RTP packets and set up an RTSP server for streaming using gst-rtsp-server. I was able to read it in VLC but not in a browser (I could have embedded a VLC player in the browser). The next thing I tried was HLS streaming. I tried to use videojs-contrib-hls and one more, but somehow the segments created by the pipeline were not supported by the JS plugins. I couldn't use a tcpserversink as I didn't know how to make a Node.js / Python server listen to and decode the stream. Probably I should have used the Python / JS wrappers, but I wanted it easier than that.

Then I came across two things, Kurento and Janus-gateway - two projects which can convert an RTSP stream to the WebRTC protocol and display it in the browser. I gave Janus a try. It was pretty easy to set up with the instructions from its GitHub page. I could start the pipeline, and it would start an HTTP server which listens to that pipeline, converts it and channels it to the video element on the web page. Need to read up on how that happens soon.
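If memory serves, the quickest way to get the gst-rtsp-server part going was the test-launch example program that ships in its examples directory; the pipeline string, paths and URL below are my reconstruction rather than the exact commands I used:

# serve the webcam as an RTSP stream (test-launch is built from gst-rtsp-server/examples)
./test-launch "( v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"

# then, from another machine
vlc rtsp://<server-ip>:8554/test

# and the HLS attempt looked roughly like this (segment and playlist paths are placeholders)
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! x264enc ! mpegtsmux ! hlssink playlist-location=playlist.m3u8 location=segment%05d.ts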
