ffmpeg opens remote video with high latency while gstreamer does not - opencv

I use MJPEG-Streamer to send video from my remote camera over Wi-Fi. I view the stream with the following commands:
gstreamer:
gst-launch -v souphttpsrc location="http://192.168.1.1:8080/?action=stream&type=.mjpg" do-timestamp=true is-live=true ! multipartdemux ! jpegdec ! autovideosink
ffmpeg:
ffplay "http://192.168.1.1:8080/?action=stream&type=.mjpg"
or:
ffplay "http://192.168.1.1:8080/?action=stream&type=.mjpg" -fflags nobuffer
However, ffplay has high latency, up to 3~10 seconds in my test, while GStreamer shows almost none.
When MJPEG-Streamer runs on localhost, both methods show low latency. So what is the reason, and how can I decrease the latency?
More detail: I want to use OpenCV to capture a remote camera. My OpenCV build has FFmpeg support but no GStreamer support (I tried, but CMake could not find my GStreamer; I don't know which GStreamer packages to install on openSUSE 13.1). I can get the video in OpenCV, but with high latency, so I compared FFmpeg with GStreamer; the result is as above. So how can I decrease the latency? I read this link, but still found no solution.
Thank you.
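
For what it's worth, a minimal sketch of attacking this from the OpenCV side: newer OpenCV builds read the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable and forward its options to the FFmpeg backend, so the same nobuffer flag that helps ffplay can be passed to VideoCapture (this is an assumption about your build; older OpenCV versions ignore the variable):

import os
# Must be set before the capture is opened; format is "key;value|key;value".
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer"

import cv2

cap = cv2.VideoCapture("http://192.168.1.1:8080/?action=stream&type=.mjpg",
                       cv2.CAP_FFMPEG)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("remote camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()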

Related

Using a video device in 2 applications simultaneously with Gstreamer

I am trying to use the camera feed on my Jetson Nano (running headless over SSH) in 2 different applications.
From the command line, I am able to run
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=2464, format=NV12, framerate=(fraction)21/1' ! nvvidconv ! xvimagesink
which streams video from my camera (an IMX219 connected to the Jetson Nano) to my desktop via an X11 window.
What I would like to do is somehow use that same video stream in 2 different applications.
My first application is a Python program that runs some OpenCV stuff; the second application is a simple bash script that records video to an *.mp4 file.
Is such a thing possible? I have looked into using v4l2loopback, but I'm unsure if that is really the simplest way to do this.
Well, I managed to figure it out thanks to both commenters. Here is my solution on my Jetson Nano, but it can be tweaked for any GStreamer application.
First, use v4l2loopback to create 2 virtual devices like so:
sudo modprobe v4l2loopback video_nr=1,2
That will create /dev/video1 and /dev/video2
Then use tee to dump the GStreamer stream into each of these virtual devices; here is my line:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=2464, format=NV12, framerate=(fraction)21/1' ! nvvidconv ! tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2
This is specific to my Jetson Nano and my camera, but you can change the GStreamer pipeline to do whatever you wish.
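
On the OpenCV side, the loopback devices behave like ordinary cameras; a minimal sketch, assuming the tee pipeline above is already feeding /dev/video1:

import cv2

# Index 1 maps to /dev/video1, the first loopback device created above.
cap = cv2.VideoCapture(1)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... OpenCV processing goes here ...
cap.release()

The bash recording script can then read from /dev/video2 independently.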

VIDEOIO ERROR: V4L: can't find camera device

I am using Ubuntu 16.04 and trying to run an OpenCV script.
When I use:
video_capture = cv2.VideoCapture(-1)
it gives me the error VIDEOIO ERROR: V4L: can't find camera device and no video window opens.
But when I run
video_capture = cv2.VideoCapture('test.jpg')
it opens a window, shows the picture, and closes the window.
Please tell me why it is not streaming video directly from the camera.
The suggestion api55 gave in his comment
video_capture = cv2.VideoCapture(0)
is what I would try first.
Generally, you can list the available cameras with ls /dev/video* or v4l2-ctl --list-devices. Here is sample output:
NZXT-U:rt-trx> v4l2-ctl --list-devices
Microsoft® LifeCam Cinema(TM): (usb-0000:00:14.0-1):
/dev/video1
Microsoft® LifeCam Cinema(TM): (usb-0000:00:1a.0-1.3):
/dev/video0
/dev/video0 corresponds to device id 0, etc.
PS: v4l2-ctl is quite useful for solving camera issues and can do much more than --list-devices. I installed it via the package v4l-utils on a 16.04 machine.
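
If it's not obvious which index OpenCV can open, a small probe loop helps; a minimal sketch (probing the first 4 indices is an arbitrary choice):

import cv2

# Device index N corresponds to /dev/videoN.
for idx in range(4):
    cap = cv2.VideoCapture(idx)
    opened = cap.isOpened()
    cap.release()
    if opened:
        print("camera found at index", idx)
        break
else:
    print("no camera found; check v4l2-ctl --list-devices and permissions")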
Late, but to get mine working I put in the terminal:
ls -ltrh /dev/video*
to get a list of the video devices that are plugged into my computer. Then for each one I did:
sudo chmod 777 /dev/videox
where x was one of the video devices listed, giving everything access to them. Probably not the most secure solution, but it got my code working.

gstreamer 1.0 and raspberry pi streaming from webcam to ios browser

Using GStreamer 1.0, I would like to create a pipeline that streams video from a Logitech C920 webcam to an iPhone running Chrome on iOS. This pipeline would run on a Raspberry Pi Model B. I think I need to use hlssink and serve an m3u8 file; I was thinking of running a python-tornado webserver on the Raspberry Pi to serve it. I also know that the Logitech C920 supports hardware encoding for H.264 and would like to use that if possible. So far I've been unsuccessful and would appreciate any help or feedback.
Considering that the minimal hlssink pipeline is like this:
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! mpegtsmux ! hlssink max-files=5
You need to encode the raw source from the camera with x264enc and then parse it with h264parse. After that, you multiplex the media streams into an MPEG transport stream (in this case we have only video).
The final pipeline would be, for example:
gst-launch-1.0 videotestsrc is-live=true ! video/x-raw, framerate=25/1, width=720, height=576, format=I420 ! x264enc bitrate=1000 key-int-max=25 ! h264parse ! video/x-h264 ! queue ! mpegtsmux ! hlssink playlist-length=10 max-files=20 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%06d.ts" target-duration=5
I have added some caps to help you; add them after v4l2src device=/dev/video0, but the exact values depend on the camera model. I also set several hlssink properties to show how to control where the files go. The pipeline above runs with videotestsrc and writes the fragments and playlist to the /var/www/html/hls folder.
Tested with Apache; it is possible to view the result with VLC or simply by running:
gst-launch-1.0 playbin uri=http://localhost/hls/stream0.m3u8
If you have any doubts about capturing from a webcam, you can follow this link for more information.
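
Since the question mentions serving the playlist with python-tornado instead of Apache, a minimal sketch of such a server (the paths match the pipeline above; listening on port 80 usually requires root, so adjust to taste):

import tornado.ioloop
import tornado.web

# Serve the playlist and .ts fragments written by hlssink.
app = tornado.web.Application([
    (r"/hls/(.*)", tornado.web.StaticFileHandler, {"path": "/var/www/html/hls"}),
])

if __name__ == "__main__":
    app.listen(80)  # clients then fetch http://<pi-address>/hls/stream0.m3u8
    tornado.ioloop.IOLoop.current().start()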

OpenCV output on V4l2

I wanted to know if I can use OpenCV to write to a v4l2 device.
I would take a picture, apply small changes with OpenCV's features, and then send it to a v4l2 device.
I searched the web; there are a lot of examples of how to read from a V4L2 device, but I found nothing about writing to one.
Can someone help me?
The question is 8 months old, but if you still need an answer (I assume your OS is Linux):
1. Install the v4l2loopback module.
1.1. Load and configure it on Linux, e.g. in modprobe.conf: options v4l2loopback video_nr=22,23
2. Use C++/OpenCV code such as this: gist
2.1. Set up the device using an ioctl() call.
2.2. Write raw RGB data to the device (i.e. /dev/video23).
2.3. Use it as a regular v4l2 device (i.e. as a webcam, or vlc v4l2:///dev/video23).
More: you can use ffmpeg with the v4l2 loopback:
ffmpeg -f x11grab -r 12 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 -vf 'scale=800:600' /dev/video22
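
A Python variant of the same idea, as a minimal sketch: if your OpenCV build has GStreamer support, cv2.VideoWriter can push processed frames into the loopback device through a v4l2sink pipeline (the device number matches the modprobe example above):

import cv2

cap = cv2.VideoCapture(0)  # the real camera
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# fourcc 0 selects a raw GStreamer pipeline; videoconvert turns OpenCV's
# BGR frames into a format the loopback device accepts.
out = cv2.VideoWriter(
    "appsrc ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video22",
    cv2.CAP_GSTREAMER, 0, 30.0, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # stands in for the "small changes" from the question
    out.write(frame)

cap.release()
out.release()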

gstreamer: cannot launch rtsp streaming

I am new to gstreamer. Although it sounds like a very entry-level question, I couldn't find a clear answer so far.
I try to launch a server like below, according to some example.
$ gst-launch-1.0 -v videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 ! udpsink rtsp://127.0.0.1:8554/test
Then I use VLC as client (on the same computer).
$ vlc rtsp://127.0.0.1:8554/test
VLC reports the error "Unable to connect...". But if I use "test-launch" in the first step, it works fine.
Another question: besides VLC, I also try to launch a client like this.
$ gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! ffdec_h264 ! xvimagesink
But gstreamer complains that there is no element "ffdec_h264" and no element "xvimagesink".
For extra info, I installed "gstreamer" and "gst-plugins-base/good/bad/ugly", all from git (1.2 version).
Thank you so much for the hint.
ffdec_h264 is from gst 0.10, so in gst 1.0 you need to use avdec_h264 instead. On the other hand, I usually use autovideosink sync=false as the pipeline sink for my UDP stream.
There is example code in gst-rtsp-0.10.8/examples that can help you with the RTSP stream server, but I suggest you receive the stream using udpsrc in GStreamer in order to reduce the delay (use the -v option on the sender to see the caps and configure them in the receiver).
If you want VLC to play your rtsp stream you need to define the .sdp file according to your rtsp stream session.
You should see this question for more info:
GStreamer rtp stream to vlc
I don't know about VLC, but as far as the gstreamer launch line goes, you seem to be missing the ffmpeg plugin package; you can probably find it in the same place as you found the other plugins.
Also, replace xvimagesink with autovideosink, which will use whatever sink you have available.
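
Putting the corrections together on the receiving side, a minimal sketch that pulls the stream into OpenCV (assumes an OpenCV build with GStreamer support and a working RTSP server, e.g. test-launch, at the URL below):

import cv2

pipeline = ("rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! "
            "rtph264depay ! avdec_h264 ! videoconvert ! appsink")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("rtsp", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()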