GStreamer does not sink to named pipe - OpenCV

I'm getting different behavior when the sink of a gst-launch pipeline is a named pipe vs a normal file.
I have a gst-launch pipeline which displays video from a camera on an embedded OMAP (Linux) board and delivers the video as AVI via a tee.
gst-launch -v -e omx_camera device=0 do-timestamp=1 mode=0 name=cam cam.src ! "video/x-raw-yuv, format=(fourcc)NV12, width=240, height=320, framerate=30/1" ! tee name=t1 t1. ! queue ! ducatih264enc profile=100 level=50 rate-preset=low-delay bitrate=24000 ! h264parse ! queue ! avimux ! filesink location=/tmp/camerapipe t1. ! queue ! dri2videosink sync=false
If I make
filesink location=/some/real/file t1.
all is well
but I wish to read the output with a Java/OpenCV process, and when I do this I don't get anything in the Java process. The gst-launch process does announce that it has changed to PLAYING.
To simplify things, instead of the Java process I tail -f the named pipe, and I also don't see any output, though in both cases the dri2videosink is displaying the video.
With either tail or the Java process, killing the reader also stops the gst-launch process, so it's obviously 'connected' in some sense.
Killing the gst-launch process with the tail running yields what looks like a few KB, maybe one frame of data, after gst-launch exits.
I've tried saving to a normal file and reading that with the Java process; that works, so I know it's not the data format.

I am trying to do the same thing, though I am using OpenCV in C and working in Ubuntu.
I did get the following to work:
I created a named pipe in /dev/ called video_stream using mkfifo. Make sure you have permissions to read/write to/from it, or just use sudo.
Play a test video to the named pipe:
sudo gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! ffenc_mpeg4 ! filesink location=/dev/video_stream
Play from a webcam to the named pipe:
sudo gst-launch -e v4l2src device=/dev/video0 ! ffenc_mpeg4 ! filesink location=/dev/video_stream
I then used the face detection tutorial at
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
to test everything, but changed my input from the webcam to the named pipe, so
capture = cvCaptureFromCAM( -1 );
becomes
VideoCapture capture("/dev/video_stream");
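For instance, here is a minimal C++ read loop over that capture (a sketch; it assumes OpenCV was built with an FFmpeg or GStreamer backend that can demux the ffenc_mpeg4 stream):

#include <opencv2/opencv.hpp>

int main() {
    // Open the named pipe like any other video source.
    cv::VideoCapture capture("/dev/video_stream");
    if (!capture.isOpened()) return 1;   // pipe missing or stream not recognized

    cv::Mat frame;
    while (capture.read(frame)) {        // blocks until the writer provides data
        cv::imshow("stream", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}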

This works, but the problem with pipes and files is that closing the reader makes GStreamer stop. The solution is to use racic's ftee program:
sudo gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! ffenc_mpeg4 ! fdsink fd=1 | ./ftee /dev/video_stream > /dev/null 2>&1
This writes ftee's stdin to the named pipe, with a copy to stdout (sent to /dev/null), but ftee ignores errors and the closing of the destination pipe, so starting or stopping a reader on the pipe does not affect GStreamer. Just try it and then think about what I wrote, not the opposite :)
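For reference, here is a rough C++ sketch of what an ftee-like tool does (an illustration of the idea, not racic's actual program): copy stdin to stdout and also into the FIFO, surviving readers that come and go:

#include <errno.h>
#include <signal.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    signal(SIGPIPE, SIG_IGN);  // don't die when the FIFO reader disappears
    // Non-blocking open fails until a reader attaches; retry lazily below.
    int fifo = open(argv[1], O_WRONLY | O_NONBLOCK);
    char buf[4096];
    ssize_t n;
    while ((n = read(0, buf, sizeof buf)) > 0) {
        write(1, buf, n);  // unconditional copy to stdout
        if (fifo < 0)
            fifo = open(argv[1], O_WRONLY | O_NONBLOCK);
        // Drop data if the pipe is full; reopen later if the reader vanished.
        if (fifo >= 0 && write(fifo, buf, n) < 0 && errno == EPIPE) {
            close(fifo);
            fifo = -1;
        }
    }
    return 0;
}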
Play from named pipe, anytime you want:
gst-launch filesrc location=/dev/video_stream ! autovideosink
Regarding your use with OpenCV:
VideoCapture capture("/dev/video_stream");
The video stream from /dev/video_stream should be MPEG-4, but I'm not sure OpenCV will sense the source properly. You might have to experiment with the backend (a GStreamer backend is available when it's compiled into OpenCV).
See the API reference for creating a capture:
VideoCapture (const String &filename, int apiPreference)
Set apiPreference to the proper value; I'd try FFmpeg or GStreamer.
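For example (a sketch; CAP_FFMPEG and CAP_GSTREAMER are the backend constants in current OpenCV):

#include <opencv2/opencv.hpp>

int main() {
    // Try the FFmpeg backend first, then fall back to GStreamer.
    cv::VideoCapture cap("/dev/video_stream", cv::CAP_FFMPEG);
    if (!cap.isOpened())
        cap.open("/dev/video_stream", cv::CAP_GSTREAMER);
    return cap.isOpened() ? 0 : 1;
}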
If you want to use GStreamer directly, try appsink as the sink; that is what OpenCV reads from. This might be something like
filesrc location=/dev/video_stream ! video/h264 ! appsink
The video/h264 caps are a blind guess, as I don't have the ffenc_mpeg4 encoder (it's from GStreamer 0.10), but you get the idea.
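Passed to OpenCV's GStreamer backend, that might look like the sketch below. Note I've swapped the guessed caps for decodebin ! videoconvert, since OpenCV's GStreamer capture expects raw frames at the appsink:

#include <opencv2/opencv.hpp>

int main() {
    // The whole gst-launch-style pipeline string, ending in appsink for OpenCV.
    cv::VideoCapture cap(
        "filesrc location=/dev/video_stream ! decodebin ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink",
        cv::CAP_GSTREAMER);

    cv::Mat frame;
    while (cap.read(frame)) {
        // process frame ...
    }
    return 0;
}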
Good luck.

Related

cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error

I am using the following versions:
opencv-4.2.0, GStreamer 1.20.2, Python 3.7, Windows 10.
I want to play video from an RTSP camera using GStreamer. Why does this gst pipeline not run from Python in VS Code, while the same pipeline runs perfectly from cmd?
Pipeline:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory), format=(string)NV12 ! appsink
The error I am getting is as follows:
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (1759) cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error.
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (888) cv::GStreamerCapture::open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (480) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Can you please let me know how to solve this issue?
You may try adding caps after udpsrc:
rtspsrc location=rtsp://... ! application/x-rtp,encoding-name=H264 ! rtph264depay ! ...
That was bad advice: specifying these caps makes sense with udpsrc, but they are not required for rtspsrc.
Re-reading it now, the issue is probably the memory space. OpenCV may only expect system memory for now, so you may try:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1
Here we convert to BGR, as most OpenCV color algorithms expect; if you intend to process NV12 frames, just change the format (I'm not sure videoconvert is required, but if it isn't, the overhead of keeping it should be low).
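The question uses Python, where the call is cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER); the equivalent C++ sketch with the pipeline above would be:

#include <opencv2/opencv.hpp>

int main() {
    // Full pipeline string; drop=1 keeps appsink from queueing stale frames.
    cv::VideoCapture cap(
        "rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! "
        "d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink drop=1",
        cv::CAP_GSTREAMER);
    return cap.isOpened() ? 0 : 1;
}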

gstreamer-imx video streaming and encoding

I am currently using a Nitrogen 6 Max development board. I am attempting to retrieve video from my webcam through v4l2src so that the feed can be streamed and also encoded and saved.
This is the pipeline, and it works:
v4l2src device="/dev/video2" ! tee name=t
t. ! queue ! x264enc ! mp4mux ! filesink location=test.mp4
t. ! queue ! videoconvert ! autovideosink
Then I attempted to use the imx-gstreamer library. I spent time looking around and found that this works:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! \
video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
However, when I attempt to use "tee" to split up the video source, it just freezes and my terminal session locks up.
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! avdec_h264 ! filesink location=cx1.mp4 \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink
I tried isolating the issue by encoding through the tee, and realized that it runs, but the video file it generates is corrupted:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
I tried using queues and videoconvert, but it does not seem to work.
Also, another question: I am new to GstElement capabilities, which are what decide whether elements can be linked (e.g., v4l2src's video/x-raw capability includes I420, which is why I can link it to imxvpuenc_h264). For the tee element, does it split and replicate the capabilities of its src?
I am new to GStreamer and can't seem to work around this issue. Can someone help me out here?
A few hints to help you out:
As a rule, always use queues at the outputs of the tee so that it doesn't block your pipelines (see the example after this list).
Another option to avoid blocking is to set async=false in your sink elements.
Try setting dts-method=2 on the mp4mux to see if it makes a difference.
The first line of troubleshooting when working with GStreamer is the debug output. Please inspect and share the output of GST_DEBUG=2 gst-launch-1.0 ....
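For instance, your tee pipeline with a queue on each branch might look like the sketch below (untested on the i.MX; it also muxes the H.264 into MP4 via mp4mux, in line with the dts-method hint above, rather than writing decoded frames straight to the file):

gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! queue ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! mp4mux ! filesink location=cx1.mp4 \
t. ! queue ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink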

What kind of stream does GStreamer produce?

I use the following two commands to stream video from a Raspberry Pi.
Raspberry Pi:
raspivid -t 999999 -h 720 -w 1080 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$RA-IP-ADDR port=5000
Linux Box
gst-launch-1.0 -v tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
But what kind of stream is it? Can I read it with OpenCV? Or convert it with avconv/ffmpeg (nc $RA-IP-ADDR 5000 | avconv)? Or watch it with VLC?
The stream appears to be an RTP stream encapsulated in a GDP stream, the latter of which appears to be proprietary to GStreamer. You might be able to remove the gdppay and gdpdepay elements from your pipeline and use other RTP tools (there are plenty out there; I believe VLC supports RTP directly), but you could also use a GStreamer pipeline to write the depayloaded GDP stream (in this case, the H.264 stream it contains) from the RPi to a file on the Linux box side, like so:
gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! filesink location=$FILENAME
or, to pipe it to stdout:
gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! fdsink
One or the other of these should let you operate on the H.264 video at a stream level.
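For example, the stdout variant can feed avconv directly; a sketch (assuming avconv accepts raw H.264 on stdin with -f h264, which it should):

gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! fdsink | avconv -f h264 -i - -c copy capture.mp4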
GStreamer 1.0 can also interact with libav more or less directly if you have the right plugin. Use gst-inspect-1.0 libav to see the elements supported. The avdec_h264 element already in your pipeline is one of these libav elements.

How to open an RTP JPEG stream in OpenCV?

I'm trying to open a video stream in OpenCV, but I'm having some difficulties. I can start a stream with:
gst-launch -v v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=640,height=480' ! jpegenc quality=30 ! rtpjpegpay ! udpsink host=127.0.0.1 port=1234
and I can open it with:
gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtpjpegdepay ! jpegdec ! xvimagesink sync=false
But when I tried to open it in my code with
VideoCapture cv_cap;
cv_cap.open("rtp:127.0.0.1:1234/");
I get an error about a missing SDP file. I know what an SDP file is and that I should get the info for it from the GStreamer output, but I don't understand exactly how to parse that output.
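For what it's worth, a minimal SDP sketch for this stream might look like the following (the port and payload number come from the pipelines above; this is a guess at the missing file, saved as e.g. stream.sdp and opened with VideoCapture instead of the rtp: URL):

v=0
o=- 0 0 IN IP4 127.0.0.1
s=GStreamer RTP JPEG stream
c=IN IP4 127.0.0.1
m=video 1234 RTP/AVP 127
a=rtpmap:127 JPEG/90000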

Raspberry Pi MJPG-Streamer low latency

I've built a raspberry pi robot. Now I want to stream video from Raspberry Pi onboard camera. I followed this tutorial:
http://blog.miguelgrinberg.com/post/how-to-build-and-run-mjpg-streamer-on-the-raspberry-pi/page/2
So I finally got it working, but now I want as low latency as possible. Low latency is important because controlling a robot with such a lag is impossible.
Any advice?
Have a nice day!
You should probably ask this on https://raspberrypi.stackexchange.com/
All potent solutions that can be found as of now use raspivid. It encodes the video directly as H.264, which is much more efficient than capturing every single frame.
The one that works out best for me so far is:
First, on your Raspberry Pi:
raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=<IP-OF-PI> port=5000
Then, on your PC/viewing device:
gst-launch-1.0 -v tcpclientsrc host=<IP-OF-PI> port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
Source: http://pi.gbaman.info/?p=150
I think I have found from experimentation that the camera board does most of the processing, relieving the Pi of much of the load. You can see this by running top on the Pi as it captures and streams.
First I run the following on a linux client:
nc -l -p 5001 | mplayer -fps 31 -cache 512 -
Then I run the following on the raspi:
/opt/vc/bin/raspivid -t 999999 -w 1920 -h 1080 -o - | nc 192.168.1.__ 5001
This was done over an ethernet connection from raspi to linux desktop both connected to a common ethernet hub.
I have made the following observations:
these settings give me a pretty low lag (<100ms)
increasing the cache size (on the client) only leads to a larger lag, since the client will buffer more of the stream before it starts
decreasing the cache size below some lower limit (512 for me) leads to a player error: "Cannot seek backward in linear streams!"
specifying dimensions less than the default 1920x1080 leads to longer delays for smaller dimensions, especially when they are less than 640x480
specifying bitrates other than the default leads to longer delays
I'm not sure what the default bitrate is
for any of the scenarios that cause lag, it seems that the lag decreases gradually over time and most configurations I tried seemed to have practically no lag after a minute or so
It's unfortunate that very little technical information seems to be available on the board apart from what commands to run to make it operate. Any more input in the comments or edits to this answer would be appreciated.
I realise this is an old post, but I recently needed to do something similar, so I created a Node.js Raspberry Pi MJpeg Server where you can pass the compression quality and the timeout (milliseconds between frames).
Start the server:
node raspberry-pi-mjpeg-server.js -p 8080 -w 1280 -l 1024 -q 65 -t 100
Options:
-p, --port port number (default 8080)
-w, --width image width (default 640)
-l, --height image height (default 480)
-q, --quality jpeg image quality from 0 to 100 (default 85)
-t, --timeout timeout in milliseconds between frames (default 500)
-h, --help display this help
-v, --version show version
I open sourced it as I'm sure it will help others.