GStreamer pipeline generated by Flumotion stalls

The following GStreamer pipeline was generated by Flumotion while transcoding a file, but it stalls.
I'm not entirely sure why, as I only started developing GStreamer applications recently. I'm guessing that it is because of a lack of memory. The file is large (1+ GB) and I am running this on a server with only 2 GB.
Please help.
gst-launch -v filesrc location=vid1.mkv ! decodebin2 name=decoder ! \
    queue ! audiorate ! audioconvert ! legacyresample ! \
    'audio/x-raw-int, rate=44100, channels=2;audio/x-raw-float, rate=44100, channels=2' ! \
    lame ! mp3parse ! queue ! muxer. \
    decoder. ! queue ! ffmpegcolorspace ! videorate ! videoscale method=1 ! \
    'video/x-raw-yuv, width=320, height=180, pixel-aspect-ratio=1/1, framerate=25/1;video/x-raw-rgb, width=320, height=180, pixel-aspect-ratio=1/1, framerate=25/1' ! \
    videobox left=0 top=-30 right=0 bottom=-30 ! ffenc_flv bitrate=500000 ! \
    queue ! flvmux name=muxer ! filesink location=vid1.flv

I determined that it was not a GStreamer problem. Flumotion seems to be killing the process when it takes too long.
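In case anyone lands here with a similar stall: before blaming memory, it can help to confirm the pipeline itself makes progress when run by hand. A minimal sketch, not the original Flumotion setup; progressreport and fakesink stand in for the real branches, and vid1.mkv is the file from the question:

# Run the transcode manually with warnings/errors visible, plus a
# progressreport element so you can see whether data is actually flowing.
GST_DEBUG=2 gst-launch filesrc location=vid1.mkv ! decodebin2 ! \
    progressreport update-freq=5 ! fakesink

If the position keeps advancing here but the full pipeline stalls, the problem is likely in a specific branch (or, as in this case, in the process supervisor) rather than in GStreamer itself.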

Related

Add new sink to gstreamer examples camera

I'm very new to GStreamer and I can't write the output to a file.
I'm testing the GStreamer detect example in the coral/examples-camera/gstreamer project. This demo runs perfectly on my Coral Dev Board Mini, but I need help adding a filesink to the GStreamer pipeline of this project:
PIPELINE += """ ! decodebin ! queue ! v4l2convert ! {scale_caps} !
glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
tee name=t
t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! {sink_caps} ! {sink_element}
t. ! queue ! glsvgoverlay name=gloverlay sync=false ! glimagesink fullscreen=true
qos=false sync=false
"""
The demo uses the glimagesink element to display the video on the screen, and I need to add a sink that saves the video to a file.
Running with debug output enabled, I've captured what I believe is the video format at the glimagesink element; the debug output is:
GST_PADS gstpad.c:3160:gst_pad_query_accept_caps_default:<salida:sink>[00m allowed caps subset ANY, caps video/x-raw(memory:GLMemory), framerate=(fraction)30/1, interlace-mode=(string)progressive, width=(int)640, height=(int)480, format=(string)RGBA, texture-target=(string)2D
I've tried this pipeline configuration:
Coral board model: mt8167
GStreamer pipeline:
v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! queue ! v4l2convert ! video/x-raw,format=BGRA,width=640,height=480 !
glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
tee name=t
t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! video/x-raw,format=RGB,width=320,height=320 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
t. ! queue ! glsvgoverlay name=gloverlay sync=false !
videoconvert ! x264enc ! avimux ! filesink location=output.avi
Traceback (most recent call last):
  File "detect.py", line 138, in <module>
    main()
  File "detect.py", line 135, in main
    videofmt=args.videofmt)
  File "/home/mendel/google-coral/examples-camera/gstreamer/gstreamer.py", line 294, in run_pipeline
    pipeline = GstPipeline(pipeline, user_function, src_size)
  File "/home/mendel/google-coral/examples-camera/gstreamer/gstreamer.py", line 36, in __init__
    self.pipeline = Gst.parse_launch(pipeline)
gi.repository.GLib.Error: gst_parse_error: could not link videoconvert0 to x264enc0 (3)
Does anyone know how to write this output to a file with GStreamer?
Any help will be appreciated.
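One hedged observation about the parse error above: glsvgoverlay is still producing video/x-raw(memory:GLMemory), and videoconvert only handles system memory, so the two cannot link. Inserting a gldownload between the GL elements and videoconvert is the usual way back to CPU memory. An untested sketch of the file branch, keeping the element names from the question:

t. ! queue ! glsvgoverlay name=gloverlay sync=false ! gldownload !
videoconvert ! x264enc ! avimux ! filesink location=output.avi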

gstreamer-imx video streaming and encoding

I am currently using a Nitrogen 6 Max development board. I am attempting to retrieve video from my webcam through v4l2src so that the feed can be both streamed and encoded for saving.
This is the pipeline, and it works:
v4l2src device="/dev/video2" ! tee name=t
t. ! queue ! x264enc ! mp4mux ! filesink location=test.mp4
t. ! queue ! videoconvert ! autovideosink
Then I attempted to use the imx-gstreamer library. I spent time looking around and found that this works:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! \
video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
However, when I attempt to use "tee" to split up the video source, it just freezes and my terminal session locks up.
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! avdec_h264 ! filesink location=cx1.mp4 \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink
I tried isolating the issue by encoding through the tee, and realized that it runs, but the video file it generates is corrupted:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
I tried using queues and videoconvert, but it does not seem to work.
Also, another question: I am new to GstElement capabilities, which are what decide whether elements can be linked (e.g., v4l2src's video/x-raw capabilities include I420, which is why I can link it to imxvpuenc_h264). For the tee element, does it split and replicate the capabilities of its src?
I am new to gstreamer, and I can't seem to work around this issue. Can someone help me out here?
A few hints to help you out:
As a rule, always use queues at the outputs of the tee so that it doesn't block your pipeline (see the sketch after these hints).
Another option to avoid blocking is to set async=false on your sink elements.
Try setting dts-method=2 on the mp4mux to see if it makes a difference.
The first troubleshooting step when working with GStreamer is the debug output. Please inspect and share the output of GST_DEBUG=2 gst-launch-1.0 ....
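Combining the first and third hints, here is an untested sketch of your freezing pipeline with a queue on each tee branch and the encoded stream actually muxed before the filesink (writing the raw avdec_h264 output straight into a .mp4 file, as in your isolation test, will always produce an unplayable file):

gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
    t. ! queue ! video/x-raw,width=640,height=480,framerate=30/1 ! \
        imxvpuenc_h264 ! h264parse ! mp4mux dts-method=2 ! filesink location=cx1.mp4 \
    t. ! queue ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink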

GStreamer and YouTube problem: RTMPSink cannot write to resource

I have a problem sending video to YouTube using GStreamer.
My pipeline is:
"appsrc name=videoAppSrc ! rawvideoparse name=videoparser use-sink-caps=false format=8 ! videoconvert ! video/x-raw, fromat=YUV, width="+videoWidth+", height="+videoHeight+", framerate=25/1 ! videoconvert ! x264enc key-int-max=60 ! video/x-h264,profile=baseline ! tee name=t t. ! queue ! flvmux streamable=true name=mux ! rtmpsink name=dest location="+this.url+"/"+this.key+" t. ! queue ! matroskamux name=filemux ! filesink name=fileout location="+archFile.getAbsolutePath()+" appsrc name=audioAppSrc ! rawaudioparse use-sink-caps=true ! audioconvert ! volume name=audiovolume volume=1 ! voaacenc ! aacparse ! tee name=ta ta. ! queue ! mux. ta. ! queue ! filemux."
I'm using Java with gst1-java-core to push frames into the pipeline.
After some time I get this kind of error: Could not write to resource, from the GstRTMPSink element.
Sometimes it happens after 1 hour, sometimes after 3 hours.
I think the problem is that YouTube stops accepting my stream.
Am I right?
Is something wrong with my pipeline?
Maybe I have to adjust some properties to get this working reliably with YouTube?
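Not an answer to the disconnects as such, but one way to narrow this down is to take the Java feeding code out of the loop and stream synthetic sources to the same endpoint with plain gst-launch-1.0: if this also dies after a few hours, the problem is on the network/YouTube side rather than in your pipeline. A sketch, assuming YouTube's usual RTMP ingest URL; STREAM_KEY is a placeholder for your own key:

gst-launch-1.0 videotestsrc is-live=true ! videoconvert ! \
    x264enc key-int-max=60 ! video/x-h264,profile=baseline ! \
    flvmux streamable=true name=mux ! \
    rtmpsink location="rtmp://a.rtmp.youtube.com/live2/STREAM_KEY" \
    audiotestsrc is-live=true ! voaacenc ! aacparse ! mux.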

gst-launch-1.0 - rtspsrc audio/video issue

I'm trying to combine two RTSP streams using gst-launch-1.0, but I'm stuck already at just trying to record/play one RTSP stream. The stream contains both audio and video.
The command line I'm using is:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! autoaudiosink ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! autovideosink
If I change the autoaudiosink to fakesink, the video plays. If I remove the video sink (didn't test with fakesink), the audio plays. But if I add both (like above), the video shows 1 frame and then freezes. [not sure if it matters, but I am on Windows]
I actually suspect that (for some reason) the pipeline is pausing, but I'm at a loss on how to debug this.
I've tried with/without rtpjitterbuffer, and with/without queues at various places. I've tried combinations of rtp...depay/parse. Although I've tried so much in the past few hours that I'm not sure whether I overlooked anything.
Also tried with various options for rtspsrc (sync/etc).
But the end result is pretty much always the same: playing either audio or video works fine; playing both at the same time (or muxing them into a single file) fails.
Writing each to their own file works fine, for example:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! avenc_aac ! flvmux ! filesink location=audio.flv ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! x264enc ! flvmux ! filesink location=video.flv
If I mux the above into one file, the same thing happens as when trying to play (using the auto sinks): the output file stays at 0 bytes.
The output of gst-launch-1.0 when trying to play is the same as when writing to two files:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to <location-omitted>
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Redistribute latency...
Redistribute latency...
Somehow I think it's some type of sync-issue, but I can't seem to figure out how to solve it.
Since this question is more than a year old, I hope someone with the same problem still needs the help (as I did; I spent many hours figuring this out, and thanks to the poster for the hints).
You need to edit it like this:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! avenc_aac ! flvmux name=mux ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! x264enc ! mux. ^
mux. ! filesink location=video.flv
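The key change is that both branches now feed a single named flvmux instance (name=mux), so audio and video are interleaved by one muxer and written out through one filesink, instead of two independent muxers writing separate files.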

Port pipeline to gst-rtsp-server

I'm trying to wrap this working sender-side pipeline in gst-rtsp-server:
gst-launch-1.0 --gst-plugin-path=/usr/lib/x86_64-linux-gnu/gstreamer-1.0/ filesrc location=sample.mp4 ! decodebin name=mux \
    mux. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000 \
    mux. ! queue ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! udpsink host=127.0.0.1 port=5001
Using a complementary pipeline on the receiver side everything works, and I'm able to send an OpenCV-processed stream and get it on the client side.
Something goes wrong when I try to wrap part of this pipeline into the working example provided along with gst-rtsp-server.
In fact, editing test-mp4.c and changing the filesrc input pipeline to
"filesrc location=%s ! qtdemux name=d "
"d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 "
"d. ! queue ! rtpmp4apay pt=97 name=pay1 " ")"
the sender doesn't work anymore. On the receiver side I get a 503 error, since the receiver is unable to get the SDP.
Could this be an issue related to a missing bad-plugins directory?
I set it in the main Makefile but the problem still persists.
I suspect not, since the rtsp-server works perfectly if I do not edit that line, and my pipeline works fine on its own as well.
Thanks,
Francesco
This looks like an issue with the pipeline you have created. Try running your pipeline exactly as it is on the command line, but add fakesink elements at the end to see if that works:
gst-launch-1.0 filesrc location=%s ! qtdemux name=d d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 ! fakesink d. ! queue ! rtpmp4apay pt=97 name=pay1 ! fakesink
At a glance, it looks like you're demuxing the media, but not decoding the video to a raw format for the edgedetect element.
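Based on that, a hedged sketch of the same command-line test with the video branch actually decoded before edgedetect (decodebin used here as a catch-all; an explicit h264parse ! avdec_h264 pair would be the equivalent for H.264 input):

gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux name=d \
    d. ! queue ! decodebin ! videoconvert ! edgedetect ! videoconvert ! \
        x264enc ! rtph264pay pt=96 name=pay0 ! fakesink \
    d. ! queue ! rtpmp4apay pt=97 name=pay1 ! fakesink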
