I want to monitor buffers traveling through my GStreamer pipelines.
For example, in the following pipeline I want to know whether one buffer (i.e. one GstBuffer) flowing between rtph264pay and udpsink corresponds to one packet streamed on my Ethernet interface.
gst-launch-1.0 filesrc ! x264enc ! rtph264pay ! udpsink
What tool can I use to figure it out? Do I have to go into source code to get the answer? What will be the answer?
You can use the GST_SCHEDULING debug category to monitor the data flow.
GST_DEBUG="*SCHED*:5" gst-launch-1.0 filesrc ! x264enc ! rtph264pay ! udpsink 2> gst.log
This will produce a log of every buffer that reaches a sink pad. You can filter on the udpsink sink pad to obtain the information you want. For the network side, you'll need a network analyser like Wireshark. You should then be able to compare the two.
In practice, each payloaded buffer will represent one UDP packet, unless your network MTU is smaller than what you have configured on the payloader (see its mtu property).
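As a rough sketch, here is how you might count the buffers that reached udpsink in that log with a few lines of Python. The pad name (udpsink0:sink) and the exact log-line layout are assumptions; check your gst.log and adjust the match accordingly:

```python
def count_buffers(log_lines, pad_name="udpsink0:sink"):
    """Count GST_SCHEDULING debug lines that mention the given pad.

    Assumes each buffer arriving on the pad produces one log line that
    contains both the category name and the element:pad name -- verify
    against your actual gst.log before trusting the number."""
    return sum(1 for line in log_lines
               if "GST_SCHEDULING" in line and pad_name in line)

# Usage with the saved log:
# with open("gst.log") as f:
#     print(count_buffers(f))
```

You can then compare that count with the packet count Wireshark reports for the same UDP stream.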
Hello, I have a problem with caps. When I try to use this pipeline:
gst-launch-1.0 v4l2src ! caps ! nvvidconv ! caps ! tee ! queue1 ! nvvideosink
I get: WARNING: erroneous pipeline: no element "caps"
Thanks.
GstElement and GstCaps are two different things.
Caps are a structure that describes the stream's media type and some of its properties; they are not a GstElement. Therefore you should use the capsfilter element, which is a GstElement, and set your caps on it.
Your pipeline should be like this:
v4l2src ! capsfilter caps="video/x-raw,width=640,height=480,format=I420" ! nvvidconv ! tee ! queue ! nvvideosink
(Careful: your caps may need to describe GPU memory, like 'video/x-raw(memory:NVMM)'. This is explained further below, so keep reading.)
You can set whatever format you want. If you are not sure of your camera's format, just leave it out, like: caps="video/x-raw,width=640,height=480"
When you use a capsfilter, you force your pipeline to use exactly those stream settings. For instance, if your camera does not support 640x480, your pipeline will fail!
If you are not sure of your camera's capabilities, use the nvvidconv or videoconvert element, which converts the stream for you.
If you are not sure what to do, try this pipeline:
v4l2src ! nvvidconv ! nvvideosink
Warning: nvvidconv and nvvideosink probably work on GPU memory. So if you try to use videoconvert with nvvideosink, the program might fail, because videoconvert works on CPU buffers and nvvideosink may not accept them.
Have a look at https://forums.developer.nvidia.com/t/window-playback-using-nvvideosink/42346, where the author built his pipeline to work on the GPU. He used nvcamerasrc to get the stream in GPU memory; v4l2src only captures into CPU memory.
You decide whether to get the stream from the CPU or the GPU. Also look at this link for help with your pipeline creation:
https://forums.developer.nvidia.com/t/gstreamer-input-nvcamerasrc-vs-v4l2src/50658/2
I want to take a video file and overlay subtitles that fade in and fade out.
I'm just beginning to learn how to work with Gstreamer.
So far, I've managed to put together a pipeline that composites a subtitle stream drawn by the textrender element onto an original video stream with the videomixer element. Unfortunately, textrender and its sister element textoverlay do not have a fade-in/fade-out feature.
The videomixer sink pad does have an alpha property. For now, I have set the alpha value of the pad named videomixer.sink_1 to 1.0. Here is the command-line version of that pipeline:
#!/bin/bash
gst-launch-1.0 \
filesrc location=sample_videos/my-video.mp4 ! decodebin ! mixer.sink_0 \
filesrc location=subtitles.srt ! subparse ! textrender ! mixer.sink_1 \
videomixer name=mixer sink_0::zorder=2 sink_1::zorder=3 sink_1::ypos=-25 sink_1::alpha=1 \
! video/x-raw, height=540 \
! videoconvert ! autovideosink
I am looking for a way to dynamically modify that alpha value over time so that I can make the subtitle component fade in and out at the appropriate times. (I will parse the SRT file separately to determine when fades begin and end.)
I am studying the GstBin C API (my actual code is in Python). I think after I create the pipeline with Gst.parse_launch(), I can grab any named element with gst_bin_get_by_name(), then use that value to access the pad "sink_1".
Once I've gotten that far, will I be able to modify that alpha value dynamically from an event handler that receives timer events? Will the videomixer element respond instantly to changes in that pad's property? Has anyone else done this?
I found a partial answer here: https://stackoverflow.com/a/17331845/270511 but it doesn't tell me whether this will work while the pipeline is running.
Yes, it will work.
The videomixer pads respond dynamically to changes; I have done this with both the alpha and position properties. The pad properties can be changed using
g_object_set (mix_sink_pad, "alpha", 0.5, NULL);
I am using C, but your Python strategy for accessing the bin and pad sounds correct. My GStreamer code responds to inputs from a UDP socket, but timer events will work perfectly fine. For example, if you wanted to change the alpha value every 100 ms, you could do something like this (note that g_timeout_add takes milliseconds, whereas g_timeout_add_seconds takes seconds):
g_timeout_add (100, alpha_changer_cb, loop);
You can then change the alpha property using g_object_set in the callback; it will update dynamically and looks very smooth.
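For the fade itself, the main piece is the alpha ramp. Here is a minimal sketch in Python (the asker's language) of a linear fade curve; the timestamps below are made-up examples, and pushing the computed value to the pad (e.g. pad.set_property("alpha", value) with gst-python, or g_object_set in C) is left to the timer callback:

```python
def fade_alpha(t, fade_in_start, fade_in_end, fade_out_start, fade_out_end):
    """Linear fade: 0.0 before fade-in, ramp up to 1.0, hold,
    then ramp back down to 0.0 after fade-out. Times in seconds."""
    if t < fade_in_start or t >= fade_out_end:
        return 0.0
    if t < fade_in_end:
        # ramping up
        return (t - fade_in_start) / (fade_in_end - fade_in_start)
    if t < fade_out_start:
        # fully visible
        return 1.0
    # ramping down
    return (fade_out_end - t) / (fade_out_end - fade_out_start)
```

In the timer callback you would look up the current position, compute fade_alpha() for the subtitle's fade window (parsed from the SRT file), and set it on the mixer's sink_1 pad.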
I got this to work. You can read about it in this post: https://westside-consulting.blogspot.com/2017/03/getting-to-know-gstreamer-part-4.html
I am trying to see TCP retransmission packets in tcpdump.
I found commands to filter SYN and ACK packets, but could not find a filter for retransmitted packets.
Is there a filter for such packets?
Thanks in advance.
I've just been using this for tracing retransmissions in Wireshark:
tcp.analysis.retransmission
This is also useful:
tcp.flags.reset==1
In tcpdump, you can match resets with this expression (I haven't tried retransmissions yet, but if I figure that out I'll update this answer):
'tcp[tcpflags] & (tcp-rst) != 0'
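The reason there is no tcpdump capture filter for this is that "retransmission" is not a flag in the packet: it is an analysis result computed across the whole stream, which is what Wireshark's tcp.analysis.retransmission does. As an illustration only, and greatly simplified (it ignores sequence wraparound, SACK and keep-alive probes), the core idea can be sketched in Python:

```python
def find_retransmissions(packets):
    """packets: iterable of (flow, seq, payload_len) tuples in capture order,
    where `flow` identifies one direction of one TCP connection.
    A packet is flagged as a retransmission if it carries data that ends at
    or below the highest sequence number already seen on that flow."""
    highest = {}   # flow -> highest end-of-data sequence number seen so far
    retrans = []
    for i, (flow, seq, length) in enumerate(packets):
        end = seq + length
        if flow in highest and length > 0 and end <= highest[flow]:
            retrans.append(i)
        highest[flow] = max(highest.get(flow, 0), end)
    return retrans
```

Because this needs per-flow state, it is a display/analysis filter, not something BPF-style capture filters can express.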
When you use Wireshark or TShark you can use a display filter:
field name: tcp.analysis.retransmission
AFAIK there is no capture filter to do the trick on tcpdump, dumpcap, Wireshark or TShark.
I'd like to display a rtp / vp8 video stream that comes from gstreamer, in openCV.
I have already a working solution which is implemented like this :
gst-launch-0.10 udpsrc port=6666 ! "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)VP8-DRAFT-IETF-01,payload=(int)120" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! ffenc_mpeg4 ! filesink location=videoStream
Basically it grabs incoming data from a UDP socket, depacketizes the RTP, decodes the VP8, and passes it through ffmpegcolorspace (I still don't understand what that is for; I see it everywhere in GStreamer pipelines).
The videoStream is a pipe I created with mkfifo. On the side, I have my openCV code that does :
VideoCapture cap("videoStream");
and uses cap.read() to push into a Mat.
My main concern is that I use ffenc_mpeg4 here, and I believe this degrades my video quality. I tried using x264enc in place of ffenc_mpeg4, but I get no output: openCV doesn't react, neither does gstreamer, and after a couple of seconds gst-launch just stops.
Any idea what I could use instead of ffenc_mpeg4? I looked for "lossless codec" on the net, but it seems I am confusing things such as codecs, containers, formats, compression and encoding; so any help would be (greatly) appreciated!
Thanks in advance !
I want to get a list of source IP addresses for an interface's outbound traffic. How can I find the direction of a packet, whether it's inbound or outbound, when reading traffic using libpcap? I don't know the subnet information of either side, and there are clients/servers on both sides, so I can't rely on port-number ranges to filter the traffic.
Why is there no information about direction in the libpcap packet header, or a filter option like inbound in pcap-filter?
Netsniff-NG, while not relying on libpcap, supports Linux kernel packet type extensions.
They're documented here.
One of the packet types is outgoing and commented as "outgoing of any type".
The following example will capture all packets leaving your interface.
$ netsniff-ng --in eth0 --out outgoing.pcap --type outgoing
Using this you can utilize other command-line tools to read the PCAP file and pull out all the source addresses. Maybe something *nix-ey like this:
$ tcpdump -nnr outgoing.pcap | cut -d " " -f3 | cut -d . -f1-4
Note: I haven't tried this on a router.
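If you'd rather avoid the cut chain, the same extraction can be sketched in Python. The line layout assumed here is the default `tcpdump -nn` one-liner ("TIME IP SRC.PORT > DST.PORT: ..."); adjust the parsing if your output differs:

```python
def source_ips(tcpdump_lines):
    """Collect unique source IPv4 addresses from `tcpdump -nn` output.

    Assumes the default one-line format 'TIME IP SRC.PORT > DST.PORT: ...';
    IPv6 lines (marked 'IP6') are skipped by this sketch."""
    ips = set()
    for line in tcpdump_lines:
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "IP":
            # strip the trailing .PORT from e.g. 192.168.1.5.443
            ips.add(fields[2].rsplit(".", 1)[0])
    return sorted(ips)

# Usage:
# import subprocess
# out = subprocess.run(["tcpdump", "-nnr", "outgoing.pcap"],
#                      capture_output=True, text=True).stdout
# print(source_ips(out.splitlines()))
```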
You could use "ether src" or "ether dst" to filter by packet direction. This requires you to know the MAC address of the interface.
You can select which direction of packets will be captured by calling pcap_setdirection() before pcap_loop().
For example, to capture incoming packets only you need to write:
handle = pcap_open_live("eth0", 65535, 1, 0, errbuf);  /* snaplen 65535, promiscuous mode, no read timeout */
pcap_setdirection(handle, PCAP_D_IN);                  /* capture incoming packets only */
pcap_loop(handle, -1, process_packet, NULL);           /* -1: loop until an error occurs */
Possible directions are: PCAP_D_IN, PCAP_D_OUT, PCAP_D_INOUT.
See tcpdump.org/manpages/pcap_setdirection.3pcap.txt
The PCAP file format does not contain a field that records the capture interface. That said, the newer PCAP-NG file format, currently used by Wireshark and TShark, supports it, along with packet direction.
Existing pcap-ng features:
packet dropped count
annotations (comments)
local IP address
interface & direction
hostname <-> IP address database
It sounds like you're capturing from a router or firewall, so something like the following would not work.
ip src 192.168.1.1
Capturing the traffic into flows may be an option, but it still will not provide you with direction information. You will, however, be able to determine the source and destination addresses easily. If you have an existing pcap, you can convert it to the ARGUS format:
argus -r capture.pcap -w capture.argus
ra -nnr capture.argus
Other tools, some with examples, that can easily obtain endpoints/hosts are:
ntop -f capture.pcap
tcpprof -nr capture.pcap
Wireshark Endpoints
flow-tools
You'll have to parse out the information you want, but I don't think that's too much trouble. I recommend taking a look at PCAP-NG if this approach doesn't work for you.