ffmpeg UDP/TCP stream: received frames not the same as sent

I am streaming a video from a Raspberry Pi using the command:
ffmpeg -re -threads 2 -i sample_video.m2v -f mpegts - | \
ffmpeg -f mpegts -i - -c copy -f mpegts udp://192.168.1.100:12345
The remote PC at 192.168.1.100 uses the ffmpeg library to listen for the input stream. For example:
informat = ffmpeg::av_find_input_format("mpegts");
avformat_open_input(&pFormatCtx, "udp://192.168.1.100:12345", informat, options);
However, when I compute the hash of each decoded frame on the two sides (i.e. the Raspberry Pi and the PC), they DON'T MATCH at all. A weird thing is that among ~2000 frames there are in total ~10 frames whose hash values are the same on the sender and receiver side. The match result looks like this:
00000....00011000...00011110000...000
where 0 indicates a non-match and 1 indicates a match. The matched frames appear in runs of 2 to 6 and appear rarely, while most of the other frames have different hash values.
The hash is computed on the frame data buffer extracted using avpicture_layout(). On the Pi side, I just stream the video to a local port, and a local process uses the same code to decode and hash the frames:
ffmpeg -re -threads 2 -i sample_video.m2v -f mpegts - | \
ffmpeg -f mpegts -i - -c copy -f mpegts udp://localhost:12345
...
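For illustration, the per-frame hashing can be reproduced without the C API by decoding to raw YUV and hashing fixed-size frame buffers. A minimal Python sketch (the 720x576 frame size is an assumption; substitute the real dimensions of sample_video.m2v):

import hashlib
import subprocess

def hash_frames(src, width=720, height=576):
    # Decode to raw yuv420p and hash each fixed-size frame buffer,
    # mirroring the buffer that avpicture_layout() produces for a decoded frame.
    frame_size = width * height * 3 // 2  # yuv420p holds 1.5 bytes per pixel
    proc = subprocess.Popen(
        ["ffmpeg", "-i", src, "-f", "rawvideo", "-pix_fmt", "yuv420p", "pipe:1"],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
    hashes = []
    while True:
        buf = proc.stdout.read(frame_size)
        if len(buf) < frame_size:
            break
        hashes.append(hashlib.md5(buf).hexdigest())
    return hashes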
The streaming source, the Raspberry Pi, is connected directly to the PC by cable. I don't think it is a packet-loss problem. First, I reran the same process several times and the hash values of the received frames were identical across runs (with packet loss the results should differ between runs, because loss is probabilistic). Second, I even tried streaming to tcp://192.168.1.100:12345 (with "tcp://192.168.1.100:12345?listen" on the PC), and the received frame hashes are still the same across runs, yet different from the hash results on the Pi.
So, does anyone know why streaming to a remote address yields different decoded frames? Maybe I am missing some details.
Thanks in advance!!

Related

How to stream cv2.VideoWriter frames to an RTSP server

Environment: Docker, Ubuntu 20.04, OpenCV 3.5.4, FFmpeg 4.2.4
I'm currently reading the output of a cv2.VideoCapture session using the CV_FFMPEG backend and successfully writing that back out in real time to a file using cv2.VideoWriter. The reason I am doing this is to draw bounding boxes on the input and save it to a new output.
The problem is that I am doing this in a headless environment (a Docker container), and I'd like to view what's being written to cv2.VideoWriter in real time.
I know there are ways to pass my display through using XQuartz for example so I could use cv2.imshow. But what I really want to do is write those frames to an RTSP Server. So not only my host can "watch" but also other hosts could watch too.
After the video is released I can easily stream the video to my RTSP Server using this command.
ffmpeg -re -stream_loop -1 -i output.mp4 -c copy -f rtsp rtsp://rtsp_server_host:8554/stream
Is there any way to pipe the frames as they come in to the above command? Can cv2.VideoWriter itself write frames to an RTSP server?
Any ideas would be much appreciated! Thank you.
After much searching I finally figured out how to do this with FFmpeg in a subprocess. Hopefully this helps someone else!
import subprocess
import cv2
import numpy as np

def open_ffmpeg_stream_process():
    # ffmpeg reads raw frames on stdin and publishes them to the RTSP server.
    # Note: cv2 frames are BGR; if colors look swapped, use -pix_fmt bgr24 here
    # (or convert with cv2.cvtColor before writing).
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp rtsp://rtsp_server:8554/stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)

def capture_loop():
    ffmpeg_process = open_ffmpeg_stream_process()
    capture = cv2.VideoCapture(<video/stream>)
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        ffmpeg_process.stdin.write(frame.astype(np.uint8).tobytes())
    capture.release()
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()
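One caveat about the snippet above: the -s 1920x1080 and -pix_fmt rgb24 arguments must match the frames cv2 actually delivers, and OpenCV frames are natively BGR. A small sketch that derives the size from the capture instead of hardcoding it (the RTSP URL is the same placeholder as above):

import subprocess
import cv2

def open_stream_process_for(capture, url="rtsp://rtsp_server:8554/stream"):
    # Read the real frame size from the capture so -s matches the bytes we pipe.
    w = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
    args = (
        f"ffmpeg -re -f rawvideo -pix_fmt bgr24 -s {w}x{h} "
        f"-i pipe:0 -pix_fmt yuv420p -f rtsp {url}"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)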

Get first and last times from pcap file with Wireshark command line tools (like tshark)

I have a huge collection of PCAP files, some of which have been "touched" since they were captured. This means the system timestamp on the file may not equate to the time of the data capture. Additionally, most of the files are autosaves from Wireshark, and sometimes the host computer doesn't get the data from the tap until after the capture time, so if this occurs just after a file autosaved, the next sequential file actually has captures prior to the end time of the previous file.
I have an automatic parser which uses tshark to go through these files. However, it takes about 2 minutes per file to run and I have tens of thousands of files, and I won't know that there's a timestamp issue until after it's run through the problem files.
Is there an easy way to grab the first "epoch time" and the last "epoch time" from a PCAP file using tshark (or another command line tool) without having to scan the entire file?
No (not with tshark).
However, Wireshark provides a program, capinfos, which reads a capture file to obtain information such as start time, end time, number of packets, etc. (See the help for details.)
capinfos does no dissection and so will be much faster than tshark.
$ capinfos -a -e wireless_080224_first.pcap.gz
File name: wireless_080224_first.pcap.gz
First packet time: 2008-02-24 13:10:09.637336
Last packet time: 2008-02-24 13:40:23.026171
$ capinfos -T -r -a -e wireless_080224_first.pcap.gz
wireless_080224_first.pcap.gz 2008-02-24 13:10:09.637336 2008-02-24 13:40:23.026171
; Default output
$ capinfos wireless_080224_first.pcap.gz
File name: wireless_080224_first.pcap.gz
File type: Wireshark/tcpdump/... - pcap (gzip compressed)
File encapsulation: Ethernet
File timestamp precision: microseconds (6)
Packet size limit: file hdr: 65535 bytes
Number of packets: 15 k
File size: 12 MB
Data size: 13 MB
Capture duration: 1813.388835 seconds
First packet time: 2008-02-24 13:10:09.637336
Last packet time: 2008-02-24 13:40:23.026171
Data byte rate: 7705 bytes/s
Data bit rate: 61 kbps
Average packet size: 894.31 bytes
Average packet rate: 8 packets/s
SHA1: 222837342c170e8fb0c2673aef9c056a2ddc08ae
RIPEMD160: ecf83704b912da3d2f69f4257fa9ee1658aac6cb
MD5: b82eda24d784e69ac0828a4ebffed885
Strict time order: True
Number of interfaces in file: 1
Interface #0 info:
<snip>
capinfos is the superior solution, but if you don't have access to it, or you want to use tshark anyway, this is how you might go about it (the whole file still gets read):
tshark -r "$file" -T fields -e frame.time_epoch | sort -n | head -1   # first packet
tshark -r "$file" -T fields -e frame.time_epoch | sort -n | tail -1   # last packet
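If even capinfos is too slow at scale, the first timestamp can be read straight from the file header without scanning the file. A minimal Python sketch, assuming uncompressed classic pcap files (not pcapng, gzip, or the nanosecond-resolution variants):

import struct

def first_packet_epoch(path):
    # Classic pcap layout: a 24-byte global header, then for each packet a
    # 16-byte record header that starts with ts_sec and ts_usec (both uint32).
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic == b"\xd4\xc3\xb2\xa1":    # written on a little-endian machine
            fmt = "<II"
        elif magic == b"\xa1\xb2\xc3\xd4":  # written on a big-endian machine
            fmt = ">II"
        else:
            raise ValueError("not an uncompressed classic pcap file")
        f.seek(24)
        ts_sec, ts_usec = struct.unpack(fmt, f.read(8))
        return ts_sec + ts_usec / 1e6

The last timestamp still requires walking the record headers (skipping incl_len bytes per packet), which is essentially what capinfos does, so prefer capinfos for that if you have it.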

ffmpeg - Continuously stream webcam to single .jpg file (overwrite)

I have installed ffmpeg and mjpeg-streamer. The latter reads a .jpg file from /tmp/stream and outputs it via http onto a website, so I can stream whatever is in that folder through a web browser.
I wrote a bash script that continuously captures a frame from the webcam and puts it in /tmp/stream:
while true
do
ffmpeg -f video4linux2 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 -vframes 1 /tmp/stream/pic.jpg
done
This works great, but is very slow (~1 fps). In the hopes of speeding it up, I want to use a single ffmpeg command which continuously updates the .jpg at, let's say 10 fps. What I tried was the following:
ffmpeg -f video4linux2 -r 10 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 /tmp/stream/pic.jpg
However this - understandably - results in the error message:
[image2 @ 0x1f6c0c0] Could not get frame filename number 2 from pattern '/tmp/stream/pic.jpg'
av_interleaved_write_frame(): Input/output error
...because the output pattern is bad for a continuous stream of images.
Is it possible to stream to just one jpg with ffmpeg?
Thanks...
You can use the -update option:
ffmpeg -y -f v4l2 -i /dev/video0 -update 1 -r 1 output.jpg
From the image2 file muxer documentation:
-update number
If number is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten
with new images.
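A practical note on the reader side (an observation, not something from the docs quoted above): a consumer such as mjpeg-streamer can occasionally open the file mid-overwrite. If you control the consumer, you can detect a truncated image by checking for the JPEG end-of-image marker. A minimal Python sketch:

import time

def read_latest_jpeg(path="/tmp/stream/pic.jpg"):
    # A complete JPEG ends with the EOI marker FF D9; retry while mid-write.
    while True:
        with open(path, "rb") as f:
            data = f.read()
        if data.endswith(b"\xff\xd9"):
            return data
        time.sleep(0.05)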
It is possible to achieve what I wanted by using:
./mjpg_streamer -i "input_uvc.so -r 1280x1024 -d /dev/video0 -y" -o "output_http.so -p 8080 -w ./www"
...from within the mjpg_streamer directory. It will do all the nasty work for you, serving the stream in the browser at the address:
http://{IP-OF-THE-SERVER}:8080/
It's also light-weight enough to run on a Raspberry Pi.
Here is a good tutorial for setting it up.
Thanks for the help!

How can I extract audio from video with ffmpeg? [closed]

I tried the following command to extract audio from video:
ffmpeg -i Sample.avi -vn -ar 44100 -ac 2 -ab 192k -f mp3 Sample.mp3
but I get the following output
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
SamplE.avi: Invalid data found when processing input
Can anyone help, please?
To extract the audio stream without re-encoding:
ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac
-vn means no video.
-acodec copy says use the same audio stream that's already in there.
Read the output to see what codec it is, to set the right filename extension.
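If you'd rather script that check than read ffmpeg's output by eye, ffprobe (bundled with ffmpeg) can report the codec name. A sketch, with a deliberately small codec-to-extension map (an illustration only; extend it as needed):

import subprocess

def audio_codec(path):
    # Ask ffprobe for the codec name of the first audio stream.
    out = subprocess.check_output([
        "ffprobe", "-v", "error", "-select_streams", "a:0",
        "-show_entries", "stream=codec_name",
        "-of", "default=noprint_wrappers=1:nokey=1", path])
    return out.decode().strip()

EXT = {"aac": "aac", "mp3": "mp3", "ac3": "ac3", "opus": "opus", "vorbis": "ogg"}
# e.g. extension = EXT.get(audio_codec("input-video.avi"), "mka")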
To encode high-quality MP3 or MP4 audio from a movie file (e.g. AVI, MP4, MOV) or an audio file (e.g. WAV), I find it's best to use -q:a 0 for variable bit rate, and it's good practice to specify -map a to exclude video/subtitles and grab only the audio:
ffmpeg -i sample.avi -q:a 0 -map a sample.mp3
If you want to extract a portion of the audio from a video, use the -ss option to specify the starting timestamp and the -t option to specify the encoding duration, e.g. from 3 minutes and 5 seconds in, for 45 seconds:
ffmpeg -i sample.avi -ss 00:03:05 -t 00:00:45.0 -q:a 0 -map a sample.mp3
The timestamps need to be in HH:MM:SS.xxx format or in seconds.
If you don't specify the -t option it will go to the end.
You can use the -to option instead of -t if you want to specify the end of the range rather than a duration, e.g. for the same 45 seconds: 00:03:05 + 45 = 00:03:50.
Working example:
Download ffmpeg
Open a Command Prompt (Start > Run > CMD) or, on Mac/Linux, open a Terminal
cd (the change directory command) to the directory containing the ffmpeg executable
Issue your command and wait for the output file (or troubleshoot any errors)
Extract all audio tracks / streams
This puts all audio into one file:
ffmpeg -i input.mov -map 0:a -c copy output.mov
-map 0:a selects all audio streams only. Video and subtitles will be excluded.
-c copy enables stream copy mode. This copies the audio and does not re-encode it. Remove -c copy if you want the audio to be re-encoded.
Choose an output format that supports your audio format. See comparison of container formats.
Extract a specific audio track / stream
Example to extract audio stream #4:
ffmpeg -i input.mkv -map 0:a:3 -c copy output.m4a
-map 0:a:3 selects audio stream #4 only (ffmpeg starts counting from 0).
-c copy enables stream copy mode. This copies the audio and does not re-encode it. Remove -c copy if you want the audio to be re-encoded.
Choose an output format that supports your audio format. See comparison of container formats.
Extract and re-encode audio / change format
Similar to the examples above, but without -c copy. Various examples:
ffmpeg -i input.mp4 -map 0:a output.mp3
ffmpeg -i input.mkv -map 0:a output.m4a
ffmpeg -i input.avi -map 0:a -c:a aac output.mka
ffmpeg -i input.mp4 output.wav
Extract all audio streams individually
The input in this example has 4 audio streams. Each audio stream will be output as a single, individual file.
ffmpeg -i input.mov -map 0:a:0 output0.wav -map 0:a:1 output1.wav -map 0:a:2 output2.wav -map 0:a:3 output3.wav
Optionally add -c copy before each output file name to enable stream copy mode.
Extract a certain channel
Use the channelsplit filter. Example to get the Front Right (FR) channel from a stereo input:
ffmpeg -i stereo.wav -filter_complex "[0:a]channelsplit=channel_layout=stereo:channels=FR[right]" -map "[right]" front_right.wav
channel_layout is the channel layout of the input. It is not automatically detected so you must provide the layout name.
channels lists the channel(s) you want to extract.
See ffmpeg -layouts for audio channel layout names (for channel_layout) and channel names (for channels).
Stream copy mode (-c copy) cannot be used when filtering, so the audio must be re-encoded.
See FFmpeg Wiki: Audio Channels for more examples.
What's the difference between -map and -vn?
ffmpeg has a default stream selection behavior that will select 1 stream per stream type (1 video, 1 audio, 1 subtitle, 1 data).
-vn is an old, legacy option. It excludes video from the default stream selection behavior. So audio, subtitles, and data are still automatically selected unless told not to with -an, -sn, or -dn.
-map is more complicated but more flexible and useful. -map disables the default stream selection behavior and ffmpeg will only include what you tell it to with -map option(s). -map can also be used to exclude certain streams or stream types. For example, -map 0 -map -0:v would include all streams except all video.
See FFmpeg Wiki: Map for more examples.
Errors
Invalid audio stream. Exactly one MP3 audio stream is required.
MP3 only supports 1 audio stream. The error means you are trying to put more than 1 audio stream into MP3. It can also mean you are trying to put non-MP3 audio into MP3.
WAVE files have exactly one stream
Similar to above.
Could not find tag for codec in stream #0, codec not currently supported in container
You are trying to put an audio format into an output that does not support it, such as PCM (WAV) into MP4.
Remove -c copy, choose a different output format (change the file name extension), or manually choose the encoder (such as -c:a aac).
See comparison of container formats.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
This is a useless, generic error. The actual, informative error should immediately precede this generic error message.
Seems like you're extracting audio from a video file and downmixing to stereo.
To just extract audio (without re-encoding):
ffmpeg.exe -i in.mp4 -vn -c:a copy out.m4a
To extract audio and downmix it to stereo (changing the channel count requires re-encoding, so -c:a copy cannot be combined with -ac 2):
ffmpeg.exe -i in.mp4 -vn -ac 2 out.m4a
To generate an mp3 file, you'd re-encode audio:
ffmpeg.exe -i in.mp4 -vn -ac 2 out.mp3
-c (select codecs) & -map (select streams) options:
-c:a -> select best supported audio (transcoded)
-c:a copy -> best supported audio (copied)
-map 0:a -> all audio from 1st (audio) input file (transcoded)
-map 0:0 -> 1st stream from 1st input file (transcoded)
-map 1:a:0 -> 1st audio stream from 2nd (audio) input file (transcoded)
-map 1:a:1 -c:a copy -> 2nd audio stream from 2nd (audio) input file (copied)
ffmpeg -i sample.avi will give you the audio/video format info for your file. Make sure you have the proper libraries configured to parse the input streams. Also, make sure that the file isn't corrupt.
The command line is correct and works on a valid video file. I would make sure that you have installed the correct library to work with MP3: install LAME, or try another audio codec.
Usually
ffmpeg -formats
or
ffmpeg -codecs
would give sufficient information so that you know more.
To encode mp3 audio ffmpeg.org shows the following example:
ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
I extracted the audio from a video just by replacing input.wav with the video filename. A quality of 2 means roughly 190 kbit/s on average (variable bit rate). You can see the other quality levels at my link above.
For people looking for the simpler way to extract audio from a video file while retaining the original video file's parameters, you can use:
ffmpeg -i <video_file_name.extension> <audio_file_name.extension>
For example, running:
ffmpeg -i screencap.mov screencap.mp3
extracts an mp3 audio file from a mov video file.
Here's what I just used:
ffmpeg -i my.mkv -map 0:3 -vn -b:a 320k my.mp3
Options explanation:
my.mkv is a source video file, you can use other formats as well
-map 0:3 means I want stream index 3 from the video file (ffmpeg counts from 0). Put your N there; video files often have multiple audio streams. You can omit it, or use -map 0:a to take the default audio stream. Run ffprobe my.mkv to see which streams the video file has.
my.mp3 is the target audio filename, and ffmpeg figures out I want an MP3 from its extension. In my case the source audio stream is DTS, and just copying it wasn't what I wanted
320k is the desired target bitrate
-vn means I don't want video in the target file
Creating an audio book from several video clips
First, extract the audio (as .m4a) from a bunch of H.264 .mp4 files:
for f in *.mp4; do ffmpeg -i "$f" -vn -c:a copy "$(basename "$f" .mp4).m4a"; done
the -vn output option disables video output (automatic selection or mapping of any video stream). For full manual control see the -map option.
Optional
If there's an intro of, say, 40 seconds, you can skip it with the -ss parameter:
for f in *.m4a; do ffmpeg -i "$f" -ss 00:00:40 -c copy crop/"$f"; done
To combine all files in one:
ffmpeg -f concat -safe 0 -i <(for f in ./*.m4a; do echo "file '$PWD/$f'"; done) -c copy output.m4a
If the audio wrapped inside the AVI is not MP3 to start with, you may need to specify -acodec mp3 as an additional parameter. Or whatever your MP3 codec is (on Linux systems it's probably -acodec libmp3lame). You may also get the same effect, platform-agnostic, by instead specifying -f mp3 to "force" the format to MP3, although not all versions of ffmpeg still support that switch. Your Mileage May Vary.
To extract without conversion I use a context menu entry - as file manager custom action in Linux - to run the following (after having checked what audio type the video contains; example for video containing ogg audio):
bash -c 'ffmpeg -i "$0" -map 0:a -c:a copy "${0%%.*}".ogg' %f
which is based on the ffmpeg command ffmpeg -i INPUT -map 0:a -c:a copy OUTPUT.
I have used -map 0:1 in that without problems, but, as said in a comment by @LordNeckbeard, "Stream 0:1 is not guaranteed to always be audio. Using -map 0:a instead of -map 0:1 will avoid ambiguity."
Use -b:a instead of -ab, as -ab is outdated now. Also make sure your input file path is correct.
To extract audio from a video I have used the command below, and it works fine.
String[] complexCommand = {"-y", "-i", inputFileAbsolutePath, "-vn", "-ar", "44100", "-ac", "2", "-b:a", "256k", "-f", "mp3", outputFileAbsolutePath};
Here,
-y - Overwrite output files without asking.
-i - FFmpeg reads from an arbitrary number of input “files” specified by the -i option
-vn - Disable video recording
-ar - sets the sampling rate for audio streams if encoded
-ac - Set the number of audio channels.
-b:a - Set the audio bitrate
-f - format
Check out my complete sample FFmpeg Android project on GitHub.

Plot RTT histogram using wireshark or other tool

I have a small office network and I'm experiencing huge internet link latency. We have a simple network topology: a computer configured as a router running Ubuntu Server 10.10 with 2 network cards (one to the internet link, the other to the office network), and a switch connecting 20 computers. I have a huge tcpdump log collected at the router, and I would like to plot a histogram of the RTT of all TCP streams to try to find the best solution to this latency problem. So, could somebody tell me how to do it using Wireshark or another tool?
Wireshark or tshark can give you the TCP RTT for each received ACK packet using tcp.analysis.ack_rtt which measures the time delta between capturing a TCP packet and the ACK for that packet.
You need to be careful with this as most of your ACK packets will be from your office machines ACKing packets received from the internet, so you will be measuring the RTT between your router seeing the packet from the internet and seeing the ACK from your office machine.
To measure your internet RTT you need to look for ACKs from the internet (ACKing data sent from your network). Assuming your office machines have IP addresses like 192.168.1.x and you have logged all the data on the LAN port of your router, you could use a display filter like so:
tcp.analysis.ack_rtt and ip.dst==192.168.1.255/24
To dump the RTTs into a .csv for analysis you could use a tshark command like so;
tshark -r router.pcap -Y "tcp.analysis.ack_rtt and ip.dst==192.168.1.255/24" -e tcp.analysis.ack_rtt -T fields -E separator=, -E quote=d > rtt.csv
The -r option tells tshark to read from your .pcap file
The -Y option specifies the display filter to use (-R without -2 is deprecated)
The -e option specifies the field to output
The -T options specify the output formatting
You can use the mergecap utility to merge all your pcap files into one file before running this command. Turning this output into a histogram should be easy!
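For that last step, a minimal sketch that plots the histogram from rtt.csv (assuming Python with matplotlib installed; rtt.csv is the output of the tshark command above, one quoted RTT in seconds per line):

import csv
import matplotlib.pyplot as plt

rtts = []
with open("rtt.csv") as f:
    for row in csv.reader(f):
        if row:                                # skip any blank lines
            rtts.append(float(row[0]) * 1000)  # seconds -> milliseconds

plt.hist(rtts, bins=100)
plt.xlabel("TCP ACK RTT (ms)")
plt.ylabel("packets")
plt.show()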
Here's a 5-minute Perl script inspired by rupello's answer:
#!/usr/bin/perl
# For a live histogram of rtt latencies, save this file as /tmp/hist.pl and chmod +x /tmp/hist.pl, then run:
# tshark -i wlp2s0 -Y "tcp.analysis.ack_rtt and ip.dst==192.168.0.0/16" -e tcp.analysis.ack_rtt -T fields -E separator=, -E quote=d | /tmp/hist.pl
# Don't forget to update the interface "wlp2s0" and "and ip.dst==..." bits as appropriate, type "ip addr" to get those.
# Bucket boundaries in seconds, from largest to smallest.
$t[$m=0]=20;
$t[++$m]=10;
$t[++$m]=5;
$t[++$m]=2;
$t[++$m]=1;
$t[++$m]=0.9;
$t[++$m]=0.8;
$t[++$m]=0.7;
$t[++$m]=0.6;
$t[++$m]=0.5;
$t[++$m]=0.4;
$t[++$m]=0.3;
$t[++$m]=0.2;
$t[++$m]=0.1;
$t[++$m]=0.05;
$t[++$m]=0.04;
$t[++$m]=0.03;
$t[++$m]=0.02;
$t[++$m]=0.01;
$t[++$m]=0.005;
$t[++$m]=0.001;
$t[++$m]=0;
$h[0]=0;
while (<>) {
    # Strip the quotes added by -E quote=d and count the sample.
    s/\"//g; $n=$_; chomp($n); $o++;
    # Walk buckets from the smallest boundary up; setting $i=-1 breaks out on a match.
    for ($i=$m;$i>=0;$i--) { if ($n<=$t[$i]) { $h[$i]++; $i=-1; }; };
    if ($i==-1) { $h[0]++; };    # no boundary matched: count it in the top bucket
    # Clear the screen and reprint the histogram: boundary, percentage, count.
    print "\033c";
    for (0..$m) { printf "%6s %6s %8s\n",$t[$_],sprintf("%3.2f",$h[$_]/$o*100),$h[$_]; };
}
Newer versions of tshark seem to work better with "stdbuf -i0 -o0 -e0 " in front of the "tshark".
PS: Does anyone know if Wireshark has DNS and ICMP RTT stats built in, or how to easily get those?
2018 Update: See https://github.com/dagelf/pping
