I've already posted this question on superuser.com (https://superuser.com/questions/1718608/ffmpeg-live-streaming-to-ffserver-delay-at-start), but since part of this question is OpenCV related, they suggested that I also post it here.
I'm trying to achieve simple camera streaming using FFmpeg and FFserver. I have two slightly different systems acting as the source, both running Debian:
the first one runs ffmpeg 3.4.8
the second one runs ffmpeg 2.8.17
The ffmpeg command used to send the stream to ffserver is the following, identical on both systems:
ffmpeg -re -f v4l2 -s 640x360 -thread_queue_size 20 -probesize 32 -i /dev/video0 -threads 4 -fflags nobuffer -tune zerolatency http://myserverIP:myserverPort/liveFeed.ffm
To view the result, I access the live stream from a third system using OpenCV pointed at the server URL:
// org.opencv.videoio.VideoCapture opening the FLV stream served by ffserver
VideoCapture videoCap = new VideoCapture("http://myserverIP:myserverPort/liveFeed.flv");
...
// read the next decoded frame into an org.opencv.core.Mat
videoCap.read(imageInput);
and start grabbing the incoming frames from the stream.
The weird thing happens here:
with the first system the video stream visualized through OpenCV is pretty much real time, with 1-2 seconds of delay from the original source.
with the second system the video stream is affected by a variable delay roughly equal to the time elapsed between the start of the source stream and the start of the acquisition with OpenCV (for example: if I start the source stream at 12:00:00 and wait 30 seconds before accessing the stream with OpenCV, the third system shows a delay of about 30 seconds).
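One way to narrow down where the delay accumulates (a diagnostic idea of mine, not from the original post) is to open the same URL with ffplay and its low-latency options; if the 30-second lag still appears there, the buffering is happening on the ffserver side rather than in OpenCV:
ffplay -fflags nobuffer -flags low_delay http://myserverIP:myserverPort/liveFeed.flv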
The ffserver configuration is the following:
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 6000
CustomLog -
#NoDaemon
<Feed liveFeed.ffm>
File /tmp/SCT-0001_3.ffm
FileMaxSize 5M
</Feed>
<Stream liveFeed.flv>
Format flv
Feed liveFeed.ffm
VideoCodec libx264
VideoFrameRate 20
VideoBitRate 200
VideoSize 640x360
AVOptionVideo preset superfast
AVOptionVideo tune zerolatency
AVOptionVideo flags +global_header
NoAudio
</Stream>
##################################################################
# Special streams
##################################################################
<Stream stat.html>
Format status
# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
# Redirect index.html to the appropriate site
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
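As a quick sanity check (my addition, not part of the original setup), the status stream defined above can be fetched from an allowed host to see which feeds ffserver has registered and how much of the feed file it has buffered:
curl http://myserverIP:myserverPort/stat.html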
Any help to spot the problem would be great! Thanks
Related
I'm making an iOS app that decodes an h264 stream using video-toolbox. I create the stream with ffmpeg on a PC and send it to an iPhone using RTP. It's working nicely when I use this command to create it:
ffmpeg -y -f:v rawvideo -c:v rawvideo -s 1280x720 -pix_fmt bgra -r 30 -an -i - -pix_fmt yuv420p -c:v libx264 -tune zerolatency -preset fast -b:v 5M -refs 1 -g 30 -profile:v high -level 4.1 -f rtp rtp://192.168.1.100:5678
The iPhone receives and displays all the frames. However, when I enable intra-refresh
-intra-refresh 1
decoding fails with error code -12909 (-8969 on simulator) when VTDecompressionSessionDecodeFrame() is called.
I take care of processing the UDP packets to extract the NAL units myself, so I triple-checked this process and ruled out a problem in that part of the code.
I didn't find any info about Video Toolbox not supporting intra-refresh, so the question is: does Video Toolbox support intra-refresh? And if it does, am I missing something on the ffmpeg side that makes the stream unsupported by Video Toolbox?
Do I have to add something to the CMVideoFormatDescriptionRef apart from creating it with SPS and PPS data using CMVideoFormatDescriptionCreateFromH264ParameterSets()?
Yes, Video Toolbox supports intra-refresh.
No, it has nothing to do with ffmpeg.
No, you don't need to do anything special with the CMVideoFormatDescriptionRef.
I figured it out: I was creating a new VTDecompressionSession each time I received SPS and PPS NALUs, so the decoder was losing its context.
It was working without intra-refresh because in that case a complete I-Frame is received right after SPS and PPS, so it doesn't need context from previous frames.
With intra-refresh enabled, only the first frame is a complete I-Frame, then the decoder relies on context from previous frames and must use the same VTDecompressionSession.
I'm trying to use avconv to make a LINEAR16 raw file for Google's speech-to-text, but whenever I try, the resulting file plays back far too slowly when I use the play command from the documentation:
play --rate=16000 --bits=16 --endian=little --encoding=signed-integer --channels=1 out.raw
What's the right way to make this kind of a conversion?
It took some experimentation, but I was able to get it working by explicitly stating the sample rate, number of channels, and output format:
avconv -i michael_queen_v._ed_schultz_cl.mp3 -f s16le -ac 1 -ar 16k out.raw
-f: This forces the output format, since the .raw extension apparently isn't enough for it to know what to do.
-ac 1: Mono
-ar 16k: This sounds like a gun, which is depressing, but it sets the sample rate to 16 kHz (16,000 samples per second).
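For reference (my addition; avconv was a fork of ffmpeg, and current ffmpeg accepts the same flags), the equivalent conversion with ffmpeg looks like this, with a placeholder input name:
# s16le = signed 16-bit little-endian PCM, mono, 16 kHz
ffmpeg -i input.mp3 -f s16le -acodec pcm_s16le -ac 1 -ar 16000 out.raw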
I am making a site in Ruby in which I have a series of images (almost like a PowerPoint presentation), and I need to automatically convert those images into one continuous video file (mov, mpeg) that shows each image for 5 seconds or so. Anyone have any clues where to start?
I'm also open to using another language if there are tools to get the job done.
You could probably use FFmpeg to do this. Here's an example from the FFmpeg Wiki on the subject:
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p -movflags +faststart out.mp4
What this would do is...
-framerate 1/5
Read input at one frame per five seconds...
-i img%03d.jpg
...from a series of JPEG files named img001.jpg, img002.jpg and so on...
-c:v libx264
...then turn it into H.264/MPEG-4 AVC...
-r 30
...at thirty frames per second...
-pix_fmt yuv420p
...with the YUV 4:2:0 pixel format, which most players require (really, any of the usual FFmpeg flags work here)...
-movflags +faststart
...after encoding completes, relocate some data to the beginning of the file so playback can begin before the file is completely downloaded...
out.mp4
...and store it into out.mp4.
If you were using this from Ruby you'd likely launch a subprocess. The flags would be similar if you really want a (QuickTime) .mov file instead of H.264 MPEG-4.
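If your images aren't already numbered sequentially, FFmpeg's image2 demuxer also accepts a glob pattern on most non-Windows builds (a sketch with placeholder names):
ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' -c:v libx264 -r 30 -pix_fmt yuv420p -movflags +faststart out.mp4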
I am recording from a cable stream to a .ts file using the hdhomerun command-line tool, hdhomerun_config. The way it works is that you run the command and it prints a period every second or so to let you know the stream is being recorded successfully. So when I record, it produces only periods, which is desired, and you end the recording with Ctrl-C. However, whenever I try to convert the result to an avi or a mov using FFmpeg, it gives a bunch of errors, some of which being:
[mpeg2video @ 0x7fbb4401a000] Invalid frame dimensions 0x0
[mpegts @ 0x7fbb44819600] PES packet size mismatch
[ac3 @ 0x7fbb44015c00] incomplete frame
It still creates the file, but the quality is bad and it doesn't work with OpenCV and other services. Has anyone else encountered this problem? Does anyone have any knowledge that may help with this situation? I tried to trim the .ts file, but most tools require conversion before editing. Thank you!
Warnings/errors like that are normal at the very start of the stream, because the recording started mid-stream (i.e. mid PES packet) while ffmpeg expects PES headers (i.e. the start of a PES packet). Once ffmpeg finds the next PES header it will be happy (0-500 ms later in play time).
Short version: it is harmless. You could eliminate the warnings/errors by removing all TS packets for each elementary stream until you hit a payload-unit-start flag, but that is what ffmpeg is already doing itself.
If you see additional warnings/errors after the initial ones, then there might be a reception or packet-loss issue that needs investigation.
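If the initial noise still bothers you, one option (my suggestion; recording.ts is a placeholder filename) is to have ffmpeg skip the truncated head of the capture before converting:
# seek 2 seconds into the input so decoding starts past the partial packets;
# older ffmpeg builds may need -strict experimental for the native AAC encoder
ffmpeg -ss 2 -i recording.ts -c:v libx264 -c:a aac output.mov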
I use ffmpeg to encode my sample videos following the recommended bitrates in Technical Note TN2224, then use the HLS tools to segment them and create the playlists, and finally create the variant playlist file "all.m3u8".
I used the validation tool to validate my HLS content. It reported that, except for the 64k audio-only variant whose bandwidth is low, all the other variants declare the same bandwidth. I opened "all.m3u8" in a text editor and saw that all the other bitrate variants indeed use the same BANDWIDTH value. No matter how I change the parameters in the ffmpeg command, I can't correct them. The following is the command I used to encode the content:
ffmpeg -i input.m4v -acodec libfaac -vcodec libx264 -s 480x360 -b 350k -r 29.97 -vpre medium output.mp4
The following command is for generating the segments and playlists:
mediafilesegmenter -b http://www.example.com/stream/ -I -f ~/Documents/sample/ output.mp4
The following command is for generating the all.m3u8:
variantplaylistcreator -o all.m3u8 http://www.example.com/stream/110/prog_index.m3u8 ~/Documents/sample/110/prog_index.m3u8 -iframe-url http://www.freeyourteam.com/stream/110/iframe_index.m3u8 http://www.example.com/stream/200/prog_index.m3u8 ~/Documents/sample/200/prog_index.m3u8 -iframe-url http://www.freeyourteam.com/stream/200/iframe_index.m3u8 http://www.example.com/stream/350/prog_index.m3u8 ~/Documents/sample/350/prog_index.m3u8 -iframe-url http://www.freeyourteam.com/stream/350/iframe_index.m3u8 http://www.example.com/stream/550/prog_index.m3u8 ~/Documents/sample/550/prog_index.m3u8 -iframe-url http://www.freeyourteam.com/stream/550/iframe_index.m3u8 http://www.example.com/stream/64/prog_index.m3u8 ~/Documents/sample/64/prog_index.m3u8
and in my "all.m3u8", the bandwidths are all 523894.
Please allow me to ask two more basic questions:
In the tech note, the recommended bitrates are 64 Kbps, 110 Kbps, 200 Kbps, 350 Kbps, and 550 Kbps; I wonder whether these values include or exclude the audio bitrate.
How do you insert a keyframe into each segment? I ask because the document says: "You must include at least one keyframe per segment, preferably more. If you only include one, put it at the beginning of the segment." I don't quite get how you can do that.
Thank you very much for your help and I do appreciate your time.
Jason,
To create all.m3u8, shouldn't it be given multiple m3u8 files, each corresponding to a different bitrate?
I am guessing you run ffmpeg, say, 4 times to create 4 bitrate files. Then you run the segmenter 4 times to create 4 sets of segments and their individual m3u8 files.
Finally you have to tell the variantplaylistcreator where the location of the various m3u8 files per bitrate to create a single master m3u8 file.
E.g.:
variantplaylistcreator -o mymedia_all.m3u8 http://mywebserver/mymedia_lo/prog_index.m3u8 mymedia_lo.plist http://mywebserver/mymedia_med/prog_index.m3u8 mymedia_med.plist http://mywebserver/mymedia_hi/prog_index.m3u8 mymedia_hi.plist
I don't see you providing the various files separately. I hope you get the picture.
EDIT: To answer your other questions:
Bitrates include audio. What you need to do is ensure you have a fixed keyframe interval in your encoding. This will allow the segmenter to cut the files at regular intervals; you don't insert anything anywhere.
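A sketch of how that fixed interval might be set with ffmpeg (my example; the values assume roughly 30 fps and 2-second segments): -g pins the GOP length, -keyint_min stops it from shrinking, and -sc_threshold 0 keeps x264 from inserting extra keyframes at scene changes:
# one keyframe every 60 frames = one every 2 seconds at 29.97 fps
ffmpeg -i input.m4v -acodec libfaac -vcodec libx264 -g 60 -keyint_min 60 -sc_threshold 0 -b 350k -s 480x360 -r 29.97 output.mp4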
Out of curiosity, why not use ffmpeg directly to give you the segmented output files? It supports that.
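For reference (a sketch, not part of the original answer), ffmpeg's segment muxer can cut an already-encoded file into .ts segments and write the playlist in one pass; the 10-second duration is only an example:
# copy the streams without re-encoding, splitting at keyframes
ffmpeg -i output.mp4 -codec copy -map 0 -f segment -segment_time 10 -segment_list prog_index.m3u8 -segment_format mpegts seg%03d.ts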
Thanks for everybody's attention and suggestions. I finally figured it out. The reason the bandwidth stayed the same for different bitrates is that my ffmpeg command was missing a couple of settings. I ended up using the following command:
ffmpeg -i inputVideo.m4v -f mpegts -acodec libfaac -ar 44100 -ab 64k -vcodec libx264 -b 350k -s 480x360 -r 29.97 -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 350k -bufsize 350k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 4:3 -g 30 -async 2 output.ts
I put it here so that other people who have the same problem as me will have a reference.
It sounds like you may have uncovered a bug in variantplaylistcreator. I recommend verifying that the sub-streams really are the bitrates you expect, and if the tool is really writing the wrong value, reporting it to Apple.
It might have something to do with using multiple -iframe-url options; I can't understand why it would be necessary to specify it more than once. Adaptive streaming won't work if the substreams have different I-frame positions; at the very least, all the segment boundaries must be aligned.
If you need to fix the playlist up programmatically, I recommend using ffprobe (from the ffmpeg suite) to extract the bitrate of each substream and replacing the BANDWIDTH number with the extracted value.
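A sketch of that extraction with a reasonably recent ffprobe (the segment path is a placeholder):
# print just the container-level bit rate, in bits per second
ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1:nokey=1 350/fileSequence0.ts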