VLC doesn't skip gaps during HLS transcoding - vlc

I want to stream an ASF file with an H.264 stream over HLS. I'm using VLC to transcode to .ts files and create the index. The command looks like this:
vlc.exe test.asf --sout=#duplicate{dst=std{access=livehttp{seglen=2,delsegs=true,numsegs=2,index=mystream.m3u8,index-url=mystream-####.ts},mux=ts{use-key-frames},dst=mystream-####.ts}}
VLC creates the .ts files and refreshes the index, IIS publishes the stream, and everything works fine. But if there is a gap in the file, for example no samples with timestamps between 10 and 20 seconds, VLC waits for 10 seconds and only then continues streaming, which is not suitable for me.
Is it possible to somehow tell VLC to skip the gaps?

I didn't find anything other than recalculating the timestamps before feeding the video to VLC.
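A sketch of that timestamp-recalculation approach using ffmpeg (an untested assumption for this particular ASF file, and it requires re-encoding): the setpts/asetpts filters rewrite PTS from the frame and sample counts, which closes the gap.
# Untested sketch: re-stamp video PTS from frame numbers and audio PTS from sample counts.
ffmpeg -i test.asf -vf "setpts=N/FRAME_RATE/TB" -af "asetpts=N/SR/TB" -c:v libx264 -c:a aac retimed.ts
The resulting retimed.ts can then be fed to the vlc.exe command above in place of test.asf.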

Related

How to: Play m3u8 files offline in VLC

I'm using an offline terminal to create an .m3u8 file, but I would like to play it using VLC. Every example I've found so far shows m3u8 being used online, which isn't an option for me. If it's possible to play it offline using VLC, how do you do so?
You have to download the content referenced by the m3u8 playlist to watch it offline, and to do that you can use either youtube-dl or ffmpeg.
I would recommend ffmpeg, and the command for downloading the m3u8 content goes like this:
ffmpeg -i "here your m3u8 link" -c copy Output.mp4
This will download the highest-quality video available in that m3u8 file.
To change the quality you can use the -map option, or simply download the m3u8 file, open it in a text editor, and you will find the links for the other resolutions.
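For example (a sketch: the program index 1 below is an assumption; list the available variants with ffprobe first, since the program IDs depend on the playlist):
ffprobe -i "here your m3u8 link"
# 0:p:1 selects the program (variant) that ffprobe reported with ID 1
ffmpeg -i "here your m3u8 link" -map 0:p:1 -c copy Output.mp4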
m3u8 files are just playlists without the actual content; they contain only URL pointers. These days such files are mostly used for HLS.
While you can open the playlist offline, you won't see anything unless the referenced content is also available.

libffmpeg: writing an RTSP stream to an output file

I'm working with libffmpeg in an iOS app. My goal is to connect to an RTSP source and write the media out to a file that can later be used with the iOS media player. Ideally I'd like to do this without transcoding the incoming data. I also want to be able to later re-encode the media with AVAssetExportSession if the user chooses to do so.
Because I want to create a file that is compatible with iOS, I'm limited (I believe) to mpeg, mp4 or quicktime (mov) formats.
Whenever I try to use one of these formats, I see the following warnings during my call to avformat_write_header:
[mov @ 0x16401c00] Codec for stream 0 does not use global headers but container format requires global headers
[mov @ 0x16401c00] Codec for stream 1 does not use global headers but container format requires global headers
My understanding is that the header wants to know the ultimate file size, which I do not know (the RTSP server is live streaming a camera, and the user stops the recording whenever they want). I guess that makes sense, but I know that others have successfully done this using the ffmpeg command line, so I'm confused as to what else I need to do here.
If I ignore the warning, I can still proceed with writing the file. If I choose mpeg or mp4 formats, my app crashes when I call av_write_trailer. If I use mov, I can successfully close the file, and the file does play back, but usually fails when I try to hand it to the AVAssetExportSession.
I would appreciate any insight into this. Thanks.
Frank
I found what appears to be a solution -- at least, it eliminates the warning. I had to set CODEC_FLAG_GLOBAL_HEADER on both the audio and video codec contexts before calling avcodec_open2.
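For reference, the usual pattern looks roughly like this (a sketch; ofmt_ctx, video_codec_ctx and audio_codec_ctx are placeholder names, and newer FFmpeg releases spell the flag AV_CODEC_FLAG_GLOBAL_HEADER):
/* Sketch: request global headers when the output container needs them. */
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
    video_codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
    audio_codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
/* ...then open the codecs as before. */
avcodec_open2(video_codec_ctx, video_codec, NULL);
avcodec_open2(audio_codec_ctx, audio_codec, NULL);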

Streaming Technique from pocketcast in xcode

I've been asked by my client whether it is possible to download a video and start playing it once a portion has downloaded, just like Pocket Casts does. His reasoning is that this would allow him to store his video files on a host such as GoDaddy and bypass the need for a dedicated streaming server.
Is this even possible? If so, do you know where I can look to find out how Pocket Casts does it? At the moment my app just streams an MP4.
Thanks for looking,
Matt
Since you're targeting iOS, HLS (HTTP Live Streaming) is your friend: https://developer.apple.com/streaming/
Please see my answer here for how you can use it: Simultaneously downloading and playing a song that is pieced together from multiple URLs
It's very easy to run a long movie through the mediafilesegmenter tool from Apple (or FFmpeg), which spits out a number of small .ts files (MPEG-2 Transport Stream). Then you create a manifest (an .m3u8 file) which describes how these files fit together (which mediafilesegmenter will create for you too!). Then you just put the manifest file and the .ts files on a hosting provider (like GoDaddy) and you're all set.
For example, given a file called test.mp4, first turn it into a .ts file with ffmpeg:
ffmpeg -i test.mp4 -acodec copy -vcodec copy -bsf h264_mp4toannexb test.ts
Then turn it into a series of HLS segments with mediafilesegmenter (the same can be done with ffmpeg's segmenting muxers, as sketched after the playlist below, but mediafilesegmenter seems to be more robust):
mediafilesegmenter -t 3 test.ts
The result is a bunch of 3-second clips (that's what -t 3 means) and a manifest file called prog_index.m3u8. Its contents look like this:
#EXTM3U
#EXT-X-TARGETDURATION:3
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:2.99520,
fileSequence0.ts
#EXTINF:2.99520,
fileSequence1.ts
#EXTINF:2.99520,
fileSequence2.ts
#EXTINF:2.99520,
fileSequence3.ts
...
#EXTINF:0.37440,
fileSequence75.ts
#EXT-X-ENDLIST
Simply putting all of the .ts files and the .m3u8 file on a web server and pointing your AVPlayer or MPMoviePlayerController in iOS at the URL for the .m3u8 will get you excellent streaming performance.
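If you'd rather stay entirely with ffmpeg, its hls muxer can produce the segments and a VOD playlist in one step. A sketch (option names vary slightly between ffmpeg versions, so check ffmpeg -h muxer=hls):
ffmpeg -i test.mp4 -c copy -bsf:v h264_mp4toannexb -f hls -hls_time 3 -hls_list_size 0 -hls_playlist_type vod prog_index.m3u8
Here -hls_time 3 matches the 3-second target duration above and -hls_list_size 0 keeps every segment in the playlist.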

How to copy audio stream using FFMpeg API ( not a command line tool )

I'm developing a video-editing app on Android; the objective of the app is editing videos on Android.
I have just finished creating a video file from a set of images, but I can't attach audio to the video.
My method is as follows:
1. Create the video stream and audio stream using AVFormatContext
2. Encoding the movie into the video stream was successful
3. Opening the encoder codec for the audio stream was successful
4. Set the sample format to AV_SAMPLE_FMT_FLTP
5. Set the sample rate and channels to the same values as the source audio
6. Choose the appropriate decoder and read packets
7. Convert the packets using swr_convert, set to the same sample format
8. Encode the converted data
9. Deallocate memory
10. END!
The problem is this: the video in the resulting file plays back normally, but the audio doesn't. It sounds strange; it has a lot of noise and plays slowly.
I've googled many keywords, but the results only cover FFmpeg command-line usage.
I want to do this with the FFmpeg API, not the command-line tool.
Please help.
Your question is vague without some code to go along with it; trust me, there are a lot of things that can go wrong when using ffmpeg's libraries directly (and on Windows there is no easy debugging). Unfortunately ffmpeg's libraries are not well documented, so it is generally best to read the ffmpeg source code in order to use them. Find the equivalent command-line options that perform what you want and trace them through ffmpeg's source to see the library calls.
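That said, noisy or slow audio very often comes from the sample-format conversion (step 7 in the question), so here is a minimal sketch of that step; dec_ctx, enc_ctx, in_frame and out_frame are placeholder names, not the poster's actual variables:
#include <libswresample/swresample.h>

/* Sketch: build a resampler from the decoder's format to the encoder's
   format (e.g. packed S16 -> planar AV_SAMPLE_FMT_FLTP). */
SwrContext *swr = swr_alloc_set_opts(NULL,
    enc_ctx->channel_layout, enc_ctx->sample_fmt, enc_ctx->sample_rate,  /* output */
    dec_ctx->channel_layout, dec_ctx->sample_fmt, dec_ctx->sample_rate,  /* input  */
    0, NULL);
swr_init(swr);

/* Convert every decoded frame before sending it to the encoder.
   A skipped or mismatched conversion typically produces noise,
   and a wrong sample rate makes playback too slow or too fast. */
swr_convert(swr, out_frame->data, out_frame->nb_samples,
            (const uint8_t **)in_frame->data, in_frame->nb_samples);
Also make sure each frame handed to the audio encoder contains exactly enc_ctx->frame_size samples per channel; feeding arbitrary-sized chunks is another common source of distorted audio.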

Chroma key software (script)

I have thousands of small videos in HDTV quality, all shot against a green screen (chroma key).
I need to replace the green colour with a static image (company logos).
There are several programs that can do chroma keying, but they only work one file at a time.
That would take years to complete.
Is there a piece of software (or a script) with which I can make this automatic?
I would start with AviSynth; it's a simple scripting language for video. You can use Avidemux to edit the video interactively, or x264 and ffmpeg to read/write files automatically.
edit:
see http://forum.videohelp.com/threads/184560-AviSynth-Chroma-Key
The AviSynth interpreter loads your file itself and then spits out the processed frames. If you run the script with the AviSynth proxy (included with Avidemux), players like Avidemux can connect to the proxy as if they were opening a file and play the processed video.
The x264 command-line H.264 encoder can also run AviSynth scripts: you just supply the script as the input filename (remember, the actual source file is referenced inside the script) and it will output the processed, H.264-encoded file to an MP4.
The tutorials make more sense than I do...
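If you prefer to skip AviSynth entirely, ffmpeg's chromakey filter can also do this in a batch loop. A sketch assuming a bash shell, a logo.png sized to match the clips, and similarity/blend thresholds you would need to tune:
for f in *.mp4; do
  # Loop the static logo as the background, key the green out of the clip, and overlay the clip on top.
  ffmpeg -loop 1 -i logo.png -i "$f" \
    -filter_complex "[1:v]chromakey=green:0.1:0.1[fg];[0:v][fg]overlay=shortest=1[out]" \
    -map "[out]" -map 1:a? -c:v libx264 -c:a copy "keyed_$f"
done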
