Delay in opening stream in OpenCV 3.4.1 - VLC

Hello, I am trying to capture a stream with OpenCV in Python. I am getting a delay of 6 seconds when opening the stream, which I feel is very high. In VLC I am able to open the stream in 0.5 seconds at most. Is there any way to speed up opening the stream?

I was able to solve the issue by using OpenCV compiled with GStreamer and by making OpenCV use GStreamer as the default backend (with cv2.CAP_GSTREAMER) instead of FFmpeg to open the RTSP stream. With this I was able to open the stream in 1.5 seconds.
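For reference, opening the capture with an explicit backend looks like this (a minimal sketch; the RTSP URL is a placeholder, and it assumes your OpenCV build actually has GStreamer support):

import cv2

# Explicitly request the GStreamer backend instead of the FFmpeg default.
# The RTSP URL below is a placeholder; use your camera's stream address.
cap = cv2.VideoCapture("rtsp://192.168.1.199:554/stream", cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Stream did not open; check the GStreamer build/URL")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()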

Related

RTSP stream of an IP camera is much more delayed in VLC than in the NVR

I have an IP camera that I can view in VLC via the link rtsp://admin:admin#192.168.1.199:554/mpeg4/ch0/main/av_stream, but I noticed there is a significant delay in the video in VLC compared to when the camera is viewed in the NVR. VLC has a delay of 4-6 seconds, while in the NVR it is barely noticeable at all, less than 1 second of delay.
I need to know why, so that I can plan out what methods/libraries to use in the program I am going to make. Knowing the cause means a possible workaround can be explored.
Is this a problem inherent to VLC or a limitation of RTSP?
Is there any way I can reduce this delay?
First, make sure that your camera has no issue serving multiple streams: deactivate the camera on the NVR and check whether you get better latency.
VLC uses RTSP/RTP over TCP by default, so force VLC to use RTSP/RTP over UDP (look up the relevant VLC command-line argument) and verify whether you get better latency.
As BijayRegmi wrote, be aware of the default buffering.
You can also try ffplay from the FFmpeg library and open the RTSP stream with it. It gives you more information about the health of the stream, such as packet loss, and it is a second way to verify your stream/latency; between the two you should be able to tell which part produces the latency.
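For example, using the URL from the question: --network-caching sets VLC's input buffer in milliseconds (this addresses the buffering point above, and the value here is only a starting point), while ffplay's -fflags nobuffer reduces its input buffering and the console output shows stream statistics:

vlc --network-caching=300 "rtsp://admin:admin#192.168.1.199:554/mpeg4/ch0/main/av_stream"
ffplay -fflags nobuffer "rtsp://admin:admin#192.168.1.199:554/mpeg4/ch0/main/av_stream"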

How to set up HLS Live Video Streaming from an iOS Device

Good day everyone!
So, as the title suggests, I am developing an app with similar functionality to that of Periscope and Facebook Live video streaming. Here is what the end goal is:
A Broadcasting device [user]
EC2 Instance [Hosting an ffmpeg transcoder]
CloudFront Distribution [CDN]
1 to n viewers of the live feed
I've been doing a lot of googling, and what I can't seem to figure out is:
As you send chunks of video to the server from the Broadcaster, how do you create an .m3u8 playlist when you don't have all the chunks of video yet (e.g. the device sends its first 5-second chunk of video)?
It seems a .m3u8 file is created from a .mp4 file that is already complete, then broken down into chunks... But I'm sending chunks of the video to the server, so how can it generate the .m3u8 file while more chunks are still coming from the Broadcaster, so that the watchers/clients can continuously stitch together the video chunks?
I'll be happy to clarify this question further. Thanks!
If you take a look at the docs for the segment muxer, you can specify the m3u8 to be output, and you can also tell it to update the m3u8 as it goes. It might look something like this:
ffmpeg -i infile.mp4 -c:v copy -c:a copy -map 0 -f ssegment -segment_list playlist.m3u8 -segment_list_type hls -segment_list_size 10 -segment_list_flags +live -segment_time 4 outchunk%07d.ts
Note that segment_list_size is the maximum number of chunks referenced in the m3u8 file at one time, and segment_list_flags +live tells ffmpeg that this is a live stream.
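For illustration, a live playlist written this way looks roughly like the following (segment names and durations here are hypothetical). The player polls the playlist for updates, and the absence of an #EXT-X-ENDLIST tag is what marks the stream as still live:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:42
#EXTINF:4.000000,
outchunk0000042.ts
#EXTINF:4.000000,
outchunk0000043.ts
#EXTINF:4.000000,
outchunk0000044.ts

The #EXT-X-MEDIA-SEQUENCE value increases as old segments fall out of the window, which is how clients keep stitching new chunks onto what they have already played.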
I think your confusion is that you are trying to send HLS fragments to the server. Don't. Send a stream via another protocol like RTMP, then let the server convert to HLS.
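As a sketch of that flow, the broadcaster (or a test machine) could push to the EC2 transcoder over RTMP like this, where the host, application name, and stream key are all placeholders, and -re reads the input at its native frame rate to simulate a live source:

ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f flv rtmp://your-ec2-host/live/streamkey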

MobileVLCKit HTTP stream: playing AVI files without jitter, lag, grey screen

I am using MobileVLCKit to play AVI files over an HTTP stream. The stream is fine when the video file size is small, but I get a grey screen, jitter, and lag when the video file is large. Can anyone tell me how to solve this? The log says:
[0000000154e74628] core input error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 2193 ms)
[0000000154e74628] core input error: ES_OUT_RESET_PCR called
[0000000154e74628] core input error: ES_OUT_SET_(GROUP_)PCR is called too late (jitter of 24516 ms ignored)
Please forgive my writing, as I am not used to writing.

Transcoding fMP4 to HLS while writing on iOS using FFmpeg

TL;DR
I want to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written using FFmpeg on an iOS device.
Why?
I'm trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I've tried
Rolling AVAssetWriters where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder creating gaps.
2) Since video frames are 33.33 ms long and audio frames roughly 0.022 s long, it's possible for them to not line up at the end of a file (see the quick arithmetic after this list).
3) The lack of frame-accurate encoding, which is present on Mac OS but not available for iOS. Details Here
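To make reason 2 concrete, here is the arithmetic under assumed settings of 30 fps video and 44.1 kHz AAC (1024 samples per frame); both rates are illustrative, not values from the post:

# Assumed rates for illustration only.
video_frame = 1 / 30        # ~33.33 ms per video frame
audio_frame = 1024 / 44100  # ~23.22 ms per AAC frame
chunk = 8.0                 # seconds per rolling AVAssetWriter
print(chunk / video_frame)  # 240.0   -> video ends exactly on the chunk boundary
print(chunk / audio_frame)  # ~344.53 -> audio cannot end exactly on the boundary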
FFmpeg muxing a large video only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK
What Went Wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking on the final segments, almost as if the TS segments were incorrectly timestamped.
What I'm thinking now
Apple was pushing fMP4 at WWDC this year (2016), and I hadn't looked into it much at all before that. Since an fMP4 file can be read and played while it's being written, I thought it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly within attempt #2.
What I need from you
Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
How can I take data from a file on disk, chunk at a time, pass it into FFmpeg and have it spit out TS segments?
Strictly speaking, you don't need to transcode the fMP4 if it contains H.264 + AAC; you just need to repackage the sample data as TS (using ffmpeg -codec copy, or gpac).
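A minimal sketch of that repackaging step with ffmpeg (filenames are placeholders; -c copy keeps the H.264/AAC sample data untouched, and recent ffmpeg inserts the h264_mp4toannexb bitstream filter automatically, so spelling it out only matters on older builds):

ffmpeg -i fragment.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts fragment.ts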
Wrt. alignment (points 1 and 2), I suppose this all depends on your encoder settings (frame rate, sample rate, and GOP size). It is certainly possible to make sure that audio and video align exactly at fragment boundaries (see for example: this table). If you're targeting iOS, I would recommend using HLS protocol version 3 (or 4), which allows timing to be represented more accurately. This also allows you to stream audio and video separately (non-multiplexed).
I believe ffmpeg should be capable of pushing a live fMP4 stream (i.e. using a long-running HTTP POST), but playout requires origin software to do something meaningful with it (i.e. stream to HLS).

How to serve videos like YouTube? Almost instant play and fast seeking

How does YouTube serve videos the way it does? Even if a video is long (almost 2 hours) and viewed in HD, it starts playing almost instantly, and seeking to not-yet-loaded parts is very fast.
I'm using a dedicated server from Rackspace with 100 Mb up/down for this test, and my ping time to the server is below 50 ms. My local internet connection is 10 Mb, and I can max out my connection when downloading from the server, so the connection to the server is not the issue here.
I'm trying to emulate this, and I've tried real-time streaming using Wowza and pseudo-streaming using the H264 Streaming Module. Neither comes close to how fast YouTube delivers video.
The test file is MP4 (H.264), 300 MB, 2 hours long, with the total bitrate set to 500 kbps, using JW Player as the video player.
Wowza streaming (RTMP) - Loading and playing the video is fast, but not as fast as YouTube. Seeking is not as fast either; it takes around 5-7 seconds to move to a new position and continue playing.
Pseudo-streaming with the H264 Streaming Module (HTTP) - Loading the video takes a long time, since it downloads the video header before playing. A 2-hour video has around 2.5 MB of MOOV atom (the video header) that must be downloaded first before it can play. Once it starts playing, seeking to not-yet-downloaded parts is on par with Wowza but not as fast as YouTube.
What do I need to serve videos with the speed of YouTube? I also need it to buffer/download the video while paused, just like YouTube, so real streaming like Wowza is out.
Pseudo-streaming with the H264 Streaming Module would have been nice, since it does buffer when paused; it's just that the initial loading time is very long. Is there any way I could remove that initial load time?
What are my other options? I'm open to any other option that I could use on my server.
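One note on the moov-atom delay described above: when the moov atom sits at the end of an MP4, an HTTP player has to fetch it from the far end of the file before anything can play. ffmpeg can relocate it to the front during a remux (filenames are placeholders); this does not shrink the atom itself, but it removes the extra round trip to the end of the file:

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4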
The way YouTube works is different, and they keep changing how it works. Reverse engineering it by capturing YouTube feeds with Wireshark over the last 4 years told me that the pattern is very dynamic. Segmentation is one key; the dual buffer, the multiple caching servers and techniques, the use of the client machine as the buffer/renderer, and the functionality of the player all matter a lot. There are many, many factors that make YouTube video fast and sleek.
You can emulate this to some extent, but building exactly the same thing needs loads of effort and infrastructure.
