How is HTTP Live Streaming sync between sender and receiver done? - VLC

HTTP Live Streaming is a sliding window over a video source. What happens if the sender is slightly faster or slower than the receiver? The receiver will hit one end of the sliding window. Does anybody know how this gets prevented? As the sender I use a C++ test program that uses libavcodec, and as the receiver I use VLC.

Faster is not going to be a problem, is it? The frame buffer queue gets full, the TCP reader gets blocked, the TCP stack buffers get full, the TCP stack slides the window closed, and comms stop until frames are consumed by the renderer.
Slower - your choice. When the TCP stack buffers and the internal frame buffer queue have all run down to zero, you could negotiate with the server for a lower resolution or lower frame rate.
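In practice, an HLS client usually handles the "slower" case on its own by switching to a lower-bandwidth variant listed in the master playlist rather than negotiating per se; clients such as VLC do this variant switching internally. A minimal sketch of that kind of receiver-side heuristic, with hypothetical variant URLs and thresholds:

```cpp
// Sketch only: a receiver-side heuristic for the "slower" case described above.
// The variant URLs and thresholds are made up; a real HLS client reads the
// variants from the master playlist and applies its own switching policy.
#include <cstdio>
#include <string>
#include <vector>

struct Variant { std::string url; int bandwidthBps; };

// Pick a variant based on how much playable audio/video is currently buffered.
// 'variants' is assumed sorted ascending by bandwidth.
const Variant& chooseVariant(const std::vector<Variant>& variants,
                             double bufferedSeconds)
{
    if (bufferedSeconds < 5.0)  return variants.front();            // buffer draining: step down
    if (bufferedSeconds < 15.0) return variants[variants.size() / 2];
    return variants.back();                                          // plenty buffered: top quality
}

int main()
{
    std::vector<Variant> variants = {
        {"stream_400k.m3u8",   400000},
        {"stream_1200k.m3u8", 1200000},
        {"stream_3000k.m3u8", 3000000},
    };
    const double samples[] = {2.0, 10.0, 30.0};
    for (double buffered : samples)
        std::printf("%4.1f s buffered -> %s\n", buffered,
                    chooseVariant(variants, buffered).url.c_str());
}
```

The "faster" case needs no code at all: as described above, TCP flow control throttles the sender once the receiver stops draining its buffers.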

Related

Why is Print Screen different from what is actually displayed on the monitor?

I'm working on an application that screen captures a monitor in real-time, encodes it, sends it over ethernet, decodes it, then displays that monitor in an application.
So I put the decoder application on the same monitor that is being captured. I then open a timer application and put it next to the decoder application. I can then start the timer and see the latency between the main instance of the timer and the timer shown within the application.
What's weird is that if I take a picture of the monitor with a camera, I get one latency measurement (almost always ~100ms) but if I take a Print Screen of the monitor, the latency between the two is much lower (~30-60ms).
Why is that? How does Print Screen work? Why would it result in 40+ ms difference? Which latency measurement should I trust?
Print Screen saves the screenshot to your clipboard, which lives in RAM (the fastest storage in your computer), whereas what you are doing probably writes the screenshot data to your HDD/SSD and then reads it back to send over the network, which takes a lot longer.
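If the goal is a trustworthy number rather than a choice between the camera and Print Screen, one option is to timestamp the pipeline stages directly. A rough sketch, with placeholder stage functions standing in for the real capture/encode/send code:

```cpp
// Sketch: instrument the capture -> encode -> send path instead of photographing
// a timer. The three stage functions are placeholders (here they just sleep).
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

static void captureFrame() { std::this_thread::sleep_for(std::chrono::milliseconds(8));  }
static void encodeFrame()  { std::this_thread::sleep_for(std::chrono::milliseconds(12)); }
static void sendFrame()    { std::this_thread::sleep_for(std::chrono::milliseconds(3));  }

int main()
{
    auto t0 = Clock::now();
    captureFrame();
    auto t1 = Clock::now();
    encodeFrame();
    auto t2 = Clock::now();
    sendFrame();
    auto t3 = Clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("capture %lld ms, encode %lld ms, send %lld ms, total %lld ms\n",
                (long long)ms(t0, t1), (long long)ms(t1, t2),
                (long long)ms(t2, t3), (long long)ms(t0, t3));
}
```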

iOS AudioQueue stuttering

I am building a streaming system for audio playback, and every so often, the audio either glitches or starts stuttering for a second or two.
I am running a single output AudioQueue with 3 allocated buffers of 1024 samples each, at a sample rate of 22050 Hz.
I hold a separate list of buffers ready to stream, and that list is never empty (logs always show at least one filled buffer there whenever playback_callback is called). playback_callback just memcpy-s a ready buffer into one of the three AudioBuffers, with no locks or other weirdness.
playback_callback takes at most 0.9 ms to run (measured via mach_absolute_time), which is far below the per-buffer duration of 1024/22050 ≈ 46.4 ms.
I initiate the queue with either CFRunLoopGetMain() or NULL (which should use an "internal thread") and get the same behavior in both cases.
If the buffer size is turned absurdly high (16384 instead of 1024), the glitches go away. If the number of AudioBuffers is raised from 3 to 8, they practically go away (happening ~20x more rarely). However, neither of those settings is workable for me, as it is not OK for the system to take a second to react to a stream switch (0.1-0.2 s would still be tolerable).
Any help and ideas on the matter would be greatly appreciated.
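For context on the numbers in the question: the total audio the queue can hold is numBuffers * framesPerBuffer / sampleRate, which is why the larger settings hide the glitches but also add latency. A sketch of that sizing using the AudioQueue C API (format simplified to mono 16-bit PCM, error handling and the actual render callback omitted; this illustrates the trade-off, it is not a fix for the stutter):

```cpp
// Sketch: how buffer count and size translate into queued latency for the
// values in the question. The AudioQueue calls are the real Core Audio C API,
// but the callback does nothing and errors are ignored.
#include <AudioToolbox/AudioToolbox.h>
#include <cstdio>

static void playbackCallback(void*, AudioQueueRef, AudioQueueBufferRef) {}

int main()
{
    const double sampleRate      = 22050.0;
    const UInt32 framesPerBuffer = 1024;
    const UInt32 numBuffers      = 3;      // the question's original setting

    AudioStreamBasicDescription fmt = {};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mBytesPerPacket   = 2;
    fmt.mFramesPerPacket  = 1;

    AudioQueueRef queue = nullptr;
    AudioQueueNewOutput(&fmt, playbackCallback, nullptr, nullptr, nullptr, 0, &queue);

    for (UInt32 i = 0; i < numBuffers; ++i) {
        AudioQueueBufferRef buf = nullptr;
        AudioQueueAllocateBuffer(queue, framesPerBuffer * fmt.mBytesPerFrame, &buf);
        // filling the buffer and AudioQueueEnqueueBuffer(queue, buf, 0, nullptr) go here
    }

    // Total audio queued before an underrun can be heard:
    //   3 x 1024  / 22050 ≈ 0.139 s  (original setting, glitches)
    //   8 x 1024  / 22050 ≈ 0.372 s  (glitches mostly gone, slower stream switch)
    //   3 x 16384 / 22050 ≈ 2.23  s  (glitches gone, far too much latency)
    std::printf("queued latency: %.3f s\n", numBuffers * framesPerBuffer / sampleRate);
}
```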

FFmpeg set playing buffer to 0

I'm using FFmpeg to play RTMP streams inside an iOS application. I use av_read_frame to get the audio and video frames. I need the latency to be as small as possible at all times, but if there's a bottleneck or the download speed decreases, av_read_frame blocks. Of course, this is how it should work. The problem is that FFmpeg waits too long: as long as it needs to fill its buffers. I need to set those buffers to a value close to 0. Right now, I'm dropping buffered packets "manually" to bring the latency back to its initial value. The result is the desired one, but I wish FFmpeg wouldn't buffer in the first place...
Can anyone help me with this? Thanks in advance.
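For reference, libavformat does expose options that trim its demuxer-side buffering; whether they get an RTMP stream close enough to zero is something to test, and the manual packet dropping described above may still be needed on top. A sketch, assuming a reasonably recent FFmpeg:

```cpp
// Sketch only: open an input with reduced demuxer buffering/probing.
// All options used here are standard libavformat options.
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
}

static AVFormatContext* openLowLatency(const char* url)
{
    avformat_network_init();                        // needed for network protocols like RTMP

    AVDictionary* opts = nullptr;
    av_dict_set(&opts, "fflags", "nobuffer", 0);    // reduce buffering of early frames
    av_dict_set(&opts, "probesize", "32768", 0);    // probe far less data up front
    av_dict_set(&opts, "analyzeduration", "0", 0);  // spend no extra time analyzing streams

    AVFormatContext* ctx = nullptr;                 // avformat_open_input allocates it
    if (avformat_open_input(&ctx, url, nullptr, &opts) < 0)
        ctx = nullptr;
    av_dict_free(&opts);                            // frees any options not consumed
    return ctx;
}
```

The avformat_find_stream_info call and the av_read_frame loop stay the same; probesize and analyzeduration mainly shorten start-up analysis, while "nobuffer" reduces the buffering that adds steady-state latency.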

How to change a DirectShow renderer's buffer size if its input pin doesn't support IAMBufferNegotiation?

I have a DirectShow application written in Delphi 6. I want to reduce the buffer size of the renderer from its current 500 ms value to something smaller. The problem is, its input pin does not support IAMBufferNegotiation, which is odd since the renderer is the earpiece on my VOIP phone and it would obviously need a smaller buffer size to avoid an unpleasant delay during phone calls.
I tried a loopback test in GraphEdit, connecting the VOIP phone's capture filter (microphone) to the renderer (earpiece). I know the buffer size is 500 ms because that's what GraphEdit shows in the renderer's properties. However, when I use the VOIP phone in a Skype call the delay is much shorter, about 50-100 milliseconds, as I would expect.
So Skype knows how to change the renderer's default buffer size. How can I do the same trick?
The output pin is normally responsible for setting up the allocator, and IAMBufferNegotiation is typically available on the output pin. You want to reduce the buffer size at the capture filter's output pin only; it will then generate small buffers, and those small chunks of data travel through the graph unchanged, so reducing buffer sizes at intermediate filters is not necessary.
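In code, that means querying IAMBufferNegotiation on the capture filter's output pin and suggesting allocator properties before the pins are connected. A sketch in C++ (the question uses Delphi 6, but the same interface calls apply there); the buffer size and count below are illustrative:

```cpp
// Sketch: suggest a smaller allocator on the capture filter's OUTPUT pin,
// as described in the answer above. pPin is assumed to be that output pin,
// obtained before the graph is connected; error handling is trimmed.
#include <dshow.h>

HRESULT SuggestSmallBuffers(IPin* pPin, long bytesPerSecond)
{
    IAMBufferNegotiation* pNeg = nullptr;
    HRESULT hr = pPin->QueryInterface(IID_IAMBufferNegotiation, (void**)&pNeg);
    if (FAILED(hr))
        return hr;                            // this pin doesn't expose the interface

    ALLOCATOR_PROPERTIES props = {};
    props.cBuffers = 8;                       // several small buffers
    props.cbBuffer = bytesPerSecond / 20;     // roughly 50 ms of audio per buffer
    props.cbAlign  = -1;                      // -1 = no preference
    props.cbPrefix = -1;

    hr = pNeg->SuggestAllocatorProperties(&props);
    pNeg->Release();
    return hr;                                // must be called BEFORE the pin is connected
}
```

Call it on the capture (microphone) output pin before building the rest of the graph; the renderer then simply receives the smaller buffers.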

How to synchronize the filling of a ring buffer in a background thread with a remoteio callback

I'm looking for an implementation that uses a ring buffer with RemoteIO to output a very large audio file.
I have come across CARingBuffer from Apple, but I've had a nightmare trying to implement it in my iOS project.
As an alternative I came across this ring buffer, which I've been using (unsuccessfully).
Ring Buffer
How I tried to implement this is as follows.
Open an audio file, which is perfectly cut, using ExtAudioFileRef.
Fully fill my ring buffer by reading from the file (number of frames % inTimeSamples = readpoint).
In my callback, if the ring buffer is less than 50% full, I call performSelectorInBackground to add more samples.
If there are enough samples, I just read from the buffer.
This all seems to work fine until I come close to the end of the file and want to loop it. When the readpoint plus the number of samples needed to fill the ring buffer exceeds the total number of frames, I extract some audio from the remainder of the file, seek to frame 0, then read the rest.
This always sounds glitchy. I think it may have something to do with the fact that the RemoteIO callback is running much quicker than the background thread, so by the time the background thread has completed, not only has the calculated readpoint changed, but the head and tail of the buffer are not what they should be.
If example code would be too immense to post I would accept pseudo code as an answer. My methodology to solve this is lacking.
This may not be the answer you're looking for, but SFBAudioEngine compiles and runs on iOS and will handle this use case easily. It's basically a higher-level abstraction for the RemoteIO AU and supports many more formats than Core Audio does natively.
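On the synchronization problem itself: the usual pattern is a single-producer/single-consumer ring buffer in which only the background thread advances the head and only the RemoteIO callback advances the tail, so the callback never blocks and the indices cannot be corrupted mid-update. A minimal sketch (illustrative only, not CARingBuffer or the linked implementation):

```cpp
// Sketch: single-producer / single-consumer ring buffer with atomic indices.
// The RemoteIO render callback is the only consumer, the background loader
// thread is the only producer, so no locks are needed in the callback.
#include <atomic>
#include <cstddef>
#include <vector>

class SpscRingBuffer {
public:
    explicit SpscRingBuffer(size_t capacityFrames)
        : buf_(capacityFrames), head_(0), tail_(0) {}

    // Producer (background thread): returns the number of frames actually written.
    size_t write(const float* src, size_t frames) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t tail = tail_.load(std::memory_order_acquire);
        size_t freeFrames = buf_.size() - (head - tail);
        size_t n = frames < freeFrames ? frames : freeFrames;
        for (size_t i = 0; i < n; ++i)
            buf_[(head + i) % buf_.size()] = src[i];
        head_.store(head + n, std::memory_order_release);   // publish new data
        return n;
    }

    // Consumer (RemoteIO callback): returns the number of frames actually read.
    size_t read(float* dst, size_t frames) {
        size_t tail = tail_.load(std::memory_order_relaxed);
        size_t head = head_.load(std::memory_order_acquire);
        size_t avail = head - tail;
        size_t n = frames < avail ? frames : avail;
        for (size_t i = 0; i < n; ++i)
            dst[i] = buf_[(tail + i) % buf_.size()];
        tail_.store(tail + n, std::memory_order_release);    // release the space
        return n;
    }

    size_t framesAvailable() const {
        return head_.load(std::memory_order_acquire) -
               tail_.load(std::memory_order_acquire);
    }

private:
    std::vector<float> buf_;
    std::atomic<size_t> head_;   // total frames ever written
    std::atomic<size_t> tail_;   // total frames ever read
};
```

The callback would call read() and zero-fill on underrun; the background thread loops on write(), wrapping its ExtAudioFile read position back to frame 0 when it reaches the end of the file.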
