I'm using FFmpeg to play RTMP streams inside an iOS application, and I use av_read_frame to get the audio and video frames. I need the latency to be as small as possible at all times, but if there's a bottleneck or the download speed drops, av_read_frame blocks. Of course, that's how it should work. The problem is that FFmpeg waits too long: as long as it needs to fill its buffers. I need those buffers to be set to a value close to 0. Right now I'm dropping buffered packets "manually" to bring the latency back to its initial value. The result is what I want, but I wish FFmpeg wouldn't buffer in the first place...
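For reference, here is a minimal sketch of the kind of reduced-buffering setup I'm after when opening the input (the flag and option names are libavformat's standard ones; I haven't found a combination that fully removes the wait):

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
}

// Sketch: open an RTMP input with reduced demuxer buffering.
// The flag/option names are libavformat's standard ones; the values are illustrative.
static AVFormatContext *openLowLatency(const char *url)
{
    AVFormatContext *ctx = avformat_alloc_context();
    ctx->flags |= AVFMT_FLAG_NOBUFFER;   // avoid buffering packets internally
    ctx->probesize = 32 * 1024;          // probe less data before playback starts
    ctx->max_analyze_duration = 0;       // don't spend extra time analyzing the stream

    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "fflags", "nobuffer", 0);

    if (avformat_open_input(&ctx, url, nullptr, &opts) < 0) {
        av_dict_free(&opts);
        return nullptr;                  // avformat_open_input frees ctx on failure
    }
    av_dict_free(&opts);
    avformat_find_stream_info(ctx, nullptr);
    return ctx;
}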
Can anyone help me with this? Thanks in advance.
My javafx application has 6 small windows and one large one. Each plays hls using VLCJ.
From time to time the picture freezes in some of the windows, so I want to somehow reduce the players' resource consumption on the PC.
How can I do this?
In the 6 small windows I don't need sound; if I can turn it off with a parameter, will that reduce memory or CPU consumption?
At the moment I remove the sound there with --aout=directsound and the mute() function, but perhaps the audio is still being processed by the players and consumption is not actually reduced.
Since these are small windows, high-quality content does not need to be displayed there. Is it possible to reduce the quality of the content in the player? Would that help, and how do I do it?
I tried the :adaptive-logic=highest playback parameter, but it didn't help, most likely because the content is only available in one (high) quality.
Parameters for the player are here: https://wiki.videolan.org/VLC_command-line_help/.
But there are a lot of them and I don't understand how they work, so I'm asking for help.
Maybe I can skip some frames, which wouldn't be very noticeable but might help?
Update:
Now I'm trying these options, but I don't notice much change...
--no-audio
--postproc-q=1
--ffmpeg-hw
--avcodec-skip-frame=1
--avcodec-skip-idct=1
--avcodec-skiploopfilter=1
--avcodec-hw=any
--sout-avcodec-hurry-up
--no-sout-avcodec-interlace-me
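For reference, vlcj ultimately forwards these switches to libvlc, so conceptually they are applied like in this rough sketch (plain libvlc C API shown, not my actual Java code; whether they really cut CPU here is exactly my question):

#include <vlc/vlc.h>

// Rough illustration: create a libvlc instance with the decode-cheapening switches.
static libvlc_instance_t *createLowCostInstance()
{
    const char *args[] = {
        "--no-audio",                 // skip audio decoding entirely
        "--avcodec-skip-frame=1",     // allow the decoder to drop non-reference frames
        "--avcodec-skip-idct=1",
        "--avcodec-skiploopfilter=1",
        "--avcodec-hw=any",           // prefer hardware decoding if available
    };
    return libvlc_new(sizeof(args) / sizeof(args[0]), args);
}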
We have a VoIP app for the iOS platform, in which we use TPCircularBuffer for audio buffering, and its performance is very good.
So I was wondering whether it's possible to use TPCircularBuffer for video buffering as well. I have searched a lot but didn't find anything useful on using TPCircularBuffer for video. Is that even possible? If so, can anyone shed some light on it? Any code sample would be highly appreciated.
I guess you could copy your video frame's pixels into a TPCircularBuffer, and you'd technically have a video ring buffer, but you've already lost the efficiency race at that point because you don't have time to copy that much data around. You need to keep a reference to your frames.
Or, if you really wanted to mash a solution into TPCircularBuffer, you could write the CMSampleBuffer pointers into the buffer (carefully respecting retain and release). But that seems heavy-handed, as you're not really gaining anything from TPCircularBuffer's magic memory-mapped wraparound, because pointers are so small.
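If you did go the pointer route, a rough sketch (assuming TPCircularBuffer's usual init/head/produce/tail/consume calls; the helper names here are made up) might look like this:

#include <CoreMedia/CoreMedia.h>
#include "TPCircularBuffer.h"   // Michael Tyson's TPCircularBuffer

// Only the CMSampleBufferRef pointers live in the ring; the pixel data is never copied.
static TPCircularBuffer gFrameRing;

static void setUpFrameRing(void) {
    TPCircularBufferInit(&gFrameRing, 64 * sizeof(CMSampleBufferRef));
}

static bool enqueueFrame(CMSampleBufferRef frame) {
    uint32_t space = 0;
    CMSampleBufferRef *head = (CMSampleBufferRef *)TPCircularBufferHead(&gFrameRing, &space);
    if (!head || space < sizeof(CMSampleBufferRef)) return false;   // ring is full
    *head = (CMSampleBufferRef)CFRetain(frame);                     // retain while queued
    TPCircularBufferProduce(&gFrameRing, sizeof(CMSampleBufferRef));
    return true;
}

static CMSampleBufferRef dequeueFrame(void) {
    uint32_t bytes = 0;
    CMSampleBufferRef *tail = (CMSampleBufferRef *)TPCircularBufferTail(&gFrameRing, &bytes);
    if (!tail || bytes < sizeof(CMSampleBufferRef)) return NULL;    // ring is empty
    CMSampleBufferRef frame = *tail;
    TPCircularBufferConsume(&gFrameRing, sizeof(CMSampleBufferRef));
    return frame;                                                   // caller must CFRelease()
}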
I would simply make my own CMSampleBufferRef ring buffer. You can grab a prebuilt circular buffer or do the clock arithmetic yourself:
CMSampleBufferRef ringBuffer[10]; // or some other number of slots
i = (i + 1) % 10;
if (ringBuffer[i]) CFRelease(ringBuffer[i]); // release the frame being overwritten
ringBuffer[i] = (CMSampleBufferRef)CFRetain(frame); // retain the new one
Of course your real problem is not the ring buffer itself, but dealing with the fact that decompressed video is very high bandwidth, e.g. each frame is 8MB for 1080p, or 200MB to store 1 second's worth at 24fps, so you're going to have to get pretty creative if you need anything other than a microscopic video buffer.
Some suggestions:
the above numbers are for RGBA, so try working in YUV, where the numbers become 3MB and 75MB/s
try lower resolutions
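For concreteness, the arithmetic behind those numbers:

// Back-of-the-envelope sizes for decompressed 1080p video.
const int w = 1920, h = 1080, fps = 24;
const double rgbaFrame  = w * h * 4.0;       // ~8.3 MB per frame (RGBA)
const double yuvFrame   = w * h * 1.5;       // ~3.1 MB per frame (YUV 4:2:0)
const double rgbaPerSec = rgbaFrame * fps;   // ~199 MB per second
const double yuvPerSec  = yuvFrame  * fps;   // ~75 MB per second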
I have used the method from "iOS4: how do I use video file as an OpenGL texture?" to successfully get video frames rendering in OpenGL.
This method, however, seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies video frames sequentially.
Does anyone know a way this behaviour can successfully be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video up front and holding all the data on disk. It also means decoding each PNG as it is accessed, which eats up CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.
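A tiny sketch of the lookup, assuming the frames were exported at a fixed frame rate with zero-padded names (the naming scheme is made up):

#include <cstdio>
#include <string>

// Map a scrub position (in seconds) to the corresponding exported PNG.
static std::string frameForTime(double seconds, double fps)
{
    int index = (int)(seconds * fps);                        // 0-based frame index
    char name[32];
    std::snprintf(name, sizeof(name), "frame_%06d.png", index);
    return std::string(name);
}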
I'm looking for an implementation that uses a ring buffer with RemoteIO to output a very large audio file.
I have come across Apple's CARingBuffer, but I've had a nightmare trying to implement it in my iOS project.
As an alternative, I came across this ring buffer, which I've been using (unsuccessfully):
Ring Buffer
How I tried to implement this is as follows.
Open an audio file that is perfectly cut, using ExtAudioFileRef.
Fully fill my ring buffer by reading from the file (frame number % inTimeSamples = readpoint).
In my callback, if the ring buffer is less than 50% full, I call performSelector in the background to add more samples.
If there are enough samples, I just read from the buffer.
This all seems to work fine until I come close to the end of the file and want to loop it. When the readpoint plus the number of samples needed to fill the ring buffer exceeds the total number of frames, I extract some audio from the remainder of the file, seek to frame 0, then read the rest.
This always sounds glitchy. I think it may have something to do with the fact that the RemoteIO callback runs much faster than the background thread, so by the time the background thread has completed, not only has the calculated readpoint changed, but the head and tail of the buffer are no longer what they should be.
If example code would be too immense to post I would accept pseudo code as an answer. My methodology to solve this is lacking.
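To be concrete, this is roughly the wrap-around refill I'm describing (simplified sketch; the function name is made up and advancing the destination buffer between the two reads is glossed over):

#include <AudioToolbox/ExtendedAudioFile.h>
#include <algorithm>

// Refill across the loop point with two reads: read up to the end of the file,
// seek back to frame 0, then read the rest. 'readPoint' and 'totalFrames' are
// assumed to be tracked elsewhere.
static OSStatus refillAcrossLoop(ExtAudioFileRef file,
                                 SInt64 &readPoint, SInt64 totalFrames,
                                 UInt32 framesWanted, AudioBufferList *dst)
{
    UInt32 frames = (UInt32)std::min<SInt64>(framesWanted, totalFrames - readPoint);
    OSStatus err = ExtAudioFileRead(file, &frames, dst);      // first chunk: up to EOF
    if (err != noErr) return err;

    if (frames < framesWanted) {                              // hit the end of the file: wrap
        err = ExtAudioFileSeek(file, 0);
        if (err != noErr) return err;
        UInt32 rest = framesWanted - frames;
        // NOTE: dst should be advanced past the frames already read before this call;
        // omitted here to keep the sketch short.
        err = ExtAudioFileRead(file, &rest, dst);
        readPoint = rest;
    } else {
        readPoint += frames;
    }
    return err;
}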
This may not be the answer you're looking for, but SFBAudioEngine compiles and runs on iOS and will handle this use case easily. It's basically a higher-level abstraction for the RemoteIO AU and supports many more formats than Core Audio does natively.
I am attempting to record an animation (computer graphics, not video) to a WMV file using DirectShow. The setup is:
A push source that uses an in-memory bitmap holding the animation frame. Each time FillBuffer() is called, the bitmap's data is copied into the sample, and the sample is timestamped with a start time (frame number * frame length) and a duration (frame length), as sketched below. The frame rate is set to 10 frames per second in the filter.
An ASF Writer filter. I have a custom profile file that sets the video to 10 frames per second. It's a video-only filter, so there's no audio.
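For reference, the timestamping looks roughly like this (sketch only; the helper is called from FillBuffer() and the names are placeholders for my actual code):

#include <streams.h>   // DirectShow base classes (IMediaSample, REFERENCE_TIME, UNITS)

// REFERENCE_TIME is in 100 ns units, so one frame at 10 fps lasts UNITS / 10.
static HRESULT StampSample(IMediaSample *pSample, LONGLONG frameNumber)
{
    const REFERENCE_TIME frameLength = UNITS / 10;   // 10 fps

    REFERENCE_TIME rtStart = frameNumber * frameLength;
    REFERENCE_TIME rtStop  = rtStart + frameLength;
    pSample->SetTime(&rtStart, &rtStop);
    pSample->SetSyncPoint(TRUE);                     // uncompressed frames are all key frames
    return S_OK;
}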
The pins connect, and when the graph is run, a wmv file is created. But...
The problem is it appears DirectShow is pushing data from the Push Source at a rate greater than 10 FPS. So the resultant wmv, while playable and containing the correct animation (as well as reporting the correct FPS), plays the animation back several times too slowly because too many frames were added to the video during recording. That is, a 10 second video at 10 FPS should only have 100 frames, but about 500 are being stuffed into the video, resulting in the video being 50 seconds long.
My initial attempt at a solution was just to slow down the FillBuffer() call by adding a sleep() for 1/10th second. And that indeed does more or less work. But it seems hackish, and I question whether that would work well at higher FPS.
So I'm wondering if there's a better way to do this. Actually, I'm assuming there's a better way and I'm just missing it. Or do I just need to smarten up the manner in which FillBuffer() in the Push Source is delayed and use a better timing mechanism?
Any suggestions would be greatly appreciated!
I do this with threads. The main thread is adding bitmaps to a list and the recorder thread takes bitmaps from that list.
Main thread
Animate your graphics at time T and render bitmap
Add bitmap to renderlist. If list is full (say more than 8 frames) wait. This is so you won't use too much memory.
Advance T with deltatime corresponding to desired framerate
Recorder thread
When a frame is requested, pick and remove a bitmap from the renderlist. If list is empty wait.
You need a thread-safe structure such as TThreadList to hold the bitmaps. It's a bit tricky to get right, but your current approach is guaranteed to give you timing problems.
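A rough sketch of such a bounded, thread-safe queue (shown in C++ here; TThreadList plays the same role):

#include <condition_variable>
#include <deque>
#include <mutex>

// Bounded, thread-safe frame queue: the animation (main) thread pushes,
// the recorder thread pops; each side blocks when the queue is full/empty.
template <typename Frame>
class FrameQueue {
public:
    explicit FrameQueue(size_t maxSize) : maxSize_(maxSize) {}

    void push(Frame f) {                    // main thread
        std::unique_lock<std::mutex> lock(m_);
        notFull_.wait(lock, [&] { return q_.size() < maxSize_; });
        q_.push_back(std::move(f));
        notEmpty_.notify_one();
    }

    Frame pop() {                           // recorder thread
        std::unique_lock<std::mutex> lock(m_);
        notEmpty_.wait(lock, [&] { return !q_.empty(); });
        Frame f = std::move(q_.front());
        q_.pop_front();
        notFull_.notify_one();
        return f;
    }

private:
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    std::deque<Frame> q_;
    size_t maxSize_;
};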
I am doing exactly this in my recorder application (www.videophill.com) for the purposes of testing the whole thing.
I am using the Sleep() method to delay the frames, but I take great care to ensure that the timestamps of the frames are correct. Also, when Sleep()ing from frame to frame, try to use 'absolute' time differences, because Sleep(100) will sleep for about 100 ms, not exactly 100 ms.
If it won't work for you, you can always go for IReferenceClock, but I think that's overkill here.
So:
DateTime start = DateTime.Now;
int frameCounter = 0;
while (wePush)
{
    FillDataBuffer(...);
    frameCounter++;
    DateTime nextFrameTime = start.AddMilliseconds(frameCounter * 100);
    int delay = (int)(nextFrameTime - DateTime.Now).TotalMilliseconds;
    if (delay > 0)
        Sleep(delay);
}
EDIT:
Keep in mind: IWMWriter is time-insensitive as long as you feed it samples that are properly time-stamped.
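For illustration, when feeding IWMWriter directly, each sample carries its own timestamp in 100 ns units, so the pace at which you deliver samples doesn't matter to the writer (in this sketch, writer, buffer and frameNumber are assumed to be set up elsewhere, with input 0 as the video stream):

#include <wmsdk.h>   // Windows Media Format SDK

static HRESULT writeFrame(IWMWriter *writer, INSSBuffer *buffer, QWORD frameNumber)
{
    const QWORD frameDuration = 10000000ULL / 10;    // 10 fps, in 100 ns units
    return writer->WriteSample(0, frameNumber * frameDuration, 0, buffer);
}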