Update AVMutableVideoComposition on AVPlayerItem faster than video framerate - iOS

I am trying to preview a CIFilter applied to a video using AVMutableVideoComposition's applyingCIFiltersWithHandler initializer.
I have several sliders that change values in the filter, which get reflected by the AVPlayer. The only issue is that there is a noticeable lag between moving the slider and the next frame of the video applying my change.
If I use a higher framerate video, the applier block is called more often and the lag is not noticeable.
I've tried recreating and replacing the AVMutableVideoComposition on the current AVPlayerItem whenever the slider moves, but this looks jerky while the video is playing. (It works very well if the video is paused: https://developer.apple.com/library/archive/qa/qa1966/_index.html)
Any idea how to do this without writing a custom video player that has a way to invalidate the frame?
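For reference, here is a minimal sketch of the kind of setup described above, assuming a single slider-driven value and a CIColorControls filter (the URL, filter choice, and variable names are placeholders, not taken from the question):

import AVFoundation
import CoreImage

let videoURL = URL(fileURLWithPath: "example.mov") // placeholder
let filter = CIFilter(name: "CIColorControls")!
var sliderValue: Float = 0.0 // updated from the UISlider's value-changed action

let asset = AVAsset(url: videoURL)
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    // This handler runs once per composed frame, so a slider change only
    // becomes visible when the next frame is rendered.
    filter.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
    filter.setValue(sliderValue, forKey: kCIInputBrightnessKey)
    let output = filter.outputImage?.cropped(to: request.sourceImage.extent) ?? request.sourceImage
    request.finish(with: output, context: nil)
})

let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition
let player = AVPlayer(playerItem: playerItem)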

This is a decent solution I managed to find.
I noticed that putting a sleep in the frame processing block actually seemed to improve the perceived performance.
The AVMutableVideoComposition must build up a buffer of frames, and the delay I'm seeing is that buffer draining before the frames with the new filter values show up. Sleeping in the frame processing block prevented the buffer from filling up, so the changes appeared immediately.
I looked through the documentation of AVMutableVideoComposition for the millionth time and found this little gem in the docs for sourceTrackIDForFrameTiming.
If an empty edit is encountered in the source asset’s track, the compositor composes frames as needed up to the frequency specified in frameDuration property. Otherwise the frame timing for the video composition is derived from the source asset's track with the corresponding ID.
I had previously tried setting the frameDuration on the composition but couldn't get it to go faster than the video's framerate. If I set the sourceTrackIDForFrameTiming to kCMPersistentTrackID_Invalid it actually lets me speed up the framerate.
By setting the framerate to be extremely high (1000 fps), the phone can never fill up the buffer, making the changes appear immediate.
composition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid
let frameRateTooHighForPhone = CMTime(value: 1, timescale: 1000)
composition.frameDuration = frameRateTooHighForPhone
It's a little bit hackier than is ideal, but it's not a bad solution.

Thanks so much for posting this, Randall. The above frameDuration hack fixed the lag I was seeing when enabling/disabling layers, so it seems I'm moving in the right direction.
The issue I now need to figure out is why this frameDuration hack also seems to introduce glitching and hangs in the video processing. Sometimes it works great, but usually the video freezes after a few seconds while the audio track continues to play. Without the hack, playback is solid but changes to the composition lag. With the hack, changes are seemingly instantaneous and video playback has about a 10% chance of being solid - otherwise it hangs. (If I scrub around enough it seems to somehow fix itself, and when it does, the universe feels like a better place.)
I'm very new to working with AVMutableVideoComposition and the AVVideoCompositing protocol, and documentation concerning my usage seems to be sparse, so I'm posting this reply in case anyone has any more golden nuggets of info to share with me.

Related

Create a timelapse from a normal video in iOS

I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame by removing the timing for all the other frames using the removeTimeRange(...) method (see the sketch after the two solutions below).
SOLUTION B
Use AVAssetReader to extract all individual frames as an array of CMSampleBuffers.
Write the [CMSampleBuffer] back into a movie, skipping frames (keeping roughly one in every 20) as required.
Convert the resulting video file to an AVMutableComposition and use scaleTimeRange(...) to reduce the overall time range of the video for the timelapse effect.
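Here is a rough sketch of Solution A, assuming a 30 fps source track and ignoring error handling (the function name and the fixed frame rate are assumptions for illustration):

import AVFoundation

func makeTimelapseComposition(from asset: AVAsset) -> AVMutableComposition {
    let composition = AVMutableComposition()
    let compositionTrack = composition.addMutableTrack(withMediaType: .video,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid)!
    let sourceTrack = asset.tracks(withMediaType: .video).first!
    try? compositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                          of: sourceTrack,
                                          at: .zero)

    // Keep the first frame of each second and remove the rest of that second.
    // Walk backwards so earlier removals don't shift the ranges still to be removed.
    var second = Int(CMTimeGetSeconds(asset.duration))
    while second > 0 {
        second -= 1
        let start = CMTime(value: CMTimeValue(second * 30 + 1), timescale: 30) // just after the kept frame
        let end = CMTime(value: CMTimeValue((second + 1) * 30), timescale: 30) // start of the next second
        composition.removeTimeRange(CMTimeRange(start: start, end: end))
    }
    return composition
}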
PROBLEMS
The first solution is not suitable for full HD videos: the video freezes in multiple places and the seek bar shows inaccurate timing.
For example, a 12-second timelapse might be reported as having a duration of only 5 seconds, so it keeps playing even after the seek bar has finished.
In other words, the timing of the video gets completely messed up for some reason.
The second solution is incredibly slow. For a 10-minute HD video, memory usage grows without bound, since all the processing is done in memory.
I am searching for a technique that can produce a timelapse from a video right away, without a long wait. Solution A kind of does that, but it is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a 'slide show'-like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
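A minimal sketch of that approach, requesting one thumbnail per second of source video (the file URL, the one-second step, and the zero tolerances are assumptions):

import AVFoundation
import UIKit

let videoURL = URL(fileURLWithPath: "source.mov") // placeholder
let asset = AVAsset(url: videoURL)
let generator = AVAssetImageGenerator(asset: asset)
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

// One requested time per second of source video.
let seconds = Int(CMTimeGetSeconds(asset.duration))
let times = (0..<seconds).map { NSValue(time: CMTime(value: CMTimeValue($0), timescale: 1)) }

generator.generateCGImagesAsynchronously(forTimes: times) { requestedTime, cgImage, actualTime, result, error in
    guard result == .succeeded, let cgImage = cgImage else { return }
    let frame = UIImage(cgImage: cgImage)
    // Show `frame` in the slide-show view, or hand it to an AVAssetWriter.
    _ = frame
}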

iOS/AVFoundation: How to eliminate (occasional) blank frames between different videos within an AVComposition during playback

The app I'm working on loops a video a specified number of times by adding the same AVAssetTrack (created from the original video URL) multiple times to the same AVComposition at successive intervals. The app similarly inserts a new video clip into an existing composition by 'removing' the time range from the composition's AVMutableCompositionTrack (for AVMediaTypeVideo) and inserting the new clip's AVAssetTrack into the previously removed time range.
However, occasionally and somewhat rarely, after inserting a new clip as described above into a time range within a repeat of the original looping video, there are resulting blank frames which only appear at the video loop’s transition points (within the composition), but only during playback - the video exports correctly without gaps.
This leads me to believe the issue is with the AVPlayer or AVPlayerItem and how the frames are currently buffered for playback, rather than how I'm inserting/ looping the clips or choosing the correct CMTime stamps to do so. The app is doing a bunch of things at once (loop visualization in the UI via an NSTimer, audio playback via Amazing Audio Engine) - could my issue be a result of competition for resources?
One more note: I understand that discrepancies between audio and video in an asset can cause glitches (i.e. the underlying audio is a little bit longer than the video length), but as I'm not adding an audioEncodingTarget to the GPUImageWriter that I'm using to record and save the video, the videos have no audio components.
Any thoughts or directions you can point me in would be greatly appreciated! Many thanks in advance.
Update: the flashes coincide with the "Had to drop a video frame" error logged by the GPUImage library, which according to the creator has to do with the phone not being able to process video fast enough. Could multi-threading solve this?
Update 2: So the flashes actually don't always correspond to the "Had to drop a video frame" error. I have also disabled all of the AVRecorder/Amazing Audio Engine code and the issue still persists, so it's not a problem of resource competition between those engines. I have been logging properties of the AVPlayerItem and notice that isPlaybackLikelyToKeepUp is always NO and isPlaybackBufferFull is always YES.
So the problem is solved - it's sort of frustrating how brutally simple the fix is. I just used a time range one frame shorter when adding the videos to the composition, rather than the AVAssetTrack's full time range. No more flashes. Hopefully the users won't miss that 30th of a second :)
shortened_duration = CMTimeSubtract(originalVideoAssetTrack.timeRange.duration, CMTimeMake(1,30));
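In Swift, the same idea might look roughly like this (the function and variable names are mine, and the fixed 30 fps frame duration is an assumption):

import AVFoundation

// Insert a clip using a time range one frame (at 30 fps) shorter than the
// source track's own range, to avoid blank frames at loop transitions.
func append(_ sourceTrack: AVAssetTrack,
            to compositionTrack: AVMutableCompositionTrack,
            at insertionTime: CMTime) throws {
    let oneFrame = CMTimeMake(value: 1, timescale: 30)
    let shortenedDuration = CMTimeSubtract(sourceTrack.timeRange.duration, oneFrame)
    try compositionTrack.insertTimeRange(
        CMTimeRange(start: sourceTrack.timeRange.start, duration: shortenedDuration),
        of: sourceTrack,
        at: insertionTime)
}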

iOS Change keyframes in video

I'm trying to scrub through videos in really small increments (maybe even less than a millisecond). To get to the next frame I use [AVPlayer seekToTime:time toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero], which gives me the correct position. The problem is that it takes too long to scrub backward.
I know the reason is keyframes: the player has to start decoding from the nearest keyframe to reach the requested position.
Is there any way to re-encode the video to have more keyframes, or to consist entirely of keyframes?
Thanks
Yes, you can encode video so that every frame is a keyframe, but the file will become MUCH larger. It will also take time/CPU to do it. In addition, at 30 frames per second there is only one frame every 33 milliseconds, so sub-millisecond resolution doesn't make any sense.
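For example, when re-encoding with AVAssetWriter, the compression properties can request a keyframe on every frame. This is only a sketch of the relevant output settings; the codec choice and dimensions are placeholders:

import AVFoundation

let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: [
        AVVideoMaxKeyFrameIntervalKey: 1  // emit a keyframe for every frame
    ]
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)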

Playing an AVMutableComposition with AVPlayer audio gets out of sync

I have an AVMutableComposition with 2 audio tracks and one video track. I'm using the composition to string together about 40 different video clips from .mov files, putting the video content of each clip in the video track of my composition and the audio in the audio track. The second audio track I use for music.
I also have a synchronized layer for titles graphics.
When I play this composition using an AVPlayer, the audio slowly gets out of sync. It takes about 4 minutes to start becoming noticeable. It seems like if I only string together a handful of longer clips the problem is not as apparent; it is when there are many shorter clips (~40 in my test) that it gets really bad.
Pausing and Playing doesn't re-sync the audio, however seeking does. In other words, if I let the video play to the end, towards the end the lip sync gets noticeably off even if I pause and play throughout, however, if I seek to a time towards the end the audio gets back in sync.
My hacky solution for now is to seek to the currentTime + 1 frame every minute or so. This creates an unpleasant jump in the video caused by a lag in the seek operation, so not a good solution.
Exporting with an ExportSession doesn't present this problem, audio remains in sync in the output movie.
I'm wondering if the new masterClock property in the AVPlayer is the answer to this, and if it is, how is it used?
I had the same issue and fixed it, among many other audio and video things, by specifying times in the following manner:
CMTime(seconds: my_seconds, preferredTimescale: CMTimeScale(600))
Before, my timescale was CMTimeScale(NSEC_PER_SEC). That caused jitter when composing clips at different frame rates, plus the audio out-of-sync problem that Eddy mentions here.
In spite of looking like a magic number, 600 is a common multiple of 24, 30, 60 and 120, which are the usual frame rates for different purposes. Using a common multiple avoids dragging rounding errors around when composing multiple clips.
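To illustrate the point about common multiples (this example is mine, not from the answer): one frame at each common rate is an exact whole number of ticks on a 600 timescale, so durations compose without accumulating rounding error.

import CoreMedia

let frameAt24fps = CMTime(value: 25, timescale: 600)   // 600 / 24 = 25, exactly 1/24 s
let frameAt30fps = CMTime(value: 20, timescale: 600)   // 600 / 30 = 20, exactly 1/30 s
let frameAt60fps = CMTime(value: 10, timescale: 600)   // 600 / 60 = 10, exactly 1/60 s
let frameAt120fps = CMTime(value: 5, timescale: 600)   // 600 / 120 = 5, exactly 1/120 s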

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30fps QuickTime .mov of still images I created with AVAssetWriter. (It's only about 10 frames long.) I would like the user to be able to slow it down using a UISlider to about 1fps, but when I adjust the AVPlayer .rate property from 1 down toward 0, it doesn't get anywhere near 1fps; it just stops playback (because a 0 rate is effectively stopping/pausing it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this was a requirement of mine, I would approach it as follows (also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This would give you precise control using a timer, as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach like Core Audio: when you need an image, pull it from some image buffer manager class which calls AVAssetReader's read function. This way you can have N buffers that will always be available. This may be a little overkill - I do believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer for reading image data into.
Regarding your comment about memory issues: I do believe there are some functions in AVAssetReader and associated classes that follow the create rule.
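A rough sketch of that pull/timer approach, assuming a BGRA pixel-buffer output and a simple Timer-driven display loop (the class and property names are mine, and error handling is omitted):

import AVFoundation
import CoreImage
import UIKit

final class SlowFrameViewer {
    private let reader: AVAssetReader
    private let output: AVAssetReaderTrackOutput
    private var timer: Timer?
    let imageView = UIImageView()

    init?(asset: AVAsset) {
        guard let track = asset.tracks(withMediaType: .video).first,
              let reader = try? AVAssetReader(asset: asset) else { return nil }
        let settings: [String: Any] =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        reader.add(output)
        reader.startReading()
        self.reader = reader
        self.output = output
    }

    // framesPerSecond = 1.0 gives the roughly 1 fps playback the question asks about.
    func start(framesPerSecond: Double) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / framesPerSecond, repeats: true) { [weak self] _ in
            self?.showNextFrame()
        }
    }

    private func showNextFrame() {
        guard let sample = output.copyNextSampleBuffer(),
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
            timer?.invalidate() // end of the movie (or a read error)
            return
        }
        imageView.image = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))
    }
}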
