Change keyframes in video - iOS

I'm trying to scrub through videos in really small increments (maybe even less than a millisecond). To get to the next frame I use [AVPlayer seekToTime:time toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero], which gives me the correct position. The problem is that it takes too long to scrub backward.
I know the reason is keyframes: the player has to start decoding from the nearest keyframe to reach the requested position.
Is there any possibility to re-encode the video to have more keyframes, or to consist entirely of keyframes?
Thanks

Yes, you can encode video so that every frame is a keyframe, but the file will become MUCH larger. It will also take time/CPU to do it. In addition, at 30 frames per second there is only one frame every ~33 milliseconds, so sub-millisecond resolution doesn't make much sense.
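If re-encoding is acceptable, a rough sketch of one way to do it with AVAssetReader/AVAssetWriter is below. sourceAsset, outputURL, and the function name are placeholders, error handling is omitted, and the essential detail is AVVideoMaxKeyFrameIntervalKey set to 1 in the compression settings:

import AVFoundation

// Rewrite the source's video track so that every frame is encoded as a keyframe.
func reencodeAllKeyframes(sourceAsset: AVAsset, outputURL: URL) throws {
    guard let videoTrack = sourceAsset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: sourceAsset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: videoTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoTrack.naturalSize.width,
        AVVideoHeightKey: videoTrack.naturalSize.height,
        AVVideoCompressionPropertiesKey: [
            AVVideoMaxKeyFrameIntervalKey: 1   // every frame is a keyframe -> much larger file
        ]
    ])
    writerInput.expectsMediaDataInRealTime = false
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "reencode")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let sample = readerOutput.copyNextSampleBuffer() {
                writerInput.append(sample)
            } else {
                writerInput.markAsFinished()
                writer.finishWriting { /* re-encoded file is now at outputURL */ }
                break
            }
        }
    }
}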

Related

Update AVMutableVideoComposition on AVPlayerItem faster than video framerate

I am trying to preview a CIFilter applied to a video using AVMutableVideoComposition's applyingCIFiltersWithHandler initializer.
I have several sliders that change values in the filter, which get reflected by the AVPlayer. The only issue is that there is a noticeable lag between moving the slider and the next frame of the video applying my change.
If I use a higher framerate video, the applier block is called more often and the lag is not noticeable.
I've tried recreating and replacing the AVMutableVideoComposition on the current AVPlayerItem whenever the slider moves, but this looks jerky while the video is playing. (It works very well if the video is paused; see https://developer.apple.com/library/archive/qa/qa1966/_index.html.)
Any idea how to do this without writing a custom video player that has a way to invalidate the frame?
This is a decent solution I managed to find.
I noticed that putting a sleep in the frame processing block actually seemed to improve the perceived performance.
The AVMutableVideoComposition must build up a buffer of frames, and the delay I'm seeing is that buffer running out before the frames with the new filter values show up. Sleeping in the frame processing block prevented the buffer from filling up, making the changes show up immediately.
I looked through the documentation of AVMutableVideoComposition for the millionth time and found this little gem in the docs for sourceTrackIDForFrameTiming.
If an empty edit is encountered in the source asset’s track, the compositor composes frames as needed up to the frequency specified in frameDuration property. Otherwise the frame timing for the video composition is derived from the source asset's track with the corresponding ID.
I had previously tried setting the frameDuration on the composition but couldn't get it to go faster than the video's framerate. If I set the sourceTrackIDForFrameTiming to kCMPersistentTrackID_Invalid it actually lets me speed up the framerate.
By setting the frame rate extremely high (1000 fps), the phone can never fill the buffer, so the changes appear immediately.
composition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid  // derive frame timing from frameDuration rather than the source track
let frameRateTooHighForPhone = CMTime(value: 1, timescale: 1000)        // a 1/1000 s frame duration, i.e. a nominal 1000 fps
composition.frameDuration = frameRateTooHighForPhone
It's a little bit hackier than is ideal, but it's not a bad solution.
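For context, here is a minimal sketch (the asset and the specific CIFilter are assumptions) of where those two lines sit when building the composition with the CIFilter applier:

import AVFoundation
import CoreImage

// `asset` is the source AVAsset; the sliders would drive this filter's parameters.
let filter = CIFilter(name: "CIColorControls")!
let composition = AVMutableVideoComposition(asset: asset) { request in
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
}

// Derive frame timing from frameDuration instead of the source track,
// then ask for a very short frame duration so the applier runs far more often.
composition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid
composition.frameDuration = CMTime(value: 1, timescale: 1000)

let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)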
Thanks so much for posting this, Randall. The above frameDuration hack fixed the lag I was seeing when enabling/disabling layers, so it seems I'm moving in the right direction.
The issue I now need to figure out is why this frameDuration hack also seems to introduce glitches and hangs in the video processing. Sometimes it works great, but usually the video freezes after a few seconds while the audio track continues to play. Without the hack, playback is solid but changes to the composition lag. With the hack, changes are seemingly instantaneous and the video playback has about a 10% chance of being solid - otherwise, it hangs. (If I scrub around enough it seems to somehow fix itself, and when it does the universe feels like a better place.)
I'm very new to working with AVMutableVideoComposition and the AVVideoCompositing protocol, and documentation concerning my usage seems to be sparse, so I'm posting this reply in case anyone has any more golden nuggets of info to share with me.

Create a timelapse from a normal video in iOS

I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame by removing the time ranges of all the other frames with the removeTimeRange(...) method.
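A minimal sketch of Solution A, assuming a placeholder sourceAsset and a video track with a usable nominal frame rate:

import AVFoundation

// Keep roughly one frame per second by deleting everything after the first
// frame of each one-second window.
let composition = AVMutableComposition()
if let videoTrack = sourceAsset.tracks(withMediaType: .video).first,
   let compTrack = composition.addMutableTrack(withMediaType: .video,
                                               preferredTrackID: kCMPersistentTrackID_Invalid) {
    try? compTrack.insertTimeRange(CMTimeRange(start: .zero, duration: sourceAsset.duration),
                                   of: videoTrack, at: .zero)

    let frameDuration = CMTime(value: 1, timescale: max(1, Int32(videoTrack.nominalFrameRate.rounded())))
    var second = Int(sourceAsset.duration.seconds)
    // Walk backwards so earlier removals don't shift the ranges still to be removed.
    while second >= 0 {
        let windowStart = CMTime(seconds: Double(second), preferredTimescale: 600)
        let keepEnd = windowStart + frameDuration
        let removeEnd = min(CMTime(seconds: Double(second + 1), preferredTimescale: 600),
                            sourceAsset.duration)
        if keepEnd < removeEnd {
            composition.removeTimeRange(CMTimeRange(start: keepEnd, end: removeEnd))
        }
        second -= 1
    }
}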
SOLUTION B
Use AVAssetReader to extract all the individual frames as an array of CMSampleBuffer.
Write the [CMSampleBuffer] back into a movie, skipping every 20 frames or so as required.
Convert the resulting video file to an AVMutableComposition and use scaleTimeRange(..) to reduce the overall time range of the video for the timelapse effect.
PROBLEMS
The first solution is not suitable for full-HD videos: the video freezes in multiple places and the seek bar shows inaccurate timing.
e.g. A 12-second timelapse might be reported as having a duration of only 5 seconds, so it keeps playing even after the seek bar has finished.
I mean the timing of the video gets all messed up for some reason.
The second solution is incredibly slow. For a 10-minute HD video, memory usage grows without bound, since all the processing is done in memory.
I am searching for a technique that can produce a timelapse right away, without waiting. Solution A kind of does that, but is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a 'slide show' like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
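For what it's worth, a minimal sketch of that approach (the asset and the one-second sampling interval are assumptions; the tolerances trade accuracy for speed):

import AVFoundation

// Pull one thumbnail per second and hand the CGImages to a slide-show style view.
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
// Generous tolerances let the generator snap to nearby keyframes, which is much faster.
generator.requestedTimeToleranceBefore = CMTime(seconds: 0.5, preferredTimescale: 600)
generator.requestedTimeToleranceAfter = CMTime(seconds: 0.5, preferredTimescale: 600)

let times: [NSValue] = stride(from: 0.0, to: asset.duration.seconds, by: 1.0).map {
    NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
}

generator.generateCGImagesAsynchronously(forTimes: times) { _, cgImage, _, result, _ in
    guard result == .succeeded, let cgImage = cgImage else { return }
    // Collect cgImage for the "slide show" here.
    _ = cgImage
}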

Does backward scrubbing work as smoothly as forward scrubbing in iOS using AVPlayerItem?

I have used AVPlayer for video playback. I am using the following lines of code for forward scrubbing and it works fine, but when I scrub backward using the slider it seems choppy and skips a few frames.
let valueToUpdate = Float(value) * Float(self.highlightsPlayer.clipTimeDuration)
self.highlightsPlayer.highlightsPlayerItem?.seek(to: CMTimeMakeWithSeconds(TimeInterval(valueToUpdate), 240))
Is there any solution so that forward and backward scrubbing work in the same way?
Thanks in advance.
It has to do with compression and keyframes. Imagine you're scanning through a video that's been compressed. Every 10 seconds you get a full image, but for every frame between those 10-second frames you just get the pixels that have changed. So scanning forward is easy: you combine pixels as you go and show fewer frames depending on speed. Going backwards is not so easy. To go even one frame behind one of your keyframes, you need to jump back to the previous keyframe (10 seconds back) and then reconstruct all the images up to the one you need.
Obviously this is a bit simplified, as there are variable-rate compression algorithms, but this should give you the general idea.
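One common mitigation (not from the answer above, just a sketch of the usual pattern) is to seek with a generous tolerance while the slider is moving, so backward seeks can land on a nearby keyframe quickly, and only perform the exact zero-tolerance seek when the user releases the slider. player and duration are assumed to come from your existing playback setup:

import AVFoundation

// Loose tolerance while dragging: fast in both directions, slightly imprecise.
func sliderMoved(to fraction: Float, player: AVPlayer, duration: CMTime) {
    let target = CMTimeMultiplyByFloat64(duration, multiplier: Float64(fraction))
    let coarse = CMTime(seconds: 0.5, preferredTimescale: 600)
    player.seek(to: target, toleranceBefore: coarse, toleranceAfter: coarse)
}

// Exact seek once the user lets go of the slider.
func sliderReleased(at fraction: Float, player: AVPlayer, duration: CMTime) {
    let target = CMTimeMultiplyByFloat64(duration, multiplier: Float64(fraction))
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
}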

iOS dynamically slow down the playback of a video, with continuous value

I have a problem with the iOS SDK. I can't find the API to slow down a video with continuous values.
I have made an app with a slider and an AVPlayer, and I would like to change the speed of the video, from 50% to 150%, according to the slider value.
As of now, I have only managed to change the speed of the video with discrete values, and by re-exporting the video. (In order to do that, I used the AVMutableComposition APIs.)
Do you know if it is possible to change the speed continuously, and without re-exporting?
Thank you very much!
Jery
The AVPlayer's rate property allows playback speed changes if the associated AVPlayerItem is capable of it (responds YES to canPlaySlowForward or canPlayFastForward). The rate is 1.0 for normal playback, 0 for stopped, and can be set to other values but will probably round to the nearest discrete value it is capable of, such as 2:1, 3:2, 5:4 for faster speeds, and 1:2, 2:3 and 4:5 for slower speeds.
With the older MPMoviePlayerController, and its similar currentPlaybackRate property, I found that it would take any setting and report it back, but would still round it to one of the discrete values above. For example, set it to 1.05 and you would get normal speed (1:1) even though currentPlaybackRate would say 1.05 if you read it. Set it to 1.2 and it would play at 1.25X (5:4). And it was limited to 2:1 (double speed), beyond which it would hang or jump.
For some reason, the iOS API Reference doesn't mention these discrete speeds. They were found by experimentation. They make some sense. Since the hardware displays video frames at a fixed rate (e.g., 30 or 60 frames per second), some multiples are easier than others. Half speed can be achieved by showing each frame twice, and double speed by dropping every other frame. Dropping 1 out of every 3 frames gives you 150% (3:2) speed. But to do 105% is harder, dropping 1 out of every 21 frames. Especially if this is done in hardware, you can see why they might have limited it to only certain multiples.
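As a small sketch of wiring a slider directly to the rate property (the 50%-150% mapping and the player are assumptions; the hardware may still snap to the discrete multiples described above):

import AVFoundation

// Map a 0...1 slider to a 0.5x...1.5x playback rate without re-exporting the video.
func updateRate(sliderValue: Float, player: AVPlayer) {
    guard let item = player.currentItem else { return }
    let desired = 0.5 + sliderValue                      // 0.5 ... 1.5
    if (desired < 1.0 && item.canPlaySlowForward) ||
       (desired > 1.0 && item.canPlayFastForward) ||
       desired == 1.0 {
        player.rate = desired                            // may be rounded to a supported ratio
    }
}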

How can I ensure the correct frame rate when recording an animation using DirectShow?

I am attempting to record an animation (computer graphics, not video) to a WMV file using DirectShow. The setup is:
A Push Source that uses an in-memory bitmap holding the animation frame. Each time FillBuffer() is called, the bitmap's data is copied over into the sample, and the sample is timestamped with a start time (frame number * frame length) and duration (frame length). The frame rate is set to 10 frames per second in the filter.
An ASF Writer filter. I have a custom profile file that sets the video to 10 frames per second. It's a video-only filter, so there's no audio.
The pins connect, and when the graph is run, a wmv file is created. But...
The problem is it appears DirectShow is pushing data from the Push Source at a rate greater than 10 FPS. So the resultant wmv, while playable and containing the correct animation (as well as reporting the correct FPS), plays the animation back several times too slowly because too many frames were added to the video during recording. That is, a 10 second video at 10 FPS should only have 100 frames, but about 500 are being stuffed into the video, resulting in the video being 50 seconds long.
My initial attempt at a solution was just to slow down the FillBuffer() call by adding a sleep() for 1/10th second. And that indeed does more or less work. But it seems hackish, and I question whether that would work well at higher FPS.
So I'm wondering if there's a better way to do this. Actually, I'm assuming there's a better way and I'm just missing it. Or do I just need to smarten up the manner in which FillBuffer() in the Push Source is delayed and use a better timing mechanism?
Any suggestions would be greatly appreciated!
I do this with threads. The main thread is adding bitmaps to a list and the recorder thread takes bitmaps from that list.
Main thread
Animate your graphics at time T and render a bitmap.
Add the bitmap to the render list. If the list is full (say, more than 8 frames), wait. This is so you won't use too much memory.
Advance T by the delta time corresponding to the desired frame rate.
Recorder thread
When a frame is requested, pick and remove a bitmap from the render list. If the list is empty, wait.
You need a thread-safe structure such as TThreadList to hold the bitmaps. It's a bit tricky to get right, but your current approach is guaranteed to give you timing problems.
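The pattern itself is not DirectShow-specific; purely as an illustration of the bounded producer/consumer list described above (rendered here in Swift, with hypothetical names), the animation thread blocks once the queue holds 8 frames and the recorder thread blocks while it is empty:

import Foundation

// Generic bounded frame queue: push() blocks when full, pop() blocks when empty.
final class FrameQueue<Frame> {
    private var frames: [Frame] = []
    private let lock = NSLock()
    private let freeSlots = DispatchSemaphore(value: 8)   // "if list is full, wait"
    private let available = DispatchSemaphore(value: 0)   // "if list is empty, wait"

    func push(_ frame: Frame) {            // called by the main/animation thread
        freeSlots.wait()
        lock.lock(); frames.append(frame); lock.unlock()
        available.signal()
    }

    func pop() -> Frame {                  // called by the recorder thread (e.g. from FillBuffer)
        available.wait()
        lock.lock(); let frame = frames.removeFirst(); lock.unlock()
        freeSlots.signal()
        return frame
    }
}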
I am doing exactly this in my recorder application (www.videophill.com) for testing purposes.
I am using the Sleep() method to delay the frames, but I take great care to ensure that the timestamps of the frames are correct. Also, when Sleep()ing from frame to frame, try to use 'absolute' time differences, because Sleep(100) will sleep for about 100 ms, not exactly 100 ms.
If it won't work for you, you can always go for IReferenceClock, but I think that's overkill here.
So:
DateTime start = DateTime.Now;
int frameCounter = 0;
while (wePush)
{
    FillDataBuffer(...);
    frameCounter++;
    // Schedule each frame against an absolute timeline (100 ms per frame = 10 FPS)
    // so Sleep() inaccuracy cannot accumulate.
    DateTime nextFrameTime = start.AddMilliseconds(frameCounter * 100);
    int delay = (int)(nextFrameTime - DateTime.Now).TotalMilliseconds;
    if (delay > 0)
        Sleep(delay);
}
EDIT:
Keep in mind: IWMWriter is time-insensitive as long as you feed it samples that are properly time-stamped.
