Reverse video playback in iOS

I want to play a video backward in AVPlayer. I have tried setting the rate property to -1.0, and although it did work, it was not smooth. Is there any way I can play videos backward smoothly?

As stated in the comments, the problem is keyframes and the fact that most codecs are not designed to play backward. There are two options for re-encoding the video that don't require you to actually reverse it in an editor.
1. Make every frame a keyframe. I've seen this work well for codecs like H.264 that rely on keyframes. If every frame is a keyframe, then each frame can be decoded without relying on any previous frames, so playing backward is effectively the same as playing forward. A sketch of the encoder settings follows below.
2. Use a codec that doesn't distinguish between keyframes and non-keyframes (in effect, every frame is always a keyframe). Photo JPEG is one such option, although I'm not completely sure it plays back on iOS; I would think so. It works great on a Mac.
Note that either of these options will result in larger file sizes compared to typical "keyframe every x frames" encoded video.
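For the first option, a minimal sketch (in Swift) of the writer side of such a re-encode, assuming you are already moving sample buffers from an AVAssetReader to an AVAssetWriter; the 1920x1080 dimensions are placeholders for your source's. Setting AVVideoMaxKeyFrameIntervalKey to 1 asks the H.264 encoder to make every frame a keyframe:

import AVFoundation

let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,   // placeholder: use your source track's dimensions
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: [
        AVVideoMaxKeyFrameIntervalKey: 1  // keyframe interval of 1 = every frame is a keyframe
    ]
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)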

You have to seek to the end of the current item and then set the rate to a negative value. Something like this:
- (void)reversePlay
{
    CMTime durTime = myPlayer.currentItem.asset.duration;
    if (CMTIME_IS_VALID(durTime))
    {
        [myPlayer seekToTime:durTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
        [myPlayer setRate:-1.0];
    }
    else
    {
        NSLog(@"Invalid time");
    }
}
source: https://stackoverflow.com/a/16104363/701043
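As a refinement, a sketch (in Swift) that checks canPlayReverse first and only sets the negative rate once the seek has actually completed, using the completion-handler form of seek(to:); player here stands in for myPlayer above:

import AVFoundation

func reversePlay(_ player: AVPlayer) {
    guard let item = player.currentItem, item.canPlayReverse else { return }
    let duration = item.asset.duration
    guard duration.isValid else { return }
    player.seek(to: duration, toleranceBefore: .zero, toleranceAfter: .zero) { finished in
        if finished { player.rate = -1.0 }  // reverse only once the seek has landed
    }
}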

Related

Update AVMutableVideoComposition on AVPlayerItem faster than video framerate

I am trying to preview a CIFilter applied to a video using AVMutableVideoComposition's applyingCIFiltersWithHandler initializer.
I have several sliders that change values in the filter, which get reflected by the AVPlayer. The only issue is that there is a noticeable lag between moving the slider and the next frame of the video applying my change.
If I use a higher framerate video, the applier block is called more often and the lag is not noticeable.
I've tried recreating and replacing the AVMutableVideoComposition on the current AVPlayerItem whenever the slider moves, but this looks jerky while the video is playing. (It works very well if the video is paused; see https://developer.apple.com/library/archive/qa/qa1966/_index.html.)
Any idea how to do this without writing a custom video player that has a way to invalidate the frame?
This is a decent solution I managed to find.
I noticed that putting a sleep in the frame processing block actually seemed to improve the perceived performance.
The AVMutableVideoComposition must build up a buffer of frames, and the delay I'm seeing is the player draining that buffer before the frames with the new filter values show up. Sleeping in the frame processing block prevented the buffer from filling up, making the changes show up immediately.
I looked through the documentation of AVMutableVideoComposition for the millionth time and found this little gem in the docs for sourceTrackIDForFrameTiming.
If an empty edit is encountered in the source asset’s track, the compositor composes frames as needed up to the frequency specified in frameDuration property. Otherwise the frame timing for the video composition is derived from the source asset's track with the corresponding ID.
I had previously tried setting the frameDuration on the composition but couldn't get it to go faster than the video's framerate. If I set the sourceTrackIDForFrameTiming to kCMPersistentTrackID_Invalid it actually lets me speed up the framerate.
By setting the framerate to be extremely high (1000 fps), the phone can never fill up the buffer, making the changes appear immediate.
composition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid
let frameRateTooHighForPhone = CMTime(value: 1, timescale: 1000)
composition.frameDuration = frameRateTooHighForPhone
It's a little bit hackier than is ideal, but it's not a bad solution.
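Putting it together, a sketch of the whole setup under the question's assumptions: filter is the CIFilter driven by the sliders, and asset and playerItem are the item being previewed. The two lines from the answer above are the only non-obvious part:

import AVFoundation
import CoreImage

let composition = AVMutableVideoComposition(asset: asset) { request in
    // The handler runs per frame, so it picks up the latest slider values.
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
}
composition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid
composition.frameDuration = CMTime(value: 1, timescale: 1000)  // "1000 fps", per the hack above
playerItem.videoComposition = composition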
Thanks so much for posting this, Randall. The above frameDuration hack fixed the lag I was seeing when enabling/disabling layers, so it seems I'm moving in the right direction.
The issue I now need to figure out is why this frameDuration hack also has the side effect of introducing glitches and hangs in the video processing. Sometimes it works great, but usually the video freezes after a few seconds while the audio track continues to play. Without the hack, playback is solid but changes to the composition lag. With the hack, changes are seemingly instantaneous but video playback has about a 10% chance of being solid; otherwise it hangs. (If I scrub around enough it seems to somehow fix itself, and when it does the universe feels like a better place.)
I'm very new to working with AVMutableVideoComposition and the AVVideoCompositing protocol, and documentation concerning my usage seems to be sparse, so I'm posting this reply in case anyone has any more golden nuggets of info to share with me.

iOS Change keyframes in video

I'm trying to scrub through videos in really small increments (maybe even less than a millisecond). To get to the next frame I use [AVPlayer seekToTime:time toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero], which gives me the correct position. The problem is that it takes too long to scrub backward.
I know the reason is keyframes: the player has to start decoding from the nearest preceding keyframe to reach the position.
Is there any possibility to re-encode the video to have more keyframes, or to consist entirely of keyframes?
Thanks
Yes, you can encode video so every frame is a keyframe, but the file will become MUCH larger, and the re-encode will take time/CPU. In addition, at 30 frames per second there is only one frame every 33 milliseconds, so sub-millisecond resolution doesn't make any sense.
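To make the arithmetic concrete, a sketch (in Swift) of stepping backward by exactly one frame at the answer's example rate of 30 fps, assuming an AVPlayer named player; one frame is the smallest step that can change the displayed image:

import AVFoundation

let fps: Int32 = 30                              // example frame rate from the answer
let oneFrame = CMTime(value: 1, timescale: fps)  // ~33.3 ms per frame
let target = CMTimeSubtract(player.currentTime(), oneFrame)
player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)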

iOS postprocessing - overlay timestamp to video and export

I am working on an application where video and time/GPS/accelerometer data is simultaneously recorded to separate files.
I can play the video and have my overlay appear perfectly in realtime, but I cannot simply export this.
I want to post-process the video and overlay the time, coordinates, and the like on the video.
There are other shapes that will be overlaid, which change size/position on each frame.
I have tried using AVMutableComposition and adding CALayers, with limited results:
this works to an extent, but I cannot synchronise the timestamp with the video. I could use a CAKeyframeAnimation with values+keyTimes, but the number of values I would need to work with is excessive.
My current approach is to render a separate video consisting of CGImages created using the data. This works well, but I will need to use a chroma key to get transparency in the overlay, and I have read that there will likely be quality issues after doing this.
Is there a simpler approach that I should be looking at?
I understand that render speed will not be fantastic; however, I do not wish to require a separate 'PC' application to render the video.
Use AVAssetReader for the recorded video. Get each CMSampleBufferRef, get its timestamp, draw the time onto the sample buffer, then write the buffer to an AVAssetWriterInputPixelBufferAdaptor. A similar approach works for video as it is being recorded.
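A sketch (in Swift) of that loop, assuming readerOutput, writerInput, and adaptor are already configured and started, and that drawTimestamp(_:at:) is a hypothetical helper that renders the time text into the pixel buffer with Core Graphics or Core Image:

import AVFoundation

while let sampleBuffer = readerOutput.copyNextSampleBuffer() {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }

    drawTimestamp(pixelBuffer, at: pts)  // hypothetical overlay-drawing helper

    while !writerInput.isReadyForMoreMediaData { usleep(10_000) }  // simple backpressure
    adaptor.append(pixelBuffer, withPresentationTime: pts)
}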
Use the AVVideoCompositing protocol https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVVideoCompositing_Protocol/index.html
This will allow you to get frame-by-frame callbacks with the pixel buffers to do what you want.
With this protocol you will be able to take each frame and overlay whatever you would like. Take a look at this sample - https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html - to see how to handle frame-by-frame modifications. If you take advantage of the AVVideoCompositing protocol you can set a custom video compositor and a video composition on your AVPlayerItem and AVAssetExportSession to render/export what you want.
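A bare-bones sketch of a class adopting AVVideoCompositing; the actual overlay drawing is left as a comment and the error path is simplified. You attach it via the customVideoCompositorClass property of your video composition:

import AVFoundation
import CoreVideo

final class OverlayCompositor: NSObject, AVVideoCompositing {
    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // React to render size or transform changes here if needed.
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let frame = request.sourceFrame(byTrackID: trackID) else {
            request.finish(with: NSError(domain: "OverlayCompositor", code: -1))
            return
        }
        // Draw the timestamp/shapes for request.compositionTime into frame
        // (or into request.renderContext.newPixelBuffer() and composite there).
        request.finish(withComposedVideoFrame: frame)
    }
}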

AVPlayerItem's stepByCount only smooth for forward, but choppy for backwards?

I have implemented AVPlayerItem's stepByCount: method to manually go through a video frame by frame. Here's how it looks for stepping forward by one frame.
AVPlayer *player = [AVPlayer playerWithURL:url];
[player.currentItem stepByCount:1];
And backward 1 step
AVPlayer *player = [AVPlayer playerWithURL:url];
[player.currentItem stepByCount:-1];
Stepping forward (going forward in time frame by frame) works well. However, when I try to go backward frame by frame, it's not as smooth as the forward step. Am I missing something? Or is this inherent to the way videos are encoded (meant to be played forward, not backward)?
You can check to see if the AVPlayerItem supports stepping:
if (playerItem.canStepBackward)
{
    [playerItem stepByCount:numberOfFrames];
}
else
{
    // Do our best here...
    [player seekToTime:CMTimeSubtract(player.currentTime, CMTimeMake(ABS(numberOfFrames)*1000, compositionFPS*1000)) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}
Not sure what encoding format your video is in, but compressed videos are encoded with keyframes. These occur either at regular intervals (like once a second) or when the scene changes (like a cut from a close-up to a wide shot). The data between keyframes just describes the cumulative changes since the last keyframe: only pixels that change are noted.
As you have deduced, this means that a video in reverse is not how it is designed to be played. When you skip backwards, the decoder needs to jump back to the previous keyframe (which could be hundreds or even thousands of frames before your current position), then recreate all the frames up to your required frame before actually presenting you with the composited frame. The only way around this in your situation is to either use an already flattened video format or to flatten the video before scanning. You might want to take a look at http://opencv.org/ if you want to get into decompression.

MPMoviePlayer Buffer size/Adjustment

I have been using MPMovieplayer and the playableDuration to check the available duration of a movie.
The playable duration always seems to be only about one second ahead of the current playback position, and I would like to increase this.
I have tried prepareToPlay, but it does nothing noticeable to the playableDuration.
I have tried pre-emptively setting as many parameters as possible, such as the MPMovieSourceType, the media type, and the like, but all to no avail.
Just to clear a few things up first: I am using both MPMoviePlayer and AVPlayer, which play different streams simultaneously, as the video/audio I am using is split.
EDIT
It seems I overlooked the file size affecting the stream and should have read more in the Apple resources than elsewhere. As far as I can tell, the issue is that the file size is too large, and therefore a server-side media segmenter has to be implemented.
Apple Resource on Media Segmenting
