iOS: Extract all frames from a video

I have to extract all the frames from a video file and then save them to disk.
I tried AVAssetImageGenerator, but it's very slow: it takes 1s to 3s per frame (for a sample 1280x720 MPEG4 video), not counting the saving-to-file step.
Is there any way to make it much faster?
OpenGL, the GPU, (...)?
I would be very grateful if you could point me in the right direction.

AVAssetImageGenerator is a random-access (seeking) interface, and seeking takes time, so one optimisation is to use an AVAssetReader, which will quickly and sequentially vend you frames. You can also choose to work in YUV format, which gives you smaller frames and (I think) faster decoding.
However, those raw frames are enormous: 1280px * 720px * 4 bytes/pixel (if in RGBA) is about 3.6MB each. You're going to need some pretty serious compression if you want to keep them all (MPEG4 @ 720p comes to mind :).
So what are you trying to achieve?
Are you sure you want to fill up your users' disks at a rate of 108MB/s (at 30fps) or 864MB/s (at 240fps)?
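For reference, a minimal sketch of the sequential AVAssetReader approach, assuming a local file URL (the function name and `videoURL` are placeholders):

```swift
import AVFoundation

// Minimal sketch: sequentially decode every frame of a local video.
// `videoURL` is a placeholder for your own file URL.
func readAllFrames(from videoURL: URL) throws {
    let asset = AVAsset(url: videoURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    // Request YUV (4:2:0 bi-planar) output; it is smaller than BGRA
    // and typically faster to produce.
    let settings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(output)
    guard reader.startReading() else { return }

    // Frames arrive sequentially in decode order; no seeking involved.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Process or compress the frame here before it goes out of scope.
            _ = pixelBuffer
        }
    }
}
```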

Create a timelapse from a normal video in iOS

I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame by removing the timing for all the other frames with the removeTimeRange(...) method (a rough sketch follows the Solution B list below).
SOLUTION B
Use AVAssetReader to extract all the individual frames as an array of CMSampleBuffers.
Write the [CMSampleBuffer] array back into a movie, skipping every 20 frames or so as per the requirement.
Convert the resulting video file to an AVMutableComposition and use scaleTimeRange(..) to reduce the overall timeRange of the video for the timelapse effect.
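For illustration, here is a rough sketch of Solution A's core loop, assuming a 30fps source (the frame duration and the one-frame-per-second stride are assumptions):

```swift
import AVFoundation

// Rough sketch of Solution A: keep one frame per second by removing
// everything else from a mutable composition. Assumes a 30fps source.
func makeTimelapseComposition(from asset: AVAsset) -> AVMutableComposition {
    let composition = AVMutableComposition()
    try? composition.insertTimeRange(
        CMTimeRange(start: .zero, duration: asset.duration),
        of: asset,
        at: .zero)

    let frameDuration = CMTime(value: 1, timescale: 30) // assumed frame rate
    var cursor = CMTime.zero
    while cursor < composition.duration {
        // Keep the first frame of this second; remove the rest of it.
        let removalStart = CMTimeAdd(cursor, frameDuration)
        let nextSecond = CMTimeAdd(cursor, CMTime(value: 1, timescale: 1))
        let removalEnd = CMTimeMinimum(nextSecond, composition.duration)
        if removalStart < removalEnd {
            // removeTimeRange shifts later content earlier, so the
            // composition shrinks as we go.
            composition.removeTimeRange(CMTimeRange(start: removalStart, end: removalEnd))
        }
        // The kept frame now occupies exactly frameDuration, so advance by that.
        cursor = CMTimeAdd(cursor, frameDuration)
    }
    return composition
}
```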
PROBLEMS
The first solution is not suitable for full HD videos: the video freezes in multiple places and the seek bar shows inaccurate timing.
e.g. a 12-second timelapse might be shown as having a duration of only 5 seconds, so it keeps playing even after the seek bar has finished.
In other words, the timing of the video gets all messed up for some reason.
The second solution is incredibly slow. For a 10-minute HD video the memory usage grows without bound, since all the work is done in memory.
I am searching for a technique that can produce a timelapse for a video right away, without a waiting period. Solution A kind of does that, but is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a 'slide show' like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
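A minimal sketch of batch thumbnail generation with AVAssetImageGenerator; the one-image-per-second spacing is just an example value:

```swift
import AVFoundation

// Sketch: request one thumbnail per second of the asset, asynchronously.
func generateThumbnails(for asset: AVAsset) {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true

    // One requested time per second of the asset (example spacing).
    let times = stride(from: 0.0, to: asset.duration.seconds, by: 1.0).map {
        NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
    }

    generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, _ in
        guard result == .succeeded, let image = image else { return }
        // Hand the CGImage off to your slide-show view here.
        _ = image
    }
}
```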

Simplified screen capture: record video of only what appears within the layers of a UIView?

This SO answer addresses how to do a screen capture of a UIView. We need something similar, but instead of a single image, the goal is to produce a video of everything appearing within a UIView over 60 seconds -- conceptually like recording only the layers of that UIView, ignoring other layers.
Our video app superimposes layers on whatever the user is recording, and the ultimate goal is to produce a master video merging those layers with the original video. However, using AVVideoCompositionCoreAnimationTool to merge layers with the original video is very, very, very slow: exporting a 60-second video takes 10-20 seconds.
What we found is that combining two videos (i.e., using only AVMutableComposition without AVVideoCompositionCoreAnimationTool) is very fast: ~1 second. The hope is to create an independent video of the layers and then combine that with the original video using only AVMutableComposition.
An answer in Swift is ideal but not required.
It sounds like your "fast" merge doesn't involve (re)encoding frames, i.e. it's trivial and basically a glorified file concatenation, which is why it's getting 60x realtime. I asked about that because your "very slow" export runs at 3-6x realtime, which actually isn't that terrible (at least it wasn't on older hardware).
Encoding frames with an AVAssetWriter should give you an idea of the fastest possible non-trivial export and this may reveal that on modern hardware you could halve or quarter your export times.
This is a long way of saying that there might not be that much more performance to be had. If you think about the typical iOS video encoding use case, which would probably be recording 1080p (1920 x 1080) @ 120fps or 240fps, your encoding at ~6x realtime @ 30fps is in the ballpark of what a typical iOS device "needs" to be able to do.
There are optimisations available to you (like lower/variable framerates), but these may lose you the convenience of being able to capture CALayers.
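To benchmark that fastest possible non-trivial export, here is a rough reader-to-writer re-encode sketch; the URLs, the H.264 settings, and the assumption that the track starts at time zero are all placeholders:

```swift
import AVFoundation

// Sketch: decode every frame with AVAssetReader and re-encode it with
// AVAssetWriter, as a baseline for export performance.
func reencode(from inputURL: URL, to outputURL: URL,
              completion: @escaping () -> Void) throws {
    let asset = AVAsset(url: inputURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,   // assumed codec
        AVVideoWidthKey: track.naturalSize.width,
        AVVideoHeightKey: track.naturalSize.height
    ])
    writerInput.expectsMediaDataInRealTime = false
    writer.add(writerInput)

    guard reader.startReading(), writer.startWriting() else { return }
    writer.startSession(atSourceTime: .zero) // assumes the track starts at zero

    let queue = DispatchQueue(label: "reencode")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let sample = readerOutput.copyNextSampleBuffer() {
                if !writerInput.append(sample) { break }
            } else {
                writerInput.markAsFinished()
                writer.finishWriting(completionHandler: completion)
                break
            }
        }
    }
}
```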

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30fps QuickTime .mov of still images that I created with AVAssetWriter. (It's only about 10 frames long.) I would like the user to be able to slow it down to about 1fps using a UISlider, but when I adjust the AVPlayer rate property from 1 down to 0, it doesn't get anywhere near 1fps; it just stops playback (because a rate of 0 effectively stops/pauses it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this were a requirement of mine, I would approach it as follows (as also suggested by Inafziger in the comments): use AVAssetReader and roll your own viewer for the images (sketched below). This gives you precise control using a timer, as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach, like Core Audio: when you need an image, pull it from an image buffer manager class that calls AVAssetReader's read function. That way you can have N buffers that will always be available, though this may be a little overkill. I do believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer for reading image data into.
Regarding your comment about memory issues: I do believe there are some functions in AVAssetReader and its associated classes that follow the create rule.
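A minimal sketch of that pull approach, displaying one decoded frame per second; the class and property names are illustrative:

```swift
import AVFoundation
import CoreImage
import UIKit

// Sketch: a timer fires once per second and pulls the next decoded
// frame from an AVAssetReader, giving ~1fps "playback".
final class SlowFrameViewer {
    private let reader: AVAssetReader // retained so reading stays alive
    private let output: AVAssetReaderTrackOutput
    private var timer: Timer?
    let imageView = UIImageView()

    init?(url: URL) {
        let asset = AVAsset(url: url)
        guard let track = asset.tracks(withMediaType: .video).first,
              let reader = try? AVAssetReader(asset: asset) else { return nil }
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        reader.add(output)
        guard reader.startReading() else { return nil }
        self.reader = reader
        self.output = output
    }

    // Show one frame per second.
    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            self?.showNextFrame()
        }
    }

    private func showNextFrame() {
        guard let sample = output.copyNextSampleBuffer(),
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
            timer?.invalidate() // end of movie (or read error)
            return
        }
        imageView.image = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))
    }
}
```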

Rendering Video to an OpenGL texture in iOS with scrubbing

I have used the method from "iOS4: how do I use video file as an OpenGL texture?" to get video frames rendering in OpenGL successfully.
This method, however, seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies you with video frames sequentially.
Does anyone know a way this behaviour can successfully be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video first and holding all the data on disk. It also involves decoding each frame as it is accessed, which eats up CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.
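A rough sketch of that approach, assuming 32BGRA frames and a writable directory URL (the file naming scheme is an assumption):

```swift
import AVFoundation
import UIKit

// Sketch: decode the whole video once and write numbered PNGs to disk.
func exportFrames(from asset: AVAsset, to directory: URL) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    guard reader.startReading() else { return }

    let context = CIContext()
    var index = 0
    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
            if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
                let url = directory.appendingPathComponent(String(format: "frame_%05d.png", index))
                try UIImage(cgImage: cgImage).pngData()?.write(to: url)
            }
        }
        index += 1
    }
}

// Random access afterwards: map a scrub position to a frame index
// (e.g. index = seconds * fps) and load that PNG.
func frame(at index: Int, in directory: URL) -> UIImage? {
    let url = directory.appendingPathComponent(String(format: "frame_%05d.png", index))
    return UIImage(contentsOfFile: url.path)
}
```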

Cap frame rate or bit rate in AVAssetExportSession?

I'm using AVAssetExportSession to export some stuff at 640x480, and the files are kind of monstrous -- predictably monstrous, but still monstrous, given that we need to upload them from the phone over a 3G network. Is there any way to affect the size of the file other than to reduce the resolution? Ideally I'd like to try, e.g., compressing harder (even if that lowers quality), or cutting back to 15 frames/second, or something like that, but there don't seem to be any hooks to do it.
With AVAssetExportSession you can only use presets. If AVAssetExportPresetMediumQuality and AVAssetExportPresetLowQuality don't work for you, you're better off using AVAssetReader and AVAssetWriter. AVAssetWriter supports bitrate settings (see the sketch below), and you can optionally skip frames while writing to get a lower FPS.
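A minimal sketch of an AVAssetWriterInput configured with an explicit bitrate; the 1 Mbit/s figure and the 640x480 dimensions are just example values:

```swift
import AVFoundation

// Sketch: cap the average bitrate via AVVideoCompressionPropertiesKey.
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 480,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1_000_000 // ~1 Mbit/s, example value
    ]
])
// To lower the FPS, skip sample buffers (e.g. append only every other
// frame from the AVAssetReader) before calling writerInput.append(_:).
```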
