Capture multiple frames and prepare a video like Boomerang - iOS

I am trying to build a feature where I capture multiple frames, say 3 frames per second, and at the end combine all of the captured frames into a video and upload that video to a server.
The functionality is the same as in Boomerang. I have searched a lot for the most effective way to do this but haven't found anything helpful.
Any guidance is appreciated.

Video From Image Array
To combine your image array into a video, make use of the widely accepted answer here, which uses AVAssetWriter and a CVPixelBufferRef.
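A minimal Swift sketch of that approach (a starting point, not production code), assuming UIKit images, a fixed output size, and an H.264 .mov:

    import AVFoundation
    import UIKit

    // Write an array of UIImages out as a video at a fixed frame rate.
    func writeVideo(from images: [UIImage], size: CGSize, fps: Int32,
                    to outputURL: URL, completion: @escaping (Error?) -> Void) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: size.width,
            AVVideoHeightKey: size.height])
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        for (index, image) in images.enumerated() {
            // One frame every 1/fps seconds.
            let time = CMTime(value: CMTimeValue(index), timescale: fps)
            while !input.isReadyForMoreMediaData { usleep(10_000) }
            if let buffer = makePixelBuffer(from: image, size: size) {
                adaptor.append(buffer, withPresentationTime: time)
            }
        }
        input.markAsFinished()
        writer.finishWriting { completion(writer.error) }
    }

    // Draw a UIImage into a freshly allocated CVPixelBuffer.
    func makePixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
        var buffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                     kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
        CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                            kCVPixelFormatType_32ARGB, attrs, &buffer)
        guard let pixelBuffer = buffer, let cgImage = image.cgImage else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: Int(size.width), height: Int(size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        return pixelBuffer
    }

For the three-frames-per-second case in the question, call it with fps: 3 and your own output URL and size.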
To make the video in reverse order
This can be done effectively using AVFoundation.
Use AVAssetReader to read the video and AVAssetWriter to write the frames back out in reverse order.
Refer to the tutorial Reverse Video AVFoundation.
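A rough, untested sketch of that read-then-rewrite idea. It buffers every frame in memory, so it is only reasonable for short clips like a one-second Boomerang burst, and it ignores the audio track:

    import AVFoundation

    // Read every video frame of a (short) clip, then write the frames back out
    // in reverse order, reusing the original timestamps so duration and pacing are kept.
    func reverseVideo(at inputURL: URL, to outputURL: URL,
                      completion: @escaping (Error?) -> Void) {
        let asset = AVAsset(url: inputURL)
        guard let track = asset.tracks(withMediaType: .video).first,
              let reader = try? AVAssetReader(asset: asset),
              let writer = try? AVAssetWriter(outputURL: outputURL, fileType: .mov) else {
            completion(nil); return
        }

        let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings:
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        reader.add(readerOutput)
        reader.startReading()

        // Buffer all frames and their presentation timestamps.
        var frames: [CVPixelBuffer] = []
        var times: [CMTime] = []
        while let sample = readerOutput.copyNextSampleBuffer() {
            if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
                frames.append(pixelBuffer)
                times.append(CMSampleBufferGetPresentationTimeStamp(sample))
            }
        }

        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: track.naturalSize.width,
            AVVideoHeightKey: track.naturalSize.height])
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: times.first ?? .zero)

        // The i-th output frame is the (last - i)-th input frame, shown at the i-th original time.
        for (index, time) in times.enumerated() {
            while !input.isReadyForMoreMediaData { usleep(10_000) }
            adaptor.append(frames[frames.count - 1 - index], withPresentationTime: time)
        }
        input.markAsFinished()
        writer.finishWriting { completion(writer.error) }
    }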

Related

AVFoundation, create images for all of a video's frames while playing its audio

I'm at a bit of a loss here.
Basically, I need to create a frame server that will give my app an image of each frame contained in the video at said video's frame rate. I need to do this while also playing its audio.
I'll be using these frames as a texture source for certain geometries in a SceneKit scene.
I have never used AVFoundation, so any pointers, tutorials, or suggestions are welcome.
That's a very general question.
Since you've never used AVFoundation, the AVFoundation Programming Guide should be your first stop.
In particular, the Still and Video Media Capture section shows how to set a delegate for the capture process.
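For pulling frames out of an already recorded video while it keeps playing its audio, one commonly used approach is AVPlayerItemVideoOutput driven by a CADisplayLink. A rough sketch, assuming iOS; the FrameServer class name and the onFrame callback are just for illustration, not an existing API:

    import AVFoundation
    import UIKit

    final class FrameServer: NSObject {
        private let player: AVPlayer
        private let output = AVPlayerItemVideoOutput(pixelBufferAttributes:
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        private var displayLink: CADisplayLink?

        // Called once per display refresh with the frame that is on screen right now.
        var onFrame: ((CVPixelBuffer) -> Void)?

        init(url: URL) {
            let item = AVPlayerItem(url: url)
            player = AVPlayer(playerItem: item)
            super.init()
            item.add(output)
        }

        func start() {
            displayLink = CADisplayLink(target: self, selector: #selector(tick))
            displayLink?.add(to: .main, forMode: .common)
            player.play()   // audio keeps playing through the AVPlayer as usual
        }

        @objc private func tick() {
            let time = output.itemTime(forHostTime: CACurrentMediaTime())
            guard output.hasNewPixelBuffer(forItemTime: time),
                  let frame = output.copyPixelBuffer(forItemTime: time,
                                                     itemTimeForDisplay: nil) else { return }
            // Hand the frame to SceneKit, e.g. after converting it to a CGImage or Metal texture.
            onFrame?(frame)
        }
    }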

Playing CMSampleBufferRefs with AVPlayer

I am working on an OSX video editing app and have a set of CMSampleBufferRefs in an array, representing each frame of the video.
I want to render a preview of the video using AVPlayer - is it possible to feed these samples directly into AVPlayer?
I've looked at most of the AVFoundation classes. I know I can use AVAssetWriter to write to a file, but I want to avoid this, as the user will still be doing more editing (so it's good to keep the raw frame data).
Any thoughts?
Yes you can.
First of all, you should convert each CMSampleBufferRef to a CGImageRef; this will allow you to display the frame samples on screen.
This answer has all the necessary code for doing this.
As for playing them with AVPlayer, I'm not sure you really need to. Since you have full access to your CMSampleBufferRef array and can convert and render those samples yourself, there is no need to hand them to AVPlayer; instead, you can render the CGImage directly into a CALayer (see the sketch below).
I hope this can help you.
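A small sketch of that conversion and rendering path, using Core Image to turn each sample buffer into a CGImage and pushing it into a CALayer:

    import AVFoundation
    import CoreImage
    import QuartzCore

    // Reuse one CIContext; creating one per frame is expensive.
    let ciContext = CIContext()

    // Turn a CMSampleBuffer's pixel data into a CGImage.
    func cgImage(from sampleBuffer: CMSampleBuffer) -> CGImage? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvImageBuffer: pixelBuffer)
        return ciContext.createCGImage(ciImage, from: ciImage.extent)
    }

    // Display one frame by setting it as the layer's contents (no AVPlayer involved).
    func render(_ sampleBuffer: CMSampleBuffer, into layer: CALayer) {
        guard let image = cgImage(from: sampleBuffer) else { return }
        DispatchQueue.main.async {
            layer.contents = image   // CALayer.contents accepts a CGImage directly
        }
    }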

How to live filter a pre taken video on iOS

I have a video that I have already taken and that I am looping in an AVPlayerLayer. I would like to know the best way to pass that video through a CIFilter, both for live filtering and for saving the video with the filter applied. I have found Apple's CIFunHouse to be helpful, but it processes each frame as it is captured, whereas I want to apply the filter after the video is taken. How could I go about doing this?
You can take a look at GPUImage: https://github.com/BradLarson/GPUImage
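If you would rather stay with Core Image instead of pulling in GPUImage, AVFoundation can also apply a CIFilter per frame through AVVideoComposition(asset:applyingCIFiltersWithHandler:); the same composition drives both the live AVPlayerLayer preview and an AVAssetExportSession save. A minimal sketch (videoURL and outputURL are placeholders for your own URLs):

    import AVFoundation
    import CoreImage

    let asset = AVAsset(url: videoURL)           // videoURL: the already-recorded clip
    let filter = CIFilter(name: "CISepiaTone")!  // any CIFilter works here

    // Run every frame through the filter as it is requested.
    let composition = AVVideoComposition(asset: asset) { request in
        filter.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
        let output = (filter.outputImage ?? request.sourceImage)
            .cropped(to: request.sourceImage.extent)
        request.finish(with: output, context: nil)
    }

    // Live preview: the existing AVPlayerLayer keeps working, now with the filter applied.
    let playerItem = AVPlayerItem(asset: asset)
    playerItem.videoComposition = composition
    let player = AVPlayer(playerItem: playerItem)

    // Saving: attach the same composition to an export session.
    let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
    export?.videoComposition = composition
    export?.outputURL = outputURL               // outputURL: where to write the filtered file
    export?.outputFileType = .mov
    export?.exportAsynchronously { /* check export?.status here */ }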

Can I use an image to be a poster frame for an audio clip on iOS?

I'm using AVMutableComposition and AVAssetExportSession to composite several discrete audio clips/files together into a single file, similarly to this post, but there will be no "video" track. I'd like to give the track some visual appeal using a still image, so that when the user plays the clip they don't just see a generic QuickTime icon; ideally I'd replace the image with branding or something relevant to the audio content. How would I go about doing this, and is there a way to do it without dramatically increasing the file size (i.e. some way to have a really slow frame rate, or just something so it's not generating 30 fps for what is non-moving art)? Appreciate any help on this.
AVAssetWriter will allow you to create video from a still image. This question provides a great example of how to do so.
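A rough, untested sketch of that idea: write the same frame only twice, at time zero and at the end of the clip, so the video track spans the audio's duration without encoding 30 identical frames per second. makePixelBuffer(from:size:) stands for the same CGContext-based UIImage-to-CVPixelBuffer helper shown in the first answer on this page; it is assumed, not an existing API.

    import AVFoundation
    import UIKit

    // Build a tiny "video" from one still image that covers the given duration.
    func writePosterVideo(image: UIImage, duration: CMTime, to url: URL) throws {
        let size = image.size
        let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: size.width,
            AVVideoHeightKey: size.height])
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        guard let frame = makePixelBuffer(from: image, size: size) else { return }
        // Same frame at the start and at the end; everything in between is that one image.
        for time in [CMTime.zero, duration] {
            while !input.isReadyForMoreMediaData { usleep(10_000) }
            adaptor.append(frame, withPresentationTime: time)
        }
        input.markAsFinished()
        writer.finishWriting { }
    }

The resulting file can then be combined with the audio clips in the AVMutableComposition as in the question.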

Losing audio after processing in AVComposition

I am using AVMutableComposition to combine two pieces of media (one video, one graphic overlay). The video was captured using UIImagePickerController.
Video records fine and audio is there when previewing the recording.
I process with the composition and an export session. The video saves fine (with the overlay), but there is no audio.
This is on iOS 7.
I'm not specifically doing anything with audio in the composition. I just assumed it would "come along" with the video file. Is that accurate, or do I need to create a dedicated audio track in the composition?
After much research, I found the solution to this in another Stack Overflow question (and answer):
iOS AVFoundation Export Session is missing audio.
Many thanks to that user.
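From what that linked answer describes, the audio does not come along automatically: you have to add a dedicated audio track to the composition and insert the source asset's audio into it yourself. A minimal sketch of that fix (the function name is mine):

    import AVFoundation

    // Build a composition that carries both the video and the audio of the source asset.
    func makeComposition(from asset: AVAsset) -> AVMutableComposition {
        let composition = AVMutableComposition()
        let range = CMTimeRange(start: .zero, duration: asset.duration)

        if let videoTrack = asset.tracks(withMediaType: .video).first,
           let compVideo = composition.addMutableTrack(withMediaType: .video,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid) {
            try? compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
        }

        // This is the part that is easy to miss: without it the exported file is silent.
        if let audioTrack = asset.tracks(withMediaType: .audio).first,
           let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid) {
            try? compAudio.insertTimeRange(range, of: audioTrack, at: .zero)
        }
        return composition
    }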
