What's the best way to composite frame-based animated stickers over recorded video?

We want to allow the user to place animated "stickers" over video that they record in the app and are considering different ways to composite these stickers:
1. Create a video in code from the frame-based animated stickers (which can be rotated and have translations applied to them) using AVAssetWriter. The problem is that AVAssetWriter only writes to a file and doesn't keep transparency. This would prevent us from being able to overlay it on the video using AVMutableComposition.
2. Create .mov files ahead of time for our frame-based stickers and composite them using AVMutableComposition and layer instructions with transformations. The problem with this is that there are no tools for easily converting our PNG-based frames to a .mov while maintaining an alpha channel, so we'd have to write our own.
3. Create separate CALayers for each frame in the sticker animations. This could potentially create a very large number of layers, depending on the frame rate of the video.
Or any better ideas?
Thanks.

I would suggest that you take a look at my blog post on this specific subject. Basically, the example there shows how RGBA video data can be loaded from a file attached to the app resources. That file is produced from a .mov containing Animation-codec RGBA data on the desktop; a conversion step is required to get the data from the desktop into iOS, since plain H.264 cannot carry an alpha channel directly (as you have discovered). Note also that older hardware may struggle to decode the user-recorded H.264 video plus a second H.264 stream on top of it, so using the CPU rather than the H.264 hardware for the sticker is actually the better approach.
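For illustration only, here is a minimal sketch (not the code from the blog post) of the CPU compositing idea: draw a BGRA sticker frame, alpha intact, onto a decoded video pixel buffer with Core Graphics. The function name and the transform handling are hypothetical.

```swift
import CoreVideo
import UIKit

// Hypothetical helper: draw a sticker image (with alpha) onto a decoded
// BGRA video pixel buffer at a given transform, entirely on the CPU.
func compositeSticker(_ sticker: UIImage,
                      onto pixelBuffer: CVPixelBuffer,
                      transform: CGAffineTransform) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    let width  = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    // Wrap the buffer's memory in a CGContext so we can draw into it directly.
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                    CGBitmapInfo.byteOrder32Little.rawValue),
          let cgSticker = sticker.cgImage else { return }

    // Apply the sticker's rotation/translation, then draw it with its alpha intact.
    context.concatenate(transform)
    context.draw(cgSticker, in: CGRect(origin: .zero, size: sticker.size))
}
```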

Related

Any way to have a real-time preview of AVMutableComposition? If not, how does Final Cut Pro do it?

Is it possible to have a real-time preview of an AVMutableComposition which has some layer instructions applied to its assets?
The only class I found that connects AVMutableComposition with AVVideoComposition (which holds the instructions) is AVAssetExportSession. Does that mean I must export first in order to play a preview?
If so, how do apps like Final Cut Pro offer a real-time preview while I edit part of the video? Do they cut the whole video into multiple chunks, export only what has changed, and reuse everything else?
This sounds like a difficult problem - is there any library that would help with cutting the video into small chunks for export and keeping an eye on cache invalidation?
Cheers,
M.
I don't know if this is still relevant, but you can always extract each frame from the video, manipulate it accordingly, then render it to the screen.
If it's from an AVCaptureSession you can get CMSampleBuffers from the callbacks; if it's a file, AVAssetReader is your best bet. You can then use either Core Image or Metal to manipulate the frames and render them in real time.
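If it helps, here is a rough sketch of that file-based path, assuming an asset on disk: pull BGRA frames with AVAssetReader and push each one through a Core Image filter. The filter choice is a placeholder and the display step is omitted.

```swift
import AVFoundation
import CoreImage

// Sketch: read decoded BGRA frames from a file with AVAssetReader and
// apply a Core Image filter to each one. Rendering/display is omitted.
func processFrames(of asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    let ciContext = CIContext()
    let filter = CIFilter(name: "CISepiaTone")! // placeholder filter

    while let sampleBuffer = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        filter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
        if let result = filter.outputImage {
            // Render back into the same buffer; a real preview would hand this
            // to a Metal layer or an AVSampleBufferDisplayLayer instead.
            ciContext.render(result, to: pixelBuffer)
        }
    }
}
```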
There is no real-time preview with AVMutableComposition; they may create a time slot for every change and manage its visibility as you move the slider below.

iOS postprocessing - overlay timestamp on video and export

I am working on an application where video and time/GPS/accelerometer data is simultaneously recorded to separate files.
I can play the video and have my overlay appear perfectly in realtime, but I cannot simply export this.
I want to post-process the video and overlay the time and coordinates on the video.
There are other shapes that will be overlayed which change size/position on each frame.
I have tried using AVMutableComposition and adding CALayers, with limited results.
This works to an extent, but I cannot synchronise the timestamp with the video. I could use a CAKeyframeAnimation with values + keyTimes, but the number of values I would need to work with is excessive.
My current approach is to render a separate video consisting of CGImages created from the data. This works well, but I would need to use a chroma key to get transparency in the overlay, and I have read that there will likely be quality issues after doing this.
Is there a simpler approach that I should be looking at?
I understand that render speed will not be fantastic, however I do not wish to require a separate 'PC' application to render the video.
Use AVAssetReader for the recorded video. Get the CMSampleBufferRef, get its timestamp, draw the time onto the sample buffer, then write the buffer to an AVAssetWriterInputPixelBufferAdaptor. A similar approach works for video that is currently being recorded.
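A condensed sketch of that loop, assuming the reader output and the writer's pixel buffer adaptor are already configured elsewhere; the drawing is plain Core Graphics/UIKit text and the function name is illustrative.

```swift
import AVFoundation
import UIKit

// Sketch of the suggested pipeline: pull frames from an AVAssetReaderTrackOutput,
// stamp each one with its presentation time, and hand it to the writer's
// pixel buffer adaptor. Reader/writer setup is assumed to be done elsewhere.
func stampAndWriteFrames(from output: AVAssetReaderTrackOutput,
                         to adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    while let sample = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        let timestamp = CMSampleBufferGetPresentationTimeStamp(sample)

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: CVPixelBufferGetWidth(pixelBuffer),
            height: CVPixelBufferGetHeight(pixelBuffer),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                        CGBitmapInfo.byteOrder32Little.rawValue) {
            // Flip to UIKit's top-left origin before drawing text.
            context.translateBy(x: 0, y: CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
            context.scaleBy(x: 1, y: -1)
            UIGraphicsPushContext(context)
            let text = String(format: "%.2f s", CMTimeGetSeconds(timestamp))
            (text as NSString).draw(at: CGPoint(x: 20, y: 20),
                                    withAttributes: [.font: UIFont.systemFont(ofSize: 36),
                                                     .foregroundColor: UIColor.white])
            UIGraphicsPopContext()
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        // Wait until the writer input is ready, then append the stamped frame.
        while !adaptor.assetWriterInput.isReadyForMoreMediaData { usleep(10_000) }
        if !adaptor.append(pixelBuffer, withPresentationTime: timestamp) { break }
    }
}
```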
Use the AVVideoCompositing protocol: https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVVideoCompositing_Protocol/index.html
This will give you frame-by-frame callbacks with the pixel buffers so you can do what you want.
With this protocol you will be able to take each frame and overlay whatever you would like. Take a look at this sample - https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html - to see how to handle frame-by-frame modifications. If you take advantage of the AVVideoCompositing protocol, you can set a custom video compositor and a video composition on your AVPlayerItem and AVAssetExportSession to render/export what you want.
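For orientation, a bare-bones skeleton of a custom compositor conforming to AVVideoCompositing might look like the following; it is only a sketch in the spirit of the AVCustomEdit sample, with the actual overlay drawing stubbed out.

```swift
import AVFoundation
import CoreVideo

// Minimal AVVideoCompositing skeleton: for each composition request, grab the
// source frame, draw an overlay on it (stubbed out), and finish the request.
final class OverlayCompositor: NSObject, AVVideoCompositing {

    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    private var renderContext: AVVideoCompositionRenderContext?

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        renderContext = newRenderContext
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let sourceFrame = request.sourceFrame(byTrackID: trackID),
              let destination = renderContext?.newPixelBuffer() else {
            request.finish(with: NSError(domain: "OverlayCompositor", code: -1))
            return
        }

        // Stub: real code would copy `sourceFrame` into `destination` and draw
        // the overlay for request.compositionTime on top, using Core Graphics,
        // Core Image, or Metal.
        _ = sourceFrame

        request.finish(withComposedVideoFrame: destination)
    }
}

// Usage: create an AVMutableVideoComposition, set its customVideoCompositorClass
// to OverlayCompositor.self, and assign it as the videoComposition of an
// AVPlayerItem (for preview) or an AVAssetExportSession (for export).
```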

How would I concat several videos of different sizes into a single video?

I have several video files of different sizes (same aspect ratio, though) in the device's Camera Roll, and I'm looking to create a single video from them, scaling up the smaller video files to fill the whole "video frame size".
I'm looking into AVFoundation, and I'm not sure whether I need to go down to the lower level of Core Media or should be looking at AVFoundation's AVAssetWriter. It doesn't seem like AVAssetWriter accepts an AVComposition.
Which approach should I be taking for this kind of video related app?
AVMutableComposition is what you want. AVAssetWriter can write the file to disk since AVComposition is a subclass of AVAsset.
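As a rough sketch of that approach (with the actual file writing left to an AVAssetExportSession, or to an AVAssetReader feeding an AVAssetWriter, since the composition is an AVAsset), you could append the clips and scale each one up with layer instructions like this; the helper name and the 30 fps frame duration are illustrative.

```swift
import AVFoundation

// Sketch: append clips of differing sizes end-to-end into one composition and
// scale each one up to a common render size with layer instructions.
func makeConcatenatedComposition(from assets: [AVAsset], renderSize: CGSize) throws
    -> (AVMutableComposition, AVMutableVideoComposition) {

    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .video,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) else {
        throw NSError(domain: "Concat", code: -1)
    }
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    var cursor = CMTime.zero

    for asset in assets {
        guard let sourceTrack = asset.tracks(withMediaType: .video).first else { continue }
        let range = CMTimeRange(start: .zero, duration: asset.duration)
        try track.insertTimeRange(range, of: sourceTrack, at: cursor)

        // Scale this clip so it fills the common render size (same aspect ratio).
        let scale = renderSize.width / sourceTrack.naturalSize.width
        layerInstruction.setTransform(CGAffineTransform(scaleX: scale, y: scale), at: cursor)

        cursor = CMTimeAdd(cursor, asset.duration)
    }

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: cursor)
    instruction.layerInstructions = [layerInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.instructions = [instruction]

    return (composition, videoComposition)
}
```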

Can I use an image to be a poster frame for an audio clip on iOS?

I'm using AVMutableComposition and AVAssetExportSession to composite several discrete audio clips/files together into a single file, similarly to this post, but there will be no "video" track. I'd like to give the track some visual appeal using a still image, so that when the user plays the clip they don't just see a generic QuickTime icon; ideally I'd replace the image with branding or something relevant to the audio content. How would I go about doing this, and is there a way to do it without dramatically increasing the file size (i.e. some way to have a really slow frame rate, or just something so it's not generating 30 fps for what is non-moving art)? Appreciate any help on this.
AVAssetWriter will allow you to create video from a still image. This question provides a great example of how to do so.
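In case it helps, here is a rough sketch of that idea (not the linked question's code): write the same still image a handful of times at one frame per second so the resulting file stays tiny. The helper names and the 1 fps choice are illustrative.

```swift
import AVFoundation
import UIKit

// Helper: render a UIImage into a BGRA CVPixelBuffer.
func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    let width = Int(image.size.width), height = Int(image.size.height)
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, nil, &buffer)
    guard let pixelBuffer = buffer, let cgImage = image.cgImage else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    if let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                               width: width, height: height, bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                                           CGBitmapInfo.byteOrder32Little.rawValue) {
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }
    return pixelBuffer
}

// Sketch: write a short video track from one still image by appending the same
// pixel buffer a few times at a very low frame rate (small file, nothing moves).
func writeVideo(from image: UIImage, to url: URL, duration seconds: Int) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(image.size.width),
        AVVideoHeightKey: Int(image.size.height)
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    guard writer.startWriting() else { throw writer.error ?? NSError(domain: "Poster", code: -2) }
    writer.startSession(atSourceTime: .zero)

    guard let buffer = pixelBuffer(from: image) else { throw NSError(domain: "Poster", code: -1) }

    // One frame per second is plenty for static art.
    for second in 0...seconds {
        while !input.isReadyForMoreMediaData { usleep(10_000) }
        _ = adaptor.append(buffer, withPresentationTime: CMTime(value: CMTimeValue(second), timescale: 1))
    }

    input.markAsFinished()
    writer.finishWriting { } // completion handling omitted in this sketch
}
```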

Load video from iPhone library, modify frame and play it in real-time

I'm looking for tips on developing an application for iPhone/iPad that can process video (let's consider only local files stored on the device, for simplicity) and play it back in real time. For example, you could pick any movie, choose an "Old movie" filter, and have it look like it's playing on an old tube TV.
In order to make this idea real I need to implement two key features:
1) Grab frames and the audio stream from a movie file and get access to individual frames (I'm interested in raw pixel buffers in BGRA or at least a YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more capable component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is done and it's fast (it takes less than the duration of one frame).
I'm not asking for ready solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking across the video is tricky. Maybe someone can point to a more efficient solution?
Video playback. I've done this with a custom view and GLES to render a rectangular texture with a video frame inside of it. As far as I know it's the fastest way to draw bitmaps.
Problems
Sound samples need to be played manually.
AVAssetReader grabbing should be synchronised with the movie's frame rate; otherwise the movie will play too fast or too slow.
AVAssetReader allows only sequential frame access - you can't seek forward and backward. The only solution I've found is to delete the old reader and create a new one with a trimmed time range (sketched below).
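Here is a small sketch of that workaround, assuming a local file asset: build a fresh AVAssetReader whose timeRange starts at the seek target and read on from there.

```swift
import AVFoundation

// Sketch of the seeking workaround above: AVAssetReader can't seek, so build a
// fresh reader whose timeRange starts at the target time and read from there.
func makeReader(for asset: AVAsset, startingAt time: CMTime) throws
    -> (AVAssetReader, AVAssetReaderTrackOutput)? {
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_32BGRA])
    reader.add(output)

    // Trim the time range so decoding begins at (roughly) the seek target.
    reader.timeRange = CMTimeRange(start: time,
                                   duration: CMTimeSubtract(asset.duration, time))
    reader.startReading()
    return (reader, output)
}
```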
This is how you would load a video from the camera roll.
This is a way to start processing video. Brad Larson did a great job.
How to grab video frames.
You can use AVPlayer + AVPlayerItem; they give you a chance to apply a filter to the displayed image.
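One way to do that (a sketch using the Core Image handler on AVVideoComposition, not necessarily what this answer had in mind) is to build a filtered video composition and assign it to the AVPlayerItem; the filter name is a placeholder for the "old movie" look.

```swift
import AVFoundation
import CoreImage

// Sketch: play a library video through AVPlayer while a Core Image filter is
// applied to every frame in real time via the video composition's filter handler.
func makeFilteredPlayerItem(for asset: AVAsset) -> AVPlayerItem {
    let filter = CIFilter(name: "CIPhotoEffectNoir")! // placeholder "old movie" look

    let composition = AVVideoComposition(asset: asset) { request in
        filter.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
        let output = filter.outputImage?.cropped(to: request.sourceImage.extent)
        request.finish(with: output ?? request.sourceImage, context: nil)
    }

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return item
}

// Usage: let player = AVPlayer(playerItem: makeFilteredPlayerItem(for: asset)),
// attach an AVPlayerLayer to a view, then call player.play().
```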
