Most efficient way to display procedurally generated video frames on iOS - ios

I need to display a sequence of procedurally generated images as a video sequence, preferably with built-in controls (controls would be nice to have, but they're not a requirement), and I'm just looking for a bit of guidance on which API to use. There seem to be a number of options, but I'm not sure which one is best suited to my needs. GPUImage, Core Video, Core Animation, OpenGL ES or something else?
Targeting just iOS 6 and up would be no problem if that helps.
Update: I'd prefer something that lets me display the video frames directly rather than writing them to a temporary movie first.

Check out the animationImages property of UIImageView. It may do what you are looking for. Basically you store all of your images in that array and the view handles the animation for you. See the UIImageView reference.
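A minimal sketch of that approach, assuming frames is an NSArray of UIImage objects you have already generated:

UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
imageView.animationImages = frames;                 // the pre-generated frames
imageView.animationDuration = frames.count / 30.0;  // play back at roughly 30 fps
imageView.animationRepeatCount = 1;                 // 0 means loop forever
[self.view addSubview:imageView];
[imageView startAnimating];

Keep in mind that every frame is held in memory at once, so this is only practical for short sequences; for long or unbounded sequences you would generate frames on the fly and update the view's image property (or draw via Core Animation / OpenGL ES) yourself.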

Related

Drawing video frames (images) - ios

I'm working on a project and one of the features is to play a video from an RTMP path. I'm trying to find the best way to draw the frames. Right now I'm using a UIImageView and it works, but it's not very elegant or efficient. OpenGL might do the trick, but I've never used it before. Do you have any ideas about what I should use? If you agree OpenGL is the way to go, can you give me a code snippet I could use for drawing the frames?
Thank you.
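For reference, the UIImageView approach usually comes down to wrapping each decoded frame's pixel data in a CGImage and handing it to the view. A rough sketch, assuming pixels is a tightly packed RGBA buffer of width x height produced by your RTMP decoder (the names are illustrative), run per frame on the main thread:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                         8,          // bits per component
                                         width * 4,  // bytes per row
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);

self.imageView.image = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);

The OpenGL ES equivalent would upload the same buffer as a texture (glTexSubImage2D) and draw it on a full-screen quad, which avoids the per-frame CGImage round trip.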

non-destructive filter in OpenGL

I am making a photography app for the iPhone and will be using OpenGL to apply effects to the images. Now, I'm a bit of an OpenGL noob and was wondering: is there a way to build a filter (saturation & blur) that can be easily reversed?
To explain: the user takes a picture and then applies a blur of 5 and a saturation of 3 (arbitrary values), but then comes back and turns it down to a blur of 3 and a saturation of 2. Would the result be the same as if he had applied a blur of 3 and a saturation of 2 to the original image?
Save the original image and store the filter changes as an array of instructions that you can replay at a later date. This will also give you selective undo ability.
You cannot reverse filters like blur. Such filters lose some of the information in the image, so it is hard to get it back. See the discussion here.
Using OpenGL (or any other API) you can easily apply filters as "post-processing" effects. Just render a quad with your texture to some render target, and you will have the transformed image as output.
Here is a link to an oZone3D tutorial on how to do that.
You can save the created output (but under a different filename!).
Non-destructive editing is API-agnostic; you can implement it with OpenGL, in software, or anything else. All you really need to do is keep the source data aside instead of overwriting it. You can even push the "history" back to disk to avoid bloating RAM and GPU memory.
From the context of your question I assume you are using some of the out-of-the-box, ready-to-use functions Apple provides in its APIs. In that case you rely on a stock implementation, so you are stuck with its destructive behavior until you come up with something better yourself.
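To make the "keep the original and replay the instructions" idea concrete, here is a rough sketch using Core Image rather than raw OpenGL (the filter names are real Core Image filters, but the structure, sourceImage and the parameter values are just illustrative); the same pattern works with a hand-rolled shader pipeline:

#import <CoreImage/CoreImage.h>

// The original is never modified; the current settings are re-applied to it each time.
CIImage *original = [CIImage imageWithCGImage:sourceImage.CGImage];
NSArray *steps = @[
    @{ @"name": @"CIGaussianBlur",  @"params": @{ kCIInputRadiusKey:     @3.0 } },
    @{ @"name": @"CIColorControls", @"params": @{ kCIInputSaturationKey: @2.0 } },
];

CIImage *current = original;
for (NSDictionary *step in steps) {
    CIFilter *filter = [CIFilter filterWithName:step[@"name"]];
    [filter setValue:current forKey:kCIInputImageKey];
    NSDictionary *params = step[@"params"];
    [params enumerateKeysAndObjectsUsingBlock:^(NSString *key, id value, BOOL *stop) {
        [filter setValue:value forKey:key];
    }];
    current = filter.outputImage;
}

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef result = [context createCGImage:current fromRect:original.extent];

When the user changes a slider, you edit the corresponding entry in steps and re-run the loop from original, so a blur of 3 applied later really is the same as a blur of 3 applied to the untouched picture.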

iOS: Draw on top of AV video, then save the drawing in the video file

I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video, which I believe will not be too difficult. The harder part, and something that I have not been able to find any examples of, will be combining the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition, but I have no idea exactly how. Your help would be greatly appreciated.
Mark
I do not think you can actually save a drawing into a video file in iOS. You could, however, consider saving the drawing in a separate transparent view and synchronizing that overlay with the video. In other words, say the user circled something at 3 minutes 42 seconds into the video; when the video is played back, you overlay the saved drawing onto it at the 3:42 mark. It's not what you want, but I think it is as close as you can get right now.
EDIT: Actually, there might be a way after all. Take a look at this tutorial. I have not read the whole thing, but it seems to cover the overlay functionality you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
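That tutorial essentially boils down to AVMutableVideoComposition and its animationTool. A hedged sketch, assuming composition is the AVMutableComposition holding the recorded video, videoSize its natural size, annotationLayer the CALayer containing the drawing, and outputURL where the finished movie should go:

#import <AVFoundation/AVFoundation.h>

// Layer tree: the rendered video goes into videoLayer, the drawing sits on top of it.
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer  = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame  = parentLayer.frame;
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:annotationLayer];

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize    = videoSize;
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
    videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                            inLayer:parentLayer];

// The video composition still needs an instruction covering the video track.
AVAssetTrack *videoTrack = [[composition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
instruction.layerInstructions = @[[AVMutableVideoCompositionLayerInstruction
    videoCompositionLayerInstructionWithAssetTrack:videoTrack]];
videoComposition.instructions = @[instruction];

// Exporting burns the overlay into the resulting movie file.
AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                presetName:AVAssetExportPresetHighestQuality];
export.videoComposition = videoComposition;
export.outputURL        = outputURL;
export.outputFileType   = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{
    // Inspect export.status and export.error here.
}];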

iOS audio: cutting and stitching audio?

I'm a Unity dev and need to help colleagues do this natively in Obj-C. In Unity it's no big deal:
1) Samples are stored in memory as a List of float[].
2) A helper function returns a float[] of n size for any given sample, at any given offset.
3) Another helper function fades the data if needed.
4) An AudioClip object is created with the right size to accommodate all cut samples, and is then filled at the appropriate offsets.
5) The AudioClip is assigned to a player component (AudioSource).
6) AudioSource.Play(ulong offsetInSamples) plays at a sample-accurate time in the future. Looping is just a matter of setting the AudioSource object's loop parameter.
I would very much appreciate it if someone could point me towards the right classes to achieve similar results in Obj-C on iOS devices. I'm pretty sure a lot of iOS audio newbies would be interested too. Many thanks in advance!
Gregzo
A good overview of the relevant audio APIs available in iOS is here.
The highest-level framework that makes sense for patching together audio clips, setting their volume levels, and playing them back in your case is probably AVFoundation.
It will involve creating AVAssets, adding them to AVPlayerItems, possibly putting them into AVMutableCompositions to merge multiple items together and adjust their volumes (audioMix), and then playing them back with AVPlayer.
AVFoundation works with AVAsset; for converting between the relevant formats and lower-level bytes you'll want to have a look at AudioToolbox (I can't post more than two links yet).
For a somewhat simpler API with less control, have a look at AVAudioPlayer. If you need greater control (e.g. games: real-time / low-latency), you might need to use OpenAL for playback.
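A hedged sketch of that AVMutableComposition route, assuming urlA and urlB point at local audio files and you want them stitched back to back with the second clip at half volume:

#import <AVFoundation/AVFoundation.h>

AVURLAsset *clipA = [AVURLAsset URLAssetWithURL:urlA options:nil];
AVURLAsset *clipB = [AVURLAsset URLAssetWithURL:urlB options:nil];

AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *track =
    [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                             preferredTrackID:kCMPersistentTrackID_Invalid];

// Append clip A, then clip B right after it; the time range is the "cut" you want to keep.
NSError *error = nil;
[track insertTimeRange:CMTimeRangeMake(kCMTimeZero, clipA.duration)
               ofTrack:[[clipA tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                atTime:kCMTimeZero
                 error:&error];
[track insertTimeRange:CMTimeRangeMake(kCMTimeZero, clipB.duration)
               ofTrack:[[clipB tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                atTime:clipA.duration
                 error:&error];

// Per-track volume is handled by an audio mix.
AVMutableAudioMixInputParameters *params =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
[params setVolume:0.5 atTime:clipA.duration];
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[params];

AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:composition];
item.audioMix = audioMix;
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];

Note that reading duration synchronously can block while the asset is loaded; in production you would load it with loadValuesAsynchronouslyForKeys:. For the sample-accurate scheduling and looping parts of your Unity workflow, AVFoundation is fairly coarse; that is where AudioToolbox / OpenAL come in, as mentioned above.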

How do I add a still image to an AVComposition?

I have an AVMutableComposition with a video track and I would like to add a still image into the video track, to be displayed for some given time. The still image is simply a PNG. I can load the image as an asset, but that’s about it, because the resulting asset does not have any tracks and therefore cannot be simply inserted using the insertTimeRange… methods.
Is there a way to add still images to a composition? It looks like the answer is somewhere in Core Animation, but the whole thing seems to be a bit above my head and I would appreciate a code sample or some information pointers.
OK. There’s a great video called Editing Media with AV Foundation from WWDC that explains a lot. You can’t insert images directly into the AVComposition timeline, at least I did not find any way to do that. But when exporting or playing an asset you can refer to an AVVideoComposition. That’s maybe not a perfect name for the class, since it allows you to mix between various video tracks in the asset, very much like AVAudioMix does for audio. And AVVideoComposition has an animationTool property that lets you throw Core Animation layers (CALayer) into the mix. CALayer has a contents property that can be assigned a CGImageRef. That does not help in my case, but it might help somebody else.
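For anyone who does go the animationTool route, the still image really is just a CALayer whose contents is the CGImage, and its visibility window can be expressed with a Core Animation animation in movie time (AVCoreAnimationBeginTimeAtZero). A rough sketch, assuming stillImage is the PNG loaded as a UIImage, renderSize matches the video composition, and the time values are arbitrary:

// The image lives in an ordinary CALayer; contents takes a CGImageRef.
CALayer *imageLayer = [CALayer layer];
imageLayer.frame    = CGRectMake(0, 0, renderSize.width, renderSize.height);
imageLayer.contents = (__bridge id)stillImage.CGImage;
imageLayer.opacity  = 0.0f;   // hidden outside its time window

// Show the layer from 2.0 s to 5.0 s of movie time.
CABasicAnimation *show = [CABasicAnimation animationWithKeyPath:@"opacity"];
show.fromValue = @1.0;
show.toValue   = @1.0;
show.beginTime = AVCoreAnimationBeginTimeAtZero + 2.0;
show.duration  = 3.0;         // the layer snaps back to opacity 0 afterwards
[imageLayer addAnimation:show forKey:@"showStillImage"];

// imageLayer is then added to the parent layer that gets handed to
// AVVideoCompositionCoreAnimationTool along with the video layer.

Keep in mind the image only appears where the video composition is applied (for example during export), not in the AVComposition's tracks themselves, which matches what the answer above describes.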
I also need still images in my composition, but my line of thinking is a little different: insert on-the-fly movies of black frames wherever the images should appear (possibly a single such video would suffice), and keep a dictionary linking those composition time ranges to the actual desired images. When the matching time range arrives in my custom compositor, pull out the desired image and paint it into the output pixel buffer, ignoring the incoming black frames from the composition. I think that'd be another way of doing it.
