iOS: Draw on top of AV video, then save the drawing in the video file

I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video, which I believe will not be too difficult. The harder part, and something that I have not been able to find any examples of, will be to combine the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition but have no idea exactly how. Your help would be greatly appreciated.
Mark

I do not think that you can actually save a drawing into a video file in iOS. You could, however, save the drawing separately and synchronize it with the video by overlaying it on a transparent view during playback. In other words, if the user circled something at 3 minutes 42 seconds into the video, then when the video is played back you overlay the saved drawing onto the video at the 3:42 mark. It's not what you want, but I think it is as close as you can get right now.
EDIT: Actually there might be a way after all. Take a look at this tutorial. I have not read the whole thing but it seems to incorporate the overlay function you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
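The gist of that tutorial's approach is to put the drawing on a CALayer and let AVVideoCompositionCoreAnimationTool composite it over the video frames during export. Below is a minimal, untested sketch along those lines; the method name, the asset argument, and the drawingLayer parameter are assumptions for illustration, not the tutorial's code.

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: burn a drawing layer into an exported copy of the video.
// `asset` is the recorded movie, `drawingLayer` holds the user's annotations.
- (void)exportVideoWithOverlay:(AVAsset *)asset drawingLayer:(CALayer *)drawingLayer {
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    CGSize size = videoTrack.naturalSize;

    // Parent/video layers that the animation tool composites together.
    CALayer *videoLayer = [CALayer layer];
    CALayer *parentLayer = [CALayer layer];
    videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
    parentLayer.frame = videoLayer.frame;
    drawingLayer.frame = videoLayer.frame;
    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:drawingLayer]; // drawing sits on top of the video frames

    AVMutableVideoComposition *videoComposition =
        [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
    videoComposition.animationTool =
        [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                                                      inLayer:parentLayer];

    AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                    presetName:AVAssetExportPresetHighestQuality];
    export.videoComposition = videoComposition;
    export.outputFileType = AVFileTypeQuickTimeMovie;
    export.outputURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"annotated.mov"]];
    [export exportAsynchronouslyWithCompletionHandler:^{
        NSLog(@"Export status: %ld", (long)export.status);
    }];
}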

Related

Audio bars visualizer in iOS

I'm looking for a way to create an audio bars visualizer similar to this in iOS.
Every white bar will move up and down depending on the audio wave. I'm really lost because I haven't much experience dealing with audio in Objective-C.
EDIT: What I'm seeking is what the Overcast app does in its visualizer (the group of vertical orange bars on the lower part of the podcast's image).
Can anyone help?
Thanks
EDIT: Thanks to Tomer's answer I finally made it. First I did this tutorial in order to make it all clear. Then I created my own VisualizerView for my project; you can find it in this gist. Maybe it's not perfect, but it does what I needed to do.
Generally, you have a few options if you want to get an idea of what something sounds like in iOS:
Use the simple AVAudioPlayer audio player, and then use the [audioPlayer averagePowerForChannel:] method to get the average audio level for the current moment. Check out this tutorial, and see the metering sketch after this answer.
Use the Audio Queue API, which lets you send whatever audio you want to the speaker: you would read audio from your source and fill the buffers with it every time. (If you're reading from a file, use AVAssetReader.) This way you always know exactly what waveform you're playing, so you can, for example, calculate its average power or process it in other ways like FFT. Then you'd update the bars accordingly.
EDIT: The standard way of doing such a thing is to use the Fast Fourier Transform (FFT) - it extracts frequency information from a sound. Here's a good example of using it on iOS (Apple's guide here). But, of course, to use it you have to know exactly what waveform you're playing every time, so you'd probably want to use a lower-level API such as Audio Queue.
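As a starting point for the first option, here is a minimal, untested metering sketch. It assumes self.player is a playing AVAudioPlayer and self.barViews is an array of bar views; the polling interval and the per-bar jitter are arbitrary choices, not part of any particular tutorial.

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

- (void)startMetering {
    self.player.meteringEnabled = YES;   // metering must be enabled before updateMeters works
    [self.player play];

    [NSTimer scheduledTimerWithTimeInterval:0.05
                                     target:self
                                   selector:@selector(updateBars:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)updateBars:(NSTimer *)timer {
    [self.player updateMeters];
    // averagePowerForChannel: returns decibels in roughly -160...0; map to 0...1.
    float db = [self.player averagePowerForChannel:0];
    float level = powf(10.0f, db / 20.0f);
    for (UIView *bar in self.barViews) {
        // Grow each bar up from the bottom of its superview, with a little jitter per bar.
        CGFloat height = MAX(2.0, level * 100.0 * (0.5 + drand48() * 0.5));
        CGRect f = bar.frame;
        f.origin.y = CGRectGetMaxY(bar.superview.bounds) - height;
        f.size.height = height;
        bar.frame = f;
    }
}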

Adding cross dissolve effect - iOS video creation

I would like to create a video from a number of images and add a cross dissolve effect between the images.
How can this be done? I know images can be written into a video file, but I don't see where to apply an effect. Should each image be turned into a video, and then the videos written into the full video with the transition effect?
I have searched around and cannot find much information on how this can be done, e.g. how to use AVMutableComposition and whether it is viable to create videos consisting of individual images and then apply the cross dissolve effect.
Any information will be greatly appreciated.
If you want to dig around in the bowels of AVFoundation for this, I strongly suggest you take a look at this presentation, especially starting at slide 74. Be prepared to do a large amount of work to pull this off...
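To give a rough idea of what that route involves, here is a minimal, untested sketch of the core of an AVFoundation cross dissolve: two video tracks in an AVMutableComposition overlap for the transition, and the outgoing track's opacity is ramped down over that range. The variables transitionRange, trackA, trackB, and renderSize are assumptions, and the instructions for the non-overlapping parts are omitted.

#import <AVFoundation/AVFoundation.h>

// trackA/trackB are composition video tracks already inserted so that they
// overlap for `transitionRange` (a CMTimeRange you define yourself).
AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = transitionRange;

AVMutableVideoCompositionLayerInstruction *fromLayer =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackA];
// Fade the outgoing track from fully opaque to fully transparent across the overlap.
[fromLayer setOpacityRampFromStartOpacity:1.0 toEndOpacity:0.0 timeRange:transitionRange];

AVMutableVideoCompositionLayerInstruction *toLayer =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackB];

instruction.layerInstructions = @[fromLayer, toLayer];

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.instructions = @[instruction];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = renderSize; // e.g. the video track's naturalSize
// Hand videoComposition to an AVAssetExportSession or AVPlayerItem as usual.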
If you'd like to get down to business several orders of magnitude faster, and don't mind incorporating a 3rd party library, I'd highly recommend you try GPUImage
You'll find it quite simple to push images into a video and swap them out at will, as well as apply any number of blend filters to the transitions, simply by varying a single mix property of your blend filter over the time your transition happens.
I'm currently doing this right now. To make things short: you need an AVPlayer into which you put an AVPlayerItem that can be empty. You then need to set the forwardPlaybackEndTime of your AVPlayerItem to the duration of your animation. You then create an AVPlayerLayer that you initialize with your AVPlayer (actually, you may not need the AVPlayerLayer if you will not put video in your animation). Then the important part: you create an AVSynchronizedLayer that you initialize with your previous AVPlayerItem. This AVSynchronizedLayer and any sublayers it holds will be synchronized with your AVPlayerItem. You can then create some simple CALayers holding your images (through the contents property) and add your CAKeyframeAnimation to each layer on the opacity property. Now any animation on those sublayers will follow the time of your AVPlayerItem. To start the animation, simply call play on your AVPlayer. That's the theory for playback. If you want to export this animation to an mp4 you will need to use AVVideoCompositionCoreAnimationTool, but it's pretty similar.
For a code example, see this code snippet to create the animation.
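The following is a rough, untested sketch of that playback setup, not taken from the snippet linked above. The names containerView, images, and self.blankVideoURL, as well as the three-seconds-per-image timing, are assumptions for illustration.

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

- (void)setUpSlideshowInView:(UIView *)containerView images:(NSArray<UIImage *> *)images {
    // A near-empty player item whose timeline drives the animation.
    AVPlayerItem *item = [[AVPlayerItem alloc] initWithAsset:[AVAsset assetWithURL:self.blankVideoURL]];
    item.forwardPlaybackEndTime = CMTimeMakeWithSeconds(images.count * 3.0, 600);
    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

    // Every sublayer of the synchronized layer follows the item's clock.
    AVSynchronizedLayer *syncLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:item];
    syncLayer.frame = containerView.bounds;

    NSTimeInterval t = 0;
    for (UIImage *image in images) {
        CALayer *imageLayer = [CALayer layer];
        imageLayer.frame = syncLayer.bounds;
        imageLayer.contents = (__bridge id)image.CGImage;
        imageLayer.opacity = 0.0;

        // Fade in, hold, fade out; overlapping fades of adjacent images give the dissolve.
        CAKeyframeAnimation *fade = [CAKeyframeAnimation animationWithKeyPath:@"opacity"];
        fade.values = @[@0.0, @1.0, @1.0, @0.0];
        fade.keyTimes = @[@0.0, @0.25, @0.75, @1.0];
        fade.beginTime = (t == 0) ? AVCoreAnimationBeginTimeAtZero : t; // 0 means "now" to Core Animation
        fade.duration = 4.0;
        fade.removedOnCompletion = NO;
        fade.fillMode = kCAFillModeForwards;
        [imageLayer addAnimation:fade forKey:@"fade"];

        [syncLayer addSublayer:imageLayer];
        t += 3.0;
    }

    [containerView.layer addSublayer:syncLayer];
    [player play];
}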

playing images sequentially in ios

Suppose I have multiple frames from a video. I would like to play these frames in a movie player. These frames can change at any point in time, so it should work like a callback: the player requests each frame and the program provides the frame in response.
Is this possible in iOS?
Please guide me in the right direction in order to achieve this.
Thanks in advance
mia.
You are not going to be able to implement that type of approach using the built-in movie player. But, if you are just going to loop through video frames stored in PNG files in a directory, that would not be too hard to implement. You could take a look at this code as a starting point. This source code is completely free and does what you need.
PNGAnimatorDemo.zip
If you want to do some more advanced stuff, take a look at the AVImageFrameDecoder class in the AVAnimator library (Google it to find out more).
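If you only need the basic looping behaviour, a minimal sketch (independent of the linked demo) is to swap PNG frames into a UIImageView on a timer. The frame file names (frame000.png, frame001.png, ...) and the 30 fps rate below are assumptions.

#import <UIKit/UIKit.h>

@interface FrameLoopViewController : UIViewController
@property (nonatomic, strong) UIImageView *frameView;
@property (nonatomic, assign) NSUInteger frameIndex;
@end

@implementation FrameLoopViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.frameView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:self.frameView];

    // ~30 fps; each tick asks for the next frame, so the source can swap
    // the frames at any time and the "player" will pick them up.
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                     target:self
                                   selector:@selector(showNextFrame)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)showNextFrame {
    NSString *name = [NSString stringWithFormat:@"frame%03lu", (unsigned long)self.frameIndex];
    UIImage *frame = [UIImage imageNamed:name]; // nil if the frame is missing
    if (frame) {
        self.frameView.image = frame;
        self.frameIndex++;
    } else {
        self.frameIndex = 0; // loop back to the first frame
    }
}

@end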

Create a Video Overlaying another Video

I am trying to make a nice pretty video.
I have an AVI video from a GoPro video camera, and I have some info I want to overlay on top of the video, like time, GPS, speed, G-force, etc.
I have my raw data and coded it up in ActionScript into a Flash movie, but then worked out that I have two issues:
Flash export to AVI is pretty crap, and basically does a screen capture.
The export to AVI can't be transparent, or any shape other than a square/rectangle.
So, can anyone suggest a better way? Should I use something other than Flash to create my speedometer, something that is more friendly for overlaying on an AVI?
This is the sort of thing I am trying to create.
youtube.com/watch?v=tT-vDtQyCbo
I have a CSV of all my raw data, and am trying to find a way to overlay it so it looks as professional as that link above. I can make the dials in ActionScript, but then, exporting to AVI with a 'screen capture' type program, they look pretty crap. On the other hand, importing my HD video into Flash makes the quality pretty crap, and I still have the export issue at the end.
I'm not 100% clear on what you're trying to do.
If you mean that you want to put some info over a video using Flash, all you need to do is import your video onto the timeline on one layer and then place your information on a higher layer. If you want your video to play as a different shape, then you can simply apply a mask to the layer your video is sitting on.
If you throw in more detail about what you're after, then I'll improve this answer for you :)

How do I add a still image to an AVComposition?

I have an AVMutableComposition with a video track and I would like to add a still image into the video track, to be displayed for some given time. The still image is simply a PNG. I can load the image as an asset, but that’s about it, because the resulting asset does not have any tracks and therefore cannot be simply inserted using the insertTimeRange… methods.
Is there a way to add still images to a composition? It looks like the answer is somewhere in Core Animation, but the whole thing seems to be a bit above my head and I would appreciate a code sample or some information pointers.
OK. There's a great video called Editing Media with AV Foundation from WWDC that explains a lot. You can't insert images right into the AVComposition timeline, at least I did not find any way to do that. But when exporting or playing an asset you can refer to an AVVideoComposition. That's maybe not a perfect name for the class, since it allows you to mix between various video tracks in the asset, very much like AVAudioMix does for audio. And the AVVideoComposition has an animationTool property that lets you throw Core Animation layers (CALayers) into the mix. CALayer has a contents property that can be assigned a CGImageRef. That does not help in my case, but it might help somebody else.
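For anyone who does want to go the animationTool route, here is a rough, untested sketch of showing a still image over the video for a given time range. The variables asset, stillImage, renderSize, imageStart, and imageDuration are assumptions.

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// The still image lives on a CALayer that is only visible during
// [imageStart, imageStart + imageDuration] on the composition timeline.
CALayer *imageLayer = [CALayer layer];
imageLayer.contents = (__bridge id)stillImage.CGImage;
imageLayer.frame = CGRectMake(0, 0, renderSize.width, renderSize.height);
imageLayer.opacity = 0.0;

CABasicAnimation *show = [CABasicAnimation animationWithKeyPath:@"opacity"];
show.fromValue = @1.0;
show.toValue = @1.0;
show.beginTime = imageStart;      // seconds; use AVCoreAnimationBeginTimeAtZero if this is 0
show.duration = imageDuration;
show.removedOnCompletion = NO;    // keep the animation attached for export
[imageLayer addAnimation:show forKey:@"show"];

CALayer *videoLayer = [CALayer layer];
CALayer *parentLayer = [CALayer layer];
videoLayer.frame = imageLayer.frame;
parentLayer.frame = imageLayer.frame;
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:imageLayer];

AVMutableVideoComposition *videoComposition =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
videoComposition.animationTool =
    [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                                                  inLayer:parentLayer];
// Assign `videoComposition` to an AVAssetExportSession (or AVPlayerItem) as usual.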
I also need still images in my composition. My line of thinking is a little different: insert on-the-fly movies of black in place of where the images should appear (possibly one such video would suffice), and add a dictionary reference to each such insert, linking composition time ranges to the bona fide desired images. When the correct time range arrives in my full-time custom compositor, pull out the desired image and paint it into the output pixel buffer, ignoring the incoming black frames from the composition. I think that'd be another way of doing it.
