I'm working on an app that creates videos from a series of images using AVMutableVideoComposition. I wanted to add a "Ken Burns" effect to the images, and I created the effect by applying transforms to the main AVMutableVideoCompositionLayerInstruction object using the method "setTransformRampFromStartTransform:toEndTransform:timeRange:".
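For reference, a minimal sketch of this kind of transform-ramp setup follows; the track, timings, and scale/translation values are placeholders, and the frameDuration note is only a hunch about where the choppiness might come from:

```swift
import AVFoundation
import CoreGraphics

// Minimal sketch of a transform-ramp ("Ken Burns") setup.
// `videoTrack`, the duration, and the scale/translation values are placeholders.
func makeKenBurnsComposition(for videoTrack: AVAssetTrack,
                             duration: CMTime) -> AVMutableVideoComposition {
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: duration)

    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    let start = CGAffineTransform.identity
    let end = CGAffineTransform(scaleX: 1.2, y: 1.2)
        .translatedBy(x: -60, y: -40)   // slow zoom + pan
    layerInstruction.setTransformRamp(fromStart: start,
                                      toEnd: end,
                                      timeRange: instruction.timeRange)
    instruction.layerInstructions = [layerInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = [instruction]
    videoComposition.renderSize = videoTrack.naturalSize
    // A fine frameDuration (e.g. 30 fps) may reduce visible "stepping" of the
    // ramp when the source is built from still images with a coarse frame rate.
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    return videoComposition
}
```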
It works, but the scale effect gives an ugly result... it looks like the frame refreshes are visible... it's like an old computer that can't keep up with the frame rate of the video... I don't really know how to explain it better :)
Do you think there is a better way to achieve the result? For example, using AVVideoCompositionCoreAnimationTool?
Can I add animations to the individual video tracks in the composition?
Thanks for your help!
I am making a video editor and so far I have been able to apply filters to a single frame. The way I currently have everything set up works perfectly. It's a lot of code to show and I don't really need help with the code specifically, so I'll just explain what I do. I use a video composition WITHOUT a custom compositor; it has one AVMutableVideoCompositionInstruction, but my AVMutableComposition has multiple AVCompositionTracks (one for each asset). Each track has its own layer instructions that handle scale, orientation, and position for each video. I then extract frames using an AVPlayerItemVideoOutput and render them with Metal to apply filters and effects. This works really well and the performance is great.
Now I am faced with applying transitions, which requires me to overlap tracks in the AVMutableComposition. The problem with this is that the video output can't extract frames from specific trackIDs and will only extract the top layer. Also, when I overlap tracks, the video doesn't show at all. So I came to the conclusion that I need a custom compositor.

I implemented the compositor, but there are a few problems. I can't use layer instructions, but I know this can easily be solved by handling my transforms directly in my vertex shader. The biggest issue is that I need filters to be applied to each frame before the transition is rendered. For example, for a transition between A and B: when I extract frames from track A for the transition, I need all of track A's filters and effects to be applied; when I extract frames from track B, I need all of track B's filters to be applied. Then I need to render the transition with the filtered frames from A and B.

I can do this in the compositor, but I won't be able to make live updates. I need live updates for my app; for example, changing the intensity of track A's filter with a slider should show every single increment updated live on the player. This solution doesn't allow for that, since I would have to rebuild the entire video composition to change the properties of the instructions and/or video compositor.
I've also looked into using an AVAssetReader; however, I am not sure whether it will be fast enough or able to handle seeking through videos efficiently.
So to recap, what I need is a way to extract frames from specific tracks that are overlapped and allow for live updates of any filters. If anyone can lead me in the right direction I'd appreciate it. Thank you.
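For reference, the per-track extraction part is what a custom compositor's render request exposes via sourceFrame(byTrackID:). Below is a minimal sketch under that assumption; the Metal pass and the renderTransition helper are placeholders rather than a working filter pipeline, and it does not address the live-update requirement:

```swift
import AVFoundation
import CoreVideo

// Minimal sketch of a custom compositor that pulls one frame per overlapping
// track via sourceFrame(byTrackID:). `renderTransition` is a placeholder.
final class TransitionCompositor: NSObject, AVVideoCompositing {

    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let output = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "Compositor", code: -1))
            return
        }
        // Each overlapping track's current frame is available by its trackID.
        let frames = request.sourceTrackIDs.compactMap {
            request.sourceFrame(byTrackID: $0.int32Value)
        }
        // Placeholder: filter each source frame, then blend them into `output`
        // with Metal according to the transition progress at compositionTime.
        renderTransition(sources: frames,
                         at: request.compositionTime,
                         into: output)
        request.finish(withComposedVideoFrame: output)
    }

    private func renderTransition(sources: [CVPixelBuffer],
                                  at time: CMTime,
                                  into output: CVPixelBuffer) {
        // Metal render pass goes here (omitted).
    }
}
```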
Is it possible to have a real-time preview of an AVMutableComposition which has layer instructions applied to its assets?
The only class I found that connects an AVMutableComposition with an AVVideoComposition (holding the instructions) is AVAssetExportSession. Does that mean I must export first to play a preview?
If so, how do apps like Final Cut Pro provide a real-time preview when I edit part of the video? Do they cut the whole video into multiple chunks, export only what has changed, and keep the rest untouched?
This sounds like a difficult problem - is there any library that would help with cutting the video into small chunks to export and with keeping an eye on cache invalidation?
Cheers,
M.
I don't know if this is still relevant, but you can always extract each frame from the video, manipulate it accordingly, then render it to the screen.
If it's from an AVCaptureSession you can get the CMSampleBuffers from the callbacks; if it's a file, I think AVAssetReader is your best bet. Then you can use either Core Image or Metal to manipulate the frames and render them in real time.
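For example, a minimal sketch of that reader-plus-Core Image path might look like this; the sepia filter and the final render target are placeholders:

```swift
import AVFoundation
import CoreImage

// Minimal sketch: pull frames from a file with AVAssetReader and run them
// through Core Image. The filter and the final render target are placeholders.
func readAndFilterFrames(from asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                            kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    let filter = CIFilter(name: "CISepiaTone")!   // placeholder effect

    while let sampleBuffer = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
        filter.setValue(frame, forKey: kCIInputImageKey)
        let filtered = filter.outputImage
        // Hand `filtered` to your Metal / Core Image renderer here.
        _ = filtered
    }
}
```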
There is no real-time preview with AVMutableComposition; they may create a time slot for every change and manage its visibility when you move the slider below.
I would like to create a video from a number of images and add a cross dissolve effect between the images.
How can this be done? I know images can be written into a video file, but I don't see where to apply an effect. Should each image be turned into a video, and then those videos written into the full video with the transition effect?
I have searched around and cannot find much information on how this can be done, e.g. how to use AVMutableComposition and whether it is viable to create videos consisting of individual images and then apply the cross dissolve effect.
Any information will be greatly appreciated.
If you want to dig around in the bowels of AVFoundation for this, I strongly suggest you take a look at this presentation, especially starting at slide 74. Be prepared to do a large amount of work to pull this off...
If you'd like to get down to business several orders of magnitude faster, and don't mind incorporating a 3rd party library, I'd highly recommend you try GPUImage
You'll find it quite simple to push images into a video and swap them out at will, as well as apply any number of blend filters to the transitions, simply by varying a single mix property of your blend filter over the time your transition happens.
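If you do go the raw AVFoundation route from those slides, the heart of a cross dissolve is two overlapping tracks plus an opacity ramp on the outgoing one. A minimal sketch, assuming each image has already been written out as a short clip and inserted into two composition tracks elsewhere:

```swift
import AVFoundation

// Minimal sketch: during the overlap, the outgoing track fades from opaque to
// transparent, revealing the incoming track underneath. Track setup and
// timings are placeholders handled elsewhere.
func crossDissolveInstruction(trackA: AVCompositionTrack,
                              trackB: AVCompositionTrack,
                              overlap: CMTimeRange) -> AVMutableVideoCompositionInstruction {
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = overlap

    let fromA = AVMutableVideoCompositionLayerInstruction(assetTrack: trackA)
    // Fade the outgoing clip out over the overlap.
    fromA.setOpacityRamp(fromStartOpacity: 1.0, toEndOpacity: 0.0, timeRange: overlap)

    let fromB = AVMutableVideoCompositionLayerInstruction(assetTrack: trackB)

    instruction.layerInstructions = [fromA, fromB]
    return instruction
}
```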
I'm doing this right now. To make things short: you need an AVPlayer to which you attach an AVPlayerItem, which can be empty. You then set the forwardPlaybackEndTime of your AVPlayerItem to the duration of your animation. You then create an AVPlayerLayer that you initialize with your AVPlayer (actually, you may not need the AVPlayerLayer if you won't put video in your animation). Then the important part: you create an AVSynchronizedLayer that you initialize with your AVPlayerItem. This AVSynchronizedLayer and any sublayers it holds will be synchronized with your AVPlayerItem. You can then create a simple CALayer holding your image (through the contents property) and add your CAKeyframeAnimation to that layer on the opacity property. Now any animation on those sublayers will follow the time of your AVPlayerItem. To start the animation, simply call play on your AVPlayer. That's the theory for playback. If you want to export this animation to an mp4, you will need to use AVVideoCompositionCoreAnimationTool, but it's pretty similar.
For a code example, see this code snippet to create the animation.
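Here is a minimal sketch of that playback setup; the placeholder asset, image, and timing values are assumptions, and the AVCoreAnimationBeginTimeAtZero / isRemovedOnCompletion details are the parts that are easy to miss:

```swift
import AVFoundation
import UIKit

// Minimal sketch of the AVSynchronizedLayer setup: a CALayer with an image,
// animated on "opacity", driven by the player item's timeline.
func makeSynchronizedAnimation(in containerLayer: CALayer,
                               placeholderAsset: AVAsset,
                               image: UIImage) -> AVPlayer {
    let item = AVPlayerItem(asset: placeholderAsset)
    item.forwardPlaybackEndTime = CMTime(seconds: 5, preferredTimescale: 600)
    let player = AVPlayer(playerItem: item)

    let playerLayer = AVPlayerLayer(player: player)   // only needed if you show video
    playerLayer.frame = containerLayer.bounds
    containerLayer.addSublayer(playerLayer)

    let syncLayer = AVSynchronizedLayer(playerItem: item)
    syncLayer.frame = containerLayer.bounds

    let imageLayer = CALayer()
    imageLayer.contents = image.cgImage
    imageLayer.frame = containerLayer.bounds

    let fade = CAKeyframeAnimation(keyPath: "opacity")
    fade.values = [0.0, 1.0, 0.0]
    fade.keyTimes = [0.0, 0.5, 1.0]
    fade.duration = 5
    fade.beginTime = AVCoreAnimationBeginTimeAtZero   // not 0, which means "now"
    fade.isRemovedOnCompletion = false
    imageLayer.add(fade, forKey: "fade")

    syncLayer.addSublayer(imageLayer)
    containerLayer.addSublayer(syncLayer)
    return player   // call play() to start; the animation follows the item's time
}
```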
I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video—something I believe will not be too difficult. The harder part, and something that I have not been able to find any examples of, will be combining the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition but have no idea exactly how. Your help would be greatly appreciated.
Mark
I do not think that you can actually save a drawing into a video file in iOS. You could, however, consider saving the drawing separately and synchronizing the overlay onto the video using a transparent view. In other words, say the user circled something at 3 minutes 42 seconds into the video; when the video is played back, you overlay the saved drawing onto the video at the 3:42 mark. It's not what you want, but I think it is as close as you can get right now.
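A minimal sketch of that overlay-sync idea, assuming a simple pairing of time ranges with transparent annotation views (the TimedAnnotation model is made up for illustration):

```swift
import AVFoundation
import UIKit

// Minimal sketch: keep drawings in transparent views on top of the player
// and show/hide them based on the current playback time.
struct TimedAnnotation {
    let timeRange: CMTimeRange
    let view: UIView          // transparent view holding the drawing
}

func syncAnnotations(_ annotations: [TimedAnnotation], with player: AVPlayer) -> Any {
    let interval = CMTime(value: 1, timescale: 30)   // check ~30 times per second
    return player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { time in
        for annotation in annotations {
            annotation.view.isHidden = !annotation.timeRange.containsTime(time)
        }
    }
}
```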
EDIT: Actually there might be a way after all. Take a look at this tutorial. I have not read the whole thing but it seems to incorporate the overlay function you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
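The core of that tutorial's export approach is AVVideoCompositionCoreAnimationTool; a minimal sketch, with the overlay layer's contents and the export preset left as placeholders:

```swift
import AVFoundation
import UIKit

// Minimal sketch: the drawings live in a CALayer that the animation tool
// renders on top of the video frames during export.
func export(asset: AVAsset, overlay: CALayer, to outputURL: URL) {
    let videoComposition = AVMutableVideoComposition(propertiesOf: asset)
    let size = videoComposition.renderSize

    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: size)
    let parentLayer = CALayer()
    parentLayer.frame = videoLayer.frame
    overlay.frame = videoLayer.frame

    parentLayer.addSublayer(videoLayer)   // video frames are drawn here
    parentLayer.addSublayer(overlay)      // annotations drawn on top

    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    guard let session = AVAssetExportSession(asset: asset,
                                              presetName: AVAssetExportPresetHighestQuality) else { return }
    session.videoComposition = videoComposition
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.exportAsynchronously {
        // Check session.status / session.error here.
    }
}
```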
I have an AVMutableComposition with a video track and I would like to add a still image into the video track, to be displayed for some given time. The still image is simply a PNG. I can load the image as an asset, but that’s about it, because the resulting asset does not have any tracks and therefore cannot be simply inserted using the insertTimeRange… methods.
Is there a way to add still images to a composition? It looks like the answer is somewhere in Core Animation, but the whole thing seems to be a bit above my head and I would appreciate a code sample or some information pointers.
OK. There's a great video called Editing Media with AV Foundation from WWDC that explains a lot. You can't insert images right into the AVComposition timeline, at least I did not find any way to do that. But when exporting or playing an asset you can refer to an AVVideoComposition. That's maybe not a perfect name for the class, since it allows you to mix between the various video tracks in the asset, very much like AVAudioMix does for audio. And AVVideoComposition has an animationTool property that lets you throw Core Animation layers (CALayers) into the mix. CALayer has a contents property that can be assigned a CGImageRef. That does not help in my case, but it might help somebody else.
I also need still images in my composition. My line of thinking is a little different: insert short placeholder movies of black frames for the time ranges where images should appear (possibly one such video would suffice). Keep a dictionary that maps each inserted composition time range to the bona fide desired image. When the matching time range arrives in my always-on custom compositor, pull out the desired image and paint it into the output pixel buffer, ignoring the incoming black frames from the composition. I think that'd be another way of doing it.
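A minimal sketch of that "paint it yourself" step, assuming a 32BGRA output buffer and leaving out the time-range-to-image lookup:

```swift
import CoreGraphics
import CoreVideo

// Minimal sketch: inside the custom compositor, ignore the black placeholder
// frame and draw the looked-up CGImage straight into the output pixel buffer.
func draw(_ image: CGImage, into pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: CVPixelBufferGetWidth(pixelBuffer),
        height: CVPixelBufferGetHeight(pixelBuffer),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue) else { return }

    let rect = CGRect(x: 0, y: 0,
                      width: CVPixelBufferGetWidth(pixelBuffer),
                      height: CVPixelBufferGetHeight(pixelBuffer))
    context.draw(image, in: rect)
}
```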