I apply real-time effects with Core Image to video played through AVPlayer. The problem is that while the player is paused, the filters are not re-applied when I tweak the filter parameters with a slider.
let videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { [weak self] request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()

    let output: CIImage
    if let filteredOutput = self?.runFilters(source, filters: array)?.cropped(to: request.sourceImage.extent) {
        output = filteredOutput
    } else {
        output = source
    }

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
As a workaround, I used this answer, which worked up to iOS 12.4 but no longer works on iOS 13 beta 6. I am looking for a solution that works on iOS 13.
After reporting this as a bug to Apple and getting some helpful feedback, I have a fix:
player.currentItem?.videoComposition = player.currentItem?.videoComposition?.mutableCopy() as? AVVideoComposition
The explanation I got was:
AVPlayer redraws a frame when AVPlayerItem’s videoComposition property gets a new instance or, even if it is the same instance, a property of the instance has been modified.
As a result, forcing a redraw can be achieved by making a "new" instance, simply by copying the existing one.
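In practice that means re-assigning the copy right after you change a filter parameter while the player is paused. A minimal sketch of what that looks like, assuming the parameters are driven by a UISlider (the slider callback and the filterParameters property are illustrative, not from the original code):

@objc func sliderValueChanged(_ sender: UISlider) {
    // Update whatever state the applyingCIFiltersWithHandler block reads (hypothetical property)
    filterParameters.intensity = CGFloat(sender.value)

    // While paused, force AVPlayer to re-render the current frame by giving
    // the item a "new" (copied) composition instance
    if player.rate == 0 {
        player.currentItem?.videoComposition =
            player.currentItem?.videoComposition?.mutableCopy() as? AVVideoComposition
    }
}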
Related
In the UI of my iOS app, I display a complex hierarchy of CALayers. One of these layers is an AVPlayerLayer that displays a video with CIFilters applied in real time (using AVVideoComposition(asset:applyingCIFiltersWithHandler:)).
Now I want to export this layer composition to a video file. There are two tools in AVFoundation that seem helpful:
A: AVVideoCompositionCoreAnimationTool, which allows rendering a video inside a (possibly animated) CALayer hierarchy
B: AVVideoComposition(asset:applyingCIFiltersWithHandler:), which I also use in the UI, to apply CIFilters to a video asset.
However, these two tools cannot be used simultaneously: If I start an AVAssetExportSession that combines these tools, AVFoundation throws an NSInvalidArgumentException:
Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction
I tried to workaround this limitation as follows:
Workaround 1
1) Setup an export using AVAssetReader and AVAssetWriter
2) Obtain the sample buffers from the asset reader and apply the CIFilter, save the result in a CGImage.
3) Set the CGImage as the content of the video layer in the layer hierarchy. Now the layer hierarchy "looks like" one frame of the final video.
4) Obtain the data of the CVPixelBuffer for each frame from the asset writer using CVPixelBufferGetBaseAddress and create a CGContext with that data.
5) Render my layer to that context using CALayer.render(in ctx: CGContext).
This setup works but is extremely slow: exporting a 5-second video sometimes takes a minute. The CoreGraphics calls look like the bottleneck here (I guess that's because with this approach the composition happens on the CPU?).
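For reference, steps 4 and 5 boil down to something like the following sketch (pixelBuffer and rootLayer are assumed to come from the asset writer's pixel buffer pool and the existing layer hierarchy); both the CGContext drawing and CALayer.render(in:) run on the CPU, which is consistent with the slowdown described above:

CVPixelBufferLockBaseAddress(pixelBuffer, [])
if let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                           width: CVPixelBufferGetWidth(pixelBuffer),
                           height: CVPixelBufferGetHeight(pixelBuffer),
                           bitsPerComponent: 8,
                           bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                           space: CGColorSpaceCreateDeviceRGB(),
                           bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue) {
    // Software rasterization of the whole CALayer tree into the writer's pixel buffer
    rootLayer.render(in: context)
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, [])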
Workaround 2
One other approach could be to do this in two steps: First, save the source video just with the filters applied to a file as in B, and then use that video file to embed the video in the layer composition as in A. However, as it uses two passes, I guess this isn't as efficient as it could be.
Summary
What is a good approach to export this video to a file, ideally in a single pass? How can I use CIFilters and AVVideoCompositionCoreAnimationTool simultaneously? Is there a native way to set up a "pipeline" in AVFoundation which combines these tools?
The way to achieve this is to use a custom AVVideoCompositing. This object lets you process (in this case, apply the CIFilter to) each video frame.
Here's an example implementation that applies a CIPhotoEffectNoir effect to the whole video:
import AVFoundation
import CoreImage

class VideoFilterCompositor: NSObject, AVVideoCompositing {

    var sourcePixelBufferAttributes: [String: Any]? = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    private var renderContext: AVVideoCompositionRenderContext?

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        renderContext = newRenderContext
    }

    func cancelAllPendingVideoCompositionRequests() {
    }

    private let filter = CIFilter(name: "CIPhotoEffectNoir")!
    private let context = CIContext()

    func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
        guard let track = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value,
              let frame = asyncVideoCompositionRequest.sourceFrame(byTrackID: track) else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoFilterCompositor", code: 0, userInfo: nil))
            return
        }
        filter.setValue(CIImage(cvPixelBuffer: frame), forKey: kCIInputImageKey)
        // Render the filtered image into a fresh pixel buffer obtained from the render context
        if let outputImage = filter.outputImage, let outBuffer = renderContext?.newPixelBuffer() {
            context.render(outputImage, to: outBuffer)
            asyncVideoCompositionRequest.finish(withComposedVideoFrame: outBuffer)
        } else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoFilterCompositor", code: 0, userInfo: nil))
        }
    }
}
If you need different filters at different times, you can use a custom AVVideoCompositionInstructionProtocol implementation, which you can read back from the AVAsynchronousVideoCompositionRequest; see the sketch below.
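For example (a sketch, not part of the original answer): a custom instruction can carry the CIFilter for its time range, and startRequest(_:) can read it back via asyncVideoCompositionRequest.videoCompositionInstruction:

import AVFoundation
import CoreImage

// Hypothetical instruction type; assign an array of these to videoComposition.instructions
final class FilterInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange
    var enablePostProcessing = true
    var containsTweening = true
    var requiredSourceTrackIDs: [NSValue]?
    var passthroughTrackID = kCMPersistentTrackID_Invalid

    let filter: CIFilter   // the filter to apply during timeRange

    init(timeRange: CMTimeRange, trackID: CMPersistentTrackID, filter: CIFilter) {
        self.timeRange = timeRange
        self.requiredSourceTrackIDs = [NSNumber(value: trackID)]
        self.filter = filter
    }
}

// In startRequest(_:), instead of the fixed CIPhotoEffectNoir:
// let instruction = asyncVideoCompositionRequest.videoCompositionInstruction as? FilterInstruction
// let filter = instruction?.filter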
Next, you need to use this with your AVMutableVideoComposition, so:
let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = VideoFilterCompositor.self

// Add your animation tool as usual (v is the video layer, p is the parent/animation layer)
let animator = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: v, in: p)
videoComposition.animationTool = animator

// Finish setting up the composition (instructions, frameDuration, renderSize, ...)
With this, you should be able to export the video using a regular AVAssetExportSession, setting its videoComposition; see the sketch below.
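A minimal export sketch under those assumptions (asset and outputURL are placeholders, and the composition above still needs its instructions, renderSize, and frameDuration set):

let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
export?.videoComposition = videoComposition   // picks up customVideoCompositorClass and animationTool
export?.outputURL = outputURL
export?.outputFileType = .mov
export?.exportAsynchronously {
    if export?.status == .completed {
        // Use the file at outputURL
    } else {
        print("Export failed: \(String(describing: export?.error))")
    }
}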
I am developing an iOS video trimmer with Swift 4. I am trying to render a horizontal list of video thumbnails spread out over various durations, from both local video files and remote URLs. When I test it in the simulator, the thumbnails are generated in less than a second, which is fine. However, when I run the same code on an actual device, thumbnail generation is really slow and sometimes crashes. I tried moving the image generation to a background thread and updating the UI on the main thread when it completes, but that doesn't work very well and the app crashes after rendering the screen a few times. I am not sure whether that is because I navigate away from the screen while tasks are still trying to complete. I want to generate the thumbnails faster and without crashing. Here is the code I am using; I would really appreciate any assistance with this issue.
func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    var offset: Float64 = 0
    for i in 0..<self.IMAGE_COUNT {
        DispatchQueue.global(qos: .userInitiated).async {
            offset = Float64(i) * (duration / Float64(self.IMAGE_COUNT))
            let thumbnail = thumbnailFromVideo(videoUrl: videoURL,
                                               time: CMTimeMake(Int64(offset), 1))
            DispatchQueue.main.async {
                self.addImageToView(image: thumbnail, view: view, index: i)
            }
        }
    }
}

static func thumbnailFromVideo(videoUrl: URL, time: CMTime) -> UIImage {
    let asset = AVAsset(url: videoUrl)
    let imgGenerator = AVAssetImageGenerator(asset: asset)
    imgGenerator.appliesPreferredTrackTransform = true
    do {
        let cgImage = try imgGenerator.copyCGImage(at: time, actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
    }
    return UIImage()
}
The first sentence of the documentation says not to do what you’re doing! And it even tells you what to do instead.
Generating a single image in isolation can require the decoding of a large number of video frames with complex interdependencies. If you require a series of images, you can achieve far greater efficiency using the asynchronous method, generateCGImagesAsynchronously(forTimes:completionHandler:), which employs decoding efficiencies similar to those used during playback.
(Italics mine.)
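A minimal sketch of that asynchronous API under the question's setup (IMAGE_COUNT, duration, videoURL, view, and addImageToView are assumed from the code above); note that the generator must be kept alive until all requests complete:

let asset = AVAsset(url: videoURL)
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
generator.maximumSize = CGSize(width: 200, height: 200) // decode small thumbnails, not full frames

let times: [NSValue] = (0..<IMAGE_COUNT).map { i in
    let offset = Float64(i) * (duration / Float64(IMAGE_COUNT))
    return NSValue(time: CMTime(seconds: offset, preferredTimescale: 600))
}

generator.generateCGImagesAsynchronously(forTimes: times) { requestedTime, cgImage, _, result, _ in
    guard result == .succeeded, let cgImage = cgImage,
          let index = times.firstIndex(of: NSValue(time: requestedTime)) else { return }
    DispatchQueue.main.async {
        self.addImageToView(image: UIImage(cgImage: cgImage), view: view, index: index)
    }
}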
I am developing a video-editing kind of application in Swift 3, where I merge multiple videos and add a custom background sound, a watermark, and fade-in/fade-out effects to the final merged video using the AVFoundation framework.
Now my problem is that I need to add filter effects like warm, cold, and sepia/vintage to the video. Is it possible to add such effects in Swift on iOS using built-in libraries? I have searched on Google but could not find a proper solution:
RGB range for cold and warm colors?
http://flexmonkey.blogspot.in/2016/04/loading-filtering-saving-videos-in-swift.html
How to create and add video filter like Instagram using AVFoundation framework - Swift programming
https://developer.apple.com/library/content/samplecode/RosyWriter/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011110
Please advise me. Thank you!
Edited:
I tried using the code below, but it doesn't work.
let filter = CIFilter(name: "CISepiaTone")!
let composition = AVVideoComposition(asset: firstAsset, applyingCIFiltersWithHandler: { request in

    let source = request.sourceImage.clampingToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)

    // Vary filter parameters based on video timing
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

    let output = filter.outputImage!.cropping(to: request.sourceImage.extent)

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
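One likely reason this snippet fails is that it is adapted from Apple's CIGaussianBlur example: CISepiaTone has no inputRadius parameter; it takes inputIntensity in the range 0...1. A minimal sketch of the handler with that parameter (firstAsset is the asset from the question, Swift 3 API names kept for consistency):

let sepia = CIFilter(name: "CISepiaTone")!
let composition = AVVideoComposition(asset: firstAsset, applyingCIFiltersWithHandler: { request in
    sepia.setValue(request.sourceImage.clampingToExtent(), forKey: kCIInputImageKey)
    sepia.setValue(0.8, forKey: kCIInputIntensityKey) // CISepiaTone expects an intensity between 0 and 1
    let output = sepia.outputImage!.cropping(to: request.sourceImage.extent)
    request.finish(with: output, context: nil)
})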
I'm using Swift to show content from an AVPlayer in a view's AVPlayerLayer. The associated AVPlayerItem has a videoComposition, and a slightly simplified version of the code to create it (without error checking, etc.) looks like this:
playerItem.videoComposition = AVVideoComposition(asset: someAsset, applyingCIFiltersWithHandler: {
    [unowned self] (request: AVAsynchronousCIImageFilteringRequest) in

    let paramDict = << set up parameter dictionary based on class vars >>

    // filter the image
    let filter = self.ciFilterWithParamDict(paramDict)
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    if let filteredImage = filter.outputImage {
        request.finishWithImage(filteredImage, context: nil)
    }
})
This all works as expected when the AVPlayer is playing or seeking. And if a new videoComposition is created and loaded, the AVPlayerLayer is rendered correctly.
I have not found a way, however, to "trigger" the AVPlayer/AVPlayerItem/AVVideoComposition to re-render when I change some of the values used to calculate the filter parameters. If I change values and then play or seek, the frame is rendered correctly, but only if I play or seek. Is there no way to trigger a render "in place"?
The best way that I know to do this is to simply create a new AVVideoComposition instance for the AVPlayerItem when you edit your CIFilter inputs on a paused AVPlayer. In my experience it's far faster and cleaner than swapping the player item out of and back into the player. You might think that creating a new video composition is slow, but all you are really doing is redefining the render path at that specific frame, which is almost as efficient as invalidating the part of the Core Image cache that was affected by your change.
The key here is that the video composition of the player item must be invalidated in some way to trigger a redraw. Sadly, simply changing the input parameters of the Core Image filters does not (as far as I know) invalidate the video composition, which is the source of the issue.
You can get even more efficient by creating an AVMutableVideoComposition instance for the AVPlayerItem and mutating it in some way (by changing things like instructions, animationTool, or frameDuration) when editing while paused. A sketch of the simpler approach follows.
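A minimal sketch of that idea, assuming a player property and a hypothetical makeFilteredComposition(for:) helper that builds AVVideoComposition(asset:applyingCIFiltersWithHandler:) from the current filter inputs:

// Call this after changing filter inputs while the player is paused.
func refreshPausedFrame() {
    guard player.rate == 0, let item = player.currentItem else { return }
    // Assigning a fresh composition invalidates the displayed frame,
    // so the paused frame is re-rendered with the new filter parameters.
    item.videoComposition = makeFilteredComposition(for: item.asset)
}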
I used a hack that replaces the avPlayerItem entirely to force a refresh. It would be much better if there were a way to tell the avPlayerItem to re-render directly.
// If the video is paused, force the player to re-render the frame.
if (self.avPlayer.currentItem.asset && self.avPlayer.rate == 0) {
    CMTime currentTime = self.avPlayerItem.currentTime;
    [self.avPlayer replaceCurrentItemWithPlayerItem:nil];
    [self.avPlayer replaceCurrentItemWithPlayerItem:self.avPlayerItem];
    [self.avPlayerItem seekToTime:currentTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}
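Roughly the same hack in Swift, for reference (a sketch, not from the original answer; player is your AVPlayer):

if player.rate == 0, let item = player.currentItem {
    let time = item.currentTime()
    player.replaceCurrentItem(with: nil)
    player.replaceCurrentItem(with: item)
    item.seek(to: time, toleranceBefore: .zero, toleranceAfter: .zero, completionHandler: nil)
}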
This one might help you.
let currentTime = self.player.currentTime()
self.player.play()
self.player.pause()
self.player.seek(to: currentTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero)
Along the lines of this answer, but instead of creating a brand new AVVideoComposition and setting that as the player's videoComposition, it appears you can force a refresh of the current frame by simply setting videoComposition to nil and then immediately back to the existing videoComposition instance.
This results in the following simple workaround any time you want to force refresh the current frame:
let videoComposition = player.currentItem?.videoComposition
player.currentItem?.videoComposition = nil
player.currentItem?.videoComposition = videoComposition
This one worked in my case:
let playerItem = player.currentItem!
pausePlayback()
playerItem.videoComposition = nil
playerItem.videoComposition = AVVideoComposition(asset: playerItem.asset) { request in
    let source = request.sourceImage.clampedToExtent()
    filter?.setValue(source, forKey: kCIInputImageKey) // any CIFilter
    request.finish(with: filter?.outputImage ?? request.sourceImage, context: nil)
}
resumePlayback()
I'm developing an app in Swift and SpriteKit (Xcode 6.4, currently building for iOS 8.4). I'm using an SKVideoNode in conjunction with AVPlayer to play a full-screen video. The code follows:
let path = NSBundle.mainBundle().pathForResource("SPLASH_x", ofType: "mov")
let vUrl = NSURL.fileURLWithPath(path!)

let asset = AVAsset.assetWithURL(vUrl) as? AVAsset
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(URL: vUrl)

SplashVideo = SKVideoNode(AVPlayer: player)
SplashVideo!.xScale = self.size.width / SplashVideo!.size.width
SplashVideo!.yScale = self.size.height / SplashVideo!.size.height
SplashVideo!.position = CGPointMake(self.frame.midX, self.frame.midY)
self.addChild(SplashVideo!)

var observer: AnyObject? = nil
observer = player.addPeriodicTimeObserverForInterval(CMTimeMake(1, 30), queue: dispatch_get_main_queue(),
    usingBlock: { (time: CMTime) -> Void in
        let secs: Float64 = CMTimeGetSeconds(time)
        println("I think it's playing")
        if (secs > 0.01) {
            self.hideBackground()
            println("I think I'm done observing. Background hidden!")
            player.removeTimeObserver(observer!)
        }
    })

println("I think I'm playing the splash video:")
SplashVideo!.play()
(In case it's not clear, this happens in didMoveToView; I have imported Foundation, AVFoundation, and SpriteKit at the top of the file).
This works fine in the simulator; if I build and run on my iPad, nothing happens at all: it displays a black screen, or, if I remove the time observer (so that the background never gets hidden), I just see the background. (The background is the first frame of the movie; I was experiencing a black flash at the beginning of video playback and am using the time observer as a masking technique to hide it.) One of my users reported that it worked for him until he upgraded to iOS 9 (less of a concern); another reports that he hears the audio that goes with the .mov file but doesn't see the video itself (more of a concern). So I'm getting a variety of non-working behaviors, which is the best kind of bug. And by best I mean worst.
Things I have tried:
Various versions and combinations of directly linking in Foundation, AVFoundation, SpriteKit when building.
Using AVPlayerLayer instead of SpriteKit (no change in behavior for me, didn't deploy so I don't know if it would help any of my testers).
Removing the time observer entirely (no change).
Searching the interwebs (no help).
Tearing my hair out (ouch).
All were ineffective. And now I am bald. And sad.
Answering my own question: after much trial and error, it appears that you can't scale an SKVideoNode in iOS 9 (or possibly this was never supported? The documentation is not clear). It's also true that the Xcode 7 simulator won't play video no matter what I do, which wasn't helping matters. In any case, what you can do is change the size property of the node (and, I guess, let SpriteKit do the scaling? The documentation seems spotty), and that seems to work:
let asset = AVAsset.assetWithURL(vUrl) as? AVAsset
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(URL: vUrl)

SplashVideo = SKVideoNode(AVPlayer: player)
SplashVideo!.size = self.size
SplashVideo!.position = CGPointMake(self.frame.midX, self.frame.midY)
self.addChild(SplashVideo!)