AVVideoComposition with CIFilters crash - iOS

I am creating an AVVideoComposition with CIFilters this way:
videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { [weak self] request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()
    let output: CIImage
    if let filteredOutput = self?.runFilters(source, filters: filters)?.cropped(to: request.sourceImage.extent) {
        output = filteredOutput
    } else {
        output = source
    }
    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
Then, to correctly handle rotation, I create a passthrough instruction which sets an identity transform on the passthrough layer:
let passThroughInstruction = AVMutableVideoCompositionInstruction()
passThroughInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: asset.duration)
let passThroughLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
passThroughLayer.setTransform(CGAffineTransform.identity, at: CMTime.zero)
passThroughInstruction.layerInstructions = [passThroughLayer]
videoComposition.instructions = [passThroughInstruction]
The problem is it crashes with the error:
'*** -[AVCoreImageFilterCustomVideoCompositor startVideoCompositionRequest:] Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction'
My issue is that if I do not specify this passThroughInstruction, the output is incorrect if the input asset's video track contains a preferredTransform that specifies a 90-degree rotation. How do I use a video composition with Core Image filters that correctly handles the preferredTransform of the video track?
EDIT: This question looks similar to, but is different from, other questions that involve playback. In my case playback is fine; it is rendering that produces the distorted video.

OK, I found the real issue. The videoComposition returned by AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:) implicitly creates a composition instruction that physically rotates the CIImages when a rotation transform is applied through the video track's preferredTransform. Whether it should be doing that is debatable, since preferredTransform is normally applied at the player level. As a workaround, in addition to what is suggested in this answer, I had to adjust the width and height passed via AVVideoWidthKey and AVVideoHeightKey in the videoSettings that are passed to AVAssetReader/AVAssetWriter.
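For anyone hitting the same thing, here is a minimal sketch of that width/height adjustment, assuming you build the writer's videoSettings yourself; the helper name and the 90-degree detection are illustrative, not part of AVFoundation.
import AVFoundation

// Sketch: pick output dimensions for AVAssetWriter, swapping width and height
// when the track's preferredTransform encodes a 90/270-degree rotation.
// makeWriterVideoSettings is a hypothetical helper, not an AVFoundation API.
func makeWriterVideoSettings(for videoTrack: AVAssetTrack) -> [String: Any] {
    let natural = videoTrack.naturalSize
    let t = videoTrack.preferredTransform

    // A 90/270-degree rotation shows up as zero a/d and non-zero b/c components.
    let isRotated = (t.a == 0 && t.d == 0 && abs(t.b) == 1 && abs(t.c) == 1)
    let renderSize = isRotated ? CGSize(width: natural.height, height: natural.width) : natural

    return [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: renderSize.width,
        AVVideoHeightKey: renderSize.height
    ]
}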

Related

Rendering Alpha Channel Video Over Background (AVFoundation, Swift)

Recently, I have been following this tutorial, which has taught me how to play a video with an alpha channel in iOS. This has been working great to build an AVPlayer over something like a UIImageView, which allows me to make it look like my video (with the alpha channel removed) is playing on top of the image.
Using this approach, I now need to find a way to do this while rendering/saving the video to the user's device. This is my code to generate the alpha video that plays in the AVPlayer:
let videoSize = CGSize(width: playerItem.presentationSize.width, height: playerItem.presentationSize.height / 2.0)
let composition = AVMutableVideoComposition(asset: playerItem.asset, applyingCIFiltersWithHandler: { request in
    let sourceRect = CGRect(origin: .zero, size: videoSize)
    let alphaRect = sourceRect.offsetBy(dx: 0, dy: sourceRect.height)
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: alphaRect)
        .transformed(by: CGAffineTransform(translationX: 0, y: -sourceRect.height))
    filter.maskImage = request.sourceImage.cropped(to: sourceRect)
    return request.finish(with: filter.outputImage!, context: nil)
})
(That's been truncated a bit for ease, but I can confirm this approach properly returns an AVVideoComposition that I can play in an AVPlayer.)
I recognize that I can use an AVVideoComposition with an AVAssetExportSession (sketched after this question), but this only allows me to render my alpha video over a black background (and not an image or video, as I'd need).
Is there a way to overlay the now "background-removed" alpha channel, on top of another video and process out?
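For reference, the AVAssetExportSession route mentioned above looks roughly like this; it's only a sketch (outputURL and the completion handling are placeholders), and as noted it flattens the keyed-out areas onto the composition's background color rather than onto another image or video.
let exportSession = AVAssetExportSession(asset: playerItem.asset,
                                         presetName: AVAssetExportPresetHighestQuality)
exportSession?.videoComposition = composition
exportSession?.outputURL = outputURL        // placeholder destination URL
exportSession?.outputFileType = .mov
exportSession?.exportAsynchronously {
    if exportSession?.status == .completed {
        // The saved file has the alpha mask applied, composited over the
        // composition's backgroundColor (opaque black by default).
    }
}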

When reading frames from a video on an iPad with AVAssetReader, the images are not properly oriented

A few things I want to establish first:
This works properly on multiple iPhones (iOS 10.3 & 11.x)
This works properly on any iPad simulator (iOS 11.x)
What I am left with is a situation where when I run the following code (condensed from my application to remove unrelated code), I am getting an image that is upside down (landscape) or rotated 90 degrees (portrait). Viewing the video that is processed just prior to this step shows that it is properly oriented. All testing has been done on iOS 11.2.5.
* UPDATED *
I did some further testing and found a few more interesting items:
If the video was imported from a phone, or an external source, it is properly processed
If the video was recorded on the iPad in portrait orientation, then the reader extracts it rotated 90 degrees left
If the video was recorded on the iPad in landscape orientation, then the reader extracts it upside down
In the two scenarios above, UIImage reports an orientation of portrait
A condensed version of the code involved:
import UIKit
import AVFoundation

let asset = ...
let assetReader = try? AVAssetReader(asset: asset)

if let assetTrack = asset.tracks(withMediaType: .video).first,
   let assetReader = assetReader {

    let assetReaderOutputSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA)
    ]
    let assetReaderOutput = AVAssetReaderTrackOutput(track: assetTrack,
                                                     outputSettings: assetReaderOutputSettings)
    assetReaderOutput.alwaysCopiesSampleData = false
    assetReader.add(assetReaderOutput)

    var images = [UIImage]()
    assetReader.startReading()

    var sample = assetReaderOutput.copyNextSampleBuffer()
    while sample != nil {
        if let image = sample?.uiImage { // Custom CMSampleBuffer-to-UIImage extension; the image is inverted here
            images.append(image)
        }
        sample = assetReaderOutput.copyNextSampleBuffer() // Advance even if the image conversion fails
    }

    // Continue here with array of images...
}
After some exploration, I came across the following that allowed me to obtain the video's orientation from the AVAssetTrack:
let transform = assetTrack.preferredTransform
let radians = atan2(transform.b, transform.a)
Once I had that, I was able to convert it to degrees:
let degrees = (radians * 180.0) / .pi
Then, using a switch statement I could determine how to rotate the image:
switch Int(degrees) {
case -90, 90:
    // Rotate accordingly
    break
case 180:
    // Flip
    break
default:
    // Do nothing
    break
}
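If it helps, one way to apply that correction without rotating pixels is to re-wrap the extracted CGImage with the matching UIImage orientation. This is only a sketch based on the mapping above; verify the cases against your own recordings, since the exact mapping depends on how the device wrote the preferredTransform.
// Sketch: map the track rotation (degrees from preferredTransform) to a UIImage orientation.
func orientedImage(from cgImage: CGImage, degrees: Int) -> UIImage {
    let orientation: UIImage.Orientation
    switch degrees {
    case 90:
        orientation = .right   // typically video recorded in portrait
    case -90:
        orientation = .left
    case 180, -180:
        orientation = .down    // upside down
    default:
        orientation = .up
    }
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}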

how to merge two video with transparency

I have successfully merged video-1 and video-2 over each other, with video-2 (the transparent one) on top, using the AVFoundation framework, but after merging the bottom video (video-1) is not displayed; only video-2 is visible. But when I use the code below:
AVMutableVideoCompositionLayerInstruction *SecondlayerInstruction =[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
[SecondlayerInstruction setOpacity:0.6 atTime:kCMTimeZero];
it sets the opacity on the video-2 layer. But the actual problem is that there is some content on the video-2 layer that is not transparent, and after applying opacity to the video-2 layer it is also applied to that content, which should stay opaque.
I am adding two images here which describe both scenarios after setting opacity using AVMutableVideoCompositionLayerInstruction.
As the images show, after merging the transparent area is black, and when I set opacity on the second layer the whole of video-2 becomes transparent, but the content becomes transparent as well.
My question is: how do I play a transparent video over another video after merging? I have already checked that video-2 is transparent, as it plays properly on Android.
Edited-1: I also tried to set a background color on my videoCompositionInstruction, taking reference from this old question link, which also did not help.
Edited-2: In AVVideoComposition.h, I found:
Indicates the background color of the composition. Solid BGRA colors
only are supported; patterns and other color refs that are not
supported will be ignored. If the background color is not specified
the video compositor will use a default backgroundColor of opaque
black. If the rendered pixel buffer does not have alpha, the alpha
value of the backgroundColor will be ignored.
What does that mean? I didn't get it. Can anyone help?
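As far as I understand it, that header comment is describing the backgroundColor property of the video composition: whatever the layer instructions leave uncovered (or transparent) is filled with that color, and with nothing set you get opaque black. A minimal illustration, assuming a Swift AVMutableVideoComposition:
let videoComposition = AVMutableVideoComposition()
// Only solid BGRA colors are honored; if left unset, uncovered or
// transparent areas are filled with opaque black.
videoComposition.backgroundColor = UIColor.white.cgColor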
Good question. Try this:
var totalTime: CMTime = kCMTimeZero
var atTimeM: CMTime = kCMTimeZero   // insertion point for the next asset

func mergeVideoArray() {
    let mixComposition = AVMutableComposition()
    for videoAsset in arrayVideos {
        let videoTrack =
            mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                           preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            if videoAsset == arrayVideos.first {
                atTimeM = kCMTimeZero
            } else {
                atTimeM = totalTime // <-- Use the total time for all the videos seen so far.
            }
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                           of: videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: atTimeM)
            videoSize = videoTrack.naturalSize
        } catch let error as NSError {
            print("error: \(error)")
        }
        totalTime = CMTimeAdd(totalTime, videoAsset.duration) // <-- Update the total time for all videos.
    }
    ...
Instead of opacity, you can set the alpha of the video.
Explanation:
Alpha sets the opacity value for an element and all of its children, while opacity sets the opacity value only for a single component.
This worked for me. I put the first video above the second video. I wanted the first video to have an opacity of 0.7
let firstVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video,
                                                                preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let secondVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video,
                                                                 preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
// ... run MixComposition ...
let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstVideoCompositionTrack!)
firstLayerInstruction.setOpacity(0.7, at: .zero) // <----- HERE
// rest of code

Swift: extracting image from video comes out blurry even though video looks sharp?

The code below was inspired by other posts on SO and extracts an image from a video. Unfortunately, the image looks blurry even though the video looks sharp and fully in focus.
Is there something wrong with the code, or is this a natural difficulty of extracting images from videos?
func getImageFromVideo(videoURL: String) -> UIImage {
    do {
        let asset = AVURLAsset(URL: NSURL(fileURLWithPath: videoURL), options: nil)
        let imgGenerator = AVAssetImageGenerator(asset: asset)
        imgGenerator.appliesPreferredTrackTransform = true
        let cgImage = try imgGenerator.copyCGImageAtTime(CMTimeMake(0, 1), actualTime: nil)
        let image = UIImage(CGImage: cgImage)
        return image
    } catch {
        ...
    }
}
Your code is working without errors or problems. I tried it with a video and the grabbed image was not blurry.
I would try to debug this by using a different timescale for CMTime.
With CMTimeMake, the first argument is the value and the second argument is the timescale.
Your timescale is 1, so the value is in seconds. A value of 0 means the very start, a value of 1 means one second in, and so on. More precisely, it grabs the first frame at or after the designated location in the timeline.
With your current CMTime it grabs the first frame of the first second: that's the first frame of the video (even if the video is less than 1s).
With a timescale of 4, each value unit would be a quarter of a second, and so on.
Try finding a CMTime that falls right on a steady frame (it depends on your video framerate, you'll have to make tests).
For example if your video is at 24 fps, then to grab exactly one frame of video, the timescale should be at 24 (that way each value unit would represent a whole frame):
let cgImage = try imgGenerator.copyCGImageAtTime(CMTimeMake(0, 24), actualTime: nil)
On the other hand, you mention that only the first and last frames of the video are blurry. As you rightly guessed, that is probably the actual cause of your issue, and it comes from a lack of device stabilization at the start and end of the recording.
A note: the encoding of the video might also play a role. Some MPG encoders create incomplete and interpolated frames that are "recreated" when the video plays, but these frames can appear blurry when grabbed with copyCGImageAtTime. The only solution I've found for this rare problem is to grab another frame just before or just after the blurry one.
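One more thing that may be worth checking (an assumption on my part, not something from the question): by default AVAssetImageGenerator is allowed to return a frame near the requested time, often the closest keyframe, which may not be the frame you were looking at. Tightening the tolerances forces it to decode the exact frame:
// Force the generator to decode the exact requested frame
// instead of snapping to a nearby (key)frame.
imgGenerator.requestedTimeToleranceBefore = kCMTimeZero
imgGenerator.requestedTimeToleranceAfter = kCMTimeZero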

AVVideoCompositionCoreAnimationTool export performance with lots of sublayers/animations

I'm working on an app that exports CALayer animations over 2-10 second videos using AVMutableVideoComposition and AVVideoCompositionCoreAnimationTool (export via AVAssetExportSession).
There can be hundreds of CAShapeLayers in each composition, and each will have animation(s) attached to it.
let animationLayer = CALayer()
animationLayer.frame = CGRectMake(0, 0, size.width, size.height)
animationLayer.geometryFlipped = true

// Add a ton of CAShapeLayers with CABasicAnimations to animationLayer

let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRectMake(0, 0, size.width, size.height)
videoLayer.frame = CGRectMake(0, 0, size.width, size.height)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(animationLayer)

mainCompositionInst.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer,
                                                                        inLayer: parentLayer)

let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
exportSession.outputURL = finalUrl
exportSession.outputFileType = AVFileTypeQuickTimeMovie
exportSession.shouldOptimizeForNetworkUse = true
exportSession.videoComposition = mainCompositionInst
exportSession.exportAsynchronouslyWithCompletionHandler(...)
Now, this totally works. However, the composition export can be very slow when the animations are numerous (15-25 secs to export). I'm interested in any ideas to speed up the export performance.
One idea I have thus far is to do multiple composition/export passes and add a "reasonable" number of animation layers each pass. But I have a feeling that would just make it slower.
Or, perhaps export lots of smaller videos that each contain a "reasonable" number of animation layers, and then compose them all together in a final export.
Any other ideas? Is the slowness just a fact of life? I'd appreciate any insight! I'm pretty new to AVFoundation.
I went down the video composition path and didn't like the constant frame rate that AVAssetExportSession enforced, so I manually rendered my content onto AVAssetReader's output and encoded it to mov with AVAssetWriter.
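For anyone curious, that reader/writer pipeline looks roughly like the sketch below. It assumes a single video track; sourceURL, outputURL, and the commented-out render step are placeholders for your own drawing code, and error handling is minimal.
import AVFoundation

// Sketch: read decoded frames with AVAssetReader, draw onto them, re-encode with AVAssetWriter.
func reencode(sourceURL: URL, outputURL: URL) throws {
    let asset = AVURLAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    // Decode frames as BGRA pixel buffers.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(readerOutput)

    // Re-encode to H.264 at the track's natural size.
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: track.naturalSize.width,
        AVVideoHeightKey: track.naturalSize.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "render.queue")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            guard let sample = readerOutput.copyNextSampleBuffer(),
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
                // No more samples: finish up.
                writerInput.markAsFinished()
                writer.finishWriting { /* inspect writer.status / writer.error here */ }
                return
            }
            // render(onto: pixelBuffer)   // <-- placeholder: draw your layers/animations here
            let time = CMSampleBufferGetPresentationTimeStamp(sample)
            if !adaptor.append(pixelBuffer, withPresentationTime: time) {
                break   // writer error; bail out of the loop
            }
        }
    }
}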
If you have a lot of content, you could translate it to OpenGL/Metal and use the GPU to render it blazingly quickly directly onto your video frames via texture caches.
I bet the GPU wouldn't break a sweat, so you'd be limited by the video encoder speed. I'm not sure what that is - the iPhone 4S used to do 3x realtime, and that can only have improved.
There's a lot of fiddling required to do what I suggest. You could get started very quickly on the OpenGL path by using GPUImage, although a Metal version would be very cool.
