How to merge two videos with transparency - iOS

I have successfully merged video-1 and video-2 over each other, with video-2 being transparent, using the AVFoundation framework. But after merging, the bottom video (video-1) is not displayed; only video-2 is visible. However, when I use the code below:
AVMutableVideoCompositionLayerInstruction *SecondlayerInstruction =[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
[SecondlayerInstruction setOpacity:0.6 atTime:kCMTimeZero];
it sets the opacity on the video-2 layer. But the actual problem is that there is some content over the video-2 layer which is not transparent, and after applying opacity to the video-2 layer, it is also applied to that content, which should not become transparent.
I am adding two images here which describe both scenarios after setting the opacity using AVMutableVideoCompositionLayerInstruction.
As shown in the images, after merging, the transparent area is black; and when I set the opacity on the second layer, the whole of video-2 becomes transparent, but its content becomes transparent as well.
My question is: how do I play a transparent video over another video after merging? I have already checked that video-2 is transparent, as it plays properly on the Android platform.
Edit 1: I also tried setting a background color on my video composition instruction, taking a reference from this old question link, but that did not help either.
Edit 2: In AVVideoComposition.h, I found:
Indicates the background color of the composition. Solid BGRA colors
only are supported; patterns and other color refs that are not
supported will be ignored. If the background color is not specified
the video compositor will use a default backgroundColor of opaque
black. If the rendered pixel buffer does not have alpha, the alpha
value of the backgroundColor will be ignored.
I don't get what this means. Can anyone help?
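For reference, the property that header comment describes is backgroundColor on AVMutableVideoComposition; a minimal sketch of setting it (the clear color here is only an assumption about the intent):

// Minimal sketch: setting the composition's background color.
// Note: per the header comment above, the alpha of this color is ignored
// unless the rendered pixel buffer itself has an alpha channel.
let videoComposition = AVMutableVideoComposition()
videoComposition.backgroundColor = UIColor.clear.cgColor // solid BGRA colors only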

Good question. Try this:
var totalTime: CMTime = CMTimeMake(0, 0)

func mergeVideoArray() {
    let mixComposition = AVMutableComposition()
    // arrayVideos: [AVAsset] is assumed to be populated elsewhere.
    var atTimeM = kCMTimeZero
    var videoSize = CGSize.zero

    for videoAsset in arrayVideos {
        let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            if videoAsset == arrayVideos.first {
                atTimeM = kCMTimeZero
            } else {
                atTimeM = totalTime // <-- Use the total duration of all the videos inserted so far.
            }
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                           of: videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: atTimeM)
            videoSize = videoTrack.naturalSize
        } catch let error as NSError {
            print("error: \(error)")
        }
        totalTime = CMTimeAdd(totalTime, videoAsset.duration) // <-- Update the running total for all videos.
        ...

Instead of opacity, you can set the alpha of the video.
Explanation: alpha sets the opacity value for an element and all of its children, while opacity sets the opacity value only for a single component.

This worked for me. I put the first video above the second video, and I wanted the first video to have an opacity of 0.7:
let firstVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let secondVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
// ... run MixComposition ...
let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstVideoCompositionTrack!)
firstLayerInstruction.setOpacity(0.7, at: .zero) // <----- HERE
// rest of code
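For completeness, a rough sketch of how those two layer instructions are typically combined into a video composition (the second layer instruction, render size, and frame duration below are assumptions, not part of the original answer):

// Rough sketch; a plain layer instruction for the bottom (second) track.
let secondLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondVideoCompositionTrack!)

let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRange(start: .zero, duration: mixComposition.duration)
// Order matters: the first element in the array is rendered on top.
mainInstruction.layerInstructions = [firstLayerInstruction, secondLayerInstruction]

let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = [mainInstruction]
videoComposition.renderSize = secondVideoCompositionTrack!.naturalSize
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)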

Related

AVFoundation Crash on Exporting Video With Text Layer

I'm developing a video editing app for iOS in my spare time.
I just resumed work on it after several weeks of attending to other projects, and, even though I haven't made any significant changes to the code, it now crashes every time I try to export my video composition.
I checked out and built the exact same commit that I successfully uploaded to TestFlight back then (and it was working on several devices without crashing), so perhaps it is an issue with the latest Xcode / iOS SDK that I have updated to since then?
The code crashes on _xpc_api_misuse, on a thread:
com.apple.coremedia.basicvideocompositor.output
Debug Navigator:
At the time of the crash, there are 70+ threads in the debug navigator, so perhaps something is wrong and the app is using too many threads (I've never seen this many).
My app overlays a 'watermark' on the exported video using a text layer. After playing around, I discovered that the crash can be averted if I comment out the watermark code:
guard let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
    return failure(ProjectError.failedToCreateExportSession)
}
guard let documents = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true) else {
    return failure(ProjectError.temporaryOutputDirectoryNotFound)
}
let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "yyyy-MM-dd_HHmmss"
let fileName = dateFormatter.string(from: Date())
let fileExtension = "mov"
let fileURL = documents.appendingPathComponent(fileName).appendingPathExtension(fileExtension)

exporter.outputURL = fileURL
exporter.outputFileType = AVFileType.mov
exporter.shouldOptimizeForNetworkUse = true // check if needed

// OFFENDING BLOCK (commenting out averts crash)
if addWaterMark {
    let frame = CGRect(origin: .zero, size: videoComposition.renderSize)
    let watermark = WatermarkLayer(frame: frame)
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = frame
    videoLayer.frame = frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(watermark)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}
// END OF OFFENDING BLOCK

exporter.videoComposition = videoComposition
exporter.exportAsynchronously {
// etc.
The code for the watermark layer is:
class WatermarkLayer: CATextLayer {

    private let defaultFontSize: CGFloat = 48
    private let rightMargin: CGFloat = 10
    private let bottomMargin: CGFloat = 10

    init(frame: CGRect) {
        super.init()
        guard let appName = Bundle.main.infoDictionary?["CFBundleName"] as? String else {
            fatalError("!!!")
        }
        self.foregroundColor = CGColor.srgb(r: 255, g: 255, b: 255, a: 0.5)
        self.backgroundColor = CGColor.clear
        self.string = String(format: String.watermarkFormat, appName)
        self.font = CTFontCreateWithName(String.watermarkFontName as CFString, defaultFontSize, nil)
        self.fontSize = defaultFontSize
        self.shadowOpacity = 0.75
        self.alignmentMode = .right
        self.frame = frame
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented. Use init(frame:) instead.")
    }

    override func draw(in ctx: CGContext) {
        let height = self.bounds.size.height
        let fontSize = self.fontSize
        let yDiff = (height - fontSize) - fontSize / 10 - bottomMargin // Bottom (minus margin)

        ctx.saveGState()
        ctx.translateBy(x: -rightMargin, y: yDiff)
        super.draw(in: ctx)
        ctx.restoreGState()
    }
}
Any ideas what could be happening?
Perhaps my code is doing something wrong that somehow 'got a pass' in a previous SDK, due to some Apple bug that got fixed or an implementation 'hole' that got plugged?
UPDATE: I downloaded Ray Wenderlich's sample project for video editing and tried to add 'subtitles' to a video (I had to tweak the too-old project so that it would compile under Xcode 11).
Lo and behold, it crashes in the exact same way.
UPDATE 2: I have now tried on the device (iPhone 8 running the latest iOS 13.5) and it works, no crash. The Simulators for iOS 13.5 do crash, however. When I originally posted the question (iOS 13.4?), I'm sure it was crashing on both the device and the Simulator.
I am downloading the iOS 12.0 Simulators to check, but it's still a few gigabytes away...
I'm having the same issue. It started after iOS 13.4 and only shows up on the simulator (the device is working fine). If I comment out parentLayer.addSublayer(videoLayer), then the app doesn't crash, but the exported video isn't the desired output.
This fixed it for me in iOS 14.5:
public static var isSimulator: Bool {
    #if targetEnvironment(simulator)
    true
    #else
    false
    #endif
}

// ...

let export = AVAssetExportSession(
    asset: composition,
    presetName: isSimulator ? AVAssetExportPresetPassthrough : AVAssetExportPresetHighestQuality
)
edit: Doesn't actually render like on a real device though. Edits are simply ignored...
I met the same issue, but on the Simulator (Xcode 12.4 (12D4e)) only.
After some research, I found this crash is caused by AVVideoCompositionCoreAnimationTool's
+videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:inLayer:
I fixed it by replacing it with the method below (but we then need to handle instruction.layerInstructions accordingly, as shown):
+videoCompositionCoreAnimationToolWithAdditionalLayer:asTrackID:
Below is sample code that works on both a real device and the simulator (as the OP didn't tag Swift explicitly, I'll just copy my Objective-C sample here):
...
// Prepare watermark layer
CALayer *watermarkLayer = ...;
CMPersistentTrackID watermarkLayerTrackID = [asset unusedTrackID];
// !!! NOTE#01: Use as additional layer here instead of animation layer.
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:watermarkLayer asTrackID:watermarkLayerTrackID];
// Create video composition instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
// - Watermark layer instruction
// !!! NOTE#02: Make this instruction track watermark layer by the `trackID`.
AVMutableVideoCompositionLayerInstruction *watermarkLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
watermarkLayerInstruction.trackID = watermarkLayerTrackID;
// - Video track layer instruction
AVAssetTrack *videoTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
AVMutableVideoCompositionLayerInstruction *videoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
// Watermark layer above video layer here.
instruction.layerInstructions = @[
    watermarkLayerInstruction,
    videoLayerInstruction,
];
videoComposition.instructions = @[instruction];
// Export the video w/ watermark.
AVAssetExportSession *exportSession = ...;
...
exportSession.videoComposition = videoComposition;
...
And by the way, if you just need to add an image as a watermark, another solution, using AVVideoComposition's
-videoCompositionWithAsset:applyingCIFiltersWithHandler:
also works well on both a real device and the simulator, but I tested it and found it slower. This way seems more suitable for video blending/filtering.
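For reference, a rough Swift sketch of that filter-based approach (the "watermark.png" asset name and the use of CISourceOverCompositing are assumptions, not part of the original answer):

// Rough sketch: composite an assumed watermark image over every source frame.
let watermarkImage = CIImage(image: UIImage(named: "watermark.png")!)!
let filteredComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let overlay = CIFilter(name: "CISourceOverCompositing")!
    overlay.setValue(watermarkImage, forKey: kCIInputImageKey)                 // foreground
    overlay.setValue(request.sourceImage, forKey: kCIInputBackgroundImageKey)  // current video frame
    request.finish(with: overlay.outputImage ?? request.sourceImage, context: nil)
})
// Then assign it to the export session: exportSession.videoComposition = filteredComposition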
#if targetEnvironment(simulator)
// Adding layers during export crashes on the simulator, as it expects an opaque background.
#else
if let animationTool = getAnimationTool() {
videoComposition.animationTool = animationTool
}
#endif
Here getAnimationTool() returns an AVVideoCompositionCoreAnimationTool.
It can wrap either an image layer or a text layer, but it should return an AVVideoCompositionCoreAnimationTool.
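A minimal sketch of what such a getAnimationTool() helper could look like (the text contents, sizes, and the surrounding videoComposition variable are assumptions):

// Minimal sketch of the helper described above; the layer contents are assumptions.
func getAnimationTool() -> AVVideoCompositionCoreAnimationTool? {
    let frame = CGRect(origin: .zero, size: videoComposition.renderSize)

    let textLayer = CATextLayer()
    textLayer.string = "Watermark"
    textLayer.fontSize = 36
    textLayer.alignmentMode = .right
    textLayer.frame = frame

    let videoLayer = CALayer()
    videoLayer.frame = frame

    let parentLayer = CALayer()
    parentLayer.frame = frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(textLayer)

    return AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}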

Rendering Alpha Channel Video Over Background (AVFoundation, Swift)

Recently, I have been following this tutorial, which has taught me how to play a video with alpha channel in iOS. This has been working great to build an AVPlayer over something like a UIImageView, which allows me to make it look like my video (with the alpha channel removed) is playing on top of the image.
Using this approach, I now need to find a way to do this while rendering/saving the video to the user's device. This is my code to generate the alpha video that plays in the AVPlayer:
let videoSize = CGSize(width: playerItem.presentationSize.width, height: playerItem.presentationSize.height / 2.0)
let composition = AVMutableVideoComposition(asset: playerItem.asset, applyingCIFiltersWithHandler: { request in
    let sourceRect = CGRect(origin: .zero, size: videoSize)
    let alphaRect = sourceRect.offsetBy(dx: 0, dy: sourceRect.height)
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: alphaRect)
        .transformed(by: CGAffineTransform(translationX: 0, y: -sourceRect.height))
    filter.maskImage = request.sourceImage.cropped(to: sourceRect)
    return request.finish(with: filter.outputImage!, context: nil)
(That's been truncated a bit for ease, but I can confirm this approach properly returns an AVVideoComposition that I can play in an AVPlayer.)
I recognize that I can use an AVVideoComposition with an AVAssetExportSession, but this only allows me to render my alpha video over a black background (and not over an image or video, as I need).
Is there a way to overlay the now "background-removed" alpha-channel video on top of another video and export the result?

AVVideoComposition with CIFilters crash

I am creating an AVVideoComposition with CIFilters this way:
videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { [weak self] request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()
    let output: CIImage
    if let filteredOutput = self?.runFilters(source, filters: filters)?.cropped(to: request.sourceImage.extent) {
        output = filteredOutput
    } else {
        output = source
    }
    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
And then, to correctly handle rotation, I create a passthrough instruction that sets the identity transform on the passthrough layer.
let passThroughInstruction = AVMutableVideoCompositionInstruction()
passThroughInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: asset.duration)
let passThroughLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
passThroughLayer.setTransform(CGAffineTransform.identity, at: CMTime.zero)
passThroughInstruction.layerInstructions = [passThroughLayer]
videoComposition.instructions = [passThroughInstruction]
The problem is it crashes with the error:
'*** -[AVCoreImageFilterCustomVideoCompositor startVideoCompositionRequest:] Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction'
My issue is that if I do not specify this passThroughInstruction, the output is incorrect when the input asset's video track contains a preferredTransform that specifies a 90-degree rotation. How do I use a video composition with Core Image filters in a way that correctly handles the preferredTransform of the video track?
EDIT: The question looks similar but is different from other questions that involve playback. In my case, playback is fine but it is rendering that creates distorted video.
OK, I found the real issue. The videoComposition returned by the AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:) initializer implicitly creates a composition instruction that physically rotates the CIImages when a rotation transform is applied through preferredTransform on the video track. Whether it should be doing that is debatable, as the preferredTransform is normally applied at the player level. As a workaround, in addition to what is suggested in this answer, I had to adjust the width and height passed via AVVideoWidthKey and AVVideoHeightKey in the videoSettings dictionary passed to AVAssetReader/AVAssetWriter.
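A minimal sketch of that adjustment, assuming a 90-degree preferredTransform (the codec choice and the isPortrait check are assumptions):

// Rough sketch: swap width/height in the writer's videoSettings when the
// track's preferredTransform rotates the video by 90 degrees.
let naturalSize = videoTrack.naturalSize
let t = videoTrack.preferredTransform
let isPortrait = abs(t.b) == 1 && abs(t.c) == 1
let outputSize = isPortrait
    ? CGSize(width: naturalSize.height, height: naturalSize.width)
    : naturalSize

let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: outputSize.width,
    AVVideoHeightKey: outputSize.height
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)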

Is there a way to display camera images without using AVCaptureVideoPreviewLayer?

Is there a way to display camera images without using AVCaptureVideoPreviewLayer?
I want to do screen capture, but I can not do it.
session = AVCaptureSession()
camera = AVCaptureDevice.default(
    AVCaptureDevice.DeviceType.builtInWideAngleCamera,
    for: AVMediaType.video,
    position: .front) // position: .front

do {
    input = try AVCaptureDeviceInput(device: camera)
} catch let error as NSError {
    print(error)
}

if session.canAddInput(input) {
    session.addInput(input)
}

let previewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraView.backgroundColor = UIColor.red
previewLayer.frame = cameraView.bounds
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
cameraView.layer.addSublayer(previewLayer)

session.startRunning()
I am currently trying to broadcast a screen capture, in order to composite the camera image and some UIViews. However, if I use AVCaptureVideoPreviewLayer, the screen capture cannot be done and the camera image is not displayed in it. Therefore, I want to display the camera image in a way that allows screen capture to be performed.
Generally, views that are drawn directly by the GPU may not be redrawn on the CPU. This includes situations like OpenGL content or these preview layers.
The "screen capture" redraws the screen into a new context on the CPU, which obviously misses the GPU part.
You should try playing around with adding some outputs to the session, which will give you images, or rather CMSampleBuffer shots, that can be used to generate an image.
There are plenty of ways of doing this, but you will most likely need to go a step lower. You can add an output to your session to receive samples directly. Doing this takes a bit of code, so please refer to some other posts like this one. The point is that you will have a didOutputSampleBuffer method that feeds you CMSampleBufferRef objects, which can be used to construct pretty much anything in terms of images.
Now, in your case, I assume you will be aiming to get a UIImage from the sample buffer. To do so you may again need a bit of code, so refer to some other post like this one.
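As a rough sketch of both steps (the queue label and the CIContext conversion are just one way to do it, not the only one):

// Rough sketch: receive frames from the capture session...
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}

// ...and convert each CMSampleBuffer to a UIImage in the delegate
// (AVCaptureVideoDataOutputSampleBufferDelegate):
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
    let image = UIImage(cgImage: cgImage)
    // Hand `image` off to the main queue to update an image view, take a snapshot, etc.
}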
To put it all together, you could as well simply use an image view and drop the preview layer. As you get each sample buffer, you can create an image and update your image view. I am not sure what the performance of this would be, but I discourage you from doing it. If the image itself is enough for your case, then you don't need a view snapshot at all.
But IF you do:
On snapshot, create this image. Then overlay your preview layer with an image view that is showing this generated image (add it as a subview). Then create the snapshot and remove the image view, all in a single chunk:
func snapshot() -> UIImage? {
    let imageView = UIImageView(frame: self.previewPanelView.bounds)
    imageView.image = self.imageFromLatestSampleBuffer()
    imageView.contentMode = .scaleAspectFill // Not sure
    self.previewPanelView.addSubview(imageView)
    let image = createSnapshot()
    imageView.removeFromSuperview()
    return image
}
Let us know how things turn out and what you tried, what did or did not work.

Swift: extracting image from video comes out blurry even though video looks sharp?

The code below was inspired by other posts on SO and extracts an image from a video. Unfortunately, the image looks blurry even though the video looks sharp and fully in focus.
Is there something wrong with the code, or is this a natural difficulty of extracting images from videos?
func getImageFromVideo(videoURL: String) -> UIImage {
    do {
        let asset = AVURLAsset(URL: NSURL(fileURLWithPath: videoURL), options: nil)
        let imgGenerator = AVAssetImageGenerator(asset: asset)
        imgGenerator.appliesPreferredTrackTransform = true
        let cgImage = try imgGenerator.copyCGImageAtTime(CMTimeMake(0, 1), actualTime: nil)
        let image = UIImage(CGImage: cgImage)
        return image
    } catch {
        ...
    }
}
Your code is working without errors or problems. I tried it with a video and the grabbed image was not blurry.
I would try to debug this by using a different timescale for CMTime.
With CMTimeMake, the first argument is the value and the second argument is the timescale.
Your timescale is 1, so the value is in seconds. A value of 0 means the 1st second, a value of 1 means the 2nd second, etc. More precisely, it means the first frame after the designated location in the timeline.
With your current CMTime it grabs the first frame of the first second: that's the first frame of the video (even if the video is less than 1s).
With a timescale of 4, the value would be 1/4th of a second. Etc.
Try finding a CMTime that falls right on a steady frame (it depends on your video's frame rate; you'll have to experiment).
For example if your video is at 24 fps, then to grab exactly one frame of video, the timescale should be at 24 (that way each value unit would represent a whole frame):
let cgImage = try imgGenerator.copyCGImageAtTime(CMTimeMake(0, 24), actualTime: nil)
On the other hand, you mention that only the first and last frames of the video are blurry. As you rightly guessed, it's probably the actual cause of your issue and is caused by a lack of device stabilization.
A note: the encoding of the video might also play a role. Some MPG encoders create incomplete and interpolated frames that are "recreated" when the video plays, but these frames can appear blurry when grabbed with copyCGImageAtTime. The only solution I've found for this rare problem is to grab another frame just before or just after the blurry one.
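A small sketch of that workaround, in the same style as the question's code (the 0.1-second offset and timescale are arbitrary; this would go inside the do block above):

// Sketch: tighten the tolerances and grab a frame slightly after the start,
// to avoid landing on an interpolated/incomplete frame.
imgGenerator.requestedTimeToleranceBefore = kCMTimeZero
imgGenerator.requestedTimeToleranceAfter = kCMTimeZero
let nearbyTime = CMTimeMakeWithSeconds(0.1, 600) // arbitrary small offset from the start
let cgImage = try imgGenerator.copyCGImageAtTime(nearbyTime, actualTime: nil)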
