I'm developing a video editing app for iOS in my spare time.
I just resumed work on it after several weeks of attending to other projects, and, even though I haven't made any significant changes to the code, it now crashes every time I try to export my video composition.
I checked out and built the exact same commit that I successfully uploaded to TestFlight back then (and it was working on several devices without crashing), so perhaps it's an issue with the latest Xcode / iOS SDK that I have updated since then?
The code crashes with _xpc_api_misuse, on the thread:
com.apple.coremedia.basicvideocompositor.output
Debug Navigator:
At the time of the crash, there are 70+ threads in the Debug Navigator, so perhaps something is wrong and the app is using too many threads (I've never seen this many).
My app overlays a 'watermark' on the exported video using a text layer. After playing around, I discovered that the crash can be averted if I comment out the watermark code:
guard let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
    return failure(ProjectError.failedToCreateExportSession)
}

guard let documents = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true) else {
    return failure(ProjectError.temporaryOutputDirectoryNotFound)
}

let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "yyyy-MM-dd_HHmmss"
let fileName = dateFormatter.string(from: Date())
let fileExtension = "mov"
let fileURL = documents.appendingPathComponent(fileName).appendingPathExtension(fileExtension)

exporter.outputURL = fileURL
exporter.outputFileType = AVFileType.mov
exporter.shouldOptimizeForNetworkUse = true // check if needed

// OFFENDING BLOCK (commenting out averts crash)
if addWaterMark {
    let frame = CGRect(origin: .zero, size: videoComposition.renderSize)
    let watermark = WatermarkLayer(frame: frame)

    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = frame
    videoLayer.frame = frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(watermark)

    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}
// END OF OFFENDING BLOCK

exporter.videoComposition = videoComposition

exporter.exportAsynchronously {
    // etc.
The code for the watermark layer is:
class WatermarkLayer: CATextLayer {
    private let defaultFontSize: CGFloat = 48
    private let rightMargin: CGFloat = 10
    private let bottomMargin: CGFloat = 10

    init(frame: CGRect) {
        super.init()
        guard let appName = Bundle.main.infoDictionary?["CFBundleName"] as? String else {
            fatalError("!!!")
        }
        self.foregroundColor = CGColor.srgb(r: 255, g: 255, b: 255, a: 0.5)
        self.backgroundColor = CGColor.clear
        self.string = String(format: String.watermarkFormat, appName)
        self.font = CTFontCreateWithName(String.watermarkFontName as CFString, defaultFontSize, nil)
        self.fontSize = defaultFontSize
        self.shadowOpacity = 0.75
        self.alignmentMode = .right
        self.frame = frame
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented. Use init(frame:) instead.")
    }

    override func draw(in ctx: CGContext) {
        let height = self.bounds.size.height
        let fontSize = self.fontSize
        let yDiff = (height - fontSize) - fontSize / 10 - bottomMargin // Bottom (minus margin)

        ctx.saveGState()
        ctx.translateBy(x: -rightMargin, y: yDiff)
        super.draw(in: ctx)
        ctx.restoreGState()
    }
}
Any ideas what could be happening?
Perhaps my code is doing something wrong that somehow 'got a pass' in a previous SDK, due to some Apple bug that got fixed or an implementation 'hole' that got plugged?
UPDATE: I downloaded Ray Wenderlich's sample project for video editing and tried to add 'subtitles' to a video (I had to tweak the too-old project so that it would compile under Xcode 11).
Lo and behold, it crashes in the exact same way.
UPDATE 2: I have now tried on a device (iPhone 8 running the latest iOS 13.5) and it works with no crash. The iOS 13.5 Simulators do crash, however. When I originally posted the question (iOS 13.4?), I'm sure it was crashing on both device and Simulator.
I am downloading the iOS 12.0 Simulators to check, but it's still a few gigabytes away...
I'm having the same issue. It started after iOS 13.4 and only occurs on the simulator (the device works fine). If I comment out parentLayer.addSublayer(videoLayer), the app doesn't crash, but the exported video isn't the desired output.
This fixed it for me in iOS 14.5:
public static var isSimulator: Bool {
    #if targetEnvironment(simulator)
    true
    #else
    false
    #endif
}

// ...

let export = AVAssetExportSession(
    asset: composition,
    presetName: isSimulator ? AVAssetExportPresetPassthrough : AVAssetExportPresetHighestQuality
)
Edit: It doesn't actually render like on a real device though; the edits are simply ignored...
I met the same issue, but only on the Simulator (Xcode 12.4 (12D4e)).
After some research, I found that this crash is caused by AVVideoCompositionCoreAnimationTool's
+videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:inLayer:
and I fixed it by replacing it with the method below (but we then need to handle instruction.layerInstructions accordingly):
+videoCompositionCoreAnimationToolWithAdditionalLayer:asTrackID:
Below is sample code that works on both a real device and the Simulator (as the OP didn't tag Swift explicitly, I'll just copy my Objective-C sample here):
...
// Prepare watermark layer
CALayer *watermarkLayer = ...;
CMPersistentTrackID watermarkLayerTrackID = [asset unusedTrackID];

// !!! NOTE#01: Use as additional layer here instead of animation layer.
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:watermarkLayer asTrackID:watermarkLayerTrackID];

// Create video composition instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

// - Watermark layer instruction
// !!! NOTE#02: Make this instruction track the watermark layer by its `trackID`.
AVMutableVideoCompositionLayerInstruction *watermarkLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
watermarkLayerInstruction.trackID = watermarkLayerTrackID;

// - Video track layer instruction
AVAssetTrack *videoTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
AVMutableVideoCompositionLayerInstruction *videoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

// Watermark layer goes above the video layer here.
instruction.layerInstructions = @[
    watermarkLayerInstruction,
    videoLayerInstruction,
];
videoComposition.instructions = @[instruction];

// Export the video w/ watermark.
AVAssetExportSession *exportSession = ...;
...
exportSession.videoComposition = videoComposition;
...
And by the way, if you just need to add an image as a watermark, another solution, using AVVideoComposition's
-videoCompositionWithAsset:applyingCIFiltersWithHandler:
also works well on both a real device and the Simulator, but I tested it and found it slower. This approach seems better suited to video blending/filtering.
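For reference, a minimal Swift sketch of that filter-based approach might look like this (watermarkImage is a placeholder, and the watermark is simply composited at the frame's origin without any positioning transform):

import AVFoundation
import CoreImage
import UIKit

// Rough sketch only: `watermarkImage` is a placeholder UIImage; in practice you would
// also translate the watermark CIImage into the corner where you want it.
func makeFilterComposition(for asset: AVAsset, watermarkImage: UIImage) -> AVMutableVideoComposition {
    let watermark = CIImage(image: watermarkImage)

    return AVMutableVideoComposition(asset: asset) { request in
        guard let watermark = watermark,
              let overlayFilter = CIFilter(name: "CISourceOverCompositing") else {
            request.finish(with: request.sourceImage, context: nil)
            return
        }
        // Composite the watermark over each source frame.
        overlayFilter.setValue(watermark, forKey: kCIInputImageKey)
        overlayFilter.setValue(request.sourceImage, forKey: kCIInputBackgroundImageKey)

        let output = overlayFilter.outputImage?.cropped(to: request.sourceImage.extent) ?? request.sourceImage
        request.finish(with: output, context: nil)
    }
}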
#if targetEnvironment(simulator)
// Adding layers during export crashes on the simulator, as it expects an opaque background.
#else
if let animationTool = getAnimationTool() {
    videoComposition.animationTool = animationTool
}
#endif
Here getAnimationTool() returns an AVVideoCompositionCoreAnimationTool. It can be built around either an image layer or a text layer, but it should always return an AVVideoCompositionCoreAnimationTool.
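For illustration, a helper along those lines might look roughly like this (the parameters are my own assumption; the original getAnimationTool() may take no arguments and build the overlay internally):

import AVFoundation
import UIKit

// Sketch of a getAnimationTool()-style helper: builds the parent/video/overlay layer
// hierarchy and wraps it in an animation tool. The overlay can be a CATextLayer or an
// image-backed CALayer.
func getAnimationTool(renderSize: CGSize, overlay: CALayer) -> AVVideoCompositionCoreAnimationTool {
    let frame = CGRect(origin: .zero, size: renderSize)

    let videoLayer = CALayer()
    videoLayer.frame = frame

    let parentLayer = CALayer()
    parentLayer.frame = frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlay)

    return AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}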
Related
I'm having an issue with displaying a bounding box around a recognized object using Core ML & Vision.
The horizontal detection seems to be working correctly; vertically, however, the box is too tall, goes over the top edge of the video, doesn't go all the way to the bottom, and doesn't follow the motion of the camera correctly. You can see the issue here: https://imgur.com/Sppww8T
This is how video data output is initialized:
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
videoDataOutput.setSampleBufferDelegate(self, queue: dataOutputQueue!)
self.videoDataOutput = videoDataOutput
session.addOutput(videoDataOutput)
let c = videoDataOutput.connection(with: .video)
c?.videoOrientation = .portrait
I've also tried other video orientations, without much success.
Performing the vision request:
let handler = VNImageRequestHandler(cvPixelBuffer: image, options: [:])
try? handler.perform(vnRequests)
And finally, once the request is processed (viewRect is set to the size of the video view, 812x375; I know the video layer itself is a bit shorter, but that's not the issue here):
let observationRect = VNImageRectForNormalizedRect(observation.boundingBox, Int(viewRect.width), Int(viewRect.height))
I've also tried doing something like this (with more issues):
var observationRect = observation.boundingBox
observationRect.origin.y = 1.0 - observationRect.origin.y
observationRect = videoPreviewLayer.layerRectConverted(fromMetadataOutputRect: observationRect)
I've tried to cut out as much of what I deemed to be irrelevant code as possible.
I've actually come across a similar issue using Apple's sample code, where the bounding box wouldn't vertically go around objects as expected: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture Maybe that means there is some issue with the API?
I use something like this:
let width = view.bounds.width
let height = width * 16 / 9
let offsetY = (view.bounds.height - height) / 2
let scale = CGAffineTransform.identity.scaledBy(x: width, y: height)
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -height - offsetY)
let rect = prediction.boundingBox.applying(scale).applying(transform)
This assumes portrait orientation, a 16:9 aspect ratio, and imageCropAndScaleOption = .scaleFill.
Credits: The transform code was taken from this repo: https://github.com/Willjay90/AppleFaceDetection
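For completeness, a minimal usage sketch (my own addition, not from the repo above): assuming a CAShapeLayer overlaid on the preview view, the computed rect can be drawn like this.

import UIKit
import Vision

// `boundingBoxLayer` is assumed to be a CAShapeLayer added on top of the preview view,
// and `prediction` is any VNDetectedObjectObservation.
func updateOverlay(for prediction: VNDetectedObjectObservation,
                   on boundingBoxLayer: CAShapeLayer,
                   in view: UIView) {
    let width = view.bounds.width
    let height = width * 16 / 9
    let offsetY = (view.bounds.height - height) / 2

    // Same transform as above: scale the normalized box, then flip the y-axis.
    let scale = CGAffineTransform.identity.scaledBy(x: width, y: height)
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -height - offsetY)
    let rect = prediction.boundingBox.applying(scale).applying(transform)

    boundingBoxLayer.path = UIBezierPath(rect: rect).cgPath
}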
I am creating an AVVideoComposition with CIFilters this way:
videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { [weak self] request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()

    let output: CIImage
    if let filteredOutput = self?.runFilters(source, filters: filters)?.cropped(to: request.sourceImage.extent) {
        output = filteredOutput
    } else {
        output = source
    }

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
And then, to correctly handle rotation, I create a passthrough instruction that sets an identity transform on the passthrough layer:
let passThroughInstruction = AVMutableVideoCompositionInstruction()
passThroughInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: asset.duration)
let passThroughLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
passThroughLayer.setTransform(CGAffineTransform.identity, at: CMTime.zero)
passThroughInstruction.layerInstructions = [passThroughLayer]
videoComposition.instructions = [passThroughInstruction]
The problem is it crashes with the error:
'*** -[AVCoreImageFilterCustomVideoCompositor startVideoCompositionRequest:] Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction'
My issue is that if I do not specify this passThroughInstruction, the output is incorrect when the input asset's video track contains a preferredTransform that specifies a rotation by 90 degrees. How do I use a video composition with Core Image filters that correctly handles the video track's preferredTransform?
EDIT: This question looks similar to, but is different from, other questions that involve playback. In my case, playback is fine; it is rendering that creates the distorted video.
OK, I found the real issue. The videoComposition returned by AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:) implicitly creates a composition instruction that physically rotates the CIImages when a rotation transform is applied through preferredTransform on the video track. Whether it should do that is debatable, as the preferredTransform is applied at the player level. As a workaround, in addition to what is suggested in this answer, I had to adjust the width and height passed via AVVideoWidthKey and AVVideoHeightKey in the videoSettings passed to AVAssetReader/Writer.
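For illustration, a rough sketch of that width/height adjustment (my own code, assuming H.264 output settings for the AVAssetWriterInput) might look like:

import AVFoundation

// When the track's preferredTransform rotates by 90°/270°, swap the dimensions handed
// to the writer's video settings so the rendered output isn't distorted.
func writerVideoSettings(for videoTrack: AVAssetTrack) -> [String: Any] {
    let t = videoTrack.preferredTransform
    let isRotated = (t.a == 0 && t.d == 0) // 90° or 270° rotation in the preferredTransform
    let natural = videoTrack.naturalSize
    let renderSize = isRotated ? CGSize(width: natural.height, height: natural.width) : natural

    return [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(renderSize.width),
        AVVideoHeightKey: Int(renderSize.height)
    ]
}

The resulting dictionary would then be passed as the outputSettings of the video AVAssetWriterInput.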
A few things I want to establish first:
This works properly on multiple iPhones (iOS 10.3 & 11.x)
This works properly on any iPad simulator (iOS 11.x)
What I am left with is a situation where, when I run the following code (condensed from my application to remove unrelated code), I get an image that is upside down (landscape) or rotated 90 degrees (portrait). Viewing the video that is processed just prior to this step shows that it is properly oriented. All testing has been done on iOS 11.2.5.
* UPDATED *
I did some further testing and found a few more interesting items:
If the video was imported from a phone, or an external source, it is properly processed
If the video was recorded on the iPad in portrait orientation, then the reader extracts it rotated 90 degrees left
If the video was recorded on the iPad in landscape orientation, then the reader extracts it upside down
In the two scenarios above, UIImage reports an orientation of portrait
A condensed version of the code involved:
import UIKit
import AVFoundation

let asset = ...

let assetReader = try? AVAssetReader(asset: asset)

if let assetTrack = asset.tracks(withMediaType: .video).first,
   let assetReader = assetReader {

    let assetReaderOutputSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA)
    ]
    let assetReaderOutput = AVAssetReaderTrackOutput(track: assetTrack,
                                                     outputSettings: assetReaderOutputSettings)
    assetReaderOutput.alwaysCopiesSampleData = false
    assetReader.add(assetReaderOutput)

    var images = [UIImage]()

    assetReader.startReading()

    while let sample = assetReaderOutput.copyNextSampleBuffer() {
        if let image = sample.uiImage { // The image is inverted here
            images.append(image)
        }
    }

    // Continue here with array of images...
}
After some exploration, I came across the following that allowed me to obtain the video's orientation from the AVAssetTrack:
let transform = assetTrack.preferredTransform
let radians = atan2(transform.b, transform.a)
Once I had that, I was able to convert it to degrees:
let degrees = (radians * 180.0) / .pi
Then, using a switch statement I could determine how to rotate the image:
switch Int(degrees) {
case -90, 90:
    // Rotate accordingly
    break
case 180:
    // Flip
    break
default:
    // Do nothing
    break
}
I have successfully merged video-1 and video-2 over each other, with video-2 being transparent, using the AVFoundation framework. However, after merging, the video underneath (video-1) is not displayed; only video-2 is visible. When I use the code below:
AVMutableVideoCompositionLayerInstruction *SecondlayerInstruction =[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
[SecondlayerInstruction setOpacity:0.6 atTime:kCMTimeZero];
it sets opacity on the video-2 layer. But the actual problem is that there is some content over the video-2 layer which is not transparent, and after applying opacity to the video-2 layer, it is also applied to that content, which should not become transparent.
I am adding two images here which describe both scenarios after setting opacity using AVMutableVideoCompositionLayerInstruction.
As shown in the images, after merging, the transparent area is black; and when I set opacity on the second layer, the whole of video-2 becomes transparent, but so does the content.
My question is: how do I play a transparent video over another video after merging? I have already checked that video-2 is transparent, as it plays properly on the Android platform.
Edited-1: I also tried to set a background color on my videoCompositionInstruction, which also did not help (taking reference from this old question link).
Edited-2: In AVVideoComposition.h, I found:
Indicates the background color of the composition. Solid BGRA colors only are supported; patterns and other color refs that are not supported will be ignored. If the background color is not specified, the video compositor will use a default backgroundColor of opaque black. If the rendered pixel buffer does not have alpha, the alpha value of the backgroundColor will be ignored.
What does this mean? I didn't get it. Can anyone help?
Good question! Try this:
var totalTime: CMTime = CMTimeMake(0, 0)

func mergeVideoArray() {
    let mixComposition = AVMutableComposition()
    for videoAsset in arrayVideos {
        let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            if videoAsset == arrayVideos.first {
                atTimeM = kCMTimeZero
            } else {
                atTimeM = totalTime // <-- Use the total time for all the videos seen so far.
            }
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                           of: videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: atTimeM)
            videoSize = videoTrack.naturalSize
        } catch let error as NSError {
            print("error: \(error)")
        }
        totalTime += videoAsset.duration // <-- Update the total time for all videos.
        ...
Instead of opacity, you can set the alpha of the video.
Explanation:
Alpha sets the opacity value for an element and all of its children, while opacity sets the opacity value only for a single component.
This worked for me. I put the first video above the second video and wanted the first video to have an opacity of 0.7:
let firstVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let secondVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
// ... run MixComposition ...
let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstVideoCompositionTrack!)
firstLayerInstruction.setOpacity(0.7, at: .zero) // <----- HERE
// rest of code
I am trying to change the local video resolution in WebRTC. I used the following method to create the local video track:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    RTCMediaConstraints *mediaConstraints = [[RTCMediaConstraints alloc] initWithMandatoryConstraints:nil optionalConstraints:nil];
    RTCAVFoundationVideoSource *source =
        [self.factory avFoundationVideoSourceWithConstraints:mediaConstraints];
    localVideoTrack =
        [self.factory videoTrackWithSource:source
                                   trackId:@"ARDAMSv0"];
    return localVideoTrack;
}
I set the mandatory constraints as follows, but it doesn't work:
#{#"minFrameRate":#"20",#"maxFrameRate":#"30",#"maxWidth":#"320",#"minWidth":#"240",#"maxHeight":#"320",#"minHeight":#"240"};
Could anyone help me?
The latest SDK builds no longer provide a factory method to build a capturer with constraints. The solution should be based on AVCaptureSession instead, and WebRTC will take care of CPU and bandwidth utilization.
For this, you need to keep a reference to the RTCVideoSource that was passed to the capturer. It has this method:
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;
Calling this function will cause frames to be scaled down to the requested resolution. Frames will also be cropped to match the requested aspect ratio, and dropped to match the requested fps. The requested aspect ratio is orientation-agnostic and will be adjusted to maintain the input orientation, so it doesn't matter whether, e.g., 1280x720 or 720x1280 is requested.
var localVideoSource: RTCVideoSource?
You may create your video track this way:
func createVideoTrack() -> RTCVideoTrack? {
    var source: RTCVideoSource
    if let localSource = self.localVideoSource {
        source = localSource
    } else {
        source = self.factory.videoSource()
        self.localVideoSource = source
    }

    let devices = RTCCameraVideoCapturer.captureDevices()
    if let camera = devices.first,
       // here you can decide to use the front or back camera
       let format = RTCCameraVideoCapturer.supportedFormats(for: camera).last,
       // here you have a bunch of formats from tiny up to 4K; find the first that fits your needs,
       // i.e. if you use max 1280x720, then there is no need to pick 4K
       let fps = format.videoSupportedFrameRateRanges.first?.maxFrameRate
       // or take something in between min...max, i.e. 24 fps and not 30, to reduce GPU/CPU use
    {
        let intFps = Int(fps)
        let capturer = RTCCameraVideoCapturer(delegate: source)
        capturer.startCapture(with: camera, format: format, fps: intFps)
        let videoTrack = self.factory.videoTrack(with: source, trackId: WebRTCClient.trackIdVideo)
        return videoTrack
    }
    return nil
}
And when you need to change the resolution, you can tell this video source to do the "scaling":
func changeResolution(w: Int32, h: Int32) -> Bool {
    guard let videoSource = self.localVideoSource else {
        return false
    }
    // TODO: decide fps
    videoSource.adaptOutputFormat(toWidth: w, height: h, fps: 30)
    return true
}
The camera will still capture frames at the resolution provided in the format passed to startCapture. And if you care about resource utilization, you can also use the following methods prior to adaptOutputFormat:
// Stops the capture session asynchronously and notifies callback on completion.
- (void)stopCaptureWithCompletionHandler:(nullable void (^)(void))completionHandler;
// Starts the capture session asynchronously.
- (void)startCaptureWithDevice:(AVCaptureDevice *)device format:(AVCaptureDeviceFormat *)format fps:(NSInteger)fps;