I'm attempting to combine an image and a video. I have them combining and exporting, however the result is rotated sideways.
Sorry for the bulk code paste. I've seen answers about applying a transform to compositionVideoTrack.preferredTransform, but that does nothing. Adding a transform to an AVMutableVideoCompositionInstruction does nothing either.
I feel like this area is where things start to go wrong, right here:
// I feel like this loading here is the problem
let videoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
// because it makes our parentLayer and videoLayer sizes wrong
let videoSize = videoTrack.naturalSize
// this is returning 1920x1080 (the landscape size), which is why the output ends up rotated
print("\(videoSize.width) , \(videoSize.height)")
So from here on our frame sizes are wrong for the rest of the method. When we then go to create the overlay image layer, its frame is not correct:
let aLayer = CALayer()
aLayer.contents = UIImage(named: "OverlayTestImageOverlay")?.CGImage
aLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
aLayer.opacity = 1
Here is my complete method.
func combineImageVid() {
let path = NSBundle.mainBundle().pathForResource("SampleMovie", ofType:"MOV")
let fileURL = NSURL(fileURLWithPath: path!)
let videoAsset = AVURLAsset(URL: fileURL)
let mixComposition = AVMutableComposition()
let compositionVideoTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
var clipVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)
do {
try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: clipVideoTrack[0], atTime: kCMTimeZero)
}
catch _ {
print("failed to insertTimeRange")
}
compositionVideoTrack.preferredTransform = videoAsset.preferredTransform
// I feel like this loading here is the problem
let videoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
// because it makes our parentLayer and videoLayer sizes wrong
let videoSize = videoTrack.naturalSize
// this is returning 1920x1080 (the landscape size), which is why the output ends up rotated
print("\(videoSize.width) , \(videoSize.height)")
let aLayer = CALayer()
aLayer.contents = UIImage(named: "OverlayTestImageOverlay")?.CGImage
aLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
aLayer.opacity = 1
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(aLayer)
let videoComp = AVMutableVideoComposition()
videoComp.renderSize = videoSize
videoComp.frameDuration = CMTimeMake(1, 30)
videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
let mixVideoTrack = mixComposition.tracksWithMediaType(AVMediaTypeVideo)[0]
mixVideoTrack.preferredTransform = CGAffineTransformMakeRotation(CGFloat(M_PI * 90.0 / 180))
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mixVideoTrack)
instruction.layerInstructions = [layerInstruction]
videoComp.instructions = [instruction]
// create new file to receive data
let dirPaths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
let docsDir: AnyObject = dirPaths[0]
let movieFilePath = docsDir.stringByAppendingPathComponent("result.mov")
let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)
do {
try NSFileManager.defaultManager().removeItemAtPath(movieFilePath)
}
catch _ {}
// use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName:AVAssetExportPresetHighestQuality)
assetExport?.videoComposition = videoComp
assetExport!.outputFileType = AVFileTypeQuickTimeMovie
assetExport!.outputURL = movieDestinationUrl
assetExport!.exportAsynchronouslyWithCompletionHandler({
switch assetExport!.status{
case AVAssetExportSessionStatus.Failed:
print("failed \(assetExport!.error)")
case AVAssetExportSessionStatus.Cancelled:
print("cancelled \(assetExport!.error)")
default:
print("Movie complete")
// play video
NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
print(movieDestinationUrl)
})
}
})
}
This is what I'm getting exported:
I tried adding these two methods in order to rotate the video:
class func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
let transform = assetTrack.preferredTransform
let assetInfo = orientationFromTransform(transform)
var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
if assetInfo.isPortrait {
scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
instruction.setTransform(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor),
atTime: kCMTimeZero)
} else {
let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
var concat = CGAffineTransformConcat(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor), CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
if assetInfo.orientation == .Down {
let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
let windowBounds = UIScreen.mainScreen().bounds
let yFix = assetTrack.naturalSize.height + windowBounds.height
let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
}
instruction.setTransform(concat, atTime: kCMTimeZero)
}
return instruction
}
class func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
var assetOrientation = UIImageOrientation.Up
var isPortrait = false
if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
assetOrientation = .Right
isPortrait = true
} else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
assetOrientation = .Left
isPortrait = true
} else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
assetOrientation = .Up
} else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
assetOrientation = .Down
}
return (assetOrientation, isPortrait)
}
I then updated my combineImageVid() method, adding this in:
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
let mixVideoTrack = mixComposition.tracksWithMediaType(AVMediaTypeVideo)[0]
//let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mixVideoTrack)
//layerInstruction.setTransform(videoAsset.preferredTransform, atTime: kCMTimeZero)
let layerInstruction = videoCompositionInstructionForTrack(compositionVideoTrack, asset: videoAsset)
Which gives me this output:
So I'm getting closer; however, since the track is being loaded with the wrong orientation in the first place, I feel I need to address the issue there. Also, I don't know why the huge black box is there now. I thought maybe it was due to my image layer taking the bounds of the loaded video asset here:
aLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
However, changing that to some small width/height doesn't make a difference. I then thought about adding a crop rectangle to get rid of the black square, but that didn't work either :(
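(By a crop rectangle I mean something along these lines on the layer instruction; an illustration rather than my exact code:)
// crop the layer instruction to the portrait area, hoping to hide the black box
layerInstruction.setCropRectangle(CGRectMake(0, 0, videoSize.height, videoSize.width), atTime: kCMTimeZero)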
Following Allen's suggestion of not using these two methods:
class func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction
class func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool)
and instead updating my original method to look like this:
videoLayer.frame = CGRectMake(0, 0, videoSize.height, videoSize.width) // notice the switched width and height
...
videoComp.renderSize = CGSizeMake(videoSize.height, videoSize.width) // this makes the final video portrait
...
layerInstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero) // important: this tells the composition to rotate the original video in the output
We are getting really close; however, the problem now seems to be the renderSize. If I change it to anything other than the landscape size I get this:
Here is Apple's documentation on video orientation:
https://developer.apple.com/library/ios/qa/qa1744/_index.html
If your original video was taken in portrait mode on iOS, its natural size will still be landscape, but the .mov file carries rotation metadata. In order to rotate your video, you need to make the following changes to your first piece of code:
videoLayer.frame = CGRectMake(0, 0, videoSize.height, videoSize.width) // notice the switched width and height
...
videoComp.renderSize = CGSizeMake(videoSize.height, videoSize.width) // this makes the final video portrait
...
layerInstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero) // important: this tells the composition to rotate the original video in the output
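Pulled together, the relevant part of combineImageVid() would look roughly like this (a sketch reusing the variable names from the question; everything else in the method stays the same):
let videoSize = videoTrack.naturalSize // still 1920x1080 for a portrait recording
parentLayer.frame = CGRectMake(0, 0, videoSize.height, videoSize.width)
videoLayer.frame = CGRectMake(0, 0, videoSize.height, videoSize.width)
aLayer.frame = CGRectMake(0, 0, videoSize.height, videoSize.width)
videoComp.renderSize = CGSizeMake(videoSize.height, videoSize.width) // portrait output
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mixVideoTrack)
layerInstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero)
instruction.layerInstructions = [layerInstruction]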
Yes, you are really close!
Maybe you should check the videoTrack's preferredTransform so you can give it an exact renderSize and transform:
CGAffineTransform transform = assetVideoTrack.preferredTransform;
CGFloat rotation = [self rotationWithTransform:transform];
// if the track has been rotated
if (rotation != 0)
{
// if the rotation is not a full 360°
if (fabs((rotation - M_PI * 2)) >= valueOfError) {
CGFloat m = rotation / M_PI;
CGAffineTransform t1;
//rotation is 90° or 270°
if (fabs(m - 1/2.0) < valueOfError || fabs(m - 3/2.0) < valueOfError) {
self.mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0);
}
//rotation is 180°
if (fabs(m - 1.0) < valueOfError) {
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.width, assetVideoTrack.naturalSize.height);
}
CGAffineTransform t2 = CGAffineTransformRotate(t1,rotation);
// CGAffineTransform transform = makeTransform(1.0, 1.0, 90, videoTrack.naturalSize.height, 0);
[passThroughLayer setTransform:t2 atTime:kCMTimeZero];
}
}
// extract the rotation angle (in radians) from the transform
- (CGFloat)rotationWithTransform:(CGAffineTransform)t
{
return atan2f(t.b, t.a);
}
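A rough Swift translation of the same idea, as my own sketch rather than part of the answer above (videoSize, videoComp and layerInstruction are the names from the question's method, and only the common 90° portrait case is handled):
let transform = videoTrack.preferredTransform
let rotation = CGFloat(atan2(Double(transform.b), Double(transform.a))) // radians; ~π/2 for a portrait clip
if abs(rotation - CGFloat(M_PI_2)) < 0.0001 {
    videoComp.renderSize = CGSizeMake(videoSize.height, videoSize.width)
    // translate the rotated frame back into view, then rotate by the detected angle
    let fix = CGAffineTransformRotate(CGAffineTransformMakeTranslation(videoSize.height, 0), rotation)
    layerInstruction.setTransform(fix, atTime: kCMTimeZero)
}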
Recording Videos
For recording, on the AVCaptureConnection of the AVCaptureSession I set isVideoMirrored to true when using the front camera and false when using the back camera, all in portrait orientation.
Saving Videos
When I save videos, I perform an AVAssetExportSession. If I used the front camera, I want to maintain the isVideoMirrored = true, so I create an AVMutableComposition to set the AVAsset video track's preferredTransform to CGAffineTransform(scaleX: -1.0, y: 1.0).rotated(by: CGFloat(Double.pi/2)). For the back camera, I export the AVAsset as outputted.
Part of my saving code:
if didCaptureWithFrontCamera {
let composition = AVMutableComposition()
let assetVideoTrack = asset.tracks(withMediaType: .video).last!
let assetAudioTrack = asset.tracks(withMediaType: .audio).last!
let compositionVideoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
let compositionAudioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
try? compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: asset.duration), of: assetVideoTrack, at: CMTime.zero)
try? compositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: asset.duration), of: assetAudioTrack, at: CMTime.zero)
compositionVideoTrack?.preferredTransform = CGAffineTransform(scaleX: -1.0, y: 1.0).rotated(by: CGFloat(Double.pi/2))
guard let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1280x720) else {
handler(nil)
return
}
exportSession.outputURL = outputURL
exportSession.outputFileType = .mp4
exportSession.shouldOptimizeForNetworkUse = true
exportSession.exportAsynchronously { handler(exportSession) }
} else {
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset1280x720) else {
handler(nil)
return
}
exportSession.outputURL = outputURL
exportSession.outputFileType = .mp4
exportSession.shouldOptimizeForNetworkUse = true
exportSession.exportAsynchronously { handler(exportSession) }
}
Merging Videos
Later, to view the saved videos, I want to merge them into a single video via AVMutableComposition while maintaining each one's original orientation.
What has partially worked is setting the AVMutableComposition video track's preferredTransform to the preferredTransform of an individual AVAsset's video track. The problem is that a single orientation is then applied to all the videos (i.e. mirroring is either applied to every clip or to none, regardless of which camera recorded it).
From solutions I've come across, it appears I need to use AVMutableVideoCompositionInstruction, but when I try to, the AVAssetExportSession doesn't seem to factor in the video composition instructions at all.
Any guidance would be extremely appreciated as I haven't been able to solve it for the life of me...
My attempted merge code:
func merge(videos: [AVURLAsset], for date: Date, completion: @escaping (_ url: URL, _ asset: AVAssetExportSession)->()) {
let videoComposition = AVMutableComposition()
var lastTime: CMTime = .zero
var count = 0
var instructions = [AVMutableVideoCompositionInstruction]()
let renderSize = CGSize(width: 720, height: 1280)
guard let videoCompositionTrack = videoComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
guard let audioCompositionTrack = videoComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
for video in videos {
if let videoTrack = video.tracks(withMediaType: .video)[safe: 0] {
//this is the only thing that seems to work, but not in the way I'd hoped, where each video keeps its original orientation
//videoCompositionTrack.preferredTransform = videoTrack.preferredTransform
if let audioTrack = video.tracks(withMediaType: .audio)[safe: 0] {
do {
try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration), of: videoTrack, at: lastTime)
try audioCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration), of: audioTrack, at: lastTime)
let layerInstruction = videoCompositionInstruction(videoTrack, asset: video, count: count)
let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
videoCompositionInstruction.timeRange = CMTimeRangeMake(start: lastTime, duration: video.duration)
videoCompositionInstruction.layerInstructions = [layerInstruction]
instructions.append(videoCompositionInstruction)
} catch {
return
}
lastTime = CMTimeAdd(lastTime, video.duration)
count += 1
} else {
do {
try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration), of: videoTrack, at: lastTime)
let layerInstruction = videoCompositionInstruction(videoTrack, asset: video, count: count)
let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
videoCompositionInstruction.timeRange = CMTimeRangeMake(start: lastTime, duration: video.duration)
videoCompositionInstruction.layerInstructions = [layerInstruction]
instructions.append(videoCompositionInstruction)
} catch {
return
}
lastTime = CMTimeAdd(lastTime, video.duration)
count += 1
}
}
}
let mutableVideoComposition = AVMutableVideoComposition()
mutableVideoComposition.instructions = instructions
mutableVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
mutableVideoComposition.renderSize = renderSize
dateFormatter.dateStyle = .long
dateFormatter.timeStyle = .short
let date = dateFormatter.string(from: date)
let mergedURL = NSURL.fileURL(withPath: NSTemporaryDirectory() + "merged-\(date)" + ".mp4")
guard let exporter = AVAssetExportSession(asset: videoComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputURL = mergedURL
exporter.outputFileType = .mp4
exporter.videoComposition = mutableVideoComposition
exporter.shouldOptimizeForNetworkUse = true
completion(mergedURL, exporter)
}
func videoCompositionInstruction(_ firstTrack: AVAssetTrack, asset: AVAsset, count: Int) -> AVMutableVideoCompositionLayerInstruction {
let renderSize = CGSize(width: 720, height: 1280)
let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
let assetTrack = asset.tracks(withMediaType: .video)[0]
let t = assetTrack.fixedPreferredTransform // new transform fix
let assetInfo = orientationFromTransform(t)
if assetInfo.isPortrait {
let scaleToFitRatio = renderSize.width / assetTrack.naturalSize.height
let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
var finalTransform = assetTrack.fixedPreferredTransform.concatenating(scaleFactor)
if assetInfo.orientation == .rightMirrored || assetInfo.orientation == .leftMirrored {
finalTransform = finalTransform.translatedBy(x: -t.ty, y: 0)
}
instruction.setTransform(finalTransform, at: CMTime.zero)
} else {
let renderRect = CGRect(x: 0, y: 0, width: renderSize.width, height: renderSize.height)
let videoRect = CGRect(origin: .zero, size: assetTrack.naturalSize).applying(assetTrack.fixedPreferredTransform)
let scale = renderRect.width / videoRect.width
let transform = CGAffineTransform(scaleX: renderRect.width / videoRect.width, y: (videoRect.height * scale) / assetTrack.naturalSize.height)
let translate = CGAffineTransform(translationX: .zero, y: ((renderSize.height - (videoRect.height * scale))) / 2)
instruction.setTransform(assetTrack.fixedPreferredTransform.concatenating(transform).concatenating(translate), at: .zero)
}
if count == 0 {
instruction.setOpacity(0.0, at: asset.duration)
}
return instruction
}
func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImage.Orientation, isPortrait: Bool) {
var assetOrientation = UIImage.Orientation.up
var isPortrait = false
if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
assetOrientation = .right
isPortrait = true
} else if transform.a == 0 && transform.b == 1.0 && transform.c == 1.0 && transform.d == 0 {
assetOrientation = .rightMirrored
isPortrait = true
} else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
assetOrientation = .left
isPortrait = true
} else if transform.a == 0 && transform.b == -1.0 && transform.c == -1.0 && transform.d == 0 {
assetOrientation = .leftMirrored
isPortrait = true
} else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
assetOrientation = .up
} else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
assetOrientation = .down
}
return (assetOrientation, isPortrait)
}
extension AVAssetTrack {
var fixedPreferredTransform: CGAffineTransform {
var t = preferredTransform
switch(t.a, t.b, t.c, t.d) {
case (1, 0, 0, 1):
t.tx = 0
t.ty = 0
case (1, 0, 0, -1):
t.tx = 0
t.ty = naturalSize.height
case (-1, 0, 0, 1):
t.tx = naturalSize.width
t.ty = 0
case (-1, 0, 0, -1):
t.tx = naturalSize.width
t.ty = naturalSize.height
case (0, -1, 1, 0):
t.tx = 0
t.ty = naturalSize.width
case (0, 1, -1, 0):
t.tx = naturalSize.height
t.ty = 0
case (0, 1, 1, 0):
t.tx = 0
t.ty = 0
case (0, -1, -1, 0):
t.tx = naturalSize.height
t.ty = naturalSize.width
default:
break
}
return t
}
}
Assuming your transformations are correct, I updated your merge function.
The main change is using a single AVMutableVideoCompositionInstruction with multiple AVMutableVideoCompositionLayerInstructions, and passing the correct CMTime at which each layer instruction should take effect.
func merge(videos: [AVURLAsset],
for date: Date,
completion: @escaping (_ url: URL, _ asset: AVAssetExportSession)->()) {
let videoComposition = AVMutableComposition()
guard let videoCompositionTrack = videoComposition.addMutableTrack(withMediaType: .video,
preferredTrackID: Int32(kCMPersistentTrackID_Invalid)),
let audioCompositionTrack = videoComposition.addMutableTrack(withMediaType: .audio,
preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
else { return }
var lastTime: CMTime = .zero
var layerInstructions = [AVMutableVideoCompositionLayerInstruction]()
for video in videos {
guard let videoTrack = video.tracks(withMediaType: .video)[safe: 0] else { return }
// add audio track if available
if let audioTrack = video.tracks(withMediaType: .audio)[safe: 0] {
do {
try audioCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration),
of: audioTrack,
at: lastTime)
} catch {
return
}
}
// add video track
do {
try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration),
of: videoTrack,
at: lastTime)
let layerInstruction = makeVideoCompositionInstruction(videoTrack,
asset: video,
atTime: lastTime)
layerInstructions.append(layerInstruction)
} catch {
return
}
lastTime = CMTimeAdd(lastTime, video.duration)
} // end for..in videos
let renderSize = CGSize(width: 720, height: 1280)
let videoInstruction = AVMutableVideoCompositionInstruction()
videoInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: lastTime)
videoInstruction.layerInstructions = layerInstructions
let mutableVideoComposition = AVMutableVideoComposition()
mutableVideoComposition.instructions = [videoInstruction]
mutableVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
mutableVideoComposition.renderSize = renderSize
let dateFormatter = DateFormatter()
dateFormatter.dateStyle = .long
dateFormatter.timeStyle = .short
let date = dateFormatter.string(from: date)
let mergedURL = NSURL.fileURL(withPath: NSTemporaryDirectory() + "merged-\(date)" + ".mp4")
guard let exporter = AVAssetExportSession(asset: videoComposition,
presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputURL = mergedURL
exporter.outputFileType = .mp4
exporter.videoComposition = mutableVideoComposition
exporter.shouldOptimizeForNetworkUse = true
completion(mergedURL, exporter)
}
func makeVideoCompositionInstruction(_ videoTrack: AVAssetTrack,
asset: AVAsset,
atTime: CMTime) -> AVMutableVideoCompositionLayerInstruction {
let renderSize = CGSize(width: 720, height: 1280)
let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
let assetTrack = asset.tracks(withMediaType: .video)[0]
let t = assetTrack.fixedPreferredTransform // new transform fix
let assetInfo = orientationFromTransform(t)
if assetInfo.isPortrait {
let scaleToFitRatio = renderSize.width / assetTrack.naturalSize.height
let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
var finalTransform = assetTrack.fixedPreferredTransform.concatenating(scaleFactor)
if assetInfo.orientation == .rightMirrored || assetInfo.orientation == .leftMirrored {
finalTransform = finalTransform.translatedBy(x: -t.ty, y: 0)
}
instruction.setTransform(finalTransform, at: atTime)
} else {
let renderRect = CGRect(x: 0, y: 0, width: renderSize.width, height: renderSize.height)
let videoRect = CGRect(origin: .zero, size: assetTrack.naturalSize).applying(assetTrack.fixedPreferredTransform)
let scale = renderRect.width / videoRect.width
let transform = CGAffineTransform(scaleX: renderRect.width / videoRect.width,
y: (videoRect.height * scale) / assetTrack.naturalSize.height)
let translate = CGAffineTransform(translationX: .zero,
y: ((renderSize.height - (videoRect.height * scale))) / 2)
instruction.setTransform(assetTrack.fixedPreferredTransform.concatenating(transform).concatenating(translate),
at: atTime)
}
// if atTime = 0, we can assume this is the first track being added
if atTime == .zero {
instruction.setOpacity(0.0,
at: asset.duration)
}
return instruction
}
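A hedged usage sketch of the updated function (clipURLs is an assumed array of the recorded file URLs; note that the function only configures the exporter, so the caller still starts the export):
let assets = clipURLs.map { AVURLAsset(url: $0) }
merge(videos: assets, for: Date()) { url, exporter in
    exporter.exportAsynchronously {
        switch exporter.status {
        case .completed:
            print("Merged video written to \(url)")
        default:
            print("Export failed: \(String(describing: exporter.error))")
        }
    }
}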
Objective: I have a video over which I have a UIView containing animated GIFs (not stored locally, but fetched via the Giphy API), text, or hand drawings. I want to export the overlay along with the video as a single file.
What I did:
I created a UIView on which the animations live, converted it to a CALayer, and added it to the video with AVMutableVideoComposition.
Problem: the UIView with the animations is being converted to a static image instead of staying animated in the video. How can I solve this?
Below is the code for my export session. Any pointers would be really helpful.
func convertVideoAndSaveTophotoLibrary(videoURL: URL) {
let file = FileManager.shared.getDocumentDirectory(path: currentFilename)
FileManager.shared.clearPreviousFiles(withPath: file.path)
// File to composit
let asset = AVURLAsset(url: videoURL as URL)
let composition = AVMutableComposition.init()
composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
let clipVideoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
// Rotate to portrait
let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)
let videoTransform:CGAffineTransform = clipVideoTrack.preferredTransform
//fix orientation
var videoAssetOrientation_ = UIImage.Orientation.up
var isVideoAssetPortrait_ = false
if videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0 {
videoAssetOrientation_ = UIImage.Orientation.right
isVideoAssetPortrait_ = true
}
if videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0 {
videoAssetOrientation_ = UIImage.Orientation.left
isVideoAssetPortrait_ = true
}
if videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0 {
videoAssetOrientation_ = UIImage.Orientation.up
}
if videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0 {
videoAssetOrientation_ = UIImage.Orientation.down;
}
transformer.setTransform(clipVideoTrack.preferredTransform, at: CMTime.zero)
transformer.setOpacity(0.0, at: asset.duration)
//adjust the render size if necessary
var naturalSize: CGSize
if(isVideoAssetPortrait_){
naturalSize = CGSize(width: clipVideoTrack.naturalSize.height, height: clipVideoTrack.naturalSize.width)
} else {
naturalSize = clipVideoTrack.naturalSize;
}
var renderWidth: CGFloat!
var renderHeight: CGFloat!
renderWidth = naturalSize.width
renderHeight = naturalSize.height
let parentlayer = CALayer()
let videoLayer = CALayer()
let watermarkLayer = CALayer()
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: renderWidth, height: renderHeight)
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
videoComposition.renderScale = 1.0
//---------------------->>>>>> converting uiview to uiimage
watermarkLayer.contents = canvasView.asImage().cgImage
parentlayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: naturalSize)
videoLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: naturalSize)
watermarkLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: naturalSize)
parentlayer.addSublayer(videoLayer)
parentlayer.addSublayer(watermarkLayer)
//---------------------->>>>>> Add view to video
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer], in: parentlayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: CMTimeMakeWithSeconds(60, preferredTimescale: 30))
instruction.layerInstructions = [transformer]
videoComposition.instructions = [instruction]
let exporter = AVAssetExportSession.init(asset: asset, presetName: AVAssetExportPresetHighestQuality)
exporter?.outputFileType = AVFileType.mp4
exporter?.outputURL = file
exporter?.videoComposition = videoComposition
exporter?.shouldOptimizeForNetworkUse = true
exporter!.exportAsynchronously(completionHandler: {() -> Void in
if exporter?.status == .completed {
let outputURL: URL? = exporter?.outputURL
self.saveToPhotoLibrary(url: outputURL!)
}
})
}
Converting UIView to UIImage:
extension UIView {
func asImage() -> UIImage {
let renderer = UIGraphicsImageRenderer(bounds: bounds)
return renderer.image { rendererContext in
layer.render(in: rendererContext.cgContext)
}
}
}
Code for adding the GIF (I am using the Giphy API here), so the GIF is downloaded and then added:
func didSelectMedia(giphyViewController: GiphyViewController, media: GPHMedia) {
addMedia(media: media)
giphyViewController.dismiss(animated: true) { [weak self] in
self?.giphy = nil
}
}
// GPHMediaView is a subclass of UIImageView
func addMedia(media: GPHMedia) {
let mediaView = GPHMediaView()
mediaView.media = media
mediaView.contentMode = .scaleAspectFill
mediaView.frame.size = CGSize(width: 150, height: 150)
mediaView.center = canvasView.center
canvasView.addSubview(mediaView)
print(mediaView.frame)
self.addGesturesTo(mediaView)
}
What I am getting: the cat over the video is a GIF, but sadly all I get is one frame. I know that is because I am converting the view to an image, but that is exactly the part I need a solution for. How do I get the GIF merged into the video?
You have two ways to achieve this. First, you can convert the GIF to a video and add it to the composition, but you lose the alpha channel. The second, more relevant way is to add a CAKeyframeAnimation to the GIF layer. To do this you should extract every image frame from the GIF, put them all into CAKeyframeAnimation.values, and set the duration to the frame count multiplied by the per-frame duration (see the sketch after the helper below).
class func makeContentAnimation(beginTime: Double, values: [Any], frameRate: Double) -> CAKeyframeAnimation {
let animation = CAKeyframeAnimation(keyPath: "contents")
animation.values = values
animation.beginTime = beginTime.isZero ? AVCoreAnimationBeginTimeAtZero : beginTime
animation.duration = frameRate * Double(values.count) // frameRate here is the duration of a single frame, in seconds
animation.isRemovedOnCompletion = false
animation.repeatCount = .infinity
return animation
}
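A minimal sketch of feeding that helper with ImageIO, assuming the GIF data has already been downloaded (gifData, overlayLayer and AnimationHelper are my placeholder names, and the 0.1 s per-frame value is an assumed fallback; real GIFs carry per-frame delays in their ImageIO properties):
import ImageIO
func gifFrames(from gifData: Data) -> [CGImage] {
    guard let source = CGImageSourceCreateWithData(gifData as CFData, nil) else { return [] }
    return (0..<CGImageSourceGetCount(source)).compactMap { CGImageSourceCreateImageAtIndex(source, $0, nil) }
}
let frames = gifFrames(from: gifData)
// AnimationHelper stands for whatever class declares makeContentAnimation above
let animation = AnimationHelper.makeContentAnimation(beginTime: 0, values: frames, frameRate: 0.1)
overlayLayer.add(animation, forKey: "contents")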
I'm trying to render an image into a video captured with the front camera using AVMutableComposition. The size of the resulting video (including the image) is perfectly fine.
However, the original video gets resized, as shown in this picture:
I'm using the NextLevelSessionExporter and this is my code snippet:
// * MARK - Creating composition
/// Create AVMutableComposition object. This object will hold the AVMutableCompositionTrack instances.
let mainMutableComposition = AVMutableComposition()
/// Creating an empty video track
let videoTrack = mainMutableComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]
do {
//Adding the video track
try videoTrack?.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: AVMediaType.video).first!, at: kCMTimeZero)
} catch {
completion(false, nil)
}
/// Adding audio if user wants to.
if withAudio {
do {
//Adding the video track
let audio = videoAsset.tracks(withMediaType: AVMediaType.audio).first
if audio != nil {
let audioTrack = mainMutableComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
try audioTrack?.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration), of: audio!, at: kCMTimeZero)
}
} catch {
completion(false, nil)
}
}
// * MARK - Composition is ready ----------
// Create AVMutableVideoCompositionInstruction
let compositionInstructions = AVMutableVideoCompositionInstruction()
compositionInstructions.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration)
// Create an AVMutableVideoCompositionLayerInstruction
let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction.init(assetTrack: videoTrack!)
videoLayerInstruction.setTransform(videoAssetTrack.preferredTransform, at: kCMTimeZero)
compositionInstructions.layerInstructions = [videoLayerInstruction]
//Add instructions
let videoComposition = AVMutableVideoComposition()
let naturalSize : CGSize = videoAssetTrack.naturalSize
///Rendering image into video
let renderWidth = naturalSize.width
let renderHeight = naturalSize.height
//Assigning instructions and rendering size
videoComposition.renderSize = CGSize(width: renderWidth, height: renderHeight)
videoComposition.instructions = [compositionInstructions]
videoComposition.frameDuration = CMTime(value: 1, timescale: Int32((videoTrack?.nominalFrameRate)!))
//Applying image to instruction
self.applyVideoImage(to: videoComposition, withSize: naturalSize, image: image)
// Getting the output path
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let outputPath = documentsURL?.appendingPathComponent("lastEditedVideo.mp4")
if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
do {
try FileManager.default.removeItem(atPath: (outputPath?.path)!)
}
catch {
completion(false, nil)
}
}
// Create exporter
let exporter = NextLevelSessionExporter(withAsset: mainMutableComposition)
exporter.outputURL = outputPath
exporter.outputFileType = AVFileType.mp4
exporter.videoComposition = videoComposition
let compressionDict: [String: Any] = [
AVVideoAverageBitRateKey: NSNumber(integerLiteral: 2300000),
AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel as String
]
exporter.videoOutputConfiguration = [
AVVideoCodecKey: AVVideoCodecType.h264,
AVVideoWidthKey: NSNumber(integerLiteral: Int(naturalSize.width)),
AVVideoHeightKey: NSNumber(integerLiteral: Int(naturalSize.height)),
AVVideoCompressionPropertiesKey: compressionDict
]
exporter.audioOutputConfiguration = [
AVFormatIDKey: kAudioFormatMPEG4AAC,
AVEncoderBitRateKey: NSNumber(integerLiteral: 128000),
AVNumberOfChannelsKey: NSNumber(integerLiteral: 2),
AVSampleRateKey: NSNumber(value: Float(44100))
]
completion(true, exporter)
}
This is my applyVideoImage() function.
private func applyVideoImage(to composition: AVMutableVideoComposition, withSize size: CGSize, image: UIImage) { //Adds an image to a video composition
//Creating image layer
let overlayLayer = CALayer()
let overlayImage: UIImage = image
overlayLayer.contents = overlayImage.cgImage
overlayLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
overlayLayer.contentsGravity = kCAGravityResizeAspectFill
overlayLayer.masksToBounds = true
//Creating parent and video layer
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
videoLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(overlayLayer)
//Adding those layers to video
composition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}
EDIT 1:
This bug only occurs when I'm exporting a mirrored video that has been captured with the front camera.
This is really tricky: you need to check the preferredTransform of the video track to determine whether it is a portrait video or not.
var videoAssetOrientation = UIImageOrientation.up
var isVideoAssetPortrait = false
var videoTransform = videoAssetTrack.preferredTransform
if needsMirroring == true {
isVideoAssetPortrait = true
}else if videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0 {
videoAssetOrientation = .right
isVideoAssetPortrait = true
}else if videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0 {
videoAssetOrientation = .left
isVideoAssetPortrait = true
}else if videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0 {
videoAssetOrientation = .up
}else if videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0 {
videoAssetOrientation = .down
}
//Add instructions
mainInstruction.layerInstructions = [videoLayerInstruction]
let mainCompositionInst = AVMutableVideoComposition()
let naturalSize : CGSize!
if isVideoAssetPortrait {
naturalSize = CGSize(width: videoAssetTrack.naturalSize.height, height: videoAssetTrack.naturalSize.width)
} else {
naturalSize = videoAssetTrack.naturalSize
}
Hope that helps.
Try applying a negative scale transform to flip the video when mirrored:
// Create an AVMutableVideoCompositionLayerInstruction
let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction.init(assetTrack: videoTrack!)
let flipped = videoAssetTrack.preferredTransform.scaledBy(x: -1.0, y: 1.0)
videoLayerInstruction.setTransform(flipped, at: kCMTimeZero)
compositionInstructions.layerInstructions = [videoLayerInstruction]
I am trying to use AVVideoComposition to add some text on top of a video and save the video.
This is the code I use:
I. Create an AVMutableComposition and AVVideoComposition
var mutableComp = AVMutableComposition()
var mutableVidComp = AVMutableVideoComposition()
var compositionSize : CGSize?
func configureAsset(){
let options = [AVURLAssetPreferPreciseDurationAndTimingKey : "true"]
let videoAsset = AVURLAsset(url: Bundle.main.url(forResource: "Car", withExtension: "mp4")! , options : options)
let videoAssetSourceTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo).first! as AVAssetTrack
compositionSize = videoAssetSourceTrack.naturalSize
let mutableVidTrack = mutableComp.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
let trackRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
do {
try mutableVidTrack.insertTimeRange( trackRange, of: videoAssetSourceTrack, at: kCMTimeZero)
mutableVidTrack.preferredTransform = videoAssetSourceTrack.preferredTransform
}catch { print(error) }
snapshot = mutableComp
mutableVidComp = AVMutableVideoComposition(propertiesOf: videoAsset)
}
II. Set up the layers
func applyVideoEffectsToComposition() {
// 1 - Set up the text layer
let subTitle1Text = CATextLayer()
subTitle1Text.font = "Helvetica-Bold" as CFTypeRef
subTitle1Text.frame = CGRect(x: self.view.frame.midX - 60 , y: self.view.frame.midY - 50, width: 120, height: 100)
subTitle1Text.string = "Bench"
subTitle1Text.foregroundColor = UIColor.black.cgColor
subTitle1Text.alignmentMode = kCAAlignmentCenter
// 2 - The usual overlay
let overlayLayer = CALayer()
overlayLayer.addSublayer(subTitle1Text)
overlayLayer.frame = CGRect(x: 0, y: 0, width: compositionSize!.width, height: compositionSize!.height)
overlayLayer.masksToBounds = true
// 3 - set up the parent layer
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRect(x: 0, y: 0, width: compositionSize!.width, height: compositionSize!.height)
videoLayer.frame = CGRect(x: 0, y: 0, width: compositionSize!.width, height: compositionSize!.height)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(overlayLayer)
mutableVidComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}
III. Save the video with AVMutableVideoComposition
func saveAsset (){
func deleteFile(_ filePath:URL) {
guard FileManager.default.fileExists(atPath: filePath.path) else { return }
do {
try FileManager.default.removeItem(atPath: filePath.path) }
catch {fatalError("Unable to delete file: \(error) : \(#function).")} }
let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0] as URL
let filePath = documentsDirectory.appendingPathComponent("rendered-audio.mp4")
deleteFile(filePath)
if let exportSession = AVAssetExportSession(asset: mutableComp , presetName: AVAssetExportPresetHighestQuality){
exportSession.videoComposition = mutableVidComp
// exportSession.canPerformMultiplePassesOverSourceMediaData = true
exportSession.outputURL = filePath
exportSession.shouldOptimizeForNetworkUse = true
exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComp.duration)
exportSession.outputFileType = AVFileTypeQuickTimeMovie
exportSession.exportAsynchronously {
print("finished: \(filePath) : \(exportSession.status.rawValue) ")
if exportSession.status.rawValue == 4 {
print("Export failed -> Reason: \(exportSession.error!.localizedDescription))")
print(exportSession.error!)
}
}
}
}
Then I run all three methods in viewDidLoad for a quick test. The problem is that when I run the app, the result of the export is the original video without the title on it.
What am I missing here?
UPDATE
I notice that setting a subTitle1Text.backgroundColor in part II of the code makes a colored rectangle corresponding to subTitle1Text.frame appear on top of the exported video.
(See Image)
When this code is modified for playback using AVSynchronizedLayer, the desired layer, text included, is visible on top of the video.
So perhaps this is a bug in AVFoundation itself.
I suppose I am only left with the option of using a customVideoCompositorClass. The problem with that is that it takes a lot of time to render the video. Here is an example that uses AVVideoCompositing.
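For reference, the playback variant mentioned above looks roughly like this (a sketch; overlayLayer is a fresh copy of the layer built in part II, and the composition is played directly without the export-only animationTool, since AVVideoCompositionCoreAnimationTool cannot be attached to an AVPlayerItem):
let playerItem = AVPlayerItem(asset: mutableComp)
let player = AVPlayer(playerItem: playerItem)
let playerLayer = AVPlayerLayer(player: player)
playerLayer.frame = view.bounds // scaling of the overlay to the view is ignored in this sketch
view.layer.addSublayer(playerLayer)
// AVSynchronizedLayer keeps overlayLayer's animations in step with the item's timeline
let syncLayer = AVSynchronizedLayer(playerItem: playerItem)
syncLayer.frame = playerLayer.frame
syncLayer.addSublayer(overlayLayer)
view.layer.addSublayer(syncLayer)
player.play()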
Here is the full working code which I used in my project. It shows a CATextLayer at the bottom (0,0), and when the export session finishes it swaps the newly exported file into the player item. I used a model written in Objective-C to get the orientation. Please test on a device; AVPlayer will not show the text layer properly in the simulator.
let composition = AVMutableComposition.init()
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTimeMake(1, 30)
videoComposition.renderScale = 1.0
let compositionCommentaryTrack: AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
let compositionVideoTrack: AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
let clipVideoTrack:AVAssetTrack = self.currentAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
let audioTrack: AVAssetTrack? = self.currentAsset.tracks(withMediaType: AVMediaTypeAudio)[0]
try? compositionCommentaryTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.currentAsset.duration), of: audioTrack!, at: kCMTimeZero)
try? compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.currentAsset.duration), of: clipVideoTrack, at: kCMTimeZero)
let orientation = VideoModel.videoOrientation(self.currentAsset)
var isPortrait = false
switch orientation {
case .landscapeRight:
isPortrait = false
case .landscapeLeft:
isPortrait = false
case .portrait:
isPortrait = true
case .portraitUpsideDown:
isPortrait = true
}
var naturalSize = clipVideoTrack.naturalSize
if isPortrait
{
naturalSize = CGSize.init(width: naturalSize.height, height: naturalSize.width)
}
videoComposition.renderSize = naturalSize
let scale = CGFloat(1.0)
var transform = CGAffineTransform.init(scaleX: CGFloat(scale), y: CGFloat(scale))
switch orientation {
case .landscapeRight: break
// isPortrait = false
case .landscapeLeft:
transform = transform.translatedBy(x: naturalSize.width, y: naturalSize.height)
transform = transform.rotated(by: .pi)
case .portrait:
transform = transform.translatedBy(x: naturalSize.width, y: 0)
transform = transform.rotated(by: CGFloat(M_PI_2))
case .portraitUpsideDown:break
}
let frontLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack!)
frontLayerInstruction.setTransform(transform, at: kCMTimeZero)
let MainInstruction = AVMutableVideoCompositionInstruction()
MainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
MainInstruction.layerInstructions = [frontLayerInstruction]
videoComposition.instructions = [MainInstruction]
let parentLayer = CALayer.init()
parentLayer.frame = CGRect.init(x: 0, y: 0, width: naturalSize.width, height: naturalSize.height)
let videoLayer = CALayer.init()
videoLayer.frame = parentLayer.frame
let layer = CATextLayer()
layer.string = "HELLO ALL"
layer.foregroundColor = UIColor.white.cgColor
layer.backgroundColor = UIColor.orange.cgColor
layer.fontSize = 32
layer.frame = CGRect.init(x: 0, y: 0, width: 300, height: 100)
var rct = layer.frame;
let widthScale = self.playerView.frame.size.width/naturalSize.width
rct.size.width /= widthScale
rct.size.height /= widthScale
rct.origin.x /= widthScale
rct.origin.y /= widthScale
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(layer)
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool.init(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
let videoPath = documentsPath+"/cropEditVideo.mov"
let fileManager = FileManager.default
if fileManager.fileExists(atPath: videoPath)
{
try! fileManager.removeItem(atPath: videoPath)
}
print("video path \(videoPath)")
var exportSession = AVAssetExportSession.init(asset: composition, presetName: AVAssetExportPresetHighestQuality)
exportSession?.videoComposition = videoComposition
exportSession?.outputFileType = AVFileTypeQuickTimeMovie
exportSession?.outputURL = URL.init(fileURLWithPath: videoPath)
exportSession?.videoComposition = videoComposition
var exportProgress: Float = 0
let queue = DispatchQueue(label: "Export Progress Queue")
queue.async(execute: {() -> Void in
while exportSession != nil {
// int prevProgress = exportProgress;
exportProgress = (exportSession?.progress)!
print("current progress == \(exportProgress)")
sleep(1)
}
})
exportSession?.exportAsynchronously(completionHandler: {
if exportSession?.status == AVAssetExportSessionStatus.failed
{
print("Failed \(exportSession?.error)")
}else if exportSession?.status == AVAssetExportSessionStatus.completed
{
exportSession = nil
let asset = AVAsset.init(url: URL.init(fileURLWithPath: videoPath))
DispatchQueue.main.async {
let item = AVPlayerItem.init(asset: asset)
self.player.replaceCurrentItem(with: item)
let assetDuration = CMTimeGetSeconds(composition.duration)
self.progressSlider.maximumValue = Float(assetDuration)
self.syncLayer.removeFromSuperlayer()
self.lblIntro.isHidden = true
self.player.play()
// let url = URL.init(fileURLWithPath: videoPath)
// let activityVC = UIActivityViewController(activityItems: [url], applicationActivities: [])
// self.present(activityVC, animated: true, completion: nil)
}
}
})
Below is the code of my VideoModel class:
-(AVCaptureVideoOrientation)videoOrientation:(AVAsset *)asset
{
AVCaptureVideoOrientation result = 0;
NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
if([tracks count] > 0) {
AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
CGAffineTransform t = videoTrack.preferredTransform;
// Portrait
if(t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0)
{
result = AVCaptureVideoOrientationPortrait;
}
// PortraitUpsideDown
if(t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
result = AVCaptureVideoOrientationPortraitUpsideDown;
}
// LandscapeRight
if(t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0)
{
result = AVCaptureVideoOrientationLandscapeRight;
}
// LandscapeLeft
if(t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0)
{
result = AVCaptureVideoOrientationLandscapeLeft;
}
}
return result;
}
Let me know if you need any more help in this.
My goal is to let the user select a video from Photos and then let them add labels over it.
Here is what I've got:
let audioAsset = AVURLAsset(url: selectedVideoURL)
let videoAsset = AVURLAsset(url: selectedVideoURL)
let mixComposition = AVMutableComposition()
let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let compositionAudioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let clipVideoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
let clipAudioTrack = audioAsset.tracks(withMediaType: AVMediaTypeAudio)[0]
do {
try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: clipVideoTrack, at: kCMTimeZero)
try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, audioAsset.duration), of: clipAudioTrack, at: kCMTimeZero)
compositionVideoTrack.preferredTransform = clipVideoTrack.preferredTransform
} catch {
print(error)
}
var videoSize = clipVideoTrack.naturalSize
if isVideoPortrait(asset: videoAsset) {
videoSize = CGSize(width: videoSize.height, height: videoSize.width)
}
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
parentLayer.addSublayer(videoLayer)
// adding label
let helloLabelLayer = CATextLayer()
helloLabelLayer.string = "Hello"
helloLabelLayer.font = "Signika-Semibold" as CFTypeRef?
helloLabelLayer.fontSize = 30.0
helloLabelLayer.contentsScale = mainScreen.scale
helloLabelLayer.alignmentMode = kCAAlignmentNatural
helloLabelLayer.frame = CGRect(x: 0.0, y: 0.0, width: 100.0, height: 50.0)
parentLayer.addSublayer(helloLabelLayer)
// creating composition
let videoComp = AVMutableVideoComposition()
videoComp.renderSize = videoSize
videoComp.frameDuration = CMTimeMake(1, 30)
videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
let layerInstruction = videoCompositionInstructionForTrack(track: compositionVideoTrack, asset: videoAsset)
instruction.layerInstructions = [layerInstruction]
videoComp.instructions = [instruction]
if let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset640x480) {
let filename = NSTemporaryDirectory().appending("video.mov")
if FileManager.default.fileExists(atPath: filename) {
do {
try FileManager.default.removeItem(atPath: filename)
} catch {
print(error)
}
}
let url = URL(fileURLWithPath: filename)
assetExport.outputURL = url
assetExport.outputFileType = AVFileTypeMPEG4
assetExport.videoComposition = videoComp
print(NSDate().timeIntervalSince1970)
assetExport.exportAsynchronously {
print(NSDate().timeIntervalSince1970)
let library = ALAssetsLibrary()
library.writeVideoAtPath(toSavedPhotosAlbum: url, completionBlock: {
(url, error) in
switch assetExport.status {
case AVAssetExportSessionStatus.failed:
p("failed \(assetExport.error)")
case AVAssetExportSessionStatus.cancelled:
p("cancelled \(assetExport.error)")
default:
p("complete")
p(NSDate().timeIntervalSince1970)
if FileManager.default.fileExists(atPath: filename) {
do {
try FileManager.default.removeItem(atPath: filename)
} catch {
p(error)
}
}
print("Exported")
}
})
}
Implementation of isVideoPortrait function:
func isVideoPortrait(asset: AVAsset) -> Bool {
var isPortrait = false
let tracks = asset.tracks(withMediaType: AVMediaTypeVideo)
if tracks.count > 0 {
let videoTrack = tracks[0]
let t = videoTrack.preferredTransform
if t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0 {
isPortrait = true
}
if t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0 {
isPortrait = true
}
if t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0 {
isPortrait = false
}
if t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0 {
isPortrait = false
}
}
return isPortrait
}
And the last function for video composition layer instruction:
func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
let transform = assetTrack.preferredTransform
instruction.setTransform(transform, at: kCMTimeZero)
return instruction
}
The code works well and the output video has the label, but if I select a 1-minute video, the export takes 28 seconds.
I've searched around and tried removing the layerInstruction transform, but it had no effect.
I also tried adding:
assetExport.shouldOptimizeForNetworkUse = false
but that had no effect either.
Also, I tried setting AVAssetExportPresetPassthrough on the AVAssetExportSession; in that case the video exports in about 1 second, but the labels are gone.
Any help would be appreciated, because I'm stuck. Thanks for your time.
The only way I can think of is to reduce the quality via the bit rate and resolution.
This is done through a dictionary applied to the videoSettings of the asset exporter; for this to work I had to use a framework called SDAVAssetExportSession.
Then, by changing the videoSettings, I could tune the trade-off between quality and export speed.
let compression: [String: Any] = [AVVideoAverageBitRateKey: 2097152 /* your desired bitrate */, AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel]
let videoSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: maxWidth, AVVideoHeightKey: maxHeight, AVVideoCompressionPropertiesKey: compression]
This was the only way I could speed things up.
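A rough sketch of how this fits together, assuming SDAVAssetExportSession's Swift-bridged interface as shown in its README (mixComposition, videoComp and url come from the question; maxWidth/maxHeight and the bitrate are values you pick for your quality/speed trade-off):
let encoder = SDAVAssetExportSession(asset: mixComposition)
encoder.videoComposition = videoComp
encoder.outputFileType = AVFileTypeMPEG4
encoder.outputURL = url
encoder.videoSettings = videoSettings // the dictionary built above
encoder.audioSettings = [AVFormatIDKey: kAudioFormatMPEG4AAC, AVNumberOfChannelsKey: 2, AVSampleRateKey: 44100, AVEncoderBitRateKey: 128000]
encoder.exportAsynchronously {
    print("export status: \(encoder.status.rawValue), error: \(String(describing: encoder.error))")
}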
This is not directly relevant to your question, but your code here is backwards:
assetExport.exportAsynchronously {
let library = ALAssetsLibrary()
library.writeVideoAtPath(toSavedPhotosAlbum: url, completionBlock: {
switch assetExport.status {
No no no. First you complete the asset export. Then you can copy again to somewhere else if that's what you want to do. So this needs to go like this:
assetExport.exportAsynchronously {
switch assetExport.status {
case .completed:
let library = ALAssetsLibrary()
library.writeVideoAtPath...
Other comments:
ALAssetsLibrary is deprecated. This is not the way to copy into the user's photo library; use the Photos framework.
Your original code is very odd, because there are a lot of other cases you are not testing for. You are just assuming that default means .completed. That's dangerous.
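A fuller sketch of that completion handler, switching on the status explicitly (the Photos call stands in for whatever save-to-library code you use; url is the export output URL from the question):
import Photos
assetExport.exportAsynchronously {
    switch assetExport.status {
    case .completed:
        PHPhotoLibrary.shared().performChanges({
            _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
        }) { saved, error in
            print(saved ? "Saved to photo library" : "Save failed: \(String(describing: error))")
        }
    case .failed, .cancelled:
        print("export failed: \(String(describing: assetExport.error))")
    default:
        break
    }
}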