I am attempting to make a new video from a background image that will always be CGSize(width: 375, height: 667) and a video that can be any size, composited with a content mode of .aspectFit. The problem is that I cannot figure out how to make the whole video composition the correct size (i.e. the image's size); instead it comes out at the video's natural size, with a bunch of weird results. (Edit: the video should be centered in the frame, the way .aspectFit centers content in a UIImageView, for example.)
Here is an example of what I am trying to achieve. Note that I already have the image and the video; all I need to do is make the new video from them. This is what it should look like:
[desired result image]
Here is the code I am currently attempting, with a placeholder image named "background" (a random 375x667 image in Assets). I think I may be doing something wrong around the comment "important stuff potentially?", but I cannot figure out what.
func makeVideo(fromVideoAt videoURL: URL, forName name: String, onComplete: @escaping (URL?) -> Void) {
let asset = AVURLAsset(url: videoURL)
let composition = AVMutableComposition()
guard
let compositionTrack = composition.addMutableTrack(
withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid),
let assetTrack = asset.tracks(withMediaType: .video).first
else {
print("Something is wrong with the asset.")
onComplete(nil)
return
}
do {
let timeRange = CMTimeRange(start: .zero, duration: asset.duration)
try compositionTrack.insertTimeRange(timeRange, of: assetTrack, at: .zero)
if let audioAssetTrack = asset.tracks(withMediaType: .audio).first,
let compositionAudioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) {
try compositionAudioTrack.insertTimeRange(timeRange, of: audioAssetTrack, at: .zero)
}
} catch {
print(error)
onComplete(nil)
return
}
compositionTrack.preferredTransform = assetTrack.preferredTransform
let videoInfo = orientation(from: assetTrack.preferredTransform)
//Important stuff potentially? general below:
let videoSize: CGSize
if videoInfo.isPortrait {
videoSize = CGSize(width: 720, height: 1280)
} else {
videoSize = CGSize(width: 720, height: 1280) //720.0, 1280 tiktok default..?
}
//the Background image:
let backgroundLayer = CALayer()
backgroundLayer.frame = CGRect(origin: .zero, size: videoSize) //videosize
backgroundLayer.contents = UIImage(named: "background")?.cgImage
backgroundLayer.contentsGravity = .resizeAspectFill
backgroundLayer.backgroundColor = UIColor.red.cgColor
//Video layer:
let videoLayer = CALayer()
// videoLayer.frame = CGRect(origin: .zero, size: CGSize(width: composition.naturalSize.width, height: composition.naturalSize.height)) //videosize
videoLayer.backgroundColor = UIColor.yellow.cgColor
print(composition.naturalSize, "<-- composition.naturalSize")
videoLayer.frame = CGRect(origin: .zero, size: CGSize(width: videoSize.width, height: composition.naturalSize.height))//CGRect(x: 0, y: 0, width: videoSize.width, height: composition.naturalSize.height)
//OutPutlayer putting the together?
let outputLayer = CALayer()
outputLayer.frame = CGRect(origin: .zero, size: CGSize(width: 720, height: 1280)) //videosize
outputLayer.backgroundColor = UIColor.white.cgColor
outputLayer.addSublayer(backgroundLayer)
outputLayer.addSublayer(videoLayer)
// outputLayer.addSublayer(overlayLayer)
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = videoSize
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: outputLayer)
//Setting Up Instructions
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
videoComposition.instructions = [instruction]
let layerInstruction = compositionLayerInstruction(for: compositionTrack, assetTrack: assetTrack)
instruction.layerInstructions = [layerInstruction]
//EXPORTING
guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
print("Cannot create export session.")
onComplete(nil)
return
}
let videoName = UUID().uuidString
let exportURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(videoName).appendingPathExtension("mp4")
export.videoComposition = videoComposition
export.outputFileType = .mov
export.outputURL = exportURL
export.exportAsynchronously {
DispatchQueue.main.async {
switch export.status {
case .completed:
onComplete(exportURL)
default:
print("Something went wrong during export.")
print(export.error ?? "unknown error")
onComplete(nil)
break
}
}
}
}
Try using this code: https://github.com/vabe1337/VBVideoEditor. It renders video the way TikTok and Instagram do.
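Alternatively, the aspect-fit placement can be computed by hand with a transform on the layer instruction. Here is a minimal sketch, assuming the question's 720x1280 render size and a source track whose preferredTransform is identity (for rotated portrait footage you would apply preferredTransform first and measure the transformed size):
// Scale the track so it fits entirely inside the render size, preserving
// aspect ratio, then translate it so it sits centered (letterboxed).
let renderSize = CGSize(width: 720, height: 1280)
let naturalSize = assetTrack.naturalSize
let scale = min(renderSize.width / naturalSize.width, renderSize.height / naturalSize.height)
let scaledSize = CGSize(width: naturalSize.width * scale, height: naturalSize.height * scale)
let fitTransform = CGAffineTransform(scaleX: scale, y: scale)
    .concatenating(CGAffineTransform(translationX: (renderSize.width - scaledSize.width) / 2,
                                     y: (renderSize.height - scaledSize.height) / 2))
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionTrack)
layerInstruction.setTransform(fitTransform, at: .zero)
// The video layer itself should cover the whole render area; the transform
// above does the fitting, so don't size the layer to the video.
videoLayer.frame = CGRect(origin: .zero, size: renderSize)
With that, videoComposition.renderSize stays at 720x1280 and the background image shows through in the letterboxed bands.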
Related
I have been trying to get AVAssetExportSession to save a video with animations overlaid, but am running into a problem where the exported video has a runtime of 0:00.
As I understand it, the AVAssetExportSession APIs need a base video, which can be a black video that runs for a few seconds. You can then write over and cover up as much of that base video as you'd like using CALayers. If the animations run over the runtime of the base video, the exported video will extend itself to contain the runtime of the animations.
The base video is 5 seconds long, yet the exported video is still 0:00. Interestingly, the exported video does contain the black background from the source video and the very first frame of the animation (the layers).
Has anyone run into this before and know of a good solution/what I'm missing?
Code for context...
@objc func saveMovie() {
print("save movie")
self.selectedFrame = CGRect(x: 0.0, y: 0.0, width: 1080.0, height: 1920.0)
self.selectedBounds = CGRect(x: 0.0, y: 0.0, width: 1080.0, height: 1920.0)
let mainLayer = CALayer()
mainLayer.frame = CGRect(x: 0.0, y: 0.0, width: 1080.0, height: 1920.0)
let videoLayer = CALayer()
videoLayer.frame = CGRect(x: 0.0, y: 0.0, width: 1080.0, height: 1920.0)
let animationLayer = CALayer()
animationLayer.frame = CGRect(x: 0.0, y: 0.0, width: 1080.0, height: 1920.0)
animationLayer.addSublayer(makeBackground())
animationLayer.addSublayer(makeHeadingTextLayer())
mainLayer.addSublayer(videoLayer)
mainLayer.addSublayer(animationLayer)
if let sourceVideoUrl = Bundle.main.url(
forResource: "SourceVideo",
withExtension: "mp4"
) {
// Load Video Asset to Use As Base
print(sourceVideoUrl.absoluteString)
let baseVideoAsset = AVURLAsset(url: sourceVideoUrl)
// Create Composition for the video to live in
let composition = AVMutableComposition()
composition.naturalSize = CGSize(width: 1080.0, height: 1920.0)
guard
let compositionTrack = composition.addMutableTrack(
withMediaType: AVMediaType.video,
preferredTrackID: kCMPersistentTrackID_Invalid
),
let assetTrack = baseVideoAsset.tracks(
withMediaType: .video
).first
else {
print("something is wrong with the asset")
return
}
do {
// this crashes, so just hard coding 5 seconds right now
//let baseVideoDuration = try await baseVideoAsset.load(.duration)
//print("\(baseVideoDuration)")
let timeRange = CMTimeRangeMake(
start: .zero,
duration: CMTime(value: 5, timescale: 30)
)
try compositionTrack.insertTimeRange(
timeRange,
of: assetTrack,
at: .zero
)
} catch {
print("issue with video track insert time range")
}
compositionTrack.preferredTransform = assetTrack.preferredTransform
let videoSize = CGSize(width: 1080.0, height: 1920.0)
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = videoSize
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
postProcessingAsVideoLayer: videoLayer,
in: mainLayer
)
let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
videoCompositionInstruction.timeRange = CMTimeRangeMake(
start: CMTime.zero,
duration: CMTimeMake(value: 10, timescale: 30)
)
videoComposition.instructions = [videoCompositionInstruction]
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: assetTrack)
let transform = assetTrack.preferredTransform
layerInstruction.setTransform(transform, at: .zero)
videoCompositionInstruction.layerInstructions = [layerInstruction]
guard let exporter = AVAssetExportSession(
asset: composition,
presetName: AVAssetExportPreset1920x1080
) else {
print("failed to create exporter")
return
}
let videoName = UUID().uuidString
let exportUrl = URL(fileURLWithPath: NSTemporaryDirectory())
.appendingPathComponent(videoName)
.appendingPathExtension("mov")
exporter.videoComposition = videoComposition
exporter.outputFileType = .mov
exporter.outputURL = exportUrl
exporter.timeRange = CMTimeRangeMake(start: .zero, duration: CMTimeMake(value: 10, timescale: 30))
NSLog("Composition Duration: %ld seconds", lround(CMTimeGetSeconds(composition.duration)));
exporter.exportAsynchronously {
DispatchQueue.main.async {
switch exporter.status {
case .failed:
print("failed to export")
print(exporter.error ?? "no error")
case .cancelled:
print("canceled")
case .completed:
print("completed")
UISaveVideoAtPathToSavedPhotosAlbum(
exportUrl.relativePath,
self,
nil,
nil
)
case .unknown:
print("unknown status")
default:
break
}
}
}
}
}
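One thing worth double-checking in the snippet above: CMTime(value:timescale:) represents value / timescale seconds, so CMTime(value: 5, timescale: 30) is 5/30 of a second, and CMTimeMake(value: 10, timescale: 30) is 1/3 of a second, which alone would explain a near-zero runtime. A corrected sketch using the same names as above:
// 5 seconds against a 30-per-second timescale is 150/30, not 5/30.
let timeRange = CMTimeRangeMake(
    start: .zero,
    duration: CMTime(value: 150, timescale: 30) // 5.0 seconds
)
// The instruction and exporter ranges should cover the composition as well:
videoCompositionInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
// If the async load(.duration) call crashes, it is presumably because
// saveMovie() is not an async context; the synchronous property still works:
let baseVideoDuration = baseVideoAsset.duration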
First time poster, looooooong time peruser. I'm using SwiftUI for the layout and UIRepresentables for the camera work (Xcode 11.7), and I'm trying to overlay an image onto a CALayer (for eventual export to video). The image was converted from a UITextView, so the user is free to edit, pinch/zoom, and drag the text to their heart's content. After scouring SO for days and reading Ray Wenderlich tutorials, I've hit a wall. Screenshots below.
Before: freeform text 'coffee' added to the view
After: exported movie still, 'coffee' text position is incorrect
Below is the export function. I suspect I'm doing something wrong with relativePosition.
Thank you for any suggestions; this is my first foray into writing an iOS app.
static func exportLayersToVideo(_ fileUrl:String, _ textView:UITextView){
let fileURL = NSURL(fileURLWithPath: fileUrl)
let composition = AVMutableComposition()
let vidAsset = AVURLAsset(url: fileURL as URL, options: nil)
// get video track
let vtrack = vidAsset.tracks(withMediaType: AVMediaType.video)
let videoTrack: AVAssetTrack = vtrack[0]
let vid_timerange = CMTimeRangeMake(start: CMTime.zero, duration: vidAsset.duration)
let tr: CMTimeRange = CMTimeRange(start: CMTime.zero, duration: CMTime(seconds: 10.0, preferredTimescale: 600))
composition.insertEmptyTimeRange(tr)
let trackID:CMPersistentTrackID = CMPersistentTrackID(kCMPersistentTrackID_Invalid)
if let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: trackID) {
do {
try compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: CMTime.zero)
} catch {
print("error")
}
compositionvideoTrack.preferredTransform = videoTrack.preferredTransform
} else {
print("unable to add video track")
return
}
let size = videoTrack.naturalSize
let parentlayer = CALayer()
parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
let videolayer = CALayer()
videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
// Convert UITextView to Image
let renderer = UIGraphicsImageRenderer(size: textView.bounds.size)
let image = renderer.image { ctx in
textView.drawHierarchy(in: textView.bounds, afterScreenUpdates: true)
}
let imglayer = CALayer()
let scaledAspect: CGFloat = image.size.width / image.size.height
let scaledWidth = size.width
let scaledHeight = scaledWidth / scaledAspect
let relativePosition = parentlayer.convert(textView.frame.origin, from: textView.layer)
imglayer.frame = CGRect(x: relativePosition.x, y: relativePosition.y, width: scaledWidth, height: scaledHeight)
imglayer.contents = image.cgImage
// Adding videolayer and imglayer
parentlayer.addSublayer(videolayer)
parentlayer.addSublayer(imglayer)
let layercomposition = AVMutableVideoComposition()
layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
layercomposition.renderSize = size
layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)
// instruction for overlay
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
let videotrack = composition.tracks(withMediaType: AVMediaType.video)[0] as AVAssetTrack
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
instruction.layerInstructions = NSArray(object: layerinstruction) as [AnyObject] as! [AVVideoCompositionLayerInstruction]
layercomposition.instructions = NSArray(object: instruction) as [AnyObject] as! [AVVideoCompositionInstructionProtocol]
// create new file to receive data
let dirPaths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
let docsDir = dirPaths[0] as NSString
let movieFilePath = docsDir.appendingPathComponent("result.mov")
let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)
// use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality)
assetExport?.outputFileType = AVFileType.mov
assetExport?.videoComposition = layercomposition
// Check exist and remove old files
do { // delete old video
try FileManager.default.removeItem(at: movieDestinationUrl as URL)
} catch { print("Error Removing Existing File: \(error.localizedDescription).") }
do { // delete old video
try FileManager.default.removeItem(at: fileURL as URL)
} catch { print("Error Removing Existing File: \(error.localizedDescription).") }
assetExport?.outputURL = movieDestinationUrl as URL
assetExport?.exportAsynchronously(completionHandler: {
switch assetExport!.status {
case AVAssetExportSession.Status.failed:
print("failed")
print(assetExport?.error ?? "unknown error")
case AVAssetExportSession.Status.cancelled:
print("cancelled")
print(assetExport?.error ?? "unknown error")
default:
print("Movie complete")
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: movieDestinationUrl as URL)
}) { saved, error in
if saved {
print("Saved")
}
}
}
})
}
}
It looks like the x position is correct, but the y is off. I think this is because the origin is at the bottom-left instead of the top-left. Try this:
var relativePosition = parentlayer.convert(textView.frame.origin, from: textView.layer)
relativePosition.y = size.height - relativePosition.y
imglayer.frame = CGRect(x: relativePosition.x, y: relativePosition.y, width: scaledWidth, height: scaledHeight)
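If the text still sits slightly off after that, it may be because in this bottom-left coordinate space the frame origin is the layer's lower-left corner, so the layer's own height has to come off as well:
var relativePosition = parentlayer.convert(textView.frame.origin, from: textView.layer)
// Flip from UIKit's top-left origin to the export's bottom-left origin,
// accounting for the image layer's own height:
relativePosition.y = size.height - relativePosition.y - scaledHeight
imglayer.frame = CGRect(x: relativePosition.x, y: relativePosition.y, width: scaledWidth, height: scaledHeight)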
I'm developing an app with a custom video recorder, and I need to put a watermark over the recorded video. The recording part all goes well, but when I try to add the watermark, the video is exported with a different orientation and is sometimes badly cropped. The result is correct only with the device in landscape right.
Here's the code of the function I use to put the watermark on the video:
func addWatermarkToVideo(_ videoURL: URL, completion: @escaping (URL) -> Void) {
let videoAsset = AVURLAsset.init(url: videoURL)
let mixComposition = AVMutableComposition.init()
let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
do {
let clipVideoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo).first
try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: clipVideoTrack!, at: kCMTimeZero)
compositionVideoTrack.preferredTransform = (videoAsset.tracks(withMediaType: AVMediaTypeVideo).first?.preferredTransform)!
let layer = CALayer.init()
layer.contents = UIImage.init(named: "VideoWatermark")?.cgImage
layer.frame = CGRect.init(x: 10, y: 10, width: 300, height: 100)
let videoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo).first
let videoSize = videoTrack!.naturalSize
let parentLayer = CALayer.init()
let videoLayer = CALayer.init()
parentLayer.frame = CGRect.init(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
videoLayer.frame = CGRect.init(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(layer)
let videoComp = AVMutableVideoComposition.init()
videoComp.renderSize = videoSize
videoComp.frameDuration = CMTimeMake(1, 30)
videoComp.animationTool = AVVideoCompositionCoreAnimationTool.init(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
let instruction = AVMutableVideoCompositionInstruction.init()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
let mixVideoTrack = mixComposition.tracks(withMediaType: AVMediaTypeVideo).first
let layerInstruction = AVMutableVideoCompositionLayerInstruction.init(assetTrack: mixVideoTrack!)
layerInstruction.setTransform((clipVideoTrack?.preferredTransform)!, at: kCMTimeZero)
instruction.layerInstructions = [layerInstruction]
videoComp.instructions = [instruction]
let assetExport = AVAssetExportSession.init(asset: mixComposition, presetName: AVAssetExportPreset1280x720)
assetExport?.videoComposition = videoComp
let videoName = "\(Date.init().timeIntervalSince1970).mp4"
let exportURL = URL.init(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(videoName)
assetExport?.outputFileType = AVFileTypeMPEG4
assetExport?.outputURL = exportURL
assetExport?.shouldOptimizeForNetworkUse = true
assetExport?.exportAsynchronously(completionHandler: {
NSLog("Export status: \(String(describing: assetExport?.status.rawValue))")
completion(exportURL)
})
} catch let error as NSError {
print("\(error), \(error.localizedDescription)")
}
}
I tried googling the issue, but nothing I found ultimately helped, and I have no idea what I'm doing wrong here.
Any help will be deeply appreciated; thanks in advance to anyone who can help.
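For reference, a frequent cause of this symptom is that naturalSize ignores the rotation stored in preferredTransform, so portrait footage reports landscape dimensions and the render gets cropped. A minimal sketch of deriving an upright render size, using the question's variable names:
// Applying the preferredTransform to the natural size gives the size as
// displayed; a 90-degree rotation swaps width and height and can yield
// negative components, hence abs().
let natural = videoTrack!.naturalSize
let transformed = natural.applying(videoTrack!.preferredTransform)
let uprightSize = CGSize(width: abs(transformed.width), height: abs(transformed.height))
videoComp.renderSize = uprightSize
parentLayer.frame = CGRect(x: 0, y: 0, width: uprightSize.width, height: uprightSize.height)
videoLayer.frame = parentLayer.frame
Combined with the setTransform call already on the layer instruction, this should keep the output upright.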
I'm learning AVFoundation and I'm having a problem trying to save a video with an overlay image in Swift 3. Using AVMutableComposition I'm able to add the image to the video; however, the video is zoomed in and does not constrain itself to the portrait size the video was taken in. I've tried:
Setting the natural size through the AVAssetTrack.
Constraining the video to portrait size in the AVMutableVideoComposition renderFrame.
Locking the new video bounds to the recorded video width and height.
The code below works apart from the issue I need help with. The image I'm trying to add covers the entire portrait view and has a border all around the edges. The app also only allows portrait orientation.
func processVideoWithWatermark(video: AVURLAsset, watermark: UIImage, completion: @escaping (Bool) -> Void) {
let composition = AVMutableComposition()
let asset = AVURLAsset(url: video.url, options: nil)
let track = asset.tracks(withMediaType: AVMediaTypeVideo)
let videoTrack:AVAssetTrack = track[0] as AVAssetTrack
let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)
let compositionVideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
do {
try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
} catch {
print(error)
}
// let compositionAudioTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
//
// for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
// do {
// try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
// } catch {
// print(error)
// }
//
// }
//
let size = videoTrack.naturalSize
let watermark = watermark.cgImage
let watermarklayer = CALayer()
watermarklayer.contents = watermark
watermarklayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)
watermarklayer.opacity = 1
let videolayer = CALayer()
videolayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)
let parentlayer = CALayer()
parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
parentlayer.addSublayer(videolayer)
parentlayer.addSublayer(watermarklayer)
let layercomposition = AVMutableVideoComposition()
layercomposition.frameDuration = CMTimeMake(1, 30)
layercomposition.renderSize = CGSize(width: screenWidth, height: screenHeight)
layercomposition.renderScale = 1.0
layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
layerinstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)
instruction.layerInstructions = [layerinstruction]
layercomposition.instructions = [instruction]
let filePath = NSTemporaryDirectory() + self.fileName()
let movieUrl = URL(fileURLWithPath: filePath)
guard let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality) else {return}
assetExport.videoComposition = layercomposition
assetExport.outputFileType = AVFileTypeMPEG4
assetExport.outputURL = movieUrl
assetExport.exportAsynchronously(completionHandler: {
switch assetExport.status {
case .completed:
print("success")
print(video.url)
self.saveVideoToUserLibrary(fileURL: movieUrl, completion: { (success, error) in
if success {
completion(true)
} else {
completion(false)
}
})
break
case .cancelled:
print("cancelled")
break
case .exporting:
print("exporting")
break
case .failed:
print(video.url)
print("failed: \(assetExport.error!)")
break
case .unknown:
print("unknown")
break
case .waiting:
print("waiting")
break
}
})
}
If the video layer should fill the parent layer, your videolayer's frame is incorrect. You need to set its size equal to size (the track's natural size) instead of the screen size.
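That is, something along these lines, sizing the render target from the track as well so nothing is scaled against the screen:
// Size the layers and the render target from the track, not the screen:
videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
watermarklayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
layercomposition.renderSize = size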
This function exports the merged composition in landscape orientation when the source video is in portrait. I save the original video in portrait orientation to my documents directory, then save it to the camera roll, and it works fine. I then pass the saved video's URL to this function, and it somehow rotates the video to landscape when it shouldn't. How do I fix this?
func makeVideoOverlay (url : URL) {
print("documents directory url: \(url)")
let composition = AVMutableComposition()
let vidAsset = AVURLAsset(url: url as URL, options: nil)
// get video track
let vtrack = vidAsset.tracks(withMediaType: AVMediaTypeVideo)
let videoTrack:AVAssetTrack = vtrack[0]
let vid_duration = videoTrack.timeRange.duration
let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)
//var error: NSError?
let compositionvideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
do {
try compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: kCMTimeZero)
} catch {
// handle error
print("comp video track error: \(error.localizedDescription)")
}
compositionvideoTrack.preferredTransform = videoTrack.preferredTransform
let size = videoTrack.naturalSize
//this prints out to 1920x1080 landscape dimension. i don't know how
print("asset size: \(size)")
// Watermark Effect
let imglogo = UIImage(named: "logo-image")
let imglayer = CALayer()
imglayer.contents = imglogo?.cgImage
imglayer.frame = CGRect.init(x: 5, y: size.height-160, width: 150, height: 150)
imglayer.opacity = 1.0
let videolayer = CALayer()
videolayer.frame = CGRect.init(x: 0, y: 0, width: size.width, height: size.height)
let parentlayer = CALayer()
parentlayer.frame = CGRect.init(x: 0, y: 0, width: size.width, height: size.height)
parentlayer.addSublayer(videolayer)
parentlayer.addSublayer(imglayer)
let layercomposition = AVMutableVideoComposition()
layercomposition.frameDuration = CMTimeMake(1, 30)
layercomposition.renderSize = size
layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)
// instruction for watermark
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
instruction.layerInstructions = NSArray(object: layerinstruction) as [AnyObject] as [AnyObject] as! [AVVideoCompositionLayerInstruction]
layercomposition.instructions = NSArray(object: instruction) as [AnyObject] as [AnyObject] as! [AVVideoCompositionInstructionProtocol]
// create new file to receive data
let dirPaths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
let docsDir: String = dirPaths[0] as String
let movieFilePath = docsDir.appending("/result.mov") as String
movieDestinationUrl = URL(fileURLWithPath: movieFilePath)
print("overlay destination url: \(movieDestinationUrl)")
// use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality)
assetExport?.outputFileType = AVFileTypeQuickTimeMovie
assetExport?.outputURL = movieDestinationUrl as URL
assetExport?.videoComposition = layercomposition
assetExport?.exportAsynchronously(completionHandler: {
if assetExport?.status == AVAssetExportSessionStatus.failed
{
print("failed: \(assetExport?.error)")
}
else if assetExport?.status == AVAssetExportSessionStatus.cancelled
{
print("cancelled: \(assetExport?.error)")
}
else
{
print("Movie complete")
OperationQueue.main.addOperation({ () -> Void in
//saves in landscape
self.saveAsset(url: self.movieDestinationUrl)
})
}
})
}
AVMutableVideoCompositionLayerInstruction has a method setTransform(_:at:).
As the documentation says:
Sets a fixed transform to apply from the specified time until the next time at which a transform is set. [...] Before the first specified time for which a transform is set, the affine transform is held constant at the value of identity; after the last time for which a transform is set, the affine transform is held constant at that last value.
You should set the videoTrack's preferredTransform on the layerInstruction instead.
EDIT
You need to create the layerinstruction with the newly created composition track instead, because the exporter only applies layer instructions that reference a track inside the composition being exported; an instruction built from the original asset's track is ignored, so its transform never takes effect.
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionvideoTrack) // NOT videoTrack.
layerinstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)