Add watermark to recorded video and save - iOS

So I am trying to add a watermark to a previously recorded video using the following code, but when I view the video there is no watermark. Can anyone help? I tried following this post: iPhone Watermark on recorded Video.
public func addWatermarkToVideo(url: NSURL, completion: (url: NSURL?) -> Void) {
    let videoAsset = AVURLAsset(URL: url)
    let mixComposition = AVMutableComposition()
    let compositionVideoTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    let clipVideoTrack: AVAssetTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: clipVideoTrack, atTime: kCMTimeZero)
    } catch {
        print(error)
    }
    compositionVideoTrack.preferredTransform = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0].preferredTransform

    // Add watermark
    guard let myImage = UIImage(named: "Logo") else {
        completion(url: nil)
        return
    }
    let aLayer = CALayer()
    aLayer.contents = myImage.CGImage
    aLayer.frame = CGRectMake(5, 25, 100, 57)
    aLayer.opacity = 0.65

    let videoSize = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0].naturalSize
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
    videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(aLayer)

    let videoComp = AVMutableVideoComposition()
    videoComp.renderSize = videoSize
    videoComp.frameDuration = CMTimeMake(1, 30)
    videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
    let videoTrack = mixComposition.tracksWithMediaType(AVMediaTypeVideo)[0]
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    instruction.layerInstructions = [layerInstruction]
    videoComp.instructions = [instruction]

    let paths = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)
    let documentsDirectory: AnyObject = paths[0]
    let dataPath = documentsDirectory.stringByAppendingPathComponent("VideoCache")
    if !NSFileManager.defaultManager().fileExistsAtPath(dataPath) {
        do {
            try NSFileManager.defaultManager().createDirectoryAtPath(dataPath, withIntermediateDirectories: false, attributes: nil)
        } catch {
            print("Couldn't create path")
        }
    }
    let tempURL = NSURL(fileURLWithPath: dataPath)
    let completeMovieUrl = tempURL.URLByAppendingPathComponent("tether-\(NSDate()).mov")

    if let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) {
        exporter.outputURL = completeMovieUrl
        exporter.outputFileType = AVFileTypeMPEG4
        exporter.exportAsynchronouslyWithCompletionHandler({ () -> Void in
            switch exporter.status {
            case .Failed:
                print("failed \(exporter.error)")
                completion(url: nil)
            case .Cancelled:
                print("cancelled \(exporter.error)")
                completion(url: nil)
            default:
                print("complete")
                completion(url: exporter.outputURL)
            }
        })
    }
}

You've forgotten to add your videoComposition to the AVAssetExportSession:
exporter.outputFileType = AVFileTypeMPEG4 // You had this
exporter.videoComposition = videoComp // but had forgotten this
exporter.exportAsynchronouslyWithCompletionHandler({ // ...
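For clarity, the full exporter setup from the question then reads (a sketch reusing the question's own names):

if let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) {
    exporter.outputURL = completeMovieUrl
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComp // attach the composition so the watermark layers are rendered
    exporter.exportAsynchronouslyWithCompletionHandler({ () -> Void in
        // status handling unchanged from the question
    })
}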

Related

I want to merge video with image but after merging it's showing black screen

I have a video URL from the backend and I want to merge it with an image. I have added a VideoLayer and an ImageLayer to a parent layer called AnimationLayer.
After merging the video and the images, the result is just a black screen.
How can I resolve this bug?
func MergeVideo1(_ vidioUrlString: String?, with img: UIImage?, With VideoName: String) {
    guard let videoUrl = URL(string: vidioUrlString ?? "") else { return }
    let videoUrlAsset = AVURLAsset(url: videoUrl, options: nil)

    // Set up `mutableComposition` from the existing video
    let mutableComposition = AVMutableComposition()
    let videoAssetTrack = videoUrlAsset.tracks(withMediaType: AVMediaType.video).first!
    let videoCompositionTrack = mutableComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
    videoCompositionTrack!.preferredTransform = videoAssetTrack.preferredTransform
    try! videoCompositionTrack!.insertTimeRange(CMTimeRange(start: CMTime.zero, duration: videoAssetTrack.timeRange.duration), of: videoAssetTrack, at: CMTime.zero)

    let audioAssetTrack = videoUrlAsset.tracks(withMediaType: AVMediaType.audio).first!
    let audioCompositionTrack = mutableComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    try! audioCompositionTrack!.insertTimeRange(CMTimeRange(start: CMTime.zero, duration: audioAssetTrack.timeRange.duration), of: audioAssetTrack, at: CMTime.zero)

    // Create a `videoComposition` to represent the foreground image
    let videoSize: CGSize = videoCompositionTrack!.naturalSize
    let frame = CGRect(x: 0.0, y: 0.0, width: videoSize.width, height: videoSize.height)
    let imgLogoMix = UIImage(named: "icn_RandomDownload")

    // Logo
    let imageLayer_LOGO = CALayer()
    imageLayer_LOGO.contents = imgLogoMix?.cgImage
    imageLayer_LOGO.frame = frame

    // Frame
    let imageLayer = CALayer()
    imageLayer.contents = img?.cgImage
    imageLayer.frame = frame

    let videoLayer = CALayer()
    videoLayer.frame = frame

    let animationLayer = CALayer()
    animationLayer.frame = frame
    animationLayer.addSublayer(videoLayer)
    animationLayer.addSublayer(imageLayer)
    animationLayer.addSublayer(imageLayer_LOGO)
    imageLayer.bringToFront()        // bringToFront() is presumably a custom CALayer extension
    imageLayer_LOGO.bringToFront()

    let videoComposition = AVMutableVideoComposition(propertiesOf: (videoCompositionTrack?.asset!)!)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: animationLayer)

    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let DirPath = paths[0].appendingPathComponent("CREATE_IMAGE")
    // finalPath is presumably an instance property
    finalPath = DirPath.path + "/\(VideoName).mp4"
    if FileManager.default.fileExists(atPath: finalPath) {
        do {
            try FileManager.default.removeItem(atPath: finalPath)
        } catch {
        }
    }

    let exportSession = AVAssetExportSession(asset: mutableComposition, presetName: AVAssetExportPresetHighestQuality)!
    exportSession.videoComposition = videoComposition
    exportSession.outputURL = URL(fileURLWithPath: finalPath)
    exportSession.outputFileType = AVFileType.mp4
    exportSession.exportAsynchronously(completionHandler: {
        switch exportSession.status {
        case .failed:
            print("failed")
            SKActivityHUD.DismissHUD()
            print(exportSession.error ?? "unknown error")
        case .cancelled:
            print("cancelled")
            SKActivityHUD.DismissHUD()
            print(exportSession.error ?? "unknown error")
        default:
            print("Movie complete")
        }
    })
}
I edited the above code, replacing the video-composition setup with:
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: Int32(videoCompositionTrack?.nominalFrameRate ?? 300))
videoComposition.renderSize = videoSize
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: animationLayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: mutableComposition.duration)
let videotrack = mutableComposition.tracks(withMediaType: AVMediaType.video)[0] as AVAssetTrack
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
instruction.layerInstructions = [layerinstruction]
videoComposition.instructions = [instruction]
and then replaced that block with the following, which also sets a background color on the instruction:
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: Int32(videoCompositionTrack?.nominalFrameRate ?? 300))
videoComposition.renderSize = videoSize
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: animationLayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: mutableComposition.duration)
let videotrack = mutableComposition.tracks(withMediaType: AVMediaType.video)[0] as AVAssetTrack
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
let rgb = CGColorSpaceCreateDeviceRGB()
let myColor : [CGFloat] = [1.0, 1.0, 1.0, 1.0] //white
let ref = CGColor(colorSpace: rgb, components: myColor)
instruction.backgroundColor = ref
instruction.layerInstructions = [layerinstruction]
videoComposition.instructions = [instruction]
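One aside on this edit (my observation, not from the original post): the `?? 300` fallback gives a frame duration of 1/300 s whenever `nominalFrameRate` is unavailable; a more conventional fallback is 30 fps:

videoComposition.frameDuration = CMTimeMake(value: 1, timescale: Int32(videoCompositionTrack?.nominalFrameRate ?? 30))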

Exporting Video in full screen with AVAssetExportSession

I'm trying to apply some animations to a video (after recording) and then export it. Before adding any animations I had a problem with the orientation, which came out landscape (without exporting it was portrait); that is solved now, but I still have a problem making the video full screen: on iPhone 6/7 Plus it is full screen, but on iPad it is not.
Here is my method:
func export(_ url: URL) {
    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: url, options: nil)
    let track = asset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = track[0] as AVAssetTrack
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let compositionVideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
    } catch {
        print(error)
    }

    let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
    for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
        do {
            try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
        } catch {
            print(error)
        }
    }

    let size = self.view.bounds.size
    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = self.view.bounds.size
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
    let ratio = size.height / videoTrack.naturalSize.width
    composition.naturalSize = videoTrack.naturalSize
    layerinstruction.setTransform(videoTrack.preferredTransform.scaledBy(x: 0.645, y: ratio).translatedBy(x: self.view.bounds.height, y: 0), at: kCMTimeZero)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    let filePath = self.fileName()
    let movieUrl = URL(fileURLWithPath: filePath)
    guard let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = movieUrl
    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case .completed:
            print("success")
            let player = AVPlayer(url: movieUrl)
            let playerViewController = AVPlayerViewController()
            playerViewController.player = player
            self.present(playerViewController, animated: true) {
                playerViewController.player!.play()
            }
        case .cancelled:
            print("cancelled")
        case .exporting:
            print("exporting")
        case .failed:
            print("failed: \(String(describing: assetExport.error))")
        case .unknown:
            print("unknown")
        case .waiting:
            print("waiting")
        }
    })
}
What fixed my problem was changing the scale factor and the translation amount. Here is the code:
let size = self.view.bounds.size
let trackTransform = videoTrack.preferredTransform
let xScale = size.height / videoTrack.naturalSize.width
let yScale = size.width / videoTrack.naturalSize.height
let exportTransform = videoTrack.preferredTransform.translatedBy(x: trackTransform.ty * -1 , y: 0).scaledBy(x: xScale , y: yScale)
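Wired into the export method above, the computed transform replaces the hard-coded scale and translation (a sketch reusing the same variable names):

layerinstruction.setTransform(exportTransform, at: kCMTimeZero)
instruction.layerInstructions = [layerinstruction]
layercomposition.instructions = [instruction]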

AVAssetExportSession wrong orientation in front camera

I'm encountering a wrong orientation in video exported using AVAssetExportSession, but only with the front camera. I followed this tutorial: https://stackoverflow.com/a/35368649/3764365, but I still get this result. Actually, I think it's not a wrong orientation; the image is cut in half. I tried changing the video layer and the render layer, but got no luck. My code looks like this:
let composition = AVMutableComposition()
let vidAsset = AVURLAsset(url: path)

// get video track
let vtrack = vidAsset.tracks(withMediaType: AVMediaTypeVideo)
let videoTrack: AVAssetTrack = vtrack[0]
let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)

let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
do {
    try compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: kCMTimeZero)
} catch let error {
    print(error.localizedDescription)
}

// get audio track (note: despite its name, this composition track carries the audio)
let compositionVideoTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
let audioTrack = vidAsset.tracks(withMediaType: AVMediaTypeAudio)[0]
do {
    try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, vidAsset.duration), of: audioTrack, at: kCMTimeZero)
} catch {
    print("error")
}

let size = videoTrack.naturalSize
let parentlayer = CALayer()
parentlayer.frame = CGRect(x: 0, y: 0, width: size.height, height: size.width)
let videolayer = CALayer()
videolayer.frame = CGRect(x: 0, y: 0, width: size.height, height: size.width)
parentlayer.addSublayer(videolayer)

let layercomposition = AVMutableVideoComposition()
layercomposition.frameDuration = CMTimeMake(1, 30)
layercomposition.renderSize = CGSize(width: size.height, height: size.width)
layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

// instruction for watermark
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
instruction.layerInstructions = [layerinstruction]
layercomposition.instructions = [instruction]
layerinstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)

// create new file to receive data
let movieDestinationUrl = UIImage.outPut()

// use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1280x720)!
assetExport.videoComposition = layercomposition
assetExport.outputFileType = AVFileTypeQuickTimeMovie
assetExport.outputURL = movieDestinationUrl
Setting movieFileOutputConnection?.isVideoMirrored from true to false fixed the issue for me. It's a weird bug in my opinion.
if self.currentCamera == .front {
movieFileOutputConnection?.isVideoMirrored = false
}
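For context, `movieFileOutputConnection` is presumably the video connection of the `AVCaptureMovieFileOutput` used for recording, configured before capture starts; a minimal sketch (the surrounding capture setup is an assumption, not part of the original answer):

let movieFileOutput = AVCaptureMovieFileOutput() // hypothetical; added to the capture session elsewhere
let movieFileOutputConnection = movieFileOutput.connection(withMediaType: AVMediaTypeVideo)
if self.currentCamera == .front {
    // leave the front-camera feed unmirrored so the exported frames are not flipped
    movieFileOutputConnection?.isVideoMirrored = false
}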
I will share my code on how I solved this issue.
func addImagesToVideo(path: URL, labelImageViews: [LabelImageView]) {
    SVProgressHUD.show()
    let composition = AVMutableComposition()
    let vidAsset = AVURLAsset(url: path)

    // get video track
    let vtrack = vidAsset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = vtrack[0]
    let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)

    let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
    do {
        try compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: kCMTimeZero)
    } catch let error {
        print(error.localizedDescription)
    }

    // get audio track (note: despite its name, this composition track carries the audio)
    let compositionVideoTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioTrack = vidAsset.tracks(withMediaType: AVMediaTypeAudio)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, vidAsset.duration), of: audioTrack, at: kCMTimeZero)
    } catch {
        print("error")
    }

    let size = videoTrack.naturalSize
    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.height, height: size.width)
    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.height, height: size.width)
    parentlayer.addSublayer(videolayer)

    if labelImageViews.count != 0 {
        let blankImage = self.clearImage(size: videolayer.frame.size)
        let image = self.saveImage(imageOne: blankImage, labelImageViews: labelImageViews)
        let imglayer = CALayer()
        imglayer.contents = image.cgImage
        imglayer.frame = CGRect(origin: CGPoint.zero, size: videolayer.frame.size)
        imglayer.opacity = 1
        parentlayer.addSublayer(imglayer)
    }

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = CGSize(width: size.height, height: size.width)
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    // instruction for watermark
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    // detect portrait capture from the track's preferredTransform
    var isVideoAssetPortrait = false
    let videoTransform = videoTrack.preferredTransform
    if videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0 {
        isVideoAssetPortrait = true
    }
    if videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0 {
        isVideoAssetPortrait = true
    }
    if isVideoAssetPortrait {
        let FirstAssetScaleFactor = CGAffineTransform(scaleX: 1, y: 1)
        layerinstruction.setTransform(videoTrack.preferredTransform.concatenating(FirstAssetScaleFactor), at: kCMTimeZero)
    } else {
        let FirstAssetScaleFactor = CGAffineTransform(scaleX: 1, y: 1)
        layerinstruction.setTransform(videoTrack.preferredTransform.concatenating(FirstAssetScaleFactor).concatenating(CGAffineTransform(translationX: 0, y: 560)), at: kCMTimeZero)
    }

    // create new file to receive data
    let movieDestinationUrl = UIImage.outPut()

    // use AVAssetExportSession to export video
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1280x720)!
    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeQuickTimeMovie
    assetExport.outputURL = movieDestinationUrl
    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case .failed:
            print("failed \(assetExport.error!)")
        case .cancelled:
            print("cancelled \(assetExport.error!)")
        default:
            print("Movie complete")
            // play video
            OperationQueue.main.addOperation({ () -> Void in
                let output = UIImage.outPut()
                UIImage.compress(inputURL: movieDestinationUrl as NSURL, outputURL: output as NSURL) {
                    UISaveVideoAtPathToSavedPhotosAlbum(output.relativePath, nil, nil, nil)
                    print("Done Converting")
                    DispatchQueue.main.async {
                        SVProgressHUD.dismiss()
                    }
                }
            })
        }
    })
}
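The portrait check in the middle of that method can be factored into a small helper that is exactly equivalent to the two if-statements:

// true when preferredTransform encodes a 90° or -90° rotation (portrait capture)
func isPortraitTransform(_ t: CGAffineTransform) -> Bool {
    return (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0)
        || (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0)
}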

AVAssetExportSession returns the video in landscape

I've created this function which gets a video captured in portrait mode. However, when I save the AVAssetExportSession output and play it, it seems to be identified as landscape. How can I make sure the result is created as a portrait video?
func createVideo() -> AVAssetExportSession {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as NSString
    let fileURL = NSURL(fileURLWithPath: "\(documentsPath)/pre.mov")
    let composition = AVMutableComposition()
    let vidAsset = AVURLAsset(URL: fileURL, options: nil)

    // get video track
    let vtrack = vidAsset.tracksWithMediaType(AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = vtrack[0]
    let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)
    do {
        let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
        try compositionvideoTrack.insertTimeRange(vid_timerange, ofTrack: videoTrack, atTime: kCMTimeZero)
        compositionvideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    // Get the video
    let fullSizeImage = videoTrack
    print(fullSizeImage.naturalSize)
    let newOverLayHeight = fullSizeImage.naturalSize.width / self.containerView!.frame.width * self.containerView!.frame.height
    UIGraphicsBeginImageContext(CGSizeMake(fullSizeImage.naturalSize.width, newOverLayHeight))
    self.containerView!.drawViewHierarchyInRect(CGRectMake(0, 0, fullSizeImage.naturalSize.width, newOverLayHeight), afterScreenUpdates: true)
    let overlayImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let imglogo = UIImage(named: "image.png")
    let imglayer = CALayer()
    imglayer.contents = imglogo?.CGImage
    imglayer.frame = CGRectMake(0, fullSizeImage.naturalSize.height - newOverLayHeight, overlayImage.size.width, overlayImage.size.height)

    let videolayer = CALayer()
    videolayer.frame = CGRectMake(0, 0, fullSizeImage.naturalSize.width, fullSizeImage.naturalSize.height)
    let parentlayer = CALayer()
    parentlayer.frame = CGRectMake(0, 0, fullSizeImage.naturalSize.width, fullSizeImage.naturalSize.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(imglayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = fullSizeImage.naturalSize
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, inLayer: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
    let videotrack = composition.tracksWithMediaType(AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    // create new file to receive data
    let docsDir: AnyObject = documentsPath
    let movieFilePath = docsDir.stringByAppendingPathComponent("result.mov")
    let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)
    _ = try? NSFileManager().removeItemAtURL(movieDestinationUrl)
    let preFilePath = docsDir.stringByAppendingPathComponent("pre.mov")
    let preDestinationUrl = NSURL(fileURLWithPath: preFilePath)
    _ = try? NSFileManager().removeItemAtURL(preDestinationUrl)

    // use AVAssetExportSession to export video
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
    assetExport!.outputFileType = AVFileTypeQuickTimeMovie
    assetExport!.outputURL = movieDestinationUrl
    assetExport!.videoComposition = layercomposition
    self.movieUrl = movieFilePath
    return assetExport!
}
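Based on the answers elsewhere on this page, a likely fix (an assumption; it is not from this question's thread) is to apply the rotation through the layer instruction rather than via preferredTransform on the composition track, and to swap the render size for portrait:

// Sketch, reusing the question's names (Swift 2 syntax):
layerinstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero)
// a portrait recording carries a 90-degree preferredTransform, so swap width and height:
layercomposition.renderSize = CGSizeMake(fullSizeImage.naturalSize.height, fullSizeImage.naturalSize.width)

The videolayer and parentlayer frames would need the same swapped dimensions.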

Add a watermark image to a video

I'm new to AVFoundation and I'm trying to add an image to a video, but I keep getting this error: failed Optional(Error Domain=AVFoundationErrorDomain Code=-11823 "Cannot Save" UserInfo={NSLocalizedDescription=Cannot Save, NSLocalizedRecoverySuggestion=Try saving again.}). What am I doing wrong? Here is my code:
func createVideo() {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as NSString
    let fileURL = NSURL(fileURLWithPath: "\(documentsPath)/\(self.randomVideoFileName).mov")
    let composition = AVMutableComposition()
    let vidAsset = AVURLAsset(URL: fileURL, options: nil)

    // get video track
    let vtrack = vidAsset.tracksWithMediaType(AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = vtrack[0]
    let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)
    do {
        let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
        try compositionvideoTrack.insertTimeRange(vid_timerange, ofTrack: videoTrack, atTime: kCMTimeZero)
        compositionvideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    // Get the video
    let fullSizeImage = videoTrack
    let newOverLayHeight = fullSizeImage.naturalSize.width / self.containerView!.frame.width * self.containerView!.frame.height
    UIGraphicsBeginImageContext(CGSizeMake(fullSizeImage.naturalSize.width, newOverLayHeight))
    self.containerView!.drawViewHierarchyInRect(CGRectMake(0, 0, fullSizeImage.naturalSize.width, newOverLayHeight), afterScreenUpdates: true)
    let overlayImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let imglogo = UIImage(named: "image.png")
    let imglayer = CALayer()
    imglayer.contents = imglogo?.CGImage
    imglayer.frame = CGRectMake(0, fullSizeImage.naturalSize.height - newOverLayHeight, overlayImage.size.width, overlayImage.size.height)

    let videolayer = CALayer()
    videolayer.frame = CGRectMake(0, 0, fullSizeImage.naturalSize.width, fullSizeImage.naturalSize.height)
    let parentlayer = CALayer()
    parentlayer.frame = CGRectMake(0, 0, fullSizeImage.naturalSize.width, fullSizeImage.naturalSize.height)
    parentlayer.addSublayer(imglayer) // note: videolayer is never added to parentlayer here

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = fullSizeImage.naturalSize
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, inLayer: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
    let videotrack = composition.tracksWithMediaType(AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    // create new file to receive data
    let docsDir: AnyObject = documentsPath
    let movieFilePath = docsDir.stringByAppendingPathComponent("result.mov")
    let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)

    // use AVAssetExportSession to export video
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
    assetExport!.outputFileType = AVFileTypeQuickTimeMovie
    assetExport!.outputURL = movieDestinationUrl
    assetExport!.exportAsynchronouslyWithCompletionHandler({
        switch assetExport!.status {
        case .Failed:
            print("failed \(assetExport!.error)")
        case .Cancelled:
            print("cancelled \(assetExport!.error)")
        default:
            print("Movie complete")
            // save to photo album
            NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
                UISaveVideoAtPathToSavedPhotosAlbum(movieDestinationUrl.absoluteString, self, "image:didFinishSavingWithError:contextInfo:", nil)
            })
        }
    })
}
As Matt commented, you've forgotten to delete the output file (AVFoundation refuses to overwrite files for some reason). So do that:
let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)
_ = try? NSFileManager().removeItemAtURL(movieDestinationUrl)
That fixes the error, but you won't yet see your watermark because you're not setting the AVAssetExportSession's videoComposition:
assetExport?.videoComposition = layercomposition // important!
assetExport!.exportAsynchronouslyWithCompletionHandler({...})
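Putting both fixes together, the tail of createVideo() would read (same names as in the question):

let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)
_ = try? NSFileManager().removeItemAtURL(movieDestinationUrl) // remove any previous export first
let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
assetExport!.outputFileType = AVFileTypeQuickTimeMovie
assetExport!.outputURL = movieDestinationUrl
assetExport!.videoComposition = layercomposition // without this the watermark layers are never rendered
assetExport!.exportAsynchronouslyWithCompletionHandler({
    // status handling unchanged from the question
})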
Hi, I have done this in Objective-C; the following is my code:
AVMutableVideoComposition *videoComp = [AVMutableVideoComposition videoComposition];
CGSize videoSize = CGSizeApplyAffineTransform(a_compositionVideoTrack.naturalSize, a_compositionVideoTrack.preferredTransform);

CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"lippieapp.com";
titleLayer.font = (__bridge CFTypeRef)(@"Helvetica-Bold");
titleLayer.fontSize = 32.0;
//titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.frame = CGRectMake(30, 0, 250, 60); // You may need to adjust this for proper display

CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
