Swift ReplayKit AVAssetWriter Video/Audio out of Sync when Converted to HLS

In iOS/Swift I am using ReplayKit with AVAssetWriter to create a MOV or MP4 video of the user's screen and microphone audio.
When I create a video, it plays fine locally and the audio and video are in sync. However, when I convert this video to HLS (HTTP Live Streaming) format using AWS MediaConvert, the audio is out of sync with the video. Does anyone know what could be causing this? I read about timecoding; maybe I need to add a timecode to my video? Is there an easier way to fix this, or has anyone experienced similar issues?
private func startRecordingVideo() {
    //Initialize MP4 Output File for Screen Recorded Video
    let fileManager = FileManager.default
    let urls = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    guard let documentDirectory: NSURL = urls.first as NSURL? else {
        fatalError("documentDir Error")
    }
    videoOutputURL = documentDirectory.appendingPathComponent("OutputVideo.mov")
    if FileManager.default.fileExists(atPath: videoOutputURL!.path) {
        do {
            try FileManager.default.removeItem(atPath: videoOutputURL!.path)
        } catch {
            fatalError("Unable to delete file: \(error) : \(#function).")
        }
    }

    //Initialize Asset Writer to Write Video to User's Storage
    assetWriter = try! AVAssetWriter(outputURL: videoOutputURL!, fileType: AVFileType.mov)

    let videoOutputSettings: Dictionary<String, Any> = [
        AVVideoCodecKey : AVVideoCodecType.h264,
        AVVideoWidthKey : UIScreen.main.bounds.size.width,
        AVVideoHeightKey : UIScreen.main.bounds.size.height,
    ]
    let audioSettings = [
        AVFormatIDKey : kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey : 1,
        AVSampleRateKey : 44100.0,
        AVEncoderBitRateKey: 96000,
    ] as [String : Any]

    videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
    audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: audioSettings)
    videoInput?.expectsMediaDataInRealTime = true
    audioInput?.expectsMediaDataInRealTime = true
    assetWriter?.add(videoInput!)
    assetWriter?.add(audioInput!)

    let sharedRecorder = RPScreenRecorder.shared()
    sharedRecorder.isMicrophoneEnabled = true
    sharedRecorder.startCapture(handler: { (sample, bufferType, error) in
        //Audio/Video Buffer Data returned from the Screen Recorder
        if CMSampleBufferDataIsReady(sample) {
            DispatchQueue.main.async { [weak self] in
                //Start the Asset Writer if it has not yet started
                if self?.assetWriter?.status == AVAssetWriter.Status.unknown {
                    if !(self?.assetWriter?.startWriting())! {
                        return
                    }
                    self?.assetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sample))
                    self?.startSession = true
                }
            }
            //Handle errors
            if self.assetWriter?.status == AVAssetWriter.Status.failed {
                print("Error occurred, status = \(String(describing: self.assetWriter?.status.rawValue)), \(String(describing: self.assetWriter?.error!.localizedDescription)) \(String(describing: self.assetWriter?.error))")
                return
            }
            //Add video buffer to AVAssetWriter Video Input
            if bufferType == .video {
                if self.videoInput!.isReadyForMoreMediaData && self.startSession {
                    self.videoInput?.append(sample)
                }
            }
            //Add audio microphone buffer to AVAssetWriter Audio Input
            if bufferType == .audioMic {
                print("MIC BUFFER RECEIVED")
                if self.audioInput!.isReadyForMoreMediaData {
                    print("Audio Buffer Came")
                    self.audioInput?.append(sample)
                }
            }
        }
    }, completionHandler: { error in
        print("COMP HANDLER ERROR", error?.localizedDescription)
    })
}

private func stopRecordingVideo() {
    self.startSession = false
    RPScreenRecorder.shared().stopCapture { (error) in
        self.videoInput?.markAsFinished()
        self.audioInput?.markAsFinished()
        if error == nil {
            self.assetWriter?.finishWriting {
                self.startSession = false
                print("FINISHED WRITING!")
                DispatchQueue.main.async {
                    self.setUpVideoPreview()
                }
            }
        } else {
            //DELETE DIRECTORY
        }
    }
}

I'm sure you've either figured this out or moved on, but for all the Googlers: you basically have to set the mediaTimeScale on the video input. You can see an example here.
Here's the relevant part of that code (it uses an AVSampleBufferDisplayLayer, but the same concept applies):
double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));

if (!timebaseSet && pts != 0)
{
    timebaseSet = true;

    CMTimebaseRef controlTimebase;
    CMTimebaseCreateWithMasterClock(CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase);

    displayLayer.controlTimebase = controlTimebase;
    CMTimebaseSetTime(displayLayer.controlTimebase, CMTimeMake(pts, 1));
    CMTimebaseSetRate(displayLayer.controlTimebase, 1.0);
}

if ([displayLayer isReadyForMoreMediaData])
{
    [displayLayer enqueueSampleBuffer:sampleBuffer];
}
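Applied to the ReplayKit question above, a minimal Swift sketch of the same idea (the 600 timescale is an assumption; pick one that matches your frame rate, e.g. a multiple of it):

assetWriter = try! AVAssetWriter(outputURL: videoOutputURL!, fileType: AVFileType.mov)
assetWriter?.movieTimeScale = CMTimeScale(600) // assumed value; 600 divides evenly by 24/25/30/60 fps

videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
videoInput?.mediaTimeScale = CMTimeScale(600) // set on the video input only; not supported on audio inputs
videoInput?.expectsMediaDataInRealTime = true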

Related

AVAssetWriter - Capturing video but no audio

I am making an app that records video. Up until now, I have been able to successfully record video and audio using AVCaptureMovieFileOutput, however, I now have a need to edit the video frames in real time to overlay some data onto the video. I began the switch to AVAssetWriter.
After the switch, I am able to record video (with my overlays) just fine using AVCaptureVideoDataOutput, however, AVCaptureAudioDataOutput never calls the delegate method so my audio doesn't record.
This is how I set up my AVCaptureSession:
fileprivate func setupCamera() {
    //Set queues
    queue = DispatchQueue(label: "myqueue", qos: .utility, attributes: .concurrent, autoreleaseFrequency: DispatchQueue.AutoreleaseFrequency.inherit, target: DispatchQueue.global())

    //The size of output video will be 720x1280
    print("Established AVCaptureSession")
    cameraSession.sessionPreset = AVCaptureSession.Preset.hd1280x720

    //Setup your camera
    //Detect which type of camera should be used via `isUsingFrontFacingCamera`
    let videoDevice: AVCaptureDevice
    videoDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: AVCaptureDevice.Position.front)!
    print("Created AVCaptureDeviceInput: video")

    //Setup your microphone
    var audioDevice: AVCaptureDevice
    //audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)!
    audioDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInMicrophone, for: AVMediaType.audio, position: AVCaptureDevice.Position.unspecified)!
    print("Created AVCaptureDeviceInput: audio")

    do {
        cameraSession.beginConfiguration()
        cameraSession.automaticallyConfiguresApplicationAudioSession = false
        cameraSession.usesApplicationAudioSession = true

        // Add camera to your session
        let videoInput = try AVCaptureDeviceInput(device: videoDevice)
        if cameraSession.canAddInput(videoInput) {
            cameraSession.addInput(videoInput)
            print("Added AVCaptureDeviceInput: video")
        } else {
            print("Could not add VIDEO!!!")
        }

        // Add microphone to your session
        let audioInput = try AVCaptureDeviceInput(device: audioDevice)
        if cameraSession.canAddInput(audioInput) {
            cameraSession.addInput(audioInput)
            print("Added AVCaptureDeviceInput: audio")
        } else {
            print("Could not add MIC!!!")
        }

        //Define your video output
        videoDataOutput.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        ]
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        if cameraSession.canAddOutput(videoDataOutput) {
            videoDataOutput.setSampleBufferDelegate(self, queue: queue)
            cameraSession.addOutput(videoDataOutput)
            print("Added AVCaptureDataOutput: video")
        }

        //Define your audio output
        if cameraSession.canAddOutput(audioDataOutput) {
            audioDataOutput.setSampleBufferDelegate(self, queue: queue)
            cameraSession.addOutput(audioDataOutput)
            print("Added AVCaptureDataOutput: audio")
        }

        //Set up the AVAssetWriter (to write to file)
        do {
            videoWriter = try AVAssetWriter(outputURL: getURL()!, fileType: AVFileType.mp4)
            print("Setup AVAssetWriter")

            //Video Settings
            let videoSettings: [String : Any] = [
                AVVideoCodecKey : AVVideoCodecType.h264,
                AVVideoWidthKey : 720,
                AVVideoHeightKey : 1280,
            ]
            videoWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
            videoWriterVideoInput?.expectsMediaDataInRealTime = true
            print("Setup AVAssetWriterInput: Video")
            if (videoWriter?.canAdd(videoWriterVideoInput!))! {
                videoWriter?.add(videoWriterVideoInput!)
                print("Added AVAssetWriterInput: Video")
            } else {
                print("Could not add VideoWriterInput to VideoWriter")
            }

            // Add the audio input
            //Audio Settings
            let audioSettings: [String : Any] = [
                AVFormatIDKey : kAudioFormatMPEG4AAC,
                AVSampleRateKey : 44100,
                AVEncoderBitRateKey : 64000,
                AVNumberOfChannelsKey: 1
            ]
            videoWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: audioSettings)
            videoWriterAudioInput?.expectsMediaDataInRealTime = true
            print("Setup AVAssetWriterInput: Audio")
            if (videoWriter?.canAdd(videoWriterAudioInput!))! {
                videoWriter?.add(videoWriterAudioInput!)
                print("Added AVAssetWriterInput: Audio")
            } else {
                print("Could not add AudioWriterInput to VideoWriter")
            }
        } catch {
            print("ERROR")
            return
        }

        //PixelWriter
        videoWriterInputPixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterVideoInput!, sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 1280,
            kCVPixelBufferHeightKey as String: 768,
            kCVPixelFormatOpenGLESCompatibility as String: true,
        ])
        print("Created AVAssetWriterInputPixelBufferAdaptor")

        //Present the preview of video
        previewLayer = AVCaptureVideoPreviewLayer(session: cameraSession)
        previewLayer.position = CGPoint.init(x: CGFloat(self.view.frame.width/2), y: CGFloat(self.view.frame.height/2))
        previewLayer.bounds = self.view.bounds
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraView.layer.addSublayer(previewLayer)
        print("Created AVCaptureVideoPreviewLayer")

        //Don't forget start running your session
        //this doesn't mean start record!
        cameraSession.commitConfiguration()
        cameraSession.startRunning()
    } catch let error {
        debugPrint(error.localizedDescription)
    }
}
Start recording:
func startRecording() {
    print("Begin Recording...")
    let recordingClock = self.cameraSession.masterClock
    isRecording = true
    videoWriter?.startWriting()
    videoWriter?.startSession(atSourceTime: CMClockGetTime(recordingClock!))
}
Stop recording:
func stopRecording() {
    if (videoWriter?.status.rawValue == 1) {
        videoWriterVideoInput?.markAsFinished()
        videoWriterAudioInput?.markAsFinished()
        print("video finished")
        print("audio finished")
    } else {
        print("not writing")
    }
    self.videoWriter?.finishWriting {
        self.isRecording = false
        print("finished writing")
        DispatchQueue.main.async {
            if self.videoWriter?.status == AVAssetWriterStatus.failed {
                print("status: failed")
            } else if self.videoWriter?.status == AVAssetWriterStatus.completed {
                print("status: completed")
            } else if self.videoWriter?.status == AVAssetWriterStatus.cancelled {
                print("status: cancelled")
            } else {
                print("status: unknown")
            }
            if let e = self.videoWriter?.error {
                print("stop record error:", e)
            }
        }
    }
    print("Stop Recording!")
}
And this is the delegate method, which gets called for video, but not for audio:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    autoreleasepool {
        guard CMSampleBufferDataIsReady(sampleBuffer) else {
            return
        }
        if (connection.isVideoOrientationSupported) {
            connection.videoOrientation = currentVideoOrientation()
        } else {
            return
        }
        if (connection.isVideoStabilizationSupported) {
            //connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationMode.auto
        }
        if !self.isRecording {
            return
        }
        if let audio = self.videoWriterAudioInput {
            if connection.audioChannels.count > 0 {
                //EXECUTION NEVER REACHES HERE
                if audio.isReadyForMoreMediaData {
                    queue!.async() {
                        audio.append(sampleBuffer)
                    }
                    return
                }
            }
        }
        if let camera = self.videoWriterVideoInput, camera.isReadyForMoreMediaData {
            //This is getting called!!!
            //`image` and `timestamp` come from code elided in the question
            queue!.async() {
                self.videoWriterInputPixelBufferAdaptor.append(self.imageToBuffer(from: image!)!, withPresentationTime: timestamp)
            }
        }
    } //End autoreleasepool
}
I am sure the problem does not lie with my devices or inputs, as I was able to successfully record video and audio using AVCaptureMovieFileOutput. I have also read other relevant posts with no luck:
Corrupt video capturing audio and video using AVAssetWriter
AVAssetWriter audio with video together
Ripped my hair out for days on this. My mistake was simple: the delegate method was being called, but it returned BEFORE reaching the audio statements. These were the culprits, which needed to be moved after the audio-processing portion of my code:
if (connection.isVideoOrientationSupported) {
    connection.videoOrientation = currentVideoOrientation()
} else {
    return
}
if (connection.isVideoStabilizationSupported) {
    //connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationMode.auto
}
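A minimal sketch of the reordered delegate, assuming the audioDataOutput and writer-input properties from the question's setup; the point is simply that the audio branch runs before any video-only check can return early:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard CMSampleBufferDataIsReady(sampleBuffer), isRecording else { return }

    // Audio first: an audio connection reports isVideoOrientationSupported == false,
    // so the old early return starved this branch.
    if output == audioDataOutput, let audio = videoWriterAudioInput, audio.isReadyForMoreMediaData {
        audio.append(sampleBuffer)
        return
    }

    // Video-only configuration stays below the audio branch.
    if connection.isVideoOrientationSupported {
        connection.videoOrientation = currentVideoOrientation()
    }
    if let camera = videoWriterVideoInput, camera.isReadyForMoreMediaData {
        camera.append(sampleBuffer)
    }
}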

Saving video from CMSampleBuffer while streaming using ReplayKit

I'm streaming the content of my app to my RTMP server using RPBroadcastSampleHandler.
One of the methods is
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    switch sampleBufferType {
    case .video:
        streamer.appendSampleBuffer(sampleBuffer, withType: .video)
        captureOutput(sampleBuffer)
    case .audioApp:
        streamer.appendSampleBuffer(sampleBuffer, withType: .audio)
        captureAudioOutput(sampleBuffer)
    case .audioMic:
        ()
    }
}
And the captureOutput method is
self.lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
// Append the sampleBuffer into videoWriterInput
if self.isRecordingVideo {
    if self.videoWriterInput!.isReadyForMoreMediaData {
        if self.videoWriter!.status == AVAssetWriterStatus.writing {
            let whetherAppendSampleBuffer = self.videoWriterInput!.append(sampleBuffer)
            print(">>>>>>>>>>>>>The time::: \(self.lastSampleTime.value)/\(self.lastSampleTime.timescale)")
            if whetherAppendSampleBuffer {
                print("DEBUG::: Append sample buffer successfully")
            } else {
                print("WARN::: Append sample buffer failed")
            }
        } else {
            print("WARN:::The videoWriter status is not writing")
        }
    } else {
        print("WARN:::Cannot append sample buffer into videoWriterInput")
    }
}
Since this sample buffer contains audio/video data, I figured I can use AVKit to save it locally while streaming. So what I'm doing is creating an asset writer at the start of the stream:
let fileManager = FileManager.default
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
self.videoOutputFullFileName = documentsPath.stringByAppendingPathComponent(str: "test_capture_video.mp4")

if self.videoOutputFullFileName == nil {
    print("ERROR:The video output file name is nil")
    return
}

self.isRecordingVideo = true

if fileManager.fileExists(atPath: self.videoOutputFullFileName!) {
    print("WARN:::The file: \(self.videoOutputFullFileName!) exists, will delete the existing file")
    do {
        try fileManager.removeItem(atPath: self.videoOutputFullFileName!)
    } catch let error as NSError {
        print("WARN:::Cannot delete existing file: \(self.videoOutputFullFileName!), error: \(error.debugDescription)")
    }
} else {
    print("DEBUG:::The file \(self.videoOutputFullFileName!) doesn't exist")
}

let screen = UIScreen.main
let screenBounds = info.size
let videoCompressionPropertys = [
    AVVideoAverageBitRateKey: screenBounds.width * screenBounds.height * 10.1
]
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: screenBounds.width,
    AVVideoHeightKey: screenBounds.height,
    AVVideoCompressionPropertiesKey: videoCompressionPropertys
]

self.videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
guard let videoWriterInput = self.videoWriterInput else {
    print("ERROR:::No video writer input")
    return
}
videoWriterInput.expectsMediaDataInRealTime = true

// Add the audio input
var acl = AudioChannelLayout()
memset(&acl, 0, MemoryLayout<AudioChannelLayout>.size)
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono
let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRateKey: 64000,
    AVChannelLayoutKey: Data(bytes: &acl, count: MemoryLayout<AudioChannelLayout>.size)
]
audioWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioOutputSettings)
guard let audioWriterInput = self.audioWriterInput else {
    print("ERROR:::No audio writer input")
    return
}
audioWriterInput.expectsMediaDataInRealTime = true

do {
    self.videoWriter = try AVAssetWriter(outputURL: URL(fileURLWithPath: self.videoOutputFullFileName!), fileType: AVFileTypeMPEG4)
} catch let error as NSError {
    print("ERROR:::::>>>>>>>>>>>>>Cannot init videoWriter, error:\(error.localizedDescription)")
}
guard let videoWriter = self.videoWriter else {
    print("ERROR:::No video writer")
    return
}

if videoWriter.canAdd(videoWriterInput) {
    videoWriter.add(videoWriterInput)
} else {
    print("ERROR:::Cannot add videoWriterInput into videoWriter")
}

//Add audio input
if videoWriter.canAdd(audioWriterInput) {
    videoWriter.add(audioWriterInput)
} else {
    print("ERROR:::Cannot add audioWriterInput into videoWriter")
}

if videoWriter.status != AVAssetWriterStatus.writing {
    print("DEBUG::::::::::::::::The videoWriter status is not writing, and will start writing the video.")
    let hasStartedWriting = videoWriter.startWriting()
    if hasStartedWriting {
        videoWriter.startSession(atSourceTime: self.lastSampleTime)
        print("DEBUG:::Have started writing on videoWriter, session at source time: \(self.lastSampleTime)")
        LOG(videoWriter.status.rawValue)
    } else {
        print("WARN:::Fail to start writing on videoWriter")
    }
} else {
    print("WARN:::The videoWriter.status is writing now, so cannot start writing action on videoWriter")
}
And then saving and finishing writing at the end of the stream:
print("DEBUG::: Starting to process recorder final...")
print("DEBUG::: videoWriter status: \(self.videoWriter!.status.rawValue)")
self.isRecordingVideo = false
guard let videoWriterInput = self.videoWriterInput else {
print("ERROR:::No video writer input")
return
}
guard let videoWriter = self.videoWriter else {
print("ERROR:::No video writer")
return
}
guard let audioWriterInput = self.audioWriterInput else {
print("ERROR:::No audio writer input")
return
}
videoWriterInput.markAsFinished()
audioWriterInput.markAsFinished()
videoWriter.finishWriting {
if videoWriter.status == AVAssetWriterStatus.completed {
print("DEBUG:::The videoWriter status is completed")
let fileManager = FileManager.default
if fileManager.fileExists(atPath: self.videoOutputFullFileName!) {
print("DEBUG:::The file: \(self.videoOutputFullFileName ?? "") has been saved in documents folder, and is ready to be moved to camera roll")
let sharedFileURL = fileManager.containerURL(forSecurityApplicationGroupIdentifier: "group.jp.awalker.co.Hotter")
guard let documentsPath = sharedFileURL?.path else {
LOG("ERROR:::No shared file URL path")
return
}
let finalFilename = documentsPath.stringByAppendingPathComponent(str: "test_capture_video.mp4")
//Check whether file exists
if fileManager.fileExists(atPath: finalFilename) {
print("WARN:::The file: \(finalFilename) exists, will delete the existing file")
do {
try fileManager.removeItem(atPath: finalFilename)
} catch let error as NSError {
print("WARN:::Cannot delete existing file: \(finalFilename), error: \(error.debugDescription)")
}
} else {
print("DEBUG:::The file \(self.videoOutputFullFileName!) doesn't exist")
}
do {
try fileManager.copyItem(at: URL(fileURLWithPath: self.videoOutputFullFileName!), to: URL(fileURLWithPath: finalFilename))
}
catch let error as NSError {
LOG("ERROR:::\(error.debugDescription)")
}
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: finalFilename))
}) { completed, error in
if completed {
print("Video \(self.videoOutputFullFileName ?? "") has been moved to camera roll")
}
if error != nil {
print ("ERROR:::Cannot move the video \(self.videoOutputFullFileName ?? "") to camera roll, error: \(error!.localizedDescription)")
}
}
} else {
print("ERROR:::The file: \(self.videoOutputFullFileName ?? "") doesn't exist, so can't move this file camera roll")
}
} else {
print("WARN:::The videoWriter status is not completed, stauts: \(videoWriter.status)")
}
}
The problem I'm having is that the finishWriting completion code is never reached. The writer stays in "writing" status, so the video file is never saved.
If I remove the "finishWriting" line and just leave the completion code to run, a file is saved, but it isn't properly finalized; when I try to view it, it's unplayable, probably because it's missing metadata.
Is there any other way to do this? I don't want to start a separate capture with AVKit to save the recording, because it takes too much CPU, and the RPBroadcastSampleHandler's CMSampleBuffer already has the video data. But maybe using AVKit at all is the wrong move here?
What should I change? How do I save the video from that CMSampleBuffer?
From https://developer.apple.com/documentation/avfoundation/avassetwriter/1390432-finishwritingwithcompletionhandl
This method returns immediately and causes its work to be performed asynchronously
When broadcastFinished returns, your extension is killed. The only way I've been able to get this to work is by blocking the method from returning until the video processing is done. I'm not sure if this is the correct way to do it (seems weird), but it works. Something like this:
var finishedWriting = false
videoWriter.finishWriting {
    NSLog("DEBUG:::The videoWriter finished writing.")
    if videoWriter.status == .completed {
        NSLog("DEBUG:::The videoWriter status is completed")
        let fileManager = FileManager.default
        if fileManager.fileExists(atPath: self.videoOutputFullFileName!) {
            NSLog("DEBUG:::The file: \(self.videoOutputFullFileName ?? "") has been saved in documents folder, and is ready to be moved to camera roll")
            let sharedFileURL = fileManager.containerURL(forSecurityApplicationGroupIdentifier: "group.xxx.com")
            guard let documentsPath = sharedFileURL?.path else {
                NSLog("ERROR:::No shared file URL path")
                finishedWriting = true
                return
            }
            let finalFilename = documentsPath + "/test_capture_video.mp4"
            //Check whether file exists
            if fileManager.fileExists(atPath: finalFilename) {
                NSLog("WARN:::The file: \(finalFilename) exists, will delete the existing file")
                do {
                    try fileManager.removeItem(atPath: finalFilename)
                } catch let error as NSError {
                    NSLog("WARN:::Cannot delete existing file: \(finalFilename), error: \(error.debugDescription)")
                }
            } else {
                NSLog("DEBUG:::The file \(self.videoOutputFullFileName!) doesn't exist")
            }
            do {
                try fileManager.copyItem(at: URL(fileURLWithPath: self.videoOutputFullFileName!), to: URL(fileURLWithPath: finalFilename))
            } catch let error as NSError {
                NSLog("ERROR:::\(error.debugDescription)")
            }
            PHPhotoLibrary.shared().performChanges({
                PHAssetCollectionChangeRequest.creationRequestForAssetCollection(withTitle: "xxx")
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: finalFilename))
            }) { completed, error in
                if completed {
                    NSLog("Video \(self.videoOutputFullFileName ?? "") has been moved to camera roll")
                }
                if error != nil {
                    NSLog("ERROR:::Cannot move the video \(self.videoOutputFullFileName ?? "") to camera roll, error: \(error!.localizedDescription)")
                }
                finishedWriting = true
            }
        } else {
            NSLog("ERROR:::The file: \(self.videoOutputFullFileName ?? "") doesn't exist, so can't move this file camera roll")
            finishedWriting = true
        }
    } else {
        NSLog("WARN:::The videoWriter status is not completed, status: \(videoWriter.status)")
        finishedWriting = true
    }
}

while finishedWriting == false {
    // NSLog("DEBUG:::Waiting to finish writing...")
}
I would think you'd have to also call extensionContext.completeRequest at some point, but mine's working fine without it (shrug).
@Marty's answer should be accepted because he pointed out the problem, and a DispatchGroup solves it perfectly.
Since he used a while loop and didn't describe how to use DispatchGroups, here's the way I implemented it:
override func broadcastFinished() {
    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    self.writerInput.markAsFinished()
    self.writer.finishWriting {
        // Do your work here to make the video available
        dispatchGroup.leave()
    }
    dispatchGroup.wait() // <= blocks the thread here
}
You can try this:
override func broadcastFinished() {
    Log(#function)
    ...
    // Need to give the end CMTime; if it is not set, the video cannot be used
    videoWriter.endSession(atSourceTime: ...)
    videoWriter.finishWriting {
        // The callback is not guaranteed to execute here
    }
    ...
    // The method then returns as before.
}
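Combining the two suggestions into one hedged sketch (assuming the handler tracks the last appended presentation timestamp in a lastSampleTime property):

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    // ... append to the writer inputs as above ...
}

override func broadcastFinished() {
    let group = DispatchGroup()
    group.enter()
    videoWriterInput.markAsFinished()
    audioWriterInput.markAsFinished()
    videoWriter.endSession(atSourceTime: lastSampleTime) // explicit end time
    videoWriter.finishWriting {
        group.leave()
    }
    group.wait() // keep the extension alive until the file is finalized
}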

Video is not getting saved + appending pixel buffer to adapter returns false; both are very rare and random

I am writing a video to the photo library / documents directory using a capture session and AVAssetWriter. What I want to know is why, when I append a pixel buffer to the adapter, I sometimes get false here: print("video is \(bobo)"). The same happens with audio.
This prevents my output file from being saved, and I get an error on export and saving.
I have been working on this for a long time; any suggestions or spotted mistakes would help me a lot.
The main problem is that the issue is very random, say 1 in 10 times, but it does persist, and I want to eliminate it.
My code where I append the pixel buffer to the adapter:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    starTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if captureOutput == videoOutput {
        if self.record == true {
            let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
            if self.record == true {
                if self.writerInput.isReadyForMoreMediaData {
                    DispatchQueue(label: "newQeueLocalFeedVideo2", attributes: DispatchQueue.Attributes.concurrent).sync(execute: {
                        starTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                        let bobo = self.adapter.append(pixelBuffer!, withPresentationTime: self.starTime)
                        print("video is \(bobo)")
                    })
                }
            }
        }
    } else if captureOutput == audioOutput {
        if self.record == true {
            if audioWriterInput.isReadyForMoreMediaData {
                let bo = audioWriterInput.append(sampleBuffer)
                print("audio conversion is \(bo)")
            }
        }
    }
}
/*****------******/
Code where I am setting up the asset writer:
{
    let fileUrl = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("\(getCurrentDate())-capturedvideo.mp4")
    lastPath = fileUrl.path
    videoWriter = try? AVAssetWriter(outputURL: fileUrl, fileType: AVFileTypeMPEG4)
    lastPathURL = fileUrl

    let outputSettings = [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : NSNumber(value: Float(outputSize.width) as Float), AVVideoHeightKey : NSNumber(value: Float(outputSize.height) as Float)] as [String : Any]
    writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
    writerInput.expectsMediaDataInRealTime = true
    // writerInput.performsMultiPassEncodingIfSupported = true
    audioWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: DejalActivityView.getAudioDictionary() as? [String:AnyObject])

    videoWriter.add(writerInput)
    videoWriter.add(audioWriterInput)

    adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: DejalActivityView.getAdapterDictionary() as? [String:AnyObject])

    videoWriter.startWriting()
    videoWriter.startSession(atSourceTime: starTime)
    //self.client?.recordCaptureSession.captureSession.startRunning()
    record = true
}
And to export the file I am using this code:
self.videoWriter.finishWriting { () -> Void in
    Thread.sleep(forTimeInterval: 1.0)
    if self.videoWriter.status == AVAssetWriterStatus.failed {
        print("oh noes, an error: \(self.videoWriter.error.debugDescription)")
        completionHandler(true)
    } else {
        let content = FileManager.default.contents(atPath: self.lastPathURL.path)
        print("wrote video: \(self.lastPathURL.path) at size: \(content?.count)")
        // This below line will save the video to photo library
        HEPhotoLibraryHelper.saveVideosToPhotoLibrary(self.lastPathURL, withCompletionBlock: { (result) in
            if result == true {
                do {
                    try HEDocDirectory.shared.fileManagerDefault.removeItem(atPath: self.lastPath)
                } catch let err as NSError {
                    print("Error in removing file from doc dir \(err.localizedDescription)")
                }
            }
        })
    }
}

iOS - Trimming mp4 on device

I thought I might be able to use an AVAssetReader/Writer to trim an MP4, but when I attempt to read the audio (haven't tried video yet), I get this:
AVAssetReaderOutput does not currently support compressed output
Is there any way to read an MP4 off disk via AVAssetReader?
let asset: AVAsset = AVAsset(URL: NSURL(fileURLWithPath: filePath))
let audioTrack: AVAssetTrack = asset.tracksWithMediaType(AVMediaTypeAudio)[0]
let audioTrackOutput: AVAssetReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings:
    [AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatMPEG4AAC)])

if let reader = try? AVAssetReader(asset: asset) {
    if reader.canAddOutput(audioTrackOutput) {
        reader.addOutput(audioTrackOutput)
    }
    reader.startReading()
    var done = false
    while (!done) {
        let sampleBuffer: CMSampleBufferRef? = audioTrackOutput.copyNextSampleBuffer()
        if (sampleBuffer != nil) {
            print("buffer")
            //process
        } else {
            if (reader.status == AVAssetReaderStatus.Failed) {
                print("Error: ")
                print(reader.error)
            } else {
                done = true
            }
        }
    }
}
AVAssetExportSession is the solution. SCAssetExportSession works even better.
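For reference, a hedged sketch of a trim using AVAssetExportSession in modern Swift (paths and times are placeholders; the passthrough preset avoids re-encoding, though not every source supports passthrough to MP4):

import AVFoundation

func trimMP4(at inputURL: URL, to outputURL: URL, start: Double, duration: Double, completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: inputURL)
    guard let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetPassthrough) else {
        completion(false)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mp4
    // Trim by restricting the export's time range.
    export.timeRange = CMTimeRange(start: CMTime(seconds: start, preferredTimescale: 600),
                                   duration: CMTime(seconds: duration, preferredTimescale: 600))
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}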

iOS swift convert mp3 to aac

I'm converting an mp3 to m4a in Swift with code based on this.
It works when I generate a PCM file. When I change the export format to m4a it generates a file but it won't play. Why is it corrupt?
Here is the code so far:
import AVFoundation
import UIKit

class ViewController: UIViewController {

    var rwAudioSerializationQueue:dispatch_queue_t!
    var asset:AVAsset!
    var assetReader:AVAssetReader!
    var assetReaderAudioOutput:AVAssetReaderTrackOutput!
    var assetWriter:AVAssetWriter!
    var assetWriterAudioInput:AVAssetWriterInput!
    var outputURL:NSURL!

    override func viewDidLoad() {
        super.viewDidLoad()

        let rwAudioSerializationQueueDescription = String(self) + " rw audio serialization queue"
        // Create the serialization queue to use for reading and writing the audio data.
        self.rwAudioSerializationQueue = dispatch_queue_create(rwAudioSerializationQueueDescription, nil)

        let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
        let documentsPath = paths[0]

        print(NSBundle.mainBundle().pathForResource("input", ofType: "mp3"))
        self.asset = AVAsset(URL: NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("input", ofType: "mp3")!))
        self.outputURL = NSURL(fileURLWithPath: documentsPath + "/output.m4a")
        print(self.outputURL)

        // [self.asset loadValuesAsynchronouslyForKeys:#[#"tracks"] completionHandler:^{
        self.asset.loadValuesAsynchronouslyForKeys(["tracks"], completionHandler: {
            print("loaded")
            var success = true
            var localError:NSError?
            success = (self.asset.statusOfValueForKey("tracks", error: &localError) == AVKeyValueStatus.Loaded)
            // Check for success of loading the assets tracks.
            //success = ([self.asset statusOfValueForKey:#"tracks" error:&localError] == AVKeyValueStatusLoaded);
            if (success)
            {
                // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
                let fm = NSFileManager.defaultManager()
                let localOutputPath = self.outputURL.path
                if (fm.fileExistsAtPath(localOutputPath!)) {
                    do {
                        try fm.removeItemAtPath(localOutputPath!)
                        success = true
                    } catch {
                    }
                }
            }
            if (success) {
                success = self.setupAssetReaderAndAssetWriter()
            }
            if (success) {
                success = self.startAssetReaderAndWriter()
            }
        })
    }

    func setupAssetReaderAndAssetWriter() -> Bool {
        do {
            try self.assetReader = AVAssetReader(asset: self.asset)
        } catch {
        }
        do {
            try self.assetWriter = AVAssetWriter(URL: self.outputURL, fileType: AVFileTypeCoreAudioFormat)
        } catch {
        }

        var assetAudioTrack:AVAssetTrack? = nil
        let audioTracks = self.asset.tracksWithMediaType(AVMediaTypeAudio)
        if (audioTracks.count > 0) {
            assetAudioTrack = audioTracks[0]
        }

        if (assetAudioTrack != nil)
        {
            let decompressionAudioSettings:[String : AnyObject] = [
                AVFormatIDKey:Int(kAudioFormatLinearPCM)
            ]
            self.assetReaderAudioOutput = AVAssetReaderTrackOutput(track: assetAudioTrack!, outputSettings: decompressionAudioSettings)
            self.assetReader.addOutput(self.assetReaderAudioOutput)

            var channelLayout = AudioChannelLayout()
            memset(&channelLayout, 0, sizeof(AudioChannelLayout));
            channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

            /*let compressionAudioSettings:[String : AnyObject] = [
                AVFormatIDKey:Int(kAudioFormatMPEG4AAC) ,
                AVEncoderBitRateKey:128000,
                AVSampleRateKey:44100 ,
                // AVEncoderBitRatePerChannelKey:16,
                // AVEncoderAudioQualityKey:AVAudioQuality.High.rawValue,
                AVNumberOfChannelsKey:2,
                AVChannelLayoutKey: NSData(bytes:&channelLayout, length:sizeof(AudioChannelLayout))
            ]
            var outputSettings:[String : AnyObject] = [
                AVFormatIDKey: Int(kAudioFormatLinearPCM),
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2,
                AVChannelLayoutKey: NSData(bytes:&channelLayout, length:sizeof(AudioChannelLayout)),
                AVLinearPCMBitDepthKey: 16,
                AVLinearPCMIsNonInterleaved: false,
                AVLinearPCMIsFloatKey: false,
                AVLinearPCMIsBigEndianKey: false
            ]*/
            let outputSettings:[String : AnyObject] = [
                AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2,
                AVChannelLayoutKey: NSData(bytes:&channelLayout, length:sizeof(AudioChannelLayout)) ]
            self.assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: outputSettings)
            self.assetWriter.addInput(self.assetWriterAudioInput)
        }
        return true
    }

    func startAssetReaderAndWriter() -> Bool {
        self.assetWriter.startWriting()
        self.assetReader.startReading()
        self.assetWriter.startSessionAtSourceTime(kCMTimeZero)
        self.assetWriterAudioInput.requestMediaDataWhenReadyOnQueue(self.rwAudioSerializationQueue, usingBlock: {
            while (self.assetWriterAudioInput.readyForMoreMediaData) {
                var sampleBuffer = self.assetReaderAudioOutput.copyNextSampleBuffer()
                if (sampleBuffer != nil) {
                    self.assetWriterAudioInput.appendSampleBuffer(sampleBuffer!)
                    sampleBuffer = nil
                } else {
                    self.assetWriterAudioInput.markAsFinished()
                    self.assetReader.cancelReading()
                    print("done")
                    break
                }
            }
        })
        return true
    }
}
Updated the source code from the question to Swift 4 and wrapped it in a class. Credit goes to Castles and Rythmic Fistman for the original source code and answer. I left the author's comments and added a few assertions and print statements for debugging. Tested on iOS.
The bit rate for the output file is hardcoded at 96 kb/s, but you can easily override this value. Most of the audio files I'm converting are 320 kb/s, so I'm using this class to compress the files for offline storage. Compression results are at the bottom of this answer.
Usage:
let inputFilePath = URL(fileURLWithPath: "/path/to/file.mp3")
let outputFileURL = URL(fileURLWithPath: "/path/to/output/compressed.mp4")
if let audioConverter = AVAudioFileConverter(inputFileURL: inputFilePath, outputFileURL: outputFileURL) {
    audioConverter.convert()
}
Class
import AVFoundation
final class AVAudioFileConverter {

    var rwAudioSerializationQueue: DispatchQueue!
    var asset:AVAsset!
    var assetReader:AVAssetReader!
    var assetReaderAudioOutput:AVAssetReaderTrackOutput!
    var assetWriter:AVAssetWriter!
    var assetWriterAudioInput:AVAssetWriterInput!
    var outputURL:URL
    var inputURL:URL

    init?(inputFileURL: URL, outputFileURL: URL) {
        inputURL = inputFileURL
        outputURL = outputFileURL
        if !FileManager.default.fileExists(atPath: inputURL.path) {
            print("Input file does not exist at file path \(inputURL.path)")
            return nil
        }
    }

    func convert() {
        let rwAudioSerializationQueueDescription = " rw audio serialization queue"
        // Create the serialization queue to use for reading and writing the audio data.
        rwAudioSerializationQueue = DispatchQueue(label: rwAudioSerializationQueueDescription)
        assert(rwAudioSerializationQueue != nil, "Failed to initialize Dispatch Queue")

        asset = AVAsset(url: inputURL)
        assert(asset != nil, "Error creating AVAsset from input URL")
        print("Output file path -> ", outputURL.absoluteString)

        asset.loadValuesAsynchronously(forKeys: ["tracks"], completionHandler: {
            var success = true
            var localError:NSError?
            success = (self.asset.statusOfValue(forKey: "tracks", error: &localError) == AVKeyValueStatus.loaded)
            // Check for success of loading the assets tracks.
            if (success) {
                // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
                let fm = FileManager.default
                let localOutputPath = self.outputURL.path
                if (fm.fileExists(atPath: localOutputPath)) {
                    do {
                        try fm.removeItem(atPath: localOutputPath)
                        success = true
                    } catch {
                        print("Error trying to remove output file at path -> \(localOutputPath)")
                    }
                }
            }
            if (success) {
                success = self.setupAssetReaderAndAssetWriter()
            } else {
                print("Failed setting up Asset Reader and Writer")
            }
            if (success) {
                success = self.startAssetReaderAndWriter()
                return
            } else {
                print("Failed to start Asset Reader and Writer")
            }
        })
    }

    func setupAssetReaderAndAssetWriter() -> Bool {
        do {
            assetReader = try AVAssetReader(asset: asset)
        } catch {
            print("Error Creating AVAssetReader")
        }
        do {
            assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: AVFileType.m4a)
        } catch {
            print("Error Creating AVAssetWriter")
        }

        var assetAudioTrack:AVAssetTrack? = nil
        let audioTracks = asset.tracks(withMediaType: AVMediaType.audio)
        if (audioTracks.count > 0) {
            assetAudioTrack = audioTracks[0]
        }

        if (assetAudioTrack != nil) {
            let decompressionAudioSettings:[String : Any] = [
                AVFormatIDKey:Int(kAudioFormatLinearPCM)
            ]
            assetReaderAudioOutput = AVAssetReaderTrackOutput(track: assetAudioTrack!, outputSettings: decompressionAudioSettings)
            assert(assetReaderAudioOutput != nil, "Failed to initialize AVAssetReaderTrackOutput")
            assetReader.add(assetReaderAudioOutput)

            var channelLayout = AudioChannelLayout()
            memset(&channelLayout, 0, MemoryLayout<AudioChannelLayout>.size)
            channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

            let outputSettings:[String : Any] = [
                AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
                AVSampleRateKey: 44100,
                AVEncoderBitRateKey: 96000,
                AVNumberOfChannelsKey: 2,
                AVChannelLayoutKey: NSData(bytes:&channelLayout, length:MemoryLayout<AudioChannelLayout>.size)]
            assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: outputSettings)
            assert(assetWriterAudioInput != nil, "Failed to initialize AVAssetWriterInput")
            assetWriter.add(assetWriterAudioInput)
        }
        print("Finished Setup of AVAssetReader and AVAssetWriter")
        return true
    }

    func startAssetReaderAndWriter() -> Bool {
        print("STARTING ASSET WRITER")
        assetWriter.startWriting()
        assetReader.startReading()
        assetWriter.startSession(atSourceTime: kCMTimeZero)
        assetWriterAudioInput.requestMediaDataWhenReady(on: rwAudioSerializationQueue, using: {
            while (self.assetWriterAudioInput.isReadyForMoreMediaData) {
                var sampleBuffer = self.assetReaderAudioOutput.copyNextSampleBuffer()
                if (sampleBuffer != nil) {
                    self.assetWriterAudioInput.append(sampleBuffer!)
                    sampleBuffer = nil
                } else {
                    self.assetWriterAudioInput.markAsFinished()
                    self.assetReader.cancelReading()
                    self.assetWriter.finishWriting {
                        print("Asset Writer Finished Writing")
                    }
                    break
                }
            }
        })
        return true
    }
}
Input File: 17.3 MB
// generated with afinfo on mac
File: D290A73C37B777F1.mp3
File type ID: MPG3
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, '.mp3' (0x00000000) 0 bits/channel, 0 bytes/packet, 1152 frames/packet, 0 bytes/frame
no channel layout.
estimated duration: 424.542025 sec
audio bytes: 16981681
audio packets: 16252
bit rate: 320000 bits per second
packet size upper bound: 1052
maximum packet size: 1045
audio data file offset: 322431
optimized
audio 18720450 valid frames + 576 priming + 1278 remainder = 18722304
----
Output File: 5.1 MB
// generated with afinfo on Mac
File: compressed.m4a
File type ID: m4af
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, 'aac ' (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
Channel layout: Stereo (L R)
estimated duration: 424.542041 sec
audio bytes: 5019294
audio packets: 18286
bit rate: 94569 bits per second
packet size upper bound: 763
maximum packet size: 763
audio data file offset: 44
not optimized
audio 18722304 valid frames + 2112 priming + 448 remainder = 18724864
format list:
[ 0] format: 2 ch, 44100 Hz, 'aac ' (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
Channel layout: Stereo (L R)
----
Update
You're creating a caf file instead of an m4a.
Replace AVFileTypeCoreAudioFormat with AVFileTypeAppleM4A in
AVAssetWriter(URL: self.outputURL, fileType: AVFileTypeCoreAudioFormat)
Call self.assetWriter.finishWritingWithCompletionHandler() when you've finished.
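Putting both fixes together in the question's Swift 2 dialect (a sketch, not a drop-in; the writer line replaces the one in setupAssetReaderAndAssetWriter):

// Write an .m4a container instead of a .caf:
try self.assetWriter = AVAssetWriter(URL: self.outputURL, fileType: AVFileTypeAppleM4A)

// ...and in startAssetReaderAndWriter, finalize the file once the last buffer is appended:
self.assetWriterAudioInput.markAsFinished()
self.assetReader.cancelReading()
self.assetWriter.finishWritingWithCompletionHandler({
    print("done writing m4a to \(self.outputURL)")
})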
