How to process frames of an existing video in Swift - ios

Currently I am trying to process the frames of an existing video with OpenCV. Are there any AV reader libraries that contain delegate methods that process frames while playing back videos? I know how to process frames during a live AVCaptureSession through the use of the AVCaptureVideoDataOutput and the captureOutput delegate method. Is there something similar for playing back videos?
Any help would be appreciated.

Here's the solution. Thanks to Tim Bull's answer, I accomplished this using AVAssetReader / AVAssetReaderTrackOutput.
I call the function below from a button tap to start reading the video and process each frame with OpenCV:
func processVids() {
    guard let pathOfOrigVid = Bundle.main.path(forResource: "output_10_34_34", ofType: "mp4") else {
        print("output_10_34_34.mp4 not found\n")
        exit(0)
    }
    var path: URL? = nil
    do {
        path = try FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
        path = path?.appendingPathComponent("grayVideo.mp4")
    } catch {
        print("Unable to make URL to Movies path\n")
        exit(0)
    }
    let movie = AVURLAsset(url: URL(fileURLWithPath: pathOfOrigVid), options: nil)
    let tracks = movie.tracks(withMediaType: AVMediaTypeVideo)
    let track = tracks[0]
    var reader: AVAssetReader? = nil
    do {
        reader = try AVAssetReader(asset: movie)
    } catch {
        print("Problem initializing AVAssetReader\n")
    }
    let settings: [String: Any] = [
        String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32ARGB),
        String(kCVPixelBufferIOSurfacePropertiesKey): [:]
    ]
    let rout = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader?.add(rout)
    reader?.startReading()
    DispatchQueue.global().async {
        while reader?.status == AVAssetReaderStatus.reading {
            // Call copyNextSampleBuffer() only once per iteration;
            // calling it twice silently drops every other frame.
            if let sbuff: CMSampleBuffer = rout.copyNextSampleBuffer() {
                // Buffer of the frame to perform OpenCV processing on
            }
            usleep(10000)
        }
    }
}

AVAssetReader / AVAssetReaderOutput are what you're looking for. Check out the copyNextSampleBuffer() method.
https://developer.apple.com/documentation/avfoundation/avassetreaderoutput

You can use AVVideoComposition.
If you want to process frames with Core Image, you can create an instance by calling the init(asset:applyingCIFiltersWithHandler:) method.
Or you can create a custom compositor:
You can implement your own custom video compositor by implementing the AVVideoCompositing protocol; a custom video compositor is provided with pixel buffers for each of its video sources during playback and other operations and can perform arbitrary graphical operations on them in order to produce visual output.
See the docs for more info. There is also an example available, though the example is in Objective-C.
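As a minimal sketch of the Core Image route (the filter name and file path here are placeholder assumptions, not from the original post):

```swift
import AVFoundation
import CoreImage

// Hypothetical asset URL; substitute your own video file.
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/video.mp4"))
let filter = CIFilter(name: "CIPhotoEffectNoir")!

// The handler is invoked once per frame during playback or export.
let composition = AVVideoComposition(asset: asset) { request in
    filter.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
    let output = filter.outputImage?.cropped(to: request.sourceImage.extent)
        ?? request.sourceImage
    request.finish(with: output, context: nil)
}

// Attach the composition to a player item for filtered playback:
let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
```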

For anyone who needs to process video frames with OpenCV, here is how to decode the video:
@objc public protocol ARVideoReaderDelegate: NSObjectProtocol {
    func reader(_ reader: ARVideoReader!, newFrameReady sampleBuffer: CMSampleBuffer?, _ frameCount: Int)
    func readerDidFinished(_ reader: ARVideoReader!, totalFrameCount: Int)
}

@objc open class ARVideoReader: NSObject {
    var _asset: AVURLAsset!
    @objc var _delegate: ARVideoReaderDelegate?

    @objc public init!(urlAsset asset: AVURLAsset) {
        _asset = asset
        super.init()
    }

    @objc open func startReading() {
        if let reader = try? AVAssetReader(asset: _asset) {
            let videoTrack = _asset.tracks(withMediaType: .video).first
            let options = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
            let readerOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: options as [String: Any])
            reader.add(readerOutput)
            reader.startReading()
            var count = 0
            // Read until the reader runs out of samples.
            while reader.status == .reading && videoTrack?.nominalFrameRate != 0 {
                let sampleBuffer = readerOutput.copyNextSampleBuffer()
                _delegate?.reader(self, newFrameReady: sampleBuffer, count)
                count += 1
            }
            _delegate?.readerDidFinished(self, totalFrameCount: count)
        }
    }
}
In the delegate callback:
// Convert sampleBuffer to cv::Mat (pixel format is 32BGRA)
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat cvImage = cv::Mat((int)height, (int)width, CV_8UC3);
cv::MatIterator_<cv::Vec3b> it_start = cvImage.begin<cv::Vec3b>();
cv::MatIterator_<cv::Vec3b> it_end = cvImage.end<cv::Vec3b>();
long cur = 0;
// Each row may be padded beyond width * 4 bytes.
size_t padding = CVPixelBufferGetBytesPerRow(imageBuffer) - width * 4;
size_t offset = 0; // accumulated row padding; starts at 0 for the first row
while (it_start != it_end) {
    // Copy one BGRA pixel into a 3-channel BGR Vec3b
    long p_idx = cur * 4 + offset;
    char b = baseBuffer[p_idx];
    char g = baseBuffer[p_idx + 1];
    char r = baseBuffer[p_idx + 2];
    cv::Vec3b newpixel(b, g, r);
    *it_start = newpixel;
    cur++;
    it_start++;
    if (cur % width == 0) {
        offset = offset + padding;
    }
}
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// process cvImage now

Related

Remove AVAssetWriter's First Black/Blank Frame

I have an AVAssetWriter that records a video with an applied filter, to then play back via AVQueuePlayer.
My issue is that, on playback, the recorded video displays a black/blank screen for the first frame. To my understanding, this is because the writer captures audio before it captures the first actual video frame.
To attempt to resolve this, I placed a boolean check when appending to the audio writer input, so that no audio is appended until the first video frame has been appended to the adapter. That said, I still saw a black frame on playback, despite having printed out the timestamps, which showed video preceding audio. I also tried adding a check to only start the write session when output == video, but ended up with the same result.
Any guidance or other workaround would be appreciated.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    if output == _videoOutput {
        if connection.isVideoOrientationSupported { connection.videoOrientation = .portrait }
        guard let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvImageBuffer: cvImageBuffer)
        guard let filteredCIImage = applyFilters(inputImage: ciImage) else { return }
        self.ciImage = filteredCIImage
        guard let cvPixelBuffer = getCVPixelBuffer(from: filteredCIImage) else { return }
        self.cvPixelBuffer = cvPixelBuffer
        self.ciContext.render(filteredCIImage, to: cvPixelBuffer, bounds: filteredCIImage.extent, colorSpace: CGColorSpaceCreateDeviceRGB())
        metalView.draw()
    }
    switch _captureState {
    case .start:
        guard let outputUrl = tempURL else { return }
        let writer = try! AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
        let videoSettings = _videoOutput!.recommendedVideoSettingsForAssetWriter(writingTo: .mp4)
        let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        videoInput.mediaTimeScale = CMTimeScale(bitPattern: 600)
        videoInput.expectsMediaDataInRealTime = true
        let pixelBufferAttributes = [
            kCVPixelBufferCGImageCompatibilityKey: NSNumber(value: true),
            kCVPixelBufferCGBitmapContextCompatibilityKey: NSNumber(value: true),
            kCVPixelBufferPixelFormatTypeKey: NSNumber(value: Int32(kCVPixelFormatType_32ARGB))
        ] as [String: Any]
        let adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput, sourcePixelBufferAttributes: pixelBufferAttributes)
        if writer.canAdd(videoInput) { writer.add(videoInput) }
        let audioSettings = _audioOutput!.recommendedAudioSettingsForAssetWriter(writingTo: .mp4) as? [String: Any]
        let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        audioInput.expectsMediaDataInRealTime = true
        if writer.canAdd(audioInput) { writer.add(audioInput) }
        _filename = outputUrl.absoluteString
        _assetWriter = writer
        _assetWriterVideoInput = videoInput
        _assetWriterAudioInput = audioInput
        _adapter = adapter
        _captureState = .capturing
        _time = timestamp
        writer.startWriting()
        writer.startSession(atSourceTime: CMTime(seconds: timestamp, preferredTimescale: CMTimeScale(600)))
    case .capturing:
        if output == _videoOutput {
            if _assetWriterVideoInput?.isReadyForMoreMediaData == true {
                let time = CMTime(seconds: timestamp, preferredTimescale: CMTimeScale(600))
                _adapter?.append(self.cvPixelBuffer, withPresentationTime: time)
                if !hasWrittenFirstVideoFrame { hasWrittenFirstVideoFrame = true }
            }
        } else if output == _audioOutput {
            if _assetWriterAudioInput?.isReadyForMoreMediaData == true, hasWrittenFirstVideoFrame {
                _assetWriterAudioInput?.append(sampleBuffer)
            }
        }
    case .end:
        guard _assetWriterVideoInput?.isReadyForMoreMediaData == true, _assetWriter!.status != .failed else { break }
        _assetWriterVideoInput?.markAsFinished()
        _assetWriterAudioInput?.markAsFinished()
        _assetWriter?.finishWriting { [weak self] in
            guard let output = self?._assetWriter?.outputURL else { return }
            self?._captureState = .idle
            self?._assetWriter = nil
            self?._assetWriterVideoInput = nil
            self?._assetWriterAudioInput = nil
            self?.previewRecordedVideo(with: output)
        }
    default:
        break
    }
}
It's true that in the .capturing state you make sure the first sample buffer written is a video sample buffer by discarding preceding audio sample buffers - however you are still allowing an audio sample buffer's presentation timestamp to start the timeline with writer.startSession(atSourceTime:). This means your video starts with nothing, so not only do you briefly hear nothing (which is hard to notice) you also see nothing, which your video player happens to represent with a black frame.
From this point of view, there are no black frames to remove, there is only a void to fill. You can fill this void by starting the session from the first video timestamp.
This can be achieved by guarding against non-video sample buffers in the .start state or, less cleanly, by moving writer.startSession(atSourceTime:) into the if !hasWrittenFirstVideoFrame {} block, I guess.
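A minimal sketch of the first suggestion (a hypothetical reduction of the handler above, not the asker's actual code):

```swift
import AVFoundation

// The writer session is only started when the first *video* buffer arrives,
// so the timeline begins at a visible frame instead of at an earlier audio
// timestamp. Call this once, from the .start state.
func startSessionOnFirstVideoFrame(writer: AVAssetWriter,
                                   isVideoBuffer: Bool,
                                   sampleBuffer: CMSampleBuffer) {
    guard isVideoBuffer else { return } // ignore audio until the session starts
    writer.startWriting()
    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
}
```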
p.s. why do you convert back and forth between CMTime and seconds? Why not stick with CMTime?

A way to crop AudioFile while recording?

I'm writing a first-in, first-out recording app that buffers up to 2.5 minutes of audio using AudioQueue. I've got most of it figured out, but I'm at a roadblock trying to crop audio data.
I've seen people do it with AVAssetExportSession, but it seems like it wouldn't be performant to export a new track every time the AudioQueueInputCallback is called.
I'm not married to using AVAssetExportSession by any means if anyone has a better idea.
Here's where I'm doing my write and was hoping to execute the crop.
var beforeSeconds = TimeInterval() // find the current estimated duration (not reliable)
var propertySize = UInt32(MemoryLayout.size(ofValue: beforeSeconds))
var osStatus = AudioFileGetProperty(audioRecorder.recordFile!, kAudioFilePropertyEstimatedDuration, &propertySize, &beforeSeconds)

if numPackets > 0 {
    AudioFileWritePackets(audioRecorder.recordFile!, // write to disk
                          false,
                          buffer.mAudioDataByteSize,
                          packetDescriptions,
                          audioRecorder.recordPacket,
                          &numPackets,
                          buffer.mAudioData)
    audioRecorder.recordPacket += Int64(numPackets) // up the packet index

    var afterSeconds = TimeInterval() // find the after write estimated duration (not reliable)
    var propertySize = UInt32(MemoryLayout.size(ofValue: afterSeconds))
    var osStatus = AudioFileGetProperty(audioRecorder.recordFile!, kAudioFilePropertyEstimatedDuration, &propertySize, &afterSeconds)
    assert(osStatus == noErr, "couldn't get record time")
    if afterSeconds >= 150.0 {
        print("hit max buffer!")
        audioRecorder.onBufferMax?(afterSeconds - beforeSeconds)
    }
}
Here's where the callback is executed
func onBufferMax(_ difference: Double) {
    let asset = AVAsset(url: tempFilePath)
    let duration = CMTimeGetSeconds(asset.duration)
    guard duration >= 150.0 else { return }
    guard let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        print("exporter init failed")
        return
    }
    exporter.outputURL = getDocumentsDirectory().appendingPathComponent("buffered.caf") // helper function that calls the FileManager
    exporter.outputFileType = AVFileTypeAppleM4A
    let startTime = CMTimeMake(Int64(difference), 1)
    let endTime = CMTimeMake(Int64(WYNDRConstants.maxTimeInterval + difference), 1)
    exporter.timeRange = CMTimeRangeFromTimeToTime(startTime, endTime)
    exporter.exportAsynchronously(completionHandler: {
        switch exporter.status {
        case .failed:
            print("failed to export")
        case .cancelled:
            print("canceled export")
        default:
            print("export successful")
        }
    })
}
A ring buffer is a useful structure for storing, either in memory or on disk, the most recent n seconds of audio. Here is a simple solution that stores the audio in memory, presented in the traditional UIViewController format.
N.B. 2.5 minutes of 44.1kHz audio stored as floats requires about 26MB of RAM, which is on the heavy side for a mobile device.
import AVFoundation

class ViewController: UIViewController {
    let engine = AVAudioEngine()
    var requiredSamples: AVAudioFrameCount = 0
    var ringBuffer: [AVAudioPCMBuffer] = []
    var ringBufferSizeInSamples: AVAudioFrameCount = 0

    func startRecording() {
        let input = engine.inputNode!
        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        requiredSamples = AVAudioFrameCount(inputFormat.sampleRate * 2.5 * 60)
        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            self.appendAudioBuffer(buffer)
        }
        try! engine.start()
    }

    func appendAudioBuffer(_ buffer: AVAudioPCMBuffer) {
        ringBuffer.append(buffer)
        ringBufferSizeInSamples += buffer.frameLength
        // throw away old buffers if ring buffer gets too large
        if let firstBuffer = ringBuffer.first {
            if ringBufferSizeInSamples - firstBuffer.frameLength >= requiredSamples {
                ringBuffer.remove(at: 0)
                ringBufferSizeInSamples -= firstBuffer.frameLength
            }
        }
    }

    func stopRecording() {
        engine.stop()
        let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("foo.m4a")
        let settings: [String: Any] = [AVFormatIDKey: Int(kAudioFormatMPEG4AAC)]
        // write ring buffer to file.
        let file = try! AVAudioFile(forWriting: url, settings: settings)
        for buffer in ringBuffer {
            try! file.write(from: buffer)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // example usage
        startRecording()
        DispatchQueue.main.asyncAfter(deadline: .now() + 4*60) {
            print("stopping")
            self.stopRecording()
        }
    }
}

Recording gapless audio with AVAssetWriter

I'm trying to record segments of audio and recombine them without producing a gap in audio.
The eventual goal is to also have video, but I've found that audio itself creates gaps when combined with ffmpeg -f concat -i list.txt -c copy out.mp4
If I put the audio in an HLS playlist, there are also gaps, so I don't think this is unique to ffmpeg.
The idea is that samples come in continuously, and my controller routes samples to the proper AVAssetWriter. How do I eliminate gaps in audio?
import Foundation
import UIKit
import AVFoundation

class StreamController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate, AVCaptureVideoDataOutputSampleBufferDelegate {
    var closingAudioInput: AVAssetWriterInput?
    var closingAssetWriter: AVAssetWriter?
    var currentAudioInput: AVAssetWriterInput?
    var currentAssetWriter: AVAssetWriter?
    var nextAudioInput: AVAssetWriterInput?
    var nextAssetWriter: AVAssetWriter?
    var videoHelper: VideoHelper?
    var startTime: NSTimeInterval = 0
    let closeAssetQueue: dispatch_queue_t = dispatch_queue_create("closeAssetQueue", nil)

    override func viewDidLoad() {
        super.viewDidLoad()
        startTime = NSDate().timeIntervalSince1970
        createSegmentWriter()
        videoHelper = VideoHelper()
        videoHelper!.delegate = self
        videoHelper!.startSession()
        NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: "createSegmentWriter", userInfo: nil, repeats: true)
    }

    func createSegmentWriter() {
        print("Creating segment writer at t=\(NSDate().timeIntervalSince1970 - self.startTime)")
        let outputPath = OutputFileNameHelper.instance.pathForOutput()
        OutputFileNameHelper.instance.incrementSegmentIndex()
        try? NSFileManager.defaultManager().removeItemAtPath(outputPath)
        nextAssetWriter = try! AVAssetWriter(URL: NSURL(fileURLWithPath: outputPath), fileType: AVFileTypeMPEG4)
        nextAssetWriter!.shouldOptimizeForNetworkUse = true
        let audioSettings: [String: AnyObject] = EncodingSettings.AUDIO
        nextAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        nextAudioInput!.expectsMediaDataInRealTime = true
        nextAssetWriter?.addInput(nextAudioInput!)
        nextAssetWriter!.startWriting()
    }

    func closeWriterIfNecessary() {
        if closing && audioFinished {
            closing = false
            audioFinished = false
            let outputFile = closingAssetWriter?.outputURL.pathComponents?.last
            closingAssetWriter?.finishWritingWithCompletionHandler() {
                let delta = NSDate().timeIntervalSince1970 - self.startTime
                print("segment \(outputFile!) finished at t=\(delta)")
            }
            self.closingAudioInput = nil
            self.closingAssetWriter = nil
        }
    }

    var audioFinished = false
    var closing = false

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef, fromConnection connection: AVCaptureConnection!) {
        if let nextWriter = nextAssetWriter {
            if nextWriter.status.rawValue != 0 {
                if (currentAssetWriter != nil) {
                    closing = true
                }
                var sampleTiming: CMSampleTimingInfo = kCMTimingInfoInvalid
                CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &sampleTiming)
                print("Switching asset writers at t=\(NSDate().timeIntervalSince1970 - self.startTime)")
                closingAssetWriter = currentAssetWriter
                closingAudioInput = currentAudioInput
                currentAssetWriter = nextAssetWriter
                currentAudioInput = nextAudioInput
                nextAssetWriter = nil
                nextAudioInput = nil
                currentAssetWriter?.startSessionAtSourceTime(sampleTiming.presentationTimeStamp)
            }
        }
        if let _ = captureOutput as? AVCaptureVideoDataOutput {
        } else if let _ = captureOutput as? AVCaptureAudioDataOutput {
            captureAudioSample(sampleBuffer)
        }
        dispatch_async(closeAssetQueue) {
            self.closeWriterIfNecessary()
        }
    }

    func printTimingInfo(sampleBuffer: CMSampleBufferRef, prefix: String) {
        var sampleTiming: CMSampleTimingInfo = kCMTimingInfoInvalid
        CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &sampleTiming)
        let presentationTime = Double(sampleTiming.presentationTimeStamp.value) / Double(sampleTiming.presentationTimeStamp.timescale)
        print("\(prefix):\(presentationTime)")
    }

    func captureAudioSample(sampleBuffer: CMSampleBufferRef) {
        printTimingInfo(sampleBuffer, prefix: "A")
        if (closing && !audioFinished) {
            if closingAudioInput?.readyForMoreMediaData == true {
                closingAudioInput?.appendSampleBuffer(sampleBuffer)
            }
            closingAudioInput?.markAsFinished()
            audioFinished = true
        } else {
            if currentAudioInput?.readyForMoreMediaData == true {
                currentAudioInput?.appendSampleBuffer(sampleBuffer)
            }
        }
    }
}
With packet formats like AAC you have silent priming frames (a.k.a. encoder delay) at the beginning and remainder frames at the end (when your audio length is not a multiple of the packet size). In your case there are 2112 of them at the beginning of every file. Priming and remainder frames break the possibility of concatenating the files without transcoding them, so you can't really blame ffmpeg -c copy for not producing seamless output.
I'm not sure where this leaves you with video - obviously audio is synced to the video, even in the presence of priming frames.
It all depends on how you intend to concatenate the final audio (and eventually video). If you're doing it yourself using AVFoundation, then you can detect and account for priming/remainder frames using
CMGetAttachment(buffer, kCMSampleBufferAttachmentKey_TrimDurationAtStart, NULL)
CMGetAttachment(audioBuffer, kCMSampleBufferAttachmentKey_TrimDurationAtEnd, NULL)
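In Swift, reading those attachments might look like the following sketch; the trim durations are stored as CFDictionary representations of CMTime, and the buffer is assumed to be a CMSampleBuffer pulled from your reader:

```swift
import CoreMedia

// Sketch: inspect a sample buffer's trim attachments (priming/remainder).
func trimDurations(of buffer: CMSampleBuffer) -> (start: CMTime, end: CMTime) {
    var start = CMTime.zero
    var end = CMTime.zero
    if let dict = CMGetAttachment(buffer,
                                  key: kCMSampleBufferAttachmentKey_TrimDurationAtStart,
                                  attachmentModeOut: nil) {
        start = CMTimeMakeFromDictionary((dict as! CFDictionary))
    }
    if let dict = CMGetAttachment(buffer,
                                  key: kCMSampleBufferAttachmentKey_TrimDurationAtEnd,
                                  attachmentModeOut: nil) {
        end = CMTimeMakeFromDictionary((dict as! CFDictionary))
    }
    return (start, end)
}
```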
As a short-term solution, you can switch to a non-"packetised" format to get gapless, concatenatable (with ffmpeg) files.
e.g.
AVFormatIDKey: kAudioFormatAppleIMA4, fileType: AVFileTypeAIFC, suffix ".aifc" or
AVFormatIDKey: kAudioFormatLinearPCM, fileType: AVFileTypeWAVE, suffix ".wav"
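For example, AVAssetWriterInput settings for the IMA4 route might look like this sketch (the sample rate and channel count are placeholder assumptions):

```swift
import AVFoundation

// Sketch: IMA4 (ADPCM) has no AAC-style priming frames, so segments written
// with these settings can be concatenated without gaps.
let gaplessAudioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatAppleIMA4,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 1
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: gaplessAudioSettings)
```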
p.s. you can see priming & remainder frames and packet sizes using the ubiquitous afinfo tool.
afinfo chunk.mp4
Data format: 2 ch, 44100 Hz, 'aac ' (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
...
audio 39596 valid frames + 2112 priming + 276 remainder = 41984
...
Not sure if this helps you, but if you have a bunch of MP4s you can use this code to combine them:
func mergeAudioFiles(audioFileUrls: NSArray, callback: (url: NSURL?, error: NSError?) -> ()) {
    // Create the audio composition
    let composition = AVMutableComposition()
    // Merge
    for (var i = 0; i < audioFileUrls.count; i++) {
        let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
        let asset = AVURLAsset(URL: audioFileUrls[i] as! NSURL)
        let track = asset.tracksWithMediaType(AVMediaTypeAudio)[0]
        let timeRange = CMTimeRange(start: CMTimeMake(0, 600), duration: track.timeRange.duration)
        try! compositionAudioTrack.insertTimeRange(timeRange, ofTrack: track, atTime: composition.duration)
    }
    // Create output url
    let format = NSDateFormatter()
    format.dateFormat = "yyyy-MM-dd-HH-mm-ss"
    let currentFileName = "recording-\(format.stringFromDate(NSDate()))-merge.m4a"
    print(currentFileName)
    let documentsDirectory = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)[0]
    let outputUrl = documentsDirectory.URLByAppendingPathComponent(currentFileName)
    print(outputUrl.absoluteString)
    // Export it
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A)
    assetExport?.outputFileType = AVFileTypeAppleM4A
    assetExport?.outputURL = outputUrl
    assetExport?.exportAsynchronouslyWithCompletionHandler({ () -> Void in
        switch assetExport!.status {
        case AVAssetExportSessionStatus.Failed:
            callback(url: nil, error: assetExport?.error)
        default:
            callback(url: assetExport?.outputURL, error: nil)
        }
    })
}

iOS reverse audio through AVAssetWriter

I'm trying to reverse audio in iOS with AVAsset and AVAssetWriter.
The following code is working, but the output file is shorter than the input.
For example, the input file has a 1:59 duration, but the output is 1:50 with the same audio content.
- (void)reverse:(AVAsset *)asset
{
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    NSMutableDictionary *audioReadSettings = [NSMutableDictionary dictionary];
    [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                         forKey:AVFormatIDKey];

    AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioReadSettings];
    [reader addOutput:readerOutput];
    [reader startReading];

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                                    [NSData data], AVChannelLayoutKey,
                                    nil];
    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                     outputSettings:outputSettings];

    NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.m4a"];
    NSURL *exportURL = [NSURL fileURLWithPath:exportPath];
    NSError *writerError = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&writerError];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    NSMutableArray *samples = [[NSMutableArray alloc] init];
    while (sample != NULL) {
        sample = [readerOutput copyNextSampleBuffer];
        if (sample == NULL)
            continue;
        [samples addObject:(__bridge id)(sample)];
        CFRelease(sample);
    }

    NSArray *reversedSamples = [[samples reverseObjectEnumerator] allObjects];
    for (id reversedSample in reversedSamples) {
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:(__bridge CMSampleBufferRef)(reversedSample)];
        }
        else {
            [NSThread sleepForTimeInterval:0.05];
        }
    }

    [writerInput markAsFinished];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        [writer finishWriting];
    });
}
UPDATE:
If I write the samples directly in the first while loop, everything is OK (even with the writerInput.readyForMoreMediaData check). In that case the result file has exactly the same duration as the original. But if I write the same samples from the reversed NSArray, the result is shorter.
The method described here is implemented in an Xcode project at this link (multi-platform SwiftUI app):
ReverseAudio Xcode Project
It is not sufficient to write the audio samples in the reverse order. The sample data needs to be reversed itself.
In Swift, we create an extension to AVAsset.
The samples must be processed as decompressed samples. To that end create audio reader settings with kAudioFormatLinearPCM:
let kAudioReaderSettings = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM) as AnyObject,
    AVLinearPCMBitDepthKey: 16 as AnyObject,
    AVLinearPCMIsBigEndianKey: false as AnyObject,
    AVLinearPCMIsFloatKey: false as AnyObject,
    AVLinearPCMIsNonInterleaved: false as AnyObject]
Use our AVAsset extension method audioReader:
func audioReader(outputSettings: [String : Any]?) -> (audioTrack: AVAssetTrack?, audioReader: AVAssetReader?, audioReaderOutput: AVAssetReaderTrackOutput?) {
    if let audioTrack = self.tracks(withMediaType: .audio).first {
        if let audioReader = try? AVAssetReader(asset: self) {
            let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
            return (audioTrack, audioReader, audioReaderOutput)
        }
    }
    return (nil, nil, nil)
}
let (_, audioReader, audioReaderOutput) = self.audioReader(outputSettings: kAudioReaderSettings)
to create an audioReader (AVAssetReader) and audioReaderOutput (AVAssetReaderTrackOutput) for reading the audio samples.
We need to keep track of the audio samples:
var audioSamples:[CMSampleBuffer] = []
Now start reading samples.
if audioReader.startReading() {
    while audioReader.status == .reading {
        if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
            // process sample
        }
    }
}
Save each audio sample buffer; we need them later when creating the reversed samples:
audioSamples.append(sampleBuffer)
We need an AVAssetWriter:
guard let assetWriter = try? AVAssetWriter(outputURL: destinationURL, fileType: AVFileType.wav) else {
    // error handling
    return
}
The file type is 'wav' because the reversed samples will be written as uncompressed Linear PCM audio, as follows.
For the assetWriter we specify audio output settings and a 'source format hint', which we can acquire from an uncompressed sample buffer:
let sampleBuffer = audioSamples[0]
let sourceFormat = CMSampleBufferGetFormatDescription(sampleBuffer)
let audioCompressionSettings = [AVFormatIDKey: kAudioFormatLinearPCM] as [String : Any]
Now we can create the AVAssetWriterInput, add it to the writer and start writing:
let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings:audioCompressionSettings, sourceFormatHint: sourceFormat)
assetWriter.add(assetWriterInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: CMTime.zero)
Now iterate through the sample buffers in reverse order, and reverse the sample data within each one.
We have an extension for CMSampleBuffer that does just that, called ‘reverse’.
Using requestMediaDataWhenReady we do this as follows:
let nbrSamples = audioSamples.count
var index = 0
let serialQueue = DispatchQueue(label: "com.limit-point.reverse-audio-queue")

assetWriterInput.requestMediaDataWhenReady(on: serialQueue) {
    while assetWriterInput.isReadyForMoreMediaData, index < nbrSamples {
        let sampleBuffer = audioSamples[nbrSamples - 1 - index]
        if let reversedBuffer = sampleBuffer.reverse(), assetWriterInput.append(reversedBuffer) == true {
            index += 1
        }
        else {
            index = nbrSamples
        }
        if index == nbrSamples {
            assetWriterInput.markAsFinished()
            finishWriting() // call assetWriter.finishWriting, check assetWriter status, etc.
        }
    }
}
The last thing to explain is how the audio samples are reversed in the 'reverse' method.
We create an extension to CMSampleBuffer that takes a sample buffer and returns the reversed sample buffer:
func reverse() -> CMSampleBuffer?
The data that has to be reversed needs to be obtained using the method:
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
The CMSampleBuffer header file describes this method as follows:
“Creates an AudioBufferList containing the data from the CMSampleBuffer, and a CMBlockBuffer which references (and manages the lifetime of) the data in that AudioBufferList.”
Call it as follows, where ‘self’ refers to the CMSampleBuffer we are reversing since this is an extension:
var blockBuffer: CMBlockBuffer? = nil
let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
self,
bufferListSizeNeededOut: nil,
bufferListOut: audioBufferList.unsafeMutablePointer,
bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
blockBufferAllocator: nil,
blockBufferMemoryAllocator: nil,
flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
blockBufferOut: &blockBuffer
)
Now you can access the raw data (mData is optional, so unwrap it):
let data: UnsafeMutableRawPointer = audioBufferList.unsafePointer.pointee.mBuffers.mData!
To reverse the data we need to access it as an array of samples, called sampleArray; in Swift this is done as follows:
let samples = data.assumingMemoryBound(to: Int16.self)
let sizeofInt16 = MemoryLayout<Int16>.size
let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize
let dataCount = Int(dataSize) / sizeofInt16
var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]
Now reverse the array sampleArray:
sampleArray.reverse()
Using the reversed samples we create a new CMSampleBuffer that contains the reversed samples.
Now we replace the data in the CMBlockBuffer we previously obtained with CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer:
First reassign ‘samples’ using the reversed array:
var status: OSStatus = noErr

sampleArray.withUnsafeBytes { sampleArrayPtr in
    if let baseAddress = sampleArrayPtr.baseAddress {
        let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
        let rawPtr = UnsafeRawPointer(bufferPointer)
        status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
    }
}
if status != noErr {
    return nil
}
Finally create the new sample buffer using CMSampleBufferCreate. This function needs two arguments we can get from the original sample buffer, namely the formatDescription and numberOfSamples:
let formatDescription = CMSampleBufferGetFormatDescription(self)
let numberOfSamples = CMSampleBufferGetNumSamples(self)
var newBuffer:CMSampleBuffer?
Now create the new sample buffer with the reversed blockBuffer:
guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: 0, sampleTimingArray: nil, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
return self
}
return newBuffer
And that’s all there is to it!
As a final note the Core Audio and AVFoundation headers provide a lot of useful information, such as CoreAudioTypes.h, CMSampleBuffer.h, and many more.
Complete example for reverse video and audio using Swift 5 into the same asset output, audio processed using above recommendations:
private func reverseVideo(inURL: URL, outURL: URL, queue: DispatchQueue, _ completionBlock: ((Bool)->Void)?) {
Log.info("Start reverse video!")
let asset = AVAsset.init(url: inURL)
guard
let reader = try? AVAssetReader.init(asset: asset),
let videoTrack = asset.tracks(withMediaType: .video).first,
let audioTrack = asset.tracks(withMediaType: .audio).first
else {
assert(false)
completionBlock?(false)
return
}
let width = videoTrack.naturalSize.width
let height = videoTrack.naturalSize.height
// Video reader
let readerVideoSettings: [String : Any] = [ String(kCVPixelBufferPixelFormatTypeKey) : kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,]
let readerVideoOutput = AVAssetReaderTrackOutput.init(track: videoTrack, outputSettings: readerVideoSettings)
reader.add(readerVideoOutput)
// Audio reader
let readerAudioSettings: [String : Any] = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVLinearPCMBitDepthKey: 16 ,
AVLinearPCMIsBigEndianKey: false ,
AVLinearPCMIsFloatKey: false,]
let readerAudioOutput = AVAssetReaderTrackOutput.init(track: audioTrack, outputSettings: readerAudioSettings)
reader.add(readerAudioOutput)
//Start reading content
reader.startReading()
//Reading video samples
var videoBuffers = [CMSampleBuffer]()
while let nextBuffer = readerVideoOutput.copyNextSampleBuffer() {
videoBuffers.append(nextBuffer)
}
//Reading audio samples
var audioBuffers = [CMSampleBuffer]()
var timingInfos = [CMSampleTimingInfo]()
while let nextBuffer = readerAudioOutput.copyNextSampleBuffer() {
var timingInfo = CMSampleTimingInfo()
var timingInfoCount = CMItemCount()
CMSampleBufferGetSampleTimingInfoArray(nextBuffer, entryCount: 1, arrayToFill: &timingInfo, entriesNeededOut: &timingInfoCount)
let duration = CMSampleBufferGetDuration(nextBuffer)
let endTime = CMTimeAdd(timingInfo.presentationTimeStamp, duration)
// Mirror the timestamp around the full asset duration, so the
// last buffer of the source becomes the first of the output
let newPresentationTime = CMTimeSubtract(asset.duration, endTime)
timingInfo.presentationTimeStamp = newPresentationTime
timingInfos.append(timingInfo)
audioBuffers.append(nextBuffer)
}
//Stop reading
let status = reader.status
reader.cancelReading()
guard status == .completed, let firstVideoBuffer = videoBuffers.first, let firstAudioBuffer = audioBuffers.first else {
assert(false)
completionBlock?(false)
return
}
//Start video time
let sessionStartTime = CMSampleBufferGetPresentationTimeStamp(firstVideoBuffer)
//Writer for video
let writerVideoSettings: [String:Any] = [
AVVideoCodecKey : AVVideoCodecType.h264,
AVVideoWidthKey : width,
AVVideoHeightKey: height,
]
let writerVideoInput: AVAssetWriterInput
if let formatDescription = videoTrack.formatDescriptions.last {
writerVideoInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerVideoSettings, sourceFormatHint: (formatDescription as! CMFormatDescription))
} else {
writerVideoInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerVideoSettings)
}
writerVideoInput.transform = videoTrack.preferredTransform
writerVideoInput.expectsMediaDataInRealTime = false
//Writer for audio
let writerAudioSettings: [String:Any] = [
AVFormatIDKey : kAudioFormatMPEG4AAC,
AVSampleRateKey : 44100,
AVNumberOfChannelsKey: 2,
AVEncoderBitRateKey:128000,
AVChannelLayoutKey: NSData(),
]
let sourceFormat = CMSampleBufferGetFormatDescription(firstAudioBuffer)
let writerAudioInput: AVAssetWriterInput = AVAssetWriterInput.init(mediaType: .audio, outputSettings: writerAudioSettings, sourceFormatHint: sourceFormat)
writerAudioInput.expectsMediaDataInRealTime = false // offline writing, same as the video input
guard
let writer = try? AVAssetWriter.init(url: outURL, fileType: .mp4),
writer.canAdd(writerVideoInput),
writer.canAdd(writerAudioInput)
else {
assert(false)
completionBlock?(false)
return
}
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor.init(assetWriterInput: writerVideoInput, sourcePixelBufferAttributes: nil)
let group = DispatchGroup.init()
group.enter()
writer.add(writerVideoInput)
writer.add(writerAudioInput)
writer.startWriting()
writer.startSession(atSourceTime: sessionStartTime)
var videoFinished = false
var audioFinished = false
//Write video samples in reverse order
var currentSample = 0
writerVideoInput.requestMediaDataWhenReady(on: queue) {
for i in currentSample..<videoBuffers.count {
currentSample = i
if !writerVideoInput.isReadyForMoreMediaData {
return
}
let presentationTime = CMSampleBufferGetPresentationTimeStamp(videoBuffers[i])
guard let imageBuffer = CMSampleBufferGetImageBuffer(videoBuffers[videoBuffers.count - i - 1]) else {
Log.info("VideoWriter reverseVideo: warning, could not get imageBuffer from SampleBuffer...")
continue
}
if !pixelBufferAdaptor.append(imageBuffer, withPresentationTime: presentationTime) {
Log.info("VideoWriter reverseVideo: warning, could not append imageBuffer...")
}
}
// finish write video samples
writerVideoInput.markAsFinished()
Log.info("Video writing finished!")
videoFinished = true
if(audioFinished){
group.leave()
}
}
//Write audio samples in reverse order
let totalAudioSamples = audioBuffers.count
var currentAudioSample = 0
writerAudioInput.requestMediaDataWhenReady(on: queue) {
for i in currentAudioSample..<totalAudioSamples {
currentAudioSample = i
if !writerAudioInput.isReadyForMoreMediaData {
return
}
let audioSample = audioBuffers[totalAudioSamples-1-i]
let timingInfo = timingInfos[i]
// reverse samples data using timing info
if let reversedBuffer = audioSample.reverse(timingInfo: [timingInfo]) {
// append data
if writerAudioInput.append(reversedBuffer) == false {
break
}
}
}
// finish
writerAudioInput.markAsFinished()
Log.info("Audio writing finished!")
audioFinished = true
if(videoFinished){
group.leave()
}
}
group.notify(queue: queue) {
writer.finishWriting {
if writer.status != .completed {
Log.info("VideoWriter reverse video: error - \(String(describing: writer.error))")
completionBlock?(false)
} else {
Log.info("Ended reverse video!")
completionBlock?(true)
}
}
}
}
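The presentation-time arithmetic in the audio-reading loop mirrors each buffer's timestamp around the asset's total duration: a buffer that ends at endTime in the source should start at total - endTime in the reversed output. As plain arithmetic in seconds (mirroredPTS is an illustrative helper, not part of the answer):

```swift
/// Mirror a buffer's presentation time around the total duration.
/// A buffer spanning [pts, pts + duration) in the source should
/// span [total - (pts + duration), total - pts) in the reversed output.
func mirroredPTS(pts: Double, duration: Double, total: Double) -> Double {
    return total - (pts + duration)
}

// A 1 s buffer starting at 8 s in a 10 s asset starts at 1 s when reversed.
print(mirroredPTS(pts: 8, duration: 1, total: 10)) // 1.0
```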
Happy coding!
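For reference, a call site might look like the following (the file locations and queue label are illustrative, not part of the answer):

```swift
import Foundation

// Hypothetical input/output locations; substitute your own URLs.
let inURL = FileManager.default.temporaryDirectory.appendingPathComponent("input.mp4")
let outURL = FileManager.default.temporaryDirectory.appendingPathComponent("reversed.mp4")
let workQueue = DispatchQueue(label: "reverse.video.queue")

reverseVideo(inURL: inURL, outURL: outURL, queue: workQueue) { success in
    print(success ? "Reversed asset written to \(outURL)" : "Reverse failed")
}
```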
Print out the size of each buffer (in number of samples) inside the "reading" readerOutput while loop, and again inside the "writing" writerInput for loop. That way you can see all the buffer sizes and check whether they add up.
For example, you may be missing or skipping a buffer: when (writerInput.readyForMoreMediaData) is false you "sleep", but then proceed to the next reversedSample in reversedSamples, so that buffer effectively gets dropped from the writerInput.
UPDATE (based on comments):
I found in the code, there are two problems:
The first problem is that the output settings are incorrect: the input file is mono (1 channel), but the output settings are configured for 2 channels. It should be: [NSNumber numberWithInt:1], AVNumberOfChannelsKey. Look at the info on the output and input files:
The second problem is that you are reversing 643 buffers of 8192 audio samples, instead of reversing the index of each audio sample. To see each buffer, I changed your debugging from looking at the size of each sample to looking at the size of the buffer, which is 8192. So line 76 is now: size_t sampleSize = CMSampleBufferGetNumSamples(sample);
The output looks like:
2015-03-19 22:26:28.171 audioReverse[25012:4901250] Reading [0]: 8192
2015-03-19 22:26:28.172 audioReverse[25012:4901250] Reading [1]: 8192
...
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [640]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [641]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [642]: 5056
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Writing [0]: 5056
2015-03-19 22:26:28.652 audioReverse[25012:4901250] Writing [1]: 8192
...
2015-03-19 22:26:29.134 audioReverse[25012:4901250] Writing [640]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [641]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [642]: 8192
This shows that you're reversing the order of each 8192-sample buffer, but within each buffer the audio is still "facing forward". We can see this in the screenshot I took comparing a correct sample-by-sample reversal against your buffer reversal:
I think your current scheme can work if you also reverse the samples inside each 8192-sample buffer. I personally would not recommend using NSArray enumerators for signal processing, but it can work if you operate at the sample level.
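The distinction is easy to see with plain arrays standing in for sample buffers (a sketch, not AVFoundation code): reversing only the buffer order leaves each chunk playing forward, while a correct reversal also flips the samples inside each chunk.

```swift
// Two "buffers" of three samples each, standing in for CMSampleBuffers.
let buffers: [[Int16]] = [[1, 2, 3], [4, 5, 6]]

// Buffer-order reversal only: each chunk still plays forward.
let bufferReversed = Array(buffers.reversed())
print(bufferReversed.flatMap { $0 }) // [4, 5, 6, 1, 2, 3]

// Correct: reverse the buffer order AND the samples in each buffer.
let fullyReversed = buffers.reversed().map { Array($0.reversed()) }
print(fullyReversed.flatMap { $0 }) // [6, 5, 4, 3, 2, 1]
```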
extension CMSampleBuffer {
func reverse(timingInfo:[CMSampleTimingInfo]) -> CMSampleBuffer? {
var blockBuffer: CMBlockBuffer? = nil
let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
self,
bufferListSizeNeededOut: nil,
bufferListOut: audioBufferList.unsafeMutablePointer,
bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
blockBufferAllocator: nil,
blockBufferMemoryAllocator: nil,
flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
blockBufferOut: &blockBuffer
)
if let data = audioBufferList.unsafePointer.pointee.mBuffers.mData {
let samples = data.assumingMemoryBound(to: Int16.self)
let sizeofInt16 = MemoryLayout<Int16>.size
let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize
let dataCount = Int(dataSize) / sizeofInt16
var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]
sampleArray.reverse()
var status:OSStatus = noErr
sampleArray.withUnsafeBytes { sampleArrayPtr in
if let baseAddress = sampleArrayPtr.baseAddress {
let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
let rawPtr = UnsafeRawPointer(bufferPointer)
status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
}
}
if status != noErr {
return nil
}
let formatDescription = CMSampleBufferGetFormatDescription(self)
let numberOfSamples = CMSampleBufferGetNumSamples(self)
var newBuffer:CMSampleBuffer?
guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: timingInfo.count, sampleTimingArray: timingInfo, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
return self
}
return newBuffer
}
return nil
}
}
This is the reverse(timingInfo:) function that was missing from the answer above.

iOS Determine Number of Frames in Video

If I have a MPMoviePlayerController in Swift:
let mp = MPMoviePlayerController(contentURL: url)
Is there a way I can get the number of frames within the video located at url? If not, is there some other way to determine the frame count?
I don't think MPMoviePlayerController can help you.
Use an AVAssetReader and count the number of CMSampleBuffers it returns to you. You can configure it to not even decode the frames, effectively parsing the file, so it should be fast and memory efficient.
Something like
var asset = AVURLAsset(URL: url, options: nil)
var reader = AVAssetReader(asset: asset, error: nil)
var videoTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0] as! AVAssetTrack
var readerOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil) // NB: nil, should give you raw frames
reader.addOutput(readerOutput)
reader.startReading()
var nFrames = 0
while true {
var sampleBuffer = readerOutput.copyNextSampleBuffer()
if sampleBuffer == nil {
break
}
nFrames++
}
println("Num frames: \(nFrames)")
Sorry if that's not idiomatic, I don't know swift.
Swift 5
func getNumberOfFrames(url: URL) -> Int {
let asset = AVURLAsset(url: url, options: nil)
do {
let reader = try AVAssetReader(asset: asset)
let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
let readerOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil) // NB: nil, should give you raw frames
reader.add(readerOutput)
reader.startReading()
var nFrames = 0
while true {
let sampleBuffer = readerOutput.copyNextSampleBuffer()
if sampleBuffer == nil {
break
}
nFrames += 1
}
print("Num frames: \(nFrames)")
return nFrames
}catch {
print("Error: \(error)")
}
return 0
}
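Calling it is straightforward (the path below is illustrative):

```swift
import Foundation

// Hypothetical local file; substitute the URL of your own asset.
let videoURL = URL(fileURLWithPath: "/path/to/video.mp4")
let frameCount = getNumberOfFrames(url: videoURL)
print("The video contains \(frameCount) frames")
```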
You could also use frames per second to calculate total frames.
var player: AVPlayer?
var playerController = AVPlayerViewController()
var videoFPS: Int = 0
var totalFrames: Int?
guard let videoURL = URL(string: "...") else { return }
player = AVPlayer(url: videoURL)
playerController.player = player
guard player?.currentItem?.asset != nil else {
return
}
let asset = self.player?.currentItem?.asset
let tracks = asset!.tracks(withMediaType: .video)
let fps = tracks.first?.nominalFrameRate
let duration = self.player?.currentItem?.duration
let durationSeconds = CMTimeGetSeconds(duration!)
self.videoFPS = lround(Double(fps!))
self.totalFrames = lround(Double(self.videoFPS) * durationSeconds)
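The underlying arithmetic is just nominal frame rate times duration, rounded; note this is only exact for constant-frame-rate assets (the helper below is illustrative):

```swift
import Foundation

/// Estimate the total frame count from nominal FPS and a duration
/// in seconds. Only exact for constant-frame-rate video.
func estimatedFrameCount(fps: Float, durationSeconds: Double) -> Int {
    return lround(Double(fps) * durationSeconds)
}

// 30 fps for 12.5 s gives 375 frames.
print(estimatedFrameCount(fps: 30, durationSeconds: 12.5)) // 375
```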