I just want to get a list of the markers in an audio file. I thought this would be a common task that wouldn't be too difficult. However, I can barely find any example code or documentation, so I ended up with this:
private func getMarkers(_ url: CFURL) -> AudioFileMarkerList {
    var file: AudioFileID?
    var size: UInt32 = 0
    var markers = AudioFileMarkerList()

    AudioFileOpenURL(url, .readPermission, kAudioFileWAVEType, &file)
    AudioFileGetPropertyInfo(file!, kAudioFilePropertyMarkerList, &size, nil)
    AudioFileGetProperty(file!, kAudioFilePropertyMarkerList, &size, &markers)

    return markers
}
Sadly, this doesn't work: error: memory read failed for 0x0.
I just can't figure out the problem. I checked the URL and the size (both are valid), but it always fails to retrieve the markers. Any help with this would be fantastic!
EDIT:
This sort of works, but all the data is completely wrong, and I can't understand how a single audio file can have multiple AudioFileMarkerLists of markers:
private func getMarkers(_ url: CFURL) -> [AudioFileMarkerList] {
    var file: AudioFileID?
    var size: UInt32 = 0

    AudioFileOpenURL(url, .readPermission, kAudioFileWAVEType, &file)
    AudioFileGetPropertyInfo(file!, kAudioFilePropertyMarkerList, &size, nil)

    let length = NumBytesToNumAudioFileMarkers(Int(size))
    var markers = [AudioFileMarkerList](repeating: AudioFileMarkerList(), count: length)

    AudioFileGetProperty(file!, kAudioFilePropertyMarkerList, &size, &markers)

    return markers
}
EDIT 2: According to most answers I've seen so far, this should work, but it returns an empty array:
private func getMarkers(_ url: CFURL) -> [AudioFileMarkerList] {
    var file: AudioFileID?
    var size: UInt32 = 0

    AudioFileOpenURL(url, .readPermission, kAudioFileWAVEType, &file)
    AudioFileGetPropertyInfo(file!, kAudioFilePropertyMarkerList, &size, nil)

    let length = NumBytesToNumAudioFileMarkers(Int(size))
    var markers = [AudioFileMarkerList]()
    markers.reserveCapacity(length)

    AudioFileGetProperty(file!, kAudioFilePropertyMarkerList, &size, &markers)

    return markers
}
EDIT 3:
I stripped a bunch of the error checking and other useful stuff out of Ryan's code, for anyone who wants to quickly try to find the problem:
private func getMarkers(_ url: CFURL) -> [AudioFileMarker]? {
    var file: AudioFileID?
    var size: UInt32 = 0
    var markers: [AudioFileMarker] = []

    AudioFileOpenURL(url, .readPermission, kAudioFileWAVEType, &file)
    AudioFileGetPropertyInfo(file!, kAudioFilePropertyMarkerList, &size, nil)

    let length = NumBytesToNumAudioFileMarkers(Int(size))
    let data = UnsafeMutablePointer<AudioFileMarkerList>.allocate(capacity: length)

    AudioFileGetProperty(file!, kAudioFilePropertyMarkerList, &size, data)
    markers.append(data.pointee.mMarkers)
    data.deallocate(capacity: length)

    return markers
}
I just hope Apple actually tested AudioFileMarkerList in the first place.
EDIT 4:
SOLVED thanks to Rhythmic Fistman and Ryan Francesconi! Final result:
private func getMarkers(_ url: CFURL) -> [AudioFileMarker]? {
    var file: AudioFileID?
    var size: UInt32 = 0
    var markerList: [AudioFileMarker] = []

    AudioFileOpenURL(url, .readPermission, kAudioFileWAVEType, &file)
    AudioFileGetPropertyInfo(file!, kAudioFilePropertyMarkerList, &size, nil)

    let length = NumBytesToNumAudioFileMarkers(Int(size))
    let data = UnsafeMutablePointer<AudioFileMarkerList>.allocate(capacity: length)

    AudioFileGetProperty(file!, kAudioFilePropertyMarkerList, &size, data)

    let markers = UnsafeBufferPointer<AudioFileMarker>(start: &data.pointee.mMarkers, count: length)
    for marker in markers {
        markerList.append(marker)
    }
    data.deallocate(capacity: length)

    return markerList
}
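For reference, calling it looks something like this (a sketch; fileURL is assumed to point at a WAVE file, and note that per the AudioFile documentation the caller is responsible for releasing each marker's mName):

if let markers = getMarkers(fileURL as CFURL) {
    for marker in markers {
        var name = "(unnamed)"
        if let cfName = marker.mName?.takeRetainedValue() {
            name = cfName as String
        }
        print("\(name) at frame \(marker.mFramePosition)")
    }
}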
Looks like you need to use UnsafeBufferPointer to access variable-length arrays (like mMarkers). So instead of
out.append(markerList.mMarkers)
which only adds the first element, do this
let markersBuffer = UnsafeBufferPointer<AudioFileMarker>(start: &data.pointee.mMarkers,
                                                         count: Int(data.pointee.mNumberMarkers))
for marker in markersBuffer {
    markers.append(marker)
}
Modeled on this answer
EDIT: The simplest solution is to use AudioKit's version of EZAudioFile.markers. Note this is not the same as the original EZAudio framework, as I added this marker code to AudioKit's version only.
import AudioKit
...
if let file = EZAudioFile(url: url) {
    if let markers = file.markers as? [EZAudioFileMarker] {
        for m in markers {
            Swift.print("NAME: \(m.name) FRAME: \(m.framePosition)")
        }
    }
}
If you REALLY want to try it in Swift, it would look something like the code below. I'm not an expert in this, but as far as I can tell there is some issue translating the AudioFileMarkerList struct to Swift. This may be solvable, but it seems to me it's best to just use Objective-C for these calls. I recommend using AudioKit to accomplish what you need, as I have added the marker code to EZAudioFile there. Check: https://github.com/AudioKit/AudioKit/blob/master/AudioKit/Common/Internals/EZAudio/EZAudioFile.m
But for the record, here is the Swift code in progress! Note it's hard-coded to WAVE files for the moment... Perhaps someone else can finish this?
class func getAudioFileMarkers(_ url: URL) -> [AudioFileMarker]? {
    Swift.print("getAudioFileMarkers() \(url)")

    var err: OSStatus = noErr
    var audioFileID: AudioFileID?

    err = AudioFileOpenURL(url as CFURL,
                           .readPermission,
                           kAudioFileWAVEType,
                           &audioFileID)
    if err != noErr {
        Swift.print("AudioFileOpenURL FAILED, Error: \(err)")
        return nil
    }
    guard audioFileID != nil else {
        return nil
    }
    Swift.print("audioFileID: \(audioFileID)")

    var outSize: UInt32 = 0
    var writable: UInt32 = 0

    err = AudioFileGetPropertyInfo(audioFileID!, kAudioFilePropertyMarkerList, &outSize, &writable)
    if err != noErr {
        Swift.print("AudioFileGetPropertyInfo kAudioFilePropertyMarkerList FAILED, Error: \(err)")
        return nil
    }
    Swift.print("outSize: \(outSize), writable: \(writable)")
    guard outSize != 0 else { return nil }

    let length = NumBytesToNumAudioFileMarkers(Int(outSize))
    Swift.print("Found \(length) markers")
    if length == 0 {
        return nil
    }
    let theData = UnsafeMutablePointer<AudioFileMarkerList>.allocate(capacity: length)

    // pull marker list
    err = AudioFileGetProperty(audioFileID!, kAudioFilePropertyMarkerList, &outSize, theData)
    if err != noErr {
        Swift.print("AudioFileGetProperty kAudioFilePropertyMarkerList FAILED, Error: \(err)")
        return nil
    }

    let markerList: AudioFileMarkerList = theData.pointee
    Swift.print("markerList.mMarkers: \(markerList.mMarkers)")

    // this is only showing up as a single AudioFileMarker, not an array of them.
    // I DON'T KNOW WHY. It works in Obj-C. I'm obviously missing something, or there is a problem in translation
    var out = [AudioFileMarker]()
    let mirror = Mirror(reflecting: markerList.mMarkers)
    for m in mirror.children {
        Swift.print("label: \(m.label) value: \(m.value)")
    }

    // for now just append the first one.
    // :(
    out.append(markerList.mMarkers)

    // done with this now
    theData.deallocate(capacity: length)

    return out
}
I am creating an application that will take an audio measurement by playing some stimulus data and recording the microphone input, and then analysing the data.
I am having trouble accounting for the time taken to initialise and start the audio engine, as this varies each time and is also dependent on the hardware used, etc.
So, I have an audio engine and have installed a tap on the hardware input, with input 1 being the microphone recording and input 2 being a reference input (also from the hardware). The output is physically Y-split and fed back into input 2.
The app initialises the engine, plays the stimulus audio plus 1 second of silence (to allow propagation time for the microphone to record the whole signal back), and then stops and closes the engine.
I write the two input buffers as a WAV file so that I can import it into an existing DAW to visually examine the signals. I can see that each time I take a measurement, the time difference between the two signals is different (despite the fact that the microphone has not moved and the hardware has stayed the same). I am assuming this has to do with the latency of the hardware, the time taken to initialise the engine, and the way the device distributes tasks.
I have tried capturing the absolute time using mach_absolute_time in the first buffer callback of each installTap block and subtracting the two, and I can see that this does vary quite a lot with each call:
class newAVAudioEngine {

    var engine = AVAudioEngine()
    var audioBuffer = AVAudioPCMBuffer()
    var running = true
    var in1Buf: [Float] = Array(repeating: 0, count: totalRecordSize)
    var in2Buf: [Float] = Array(repeating: 0, count: totalRecordSize)
    var buf1current: Int = 0
    var buf2current: Int = 0
    var in1firstRun: Bool = false
    var in2firstRun: Bool = false
    var in1StartTime = 0
    var in2startTime = 0

    func measure(inputSweep: SweepFilter) -> measurement {
        initializeEngine(inputSweep: inputSweep)
        while running == true {
        }
        let measureResult = measurement.init(meas: meas, ref: ref)
        return measureResult
    }

    func initializeEngine(inputSweep: SweepFilter) {
        buf1current = 0
        buf2current = 0
        in1StartTime = 0
        in2startTime = 0
        in1firstRun = true
        in2firstRun = true
        in1Buf = Array(repeating: 0, count: totalRecordSize)
        in2Buf = Array(repeating: 0, count: totalRecordSize)

        engine.stop()
        engine.reset()
        engine = AVAudioEngine()

        let srcNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            if (Int(frameCount) + time) <= inputSweep.stimulus.count {
                for frame in 0..<Int(frameCount) {
                    let value = inputSweep.stimulus[frame + time]
                    for buffer in ablPointer {
                        let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                        buf[frame] = value
                    }
                }
                time += Int(frameCount)
                return noErr
            } else {
                for frame in 0..<Int(frameCount) {
                    let value = 0
                    for buffer in ablPointer {
                        let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                        buf[frame] = Float(value)
                    }
                }
            }
            return noErr
        }

        let format = engine.outputNode.inputFormat(forBus: 0)
        let stimulusFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                           sampleRate: Double(sampleRate),
                                           channels: 1,
                                           interleaved: format.isInterleaved)

        do {
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord)
            let ioBufferDuration = 128.0 / 44100.0
            try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)
        } catch {
            assertionFailure("AVAudioSession setup failed")
        }

        let input = engine.inputNode
        let inputFormat = input.inputFormat(forBus: 0)
        print("InputNode Format is \(inputFormat)")

        engine.attach(srcNode)
        engine.connect(srcNode, to: engine.mainMixerNode, format: stimulusFormat)

        if internalRefLoop == true {
            srcNode.installTap(onBus: 0, bufferSize: 1024, format: stimulusFormat, block: { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
                if self.in2firstRun == true {
                    var info = mach_timebase_info()
                    mach_timebase_info(&info)
                    let currentTime = mach_absolute_time()
                    let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
                    self.in2startTime = Int(nanos)
                    self.in2firstRun = false
                }
                do {
                    let floatData = buffer.floatChannelData?.pointee
                    for frame in 0..<buffer.frameLength {
                        if (self.buf2current + Int(frame)) < totalRecordSize {
                            self.in2Buf[self.buf2current + Int(frame)] = floatData![Int(frame)]
                        }
                    }
                    self.buf2current += Int(buffer.frameLength)
                    if (self.numberOfSamples + Int(buffer.frameLength)) <= totalRecordSize {
                        try self.stimulusFile.write(from: buffer)
                        self.numberOfSamples += Int(buffer.frameLength)
                    } else {
                        self.engine.stop()
                        self.running = false
                    }
                } catch {
                    print(NSString(string: "write failed"))
                }
            })
        }

        let micAudioConverter = AVAudioConverter(from: inputFormat, to: stimulusFormat!)
        let micChannelMap: [NSNumber] = [0, -1]
        micAudioConverter?.channelMap = micChannelMap

        let refAudioConverter = AVAudioConverter(from: inputFormat, to: stimulusFormat!)
        let refChannelMap: [NSNumber] = [1, -1]
        refAudioConverter?.channelMap = refChannelMap

        // Measurement Tap
        engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat, block: { (buffer2: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            //print(NSString(string: "writing"))
            if self.in1firstRun == true {
                var info = mach_timebase_info()
                mach_timebase_info(&info)
                let currentTime = mach_absolute_time()
                let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
                self.in1StartTime = Int(nanos)
                self.in1firstRun = false
            }
            do {
                let micConvertedBuffer = AVAudioPCMBuffer(pcmFormat: stimulusFormat!, frameCapacity: buffer2.frameCapacity)
                let micInputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer2
                }
                var error: NSError? = nil
                //let status = audioConverter.convert(to: convertedBuffer!, error: &error, withInputFrom: inputBlock)
                let status = micAudioConverter?.convert(to: micConvertedBuffer!, error: &error, withInputFrom: micInputBlock)
                //print(status)
                let floatData = micConvertedBuffer?.floatChannelData?.pointee
                for frame in 0..<micConvertedBuffer!.frameLength {
                    if (self.buf1current + Int(frame)) < totalRecordSize {
                        self.in1Buf[self.buf1current + Int(frame)] = floatData![Int(frame)]
                    }
                    if (self.buf1current + Int(frame)) >= totalRecordSize {
                        self.engine.stop()
                        self.running = false
                    }
                }
                self.buf1current += Int(micConvertedBuffer!.frameLength)
                try self.measurementFile.write(from: micConvertedBuffer!)
            } catch {
                print(NSString(string: "write failed"))
            }

            if internalRefLoop == false {
                if self.in2firstRun == true {
                    var info = mach_timebase_info()
                    mach_timebase_info(&info)
                    let currentTime = mach_absolute_time()
                    let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
                    self.in2startTime = Int(nanos)
                    self.in2firstRun = false
                }
                do {
                    let refConvertedBuffer = AVAudioPCMBuffer(pcmFormat: stimulusFormat!, frameCapacity: buffer2.frameCapacity)
                    let refInputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                        outStatus.pointee = AVAudioConverterInputStatus.haveData
                        return buffer2
                    }
                    var error: NSError? = nil
                    let status = refAudioConverter?.convert(to: refConvertedBuffer!, error: &error, withInputFrom: refInputBlock)
                    //print(status)
                    let floatData = refConvertedBuffer?.floatChannelData?.pointee
                    for frame in 0..<refConvertedBuffer!.frameLength {
                        if (self.buf2current + Int(frame)) < totalRecordSize {
                            self.in2Buf[self.buf2current + Int(frame)] = floatData![Int(frame)]
                        }
                    }
                    if (self.numberOfSamples + Int(buffer2.frameLength)) <= totalRecordSize {
                        self.buf2current += Int(refConvertedBuffer!.frameLength)
                        try self.stimulusFile.write(from: refConvertedBuffer!)
                    } else {
                        self.engine.stop()
                        self.running = false
                    }
                } catch {
                    print(NSString(string: "write failed"))
                }
            }
        })

        assert(engine.inputNode != nil)
        running = true
        try! engine.start()
    }
}
So the above is my entire class. Currently each buffer callback on installTap writes the input directly to a WAV file. This is where I can see the two end results differing each time. I have tried adding the startTime variables and subtracting the two, but the results still vary.
Do I need to take into account that my output will have latency too, which may vary with each call? If so, how do I add this time into the equation? What I am looking for is for the two inputs and the outputs all to have a relative time, so that I can compare them. The differing hardware latency will not matter too much, as long as I can identify the end call times.
If you are doing real-time measurements, you might want to use AVAudioSinkNode instead of a tap. The sink node is new, introduced along with the AVAudioSourceNode you are already using. With a tap you won't be able to get precise timing.
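A rough sketch of what that could look like (iOS 13+/macOS 10.15+; engine, in1Buf and in2Buf are the names from your class, and the receiver block body is just a placeholder):

let sinkNode = AVAudioSinkNode { timestamp, frameCount, audioBufferList -> OSStatus in
    // Unlike the AVAudioTime handed to a tap, this timestamp is sample-accurate
    // for the first frame of the callback, so start times can be compared reliably.
    let hostTimeOfFirstFrame = timestamp.pointee.mHostTime
    let ablPointer = UnsafeMutableAudioBufferListPointer(UnsafeMutablePointer(mutating: audioBufferList))
    for buffer in ablPointer {
        let samples = UnsafeMutableBufferPointer<Float>(buffer)
        // copy `frameCount` samples out of `samples` into in1Buf/in2Buf here,
        // tagged with hostTimeOfFirstFrame
        _ = (samples, frameCount, hostTimeOfFirstFrame)
    }
    return noErr
}
engine.attach(sinkNode)
engine.connect(engine.inputNode, to: sinkNode, format: nil)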
I'm attempting to create a continuous FIFO audio recorder in Swift. I'm running into an issue while trying to create the audioQueueCallback.
From the docs AudioTimeStamp has this init method:
AudioTimeStamp(mSampleTime: Float64, mHostTime: UInt64, mRateScalar: Float64, mWordClockTime: UInt64, mSMPTETime: SMPTETime, mFlags: AudioTimeStampFlags, mReserved: UInt32)
And I have no idea how to use it.
It seems to me like the device should have a reliable internal clock to manage audio queues off of, but I haven't been able to find any documentation for it.
Here's my attempt at creating a BufferQueue:
typealias WYNDRInputQueueCallback = ((Data) -> Void)

class WYNDRInputQueue {

    class WYNDRInputQueueUserData {
        let callback: WYNDRInputQueueCallback
        let bufferStub: NSData

        init(callback: @escaping WYNDRInputQueueCallback, bufferStub: NSData) {
            self.callback = callback
            self.bufferStub = bufferStub
        }
    }

    private var audioQueueRef: AudioQueueRef?
    private let userData: WYNDRInputQueueUserData

    public init(asbd: inout AudioStreamBasicDescription, callback: @escaping WYNDRInputQueueCallback, buffersCount: UInt32 = 3, bufferSize: UInt32 = 9600) throws {
        self.userData = WYNDRInputQueueUserData(callback: callback, bufferStub: NSMutableData(length: Int(bufferSize))!)
        let userDataUnsafe = UnsafeMutableRawPointer(Unmanaged.passRetained(self.userData).toOpaque())

        let input = AudioQueueNewInput(&asbd,
                                       audioQueueInputCallback,
                                       userDataUnsafe,
                                       .none,
                                       .none,
                                       0,
                                       &audioQueueRef)
        if input != noErr {
            throw InputQueueError.genericError(input)
        }
        assert(audioQueueRef != nil)

        for _ in 0..<buffersCount {
            var bufferRef: AudioQueueBufferRef?
            let bufferInput = AudioQueueAllocateBuffer(audioQueueRef!, bufferSize, &bufferRef)
            if bufferInput != noErr {
                throw InputQueueError.genericError(bufferInput)
            }
            assert(bufferRef != nil)
Here's where I'm using the audioTimeStamp:
            audioQueueInputCallback(userDataUnsafe, audioQueueRef!, bufferRef!, <#T##UnsafePointer<AudioTimeStamp>#>, 0, nil)
        }
    }
    private let audioQueueInputCallback: AudioQueueInputCallback = { (inUserData, inAQ, inBuffer, inStartTime, inNumberPacketDescriptions, inPacketDescs) in
        let userData = Unmanaged<WYNDRInputQueueUserData>.fromOpaque(inUserData!).takeUnretainedValue()
        let dataSize = Int(inBuffer.pointee.mAudioDataByteSize)
        let inputData = Data(bytes: inBuffer.pointee.mAudioData, count: dataSize)
        userData.callback(inputData)
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
    }
}
Any advice here would be greatly appreciated!
I'm not sure how the timestamp is going to be used or who is going to use it, but if in doubt, why not use the number of samples you've recorded as the timestamp?
var timestamp = AudioTimeStamp()
timestamp.mSampleTime = numberOfSamplesRecorded
timestamp.mFlags = .sampleTimeValid
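To keep that count up to date, you could count frames in the input callback; a minimal sketch, assuming LPCM (so mBytesPerFrame in the queue's AudioStreamBasicDescription is non-zero) and a numberOfSamplesRecorded: Float64 counter you keep alongside your user data:

// Inside the AudioQueueInputCallback, before handing the data on:
let bytesPerFrame = Int(asbd.mBytesPerFrame)   // the ASBD the queue was created with
let framesInBuffer = Int(inBuffer.pointee.mAudioDataByteSize) / bytesPerFrame
numberOfSamplesRecorded += Float64(framesInBuffer)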
I have an MP4 video file which I am encrypting for storage and decrypting to play via AVPlayer, using the CryptoSwift library for encryption/decryption.
It works fine when I decrypt the whole file at once, but my file is quite big, and doing so takes 100% CPU and a lot of memory. So I need to decrypt the encrypted file in chunks.
I tried decrypting the file in chunks, but the video doesn't play, as AVPlayer doesn't recognize the decrypted chunk data; maybe the data is not stored sequentially while encrypting the file. I have tried ChaCha20, AES, AES.CTR and AES.CBC to encrypt and decrypt files, but to no avail.
extension PlayerController: AVAssetResourceLoaderDelegate {

    func resourceLoader(resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        let request = loadingRequest.request
        guard let path = request.URL?.path where request.URL?.scheme == Constants.customVideoScheme else { return true }

        if let contentRequest = loadingRequest.contentInformationRequest {
            do {
                let fileAttributes = try NSFileManager.defaultManager().attributesOfItemAtPath(path)
                if let fileSizeNumber = fileAttributes[NSFileSize] {
                    contentRequest.contentLength = fileSizeNumber.longLongValue
                }
            } catch { }

            if fileHandle == nil {
                fileHandle = NSFileHandle(forReadingAtPath: (request.URL?.path)!)!
            }

            contentRequest.contentType = "video/mp4"
            contentRequest.byteRangeAccessSupported = true
        }

        if let data = decryptData(loadingRequest, path: path), dataRequest = loadingRequest.dataRequest {
            dataRequest.respondWithData(data)
            loadingRequest.finishLoading()
            return true
        }
        return true
    }

    func decryptData(loadingRequest: AVAssetResourceLoadingRequest, path: String) -> NSData? {
        print("Current OFFSET: \(loadingRequest.dataRequest?.currentOffset)")
        print("requested OFFSET: \(loadingRequest.dataRequest?.requestedOffset)")
        print("Current Length: \(loadingRequest.dataRequest?.requestedLength)")

        if loadingRequest.contentInformationRequest != nil {
            var data = fileHandle!.readDataOfLength((loadingRequest.dataRequest?.requestedLength)!)
            fileHandle!.seekToFileOffset(0)
            data = decodeVideoData(data)!
            return data
        } else {
            fileHandle?.seekToFileOffset(UInt64((loadingRequest.dataRequest?.currentOffset)!))
            let data = fileHandle!.readDataOfLength((loadingRequest.dataRequest?.requestedLength)!)
            // let data = fileHandle!.readDataOfLength(length!) ** When I use this it's not playing video, but it plays fine with requestedLength **
            return decodeVideoData(data)
        }
    }
}
Code to decode the NSData:
func decodeVideoData(data: NSData) -> NSData? {
    if let cha = ChaCha20(key: Constants.Encryption.SecretKey, iv: Constants.Encryption.IvKey) {
        let decrypted: NSData = try! data.decrypt(cha)
        return decrypted
    }
    return nil
}
I need help regarding this issue; kindly guide me to the right way to achieve this.
For a more in-depth and complete wrapper, check out my CommonCrypto wrapper; I've extracted bits and pieces from it for this answer.
First of all, we need to define some functions that will do the encryption/decryption. I'm assuming, for now, that you use AES(256) CBC with PKCS#7 padding. Summarising the snippet below: we have an update function that can be called repeatedly to consume the chunks, and a final function that wraps up any leftovers (it usually deals with padding).
import CommonCrypto
import Foundation

enum CryptoError: Error {
    case generic(CCCryptorStatus)
}

func getOutputLength(_ reference: CCCryptorRef?, inputLength: Int, final: Bool) -> Int {
    CCCryptorGetOutputLength(reference, inputLength, final)
}

func update(_ reference: CCCryptorRef?, data: Data) throws -> Data {
    var output = [UInt8](repeating: 0, count: getOutputLength(reference, inputLength: data.count, final: false))
    let status = data.withUnsafeBytes { dataPointer -> CCCryptorStatus in
        CCCryptorUpdate(reference, dataPointer.baseAddress, data.count, &output, output.count, nil)
    }
    guard status == kCCSuccess else {
        throw CryptoError.generic(status)
    }
    return Data(output)
}

func final(_ reference: CCCryptorRef?) throws -> Data {
    var output = [UInt8](repeating: 0, count: getOutputLength(reference, inputLength: 0, final: true))
    var moved = 0

    let status = CCCryptorFinal(reference, &output, output.count, &moved)
    guard status == kCCSuccess else {
        throw CryptoError.generic(status)
    }

    output.removeSubrange(moved...)
    return Data(output)
}
Next up, for the purpose of demonstration, the encryption.
let key = Data(repeating: 0x0a, count: kCCKeySizeAES256)
let iv = Data(repeating: 0, count: kCCBlockSizeAES128)
let bigFile = (0 ..< 0xffff).map { _ in
    Data(repeating: UInt8.random(in: 0 ... UInt8.max), count: kCCBlockSizeAES128)
}.reduce(Data(), +)

var encryptor: CCCryptorRef?
CCCryptorCreate(CCOperation(kCCEncrypt), CCAlgorithm(kCCAlgorithmAES), CCOptions(kCCOptionPKCS7Padding), Array(key), key.count, Array(iv), &encryptor)

var ciphertext = Data() // declared out here so the decryption below can see it
do {
    ciphertext = try update(encryptor, data: bigFile) + final(encryptor)
    print(ciphertext) // 1048576 bytes
} catch {
    print(error)
}
That looks to me like quite a large file. Decrypting is done in a similar fashion.
var decryptor: CCCryptorRef?
CCCryptorCreate(CCOperation(kCCDecrypt), CCAlgorithm(kCCAlgorithmAES), CCOptions(kCCOptionPKCS7Padding), Array(key), key.count, Array(iv), &decryptor)

do {
    var plaintext = Data()
    for i in 0 ..< 0xffff {
        plaintext += try update(decryptor, data: ciphertext[i * kCCBlockSizeAES128 ..< i * kCCBlockSizeAES128 + kCCBlockSizeAES128])
    }
    plaintext += try final(decryptor)
    print(plaintext == bigFile, plaintext) // true 1048560 bytes
} catch {
    print(error)
}
The encryptor can be altered for different modes, and should be released once it's done. I'm not too sure how arbitrary output sizes from the update function will behave, but this should be enough to give you an idea of how it can be done using CommonCrypto.
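To illustrate both points, a small sketch: CCCryptorRelease does the clean-up, and CCCryptorCreateWithMode is the variant that takes the mode explicitly (AES-CTR without padding shown here, purely as an example of the parameters):

// Release cryptors once you're done with them:
CCCryptorRelease(encryptor)
CCCryptorRelease(decryptor)

// A different mode via CCCryptorCreateWithMode, e.g. AES-CTR with no padding:
var ctrCryptor: CCCryptorRef?
CCCryptorCreateWithMode(CCOperation(kCCDecrypt), CCMode(kCCModeCTR), CCAlgorithm(kCCAlgorithmAES),
                        CCPadding(ccNoPadding), Array(iv), Array(key), key.count,
                        nil, 0, 0, CCModeOptions(), &ctrCryptor)
// ... use update()/final() as above, then CCCryptorRelease(ctrCryptor)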
I am trying to read the raw values of a sound file. I am pretty new to iOS development; ultimately I am trying to take a fast Fourier transform of the audio file. The output of the data looks like a sound wave, but when I take the FFT of the beep sound provided here, I do not get an obvious frequency out of the FFT, which leads me to believe I am not getting the real raw data. I have constructed the following code using a combination of several Stack Overflow posts. Am I reading the file incorrectly?
class AudioAnalyzer {
    init(file_path: NSURL) {
        var assetOptions = [
            AVURLAssetPreferPreciseDurationAndTimingKey : 1,
            AVFormatIDKey : kAudioFormatLinearPCM
        ]
        println(file_path)
        var videoAsset = AVURLAsset(URL: file_path, options: assetOptions)

        var error: NSError?
        var videoAssetReader = AVAssetReader(asset: videoAsset, error: &error)
        if error != nil {
            println(error)
        }

        var tracksArray = videoAsset?.tracksWithMediaType(AVMediaTypeAudio)
        var videotrack = tracksArray?[0] as! AVAssetTrack
        var fps = videotrack.nominalFrameRate

        var videoTrackOutput = AVAssetReaderTrackOutput(track: videotrack as AVAssetTrack, outputSettings: nil)
        if videoAssetReader.canAddOutput(videoTrackOutput) {
            videoAssetReader.addOutput(videoTrackOutput)
            videoAssetReader.startReading()
        }

        if videoAssetReader.status == AVAssetReaderStatus.Reading {
            var sampleBuffer = videoTrackOutput.copyNextSampleBuffer()
            var audioBuffer = CMSampleBufferGetDataBuffer(sampleBuffer)
            let samplesInBuffer = CMSampleBufferGetNumSamples(sampleBuffer)
            var currentZ = Double(samplesInBuffer)

            let buffer: CMBlockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer)
            var lengthAtOffset: size_t = 0
            var totalLength: size_t = 0
            var data: UnsafeMutablePointer<Int8> = nil
            var output: Array<Float> = []

            if CMBlockBufferGetDataPointer(buffer, 0, &lengthAtOffset, &totalLength, &data) != noErr {
                println("some sort of error happened")
            } else {
                for i in stride(from: 0, to: totalLength, by: 2) {
                    var myint = Int16(data[i]) << 8 | Int16(data[i+1])
                    var myFloat = Float(myint)
                    output.append(myFloat)
                }
                println(output)
            }
        }
    }
}
Your AVAssetReaderTrackOutput is giving you raw packet data. For LPCM output, pass in some outputSettings:
var settings = [NSObject : AnyObject]()
settings[AVFormatIDKey] = kAudioFormatLinearPCM
settings[AVLinearPCMBitDepthKey] = 16
settings[AVLinearPCMIsFloatKey] = false

var videoTrackOutput = AVAssetReaderTrackOutput(track: videotrack as AVAssetTrack, outputSettings: settings)
P.S. I'd feel much better if you renamed videoTrackOutput to audioTrackOutput.
I'd like to cache CAF files before converting them to PCM whenever they play.
For example,
char *mybuffer = malloc(mysoundsize);
FILE *f = fopen("mysound.caf", "rb");
fread(mybuffer, mysoundsize, 1, f);
fclose(f);
char *pcmBuffer = malloc(pcmsoundsize);
// Convert to PCM for playing
AudioFileReadBytes(mybuffer, false, 0, mysoundsize, &numbytes, pcmBuffer);
This way, whenever the sound plays, the compressed CAF file is already loaded into memory, avoiding disk access. How can I open a block of memory with an 'AudioFileID' to make AudioFileReadBytes happy? Is there another method I can use?
I have not done it myself, but from the documentation I would think that you have to use AudioFileOpenWithCallbacks and implement callback functions that read from your memory buffer.
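Untested, but a minimal sketch of that approach could look like the following; the MemoryFile box and openInMemory helper are names I made up, and only the read and get-size callbacks are needed for read-only use:

import AudioToolbox
import Foundation

// Box that keeps the in-memory file alive while Core Audio reads from it.
final class MemoryFile {
    let data: Data
    init(data: Data) { self.data = data }
}

// Read callback: copy the requested byte range out of the in-memory buffer.
let readProc: AudioFile_ReadProc = { clientData, position, requestCount, buffer, actualCount in
    let file = Unmanaged<MemoryFile>.fromOpaque(clientData).takeUnretainedValue()
    let offset = Int(position)
    let count = min(Int(requestCount), max(0, file.data.count - offset))
    if count > 0 {
        file.data.withUnsafeBytes { raw in
            buffer.copyMemory(from: raw.baseAddress!.advanced(by: offset), byteCount: count)
        }
    }
    actualCount.pointee = UInt32(count)
    return noErr
}

// Size callback: report the total length of the in-memory file.
let getSizeProc: AudioFile_GetSizeProc = { clientData in
    Int64(Unmanaged<MemoryFile>.fromOpaque(clientData).takeUnretainedValue().data.count)
}

// Open an AudioFileID over a Data blob; the write callbacks stay nil for
// read-only use. (The retained box should be released when you close the file.)
func openInMemory(_ data: Data) -> AudioFileID? {
    let context = Unmanaged.passRetained(MemoryFile(data: data)).toOpaque()
    var audioFile: AudioFileID?
    let status = AudioFileOpenWithCallbacks(context, readProc, nil, getSizeProc, nil,
                                            kAudioFileCAFType, &audioFile)
    return status == noErr ? audioFile : nil
}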
You can also pull this off with AudioFileStreamOpen:
fileprivate var streamID: AudioFileStreamID?

public func parse(data: Data) throws {
    let streamID = self.streamID!
    let count = data.count
    _ = try data.withUnsafeBytes { (bytes: UnsafePointer<UInt8>) in
        let result = AudioFileStreamParseBytes(streamID, UInt32(count), bytes, [])
        guard result == noErr else {
            throw ParserError.failedToParseBytes(result)
        }
    }
}
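For completeness, the stream itself would be created with AudioFileStreamOpen, something like the sketch below (say, in the parser's init). ParserPropertyCallback is a hypothetical sibling of the ParserPacketCallback shown further down:

// 0 as the type hint lets Core Audio sniff the format;
// pass e.g. kAudioFileMP3Type if you already know it.
let context = Unmanaged.passUnretained(self).toOpaque()
let result = AudioFileStreamOpen(context, ParserPropertyCallback, ParserPacketCallback, 0, &streamID)
guard result == noErr else {
    throw ParserError.failedToParseBytes(result) // reusing the parse error case for the sketch
}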
You can store the data in memory within the packets callback:
func ParserPacketCallback(_ context: UnsafeMutableRawPointer, _ byteCount: UInt32, _ packetCount: UInt32, _ data: UnsafeRawPointer, _ packetDescriptions: Optional<UnsafeMutablePointer<AudioStreamPacketDescription>>) {
    let parser = Unmanaged<Parser>.fromOpaque(context).takeUnretainedValue()

    /// At this point we should definitely have a data format
    guard let dataFormat = parser.dataFormatD else {
        return
    }

    let format = dataFormat.streamDescription.pointee
    let bytesPerPacket = Int(format.mBytesPerPacket)
    for i in 0 ..< Int(packetCount) {
        let packetStart = i * bytesPerPacket
        let packetSize = bytesPerPacket
        let packetData = Data(bytes: data.advanced(by: packetStart), count: packetSize)
        parser.packetsX.append(packetData)
    }
}
Full code is in the GitHub repo.
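Putting it together, feeding a cached file through the parser could look roughly like this (a sketch; Parser stands for the class owning streamID, dataFormatD and packetsX above, and cafURL is assumed to point at your compressed file):

// Stream the cached file through the parser in chunks.
let parser = Parser()                        // however the repo constructs it
let fileData = try Data(contentsOf: cafURL)  // the compressed file, loaded once
let chunkSize = 4096
var offset = 0
while offset < fileData.count {
    let end = min(offset + chunkSize, fileData.count)
    try parser.parse(data: fileData.subdata(in: offset ..< end))
    offset = end
}
// parser.packetsX now holds the raw packets in memory,
// ready to decode to PCM without touching disk again.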