I want to record audio on iOS in real time, analyze the raw audio data and save parts of the recorded data. I'm recording the data with this code: https://gist.github.com/hotpaw2/ba815fc23b5d642705f2b1dedfaf0107
Now, my data is saved in a Float array and I want to save it to an audio file. I tried doing it with this code:
let fileMgr = FileManager.default
let dirPaths = fileMgr.urls(for: .documentDirectory, in: .userDomainMask)
let recordSettings = [AVEncoderAudioQualityKey: AVAudioQuality.min.rawValue,
AVEncoderBitRateKey: 16,
AVNumberOfChannelsKey: 2,
AVSampleRateKey: 44100] as [String: Any]
let soundFileUrl = dirPaths[0].appendingPathComponent("recording-" + getDate() + ".pcm")
do {
let audioFile = try AVAudioFile(forWriting: soundFileUrl, settings: recordSettings)
let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: 2, interleaved: true)
let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 3000)
for i in 0..<circBuffer.count {
audioFileBuffer.int16ChannelData?.pointee[i] = Int16(circBuffer[i])
}
try audioFile.write(from: audioFileBuffer)
}
On the last line, I get an error which says:
ERROR: >avae> AVAudioFile.mm:306: -[AVAudioFile writeFromBuffer:error:]: error -50
amplitudeDemo(79264,0x70000f7cb000) malloc: *** error for object 0x7fc5b9057e00: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
I've already searched through a lot of other questions, but I couldn't find anything that helped me.
In your code at this line:
let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 3000)
You declare an AVAudioPCMBuffer with a capacity of 3000 frames × 2 channels, which means the allocated buffer for your audioFileBuffer can hold 6000 samples. If the index into the channel data exceeds this limit, your code corrupts nearby regions of the heap, which causes the "object was probably modified" error.
So your circBuffer.count is most likely exceeding this limit. You need to allocate a large enough buffer for the AVAudioPCMBuffer.
do {
//### You need to specify common format
let audioFile = try AVAudioFile(forWriting: soundFileUrl, settings: recordSettings, commonFormat: .pcmFormatInt16, interleaved: true)
let channels = 2
let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: AVAudioChannelCount(channels), interleaved: true)
let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(circBuffer.count / channels)) //<-allocate enough frames
//### `stride` removed as it seems useless...
let int16ChannelData = audioFileBuffer.int16ChannelData! //<-cannot be nil for the `format` above
//When interleaved, channel data of AVAudioPCMBuffer is not as described in its doc:
// https://developer.apple.com/reference/avfoundation/avaudiopcmbuffer/1386212-floatchanneldata .
// The following code is modified to work with actual AVAudioPCMBuffer.
//Assuming `circBuffer` as
// Interleaved, 2 channel, Float32
// Each sample is normalized to [-1.0, 1.0] (as usual floating point audio format)
for i in 0..<circBuffer.count {
int16ChannelData[0][i] = Int16(circBuffer[i] * Float(Int16.max))
}
//You need to update `frameLength` of the `AVAudioPCMBuffer`.
audioFileBuffer.frameLength = AVAudioFrameCount(circBuffer.count / channels)
try audioFile.write(from: audioFileBuffer)
} catch {
print("Error", error)
}
Some notes are added as comments; please check them before trying this code.
UPDATE
Sorry for showing untested code; two things are fixed:
You need to specify commonFormat: when instantiating AVAudioFile.
int16ChannelData (and the other channel data properties) does not return the pointers described in its documentation when the format is interleaved, so the data-filling loop has been modified to match the actual behaviour.
Please try.
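For comparison, here is how the same fill would look with a non-interleaved (deinterleaved) Int16 format, where the per-channel pointers do behave as documented. This is only a self-contained sketch, assuming the same circBuffer layout as above (interleaved, 2-channel, Float32 in [-1.0, 1.0]):
import AVFoundation

//Sketch: fill a *deinterleaved* Int16 buffer from an interleaved Float32 source.
let channels = 2
let frames = circBuffer.count / channels
let deinterleavedFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: 44100,
                                        channels: AVAudioChannelCount(channels),
                                        interleaved: false)!
let deinterleavedBuffer = AVAudioPCMBuffer(pcmFormat: deinterleavedFormat,
                                           frameCapacity: AVAudioFrameCount(frames))!
let channelData = deinterleavedBuffer.int16ChannelData! //one pointer per channel here
for frame in 0..<frames {
    channelData[0][frame] = Int16(circBuffer[frame * 2]     * Float(Int16.max))
    channelData[1][frame] = Int16(circBuffer[frame * 2 + 1] * Float(Int16.max))
}
deinterleavedBuffer.frameLength = AVAudioFrameCount(frames)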
You seem to have chosen the frameCapacity of 3000 arbitrarily.
Set it to the actual sample count:
let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(circBuffer.count))
Related
I'm trying to write out an audio file after doing some processing, and am getting an error. I've reduced the error to this simple standalone case:
import Foundation
import AVFoundation
do {
let inputFileURL = URL(fileURLWithPath: "/Users/andrewmadsen/Desktop/test.m4a")
let file = try AVAudioFile(forReading: inputFileURL, commonFormat: .pcmFormatFloat32, interleaved: true)
guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
throw NSError()
}
buffer.frameLength = buffer.frameCapacity
try file.read(into: buffer)
let tempURL =
URL(fileURLWithPath: NSTemporaryDirectory())
.appendingPathComponent("com.openreelsoftware.AudioWriteTest")
.appendingPathComponent(UUID().uuidString)
.appendingPathExtension("caf")
let fm = FileManager.default
let dirURL = tempURL.deletingLastPathComponent()
if !fm.fileExists(atPath: dirURL.path, isDirectory: nil) {
try fm.createDirectory(at: dirURL, withIntermediateDirectories: true, attributes: nil)
}
var settings = buffer.format.settings
settings[AVAudioFileTypeKey] = kAudioFileCAFType
let tempFile = try AVAudioFile(forWriting: tempURL, settings: settings)
try tempFile.write(from: buffer)
} catch {
print(error)
}
When this code runs, the tempFile.write(from: buffer) call throws an error:
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_imp->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
test.m4a is a stereo, 44.1 kHz AAC file (from the iTunes Store), though the failure occurs with other stereo files in other formats (AIFF and WAV) as well.
The code does not fail, and instead correctly saves the original audio out to a new file if I change the interleaved parameter to false when creating the original input AVAudioFile (file). However, in this case, the following message is logged to the console:
Audio files cannot be non-interleaved. Ignoring setting AVLinearPCMIsNonInterleaved YES.
It seems strange and confusing that writing a non-interleaved buffer works fine, despite a message saying that files must be interleaved, while writing an interleaved buffer fails. This is the opposite of what I expected.
I'm aware that reading a file using the plain AVAudioFile(forReading:) initializer without specifying a format defaults to using non-interleaved (i.e. the "standard" AVAudioFormat at the file's actual sample rate and channel count). Does this mean that I really do have to convert interleaved audio to non-interleaved before trying to write it?
Notably, in the actual program where this problem came up, I'm doing something much more complex than simply reading a file in and writing it back out again, and I do need to handle interleaved audio. I have confirmed however that that original, more complex code is also failing only for interleaved stereo audio.
Is there something tricky I need to do to get AVAudioFile to write out a buffer containing interleaved PCM audio?
The mixup here is that there are TWO formats in play: the format of the output file, and the format of the buffers you will write (the processing format). The initializer AVAudioFile(forWriting: settings:) does not let you choose the processing format and defaults to de-interleaved, hence your error.
This opens the file for writing using the standard format (deinterleaved floating point).
You need to use the other initializer: AVAudioFile(forWriting:settings: commonFormat:interleaved:) whose last two arguments specify the processing format (the argument names could have been clearer about that tbh).
var settings: [String : Any] = [:]
settings[AVFormatIDKey] = kAudioFormatMPEG4AAC
settings[AVAudioFileTypeKey] = kAudioFileCAFType
settings[AVSampleRateKey] = buffer.format.sampleRate
settings[AVNumberOfChannelsKey] = 2
settings[AVLinearPCMIsFloatKey] = (buffer.format.commonFormat == .pcmFormatFloat32)
let tempFile = try AVAudioFile(forWriting: tempURL, settings: settings, commonFormat: buffer.format.commonFormat, interleaved: buffer.format.isInterleaved)
try tempFile.write(from: buffer)
p.s. passing the buffer format setting directly to AVAudioFile gets you an LPCM caf file, which you may not want, hence I reconstruct the file settings.
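For reference, this is roughly what the "pass the buffer settings straight through" variant mentioned in the p.s. looks like (a sketch reusing buffer and tempURL from the question; it produces a much larger linear PCM .caf instead of AAC):
//Sketch only: same processing format, but the file keeps the buffer's own LPCM settings.
var lpcmSettings = buffer.format.settings
lpcmSettings[AVAudioFileTypeKey] = kAudioFileCAFType
let lpcmFile = try AVAudioFile(forWriting: tempURL,
                               settings: lpcmSettings,
                               commonFormat: buffer.format.commonFormat,
                               interleaved: buffer.format.isInterleaved)
try lpcmFile.write(from: buffer)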
Not positive here, but maybe since you're making the outputFile settings the same as the processing format, it's possible that the processing format has an inflexible policy on interleaving, whereas the file settings format will be fine with it - or vice versa.
Here's what I'd try first. Incomplete example, but should be enough to illustrate the areas to test.
let sourceFile: AVAudioFile
let format: AVAudioFormat
do {
// for the moment, try this without any specific format and see what it gives you
sourceFile = try AVAudioFile(forReading: inputFileURL)
format = sourceFile.processingFormat
print(format) // let's see what we're getting so far, maybe some clues
} catch {
fatalError("Unable to load the source audio file: \(error.localizedDescription).")
}
let sourceSettings = sourceFile.fileFormat.settings
var outputSettings = sourceSettings // start with the settings of the original file rather than the buffer format settings
outputSettings[AVAudioFileTypeKey] = kAudioFileCAFType
// etc...
I have an AKSequencer with an AKMusicTrack inside of it, whose output is an AKMIDISampler. I also load the AKMIDISampler with a SoundFont file.
The problem that I'm facing with AudioKit's renderToFile is that when it does create the file, the sound is empty/silent, or it plays a single note right at the beginning of the file, and besides that single note, a strange sound plays for the entire length.
Here's the code for the initialisation
let midiSampler = AKMIDISampler()
let sequencer = AKSequencer()
let midi = AKMIDI()
do {
try midiSampler.loadSoundFont("soundFontFile", preset: 0, bank: 0)
} catch {
AKLog("Error - Couldn't load Sample!!!")
}
AudioKit.output = midiSampler
do {
try AudioKit.start()
} catch {
AKLog("AudioKit didn't begin")
}
let drumTrack = sequencer.newTrack("Drum Track")
midi.openInput()
midiSampler.enableMIDI(midi.client, name: "MIDI Sampler MIDI In")
drumTrack.setMIDIOutput(midiSampler.midiIn)
sequencer.setLength(AKDuration(beats: 8))
sequencer.setTempo(136)
sequencer.setRate(40)
midi = AudioKit.midi
Here is how I attempt to renderToFile:
let path = "recordedMIDIAudio.caf"
let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent(path)
let format = AVAudioFormat(commonFormat: .pcmFormatFloat64, sampleRate: 44100, channels: 1, interleaved: true)!
do {
let audioFile = try AKAudioFile(forWriting: url, settings: format.settings, commonFormat: format.commonFormat, interleaved: format.isInterleaved)
try AudioKit.renderToFile(audioFile, duration: 3.55, prerender: {
self.sequencer.play()
})
} catch {
AKLog("Error when converting")
}
I've done quite a lot of research on this particular issue but I've had no luck. Any help or pointers will be greatly appreciated, thanks in advance!
Unfortunately it's a well-known but probably not well enough documented fact that offline rendering does not work with MIDI-based signal generation. The clock that the MIDI system uses is not sped up along with the rate of sample generation that happens when rendering to a file.
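For completeness: the offline path that does work is AVAudioEngine's manual rendering mode (iOS 11+), where the engine pulls samples as fast as it can instead of in real time; it is exactly this decoupling from the wall clock that breaks MIDI timing. A hedged sketch for a non-MIDI source (player node, generator, etc.); outputURL and totalFrames are placeholders:
import AVFoundation

func renderOffline(engine: AVAudioEngine, outputURL: URL, totalFrames: AVAudioFramePosition) throws {
    //Engine graph (non-MIDI source -> mainMixer) is assumed to be set up already.
    let format = engine.outputNode.outputFormat(forBus: 0)
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)

    while engine.manualRenderingSampleTime < totalFrames {
        let framesLeft = totalFrames - engine.manualRenderingSampleTime
        let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(framesLeft))
        let status = try engine.renderOffline(framesToRender, to: buffer)
        switch status {
        case .success:
            try outputFile.write(from: buffer)
        case .insufficientDataFromInputNode, .cannotDoInCurrentContext:
            continue //nothing rendered this pass, try again
        case .error:
            throw NSError(domain: "renderOffline", code: -1)
        @unknown default:
            break
        }
    }
    engine.stop()
    engine.disableManualRenderingMode()
}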
I'm trying to use AudioKit.renderToFile() to export short MIDI passages to audio (m4a):
// renderSequencer is an instance of AKSequencer
self.renderSequencer.loadMIDIFile(fromURL: midiURL)
Conductor.sharedInstance.setInstrument(renderItem.soundID, forOfflineRender: true)
// we only have one track with note content
for track in self.renderSequencer.tracks {
if track.isNotEmpty {
track.setMIDIOutput(Conductor.sharedInstance.midiIn)
}
}
let audioCacheDir = self.module.stateManager.audioCacheDirectory
// strip name off midi file
let midiFileName = String(midiURL.lastPathComponent.split(separator: ".")[0])
audioFileName = midiFileName
audioFileURL = audioCacheDir.appendingPathComponent("\(midiFileName).m4a")
if let audioFileURL = audioFileURL {
let settings = [
AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
AVSampleRateKey: 44100,
AVNumberOfChannelsKey: 2,
AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
let audioFile: AVAudioFile = try! AVAudioFile(forWriting: audioFileURL, settings: settings)
// get time in seconds of audio file (with 4-beat tail)
var duration: Float64 = 0.0
MusicSequenceGetSecondsForBeats(seq, (16.0 + 4), &duration)
// render sequence
do { try AudioKit.renderToFile(audioFile, duration: duration) {
self.renderSequencer.setRate(60.0)
self.renderSequencer.play()
}
} catch { print("Error performing offline file render!") }
}
This does produce an audio file of the expected duration, but it is silent. I've also tried logging from my MIDI output and can see that the events "played" from inside the prerender closure are actually being sent/handled.
Mostly, I suppose, I'm curious to know whether this is actually expected to work. I've seen a couple of posts suggesting that renderToFile from MIDI is not supported (while others have suggested they have it working).
I did, btw, also post an issue on the audiokit GitHub.
I am making an app which needs to stream audio to a server. What I want to do is to divide the recorded audio into chunks and upload them while recording.
I used two recorders to do that, but it didn't work well; I can hear the difference between the chunks (it stops for a couple of milliseconds).
How can I do this?
Your problem can be broken into two pieces: recording and chunking (and uploading, but who cares).
For recording from the microphone and writing to the file, you can get started quickly with AVAudioEngine and AVAudioFile. See below for a sample, which records chunks at the device's default input sampling rate (you will probably want to rate convert that).
When you talk about the "difference between the chunks", you are referring to the ability to divide your audio data into pieces in such a way that when you concatenate them you don't hear discontinuities. For example, LPCM audio data can be divided into chunks at the sample level, but the LPCM bitrate is high, so you're more likely to use a packetised format like ADPCM (called ima4 on iOS?), or mp3 or AAC. These formats can only be divided on packet boundaries, e.g. every 64, 576 or 1024 samples, say. If your chunks are written without a header (usual for mp3 and AAC, not sure about ima4), then concatenation is trivial: simply lay the chunks end to end, exactly as the cat command line tool would (there's a small concatenation sketch after the recording code below). Sadly, on iOS there is no mp3 encoder, so that leaves AAC as the likely format for you, but that depends on your playback requirements. iOS devices and Macs can definitely play it back.
import AVFoundation
class ViewController: UIViewController {
let engine = AVAudioEngine()
struct K {
static let secondsPerChunk: Float64 = 10
}
var chunkFile: AVAudioFile! = nil
var outputFramesPerSecond: Float64 = 0 // aka input sample rate
var chunkFrames: AVAudioFrameCount = 0
var chunkFileNumber: Int = 0
func writeBuffer(_ buffer: AVAudioPCMBuffer) {
let samplesPerSecond = buffer.format.sampleRate
if chunkFile == nil {
createNewChunkFile(numChannels: buffer.format.channelCount, samplesPerSecond: samplesPerSecond)
}
try! chunkFile.write(from: buffer)
chunkFrames += buffer.frameLength
if chunkFrames > AVAudioFrameCount(K.secondsPerChunk * samplesPerSecond) {
chunkFile = nil // close file
}
}
func createNewChunkFile(numChannels: AVAudioChannelCount, samplesPerSecond: Float64) {
let fileUrl = NSURL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("chunk-\(chunkFileNumber).aac")!
print("writing chunk to \(fileUrl)")
let settings: [String: Any] = [
AVFormatIDKey: kAudioFormatMPEG4AAC,
AVEncoderBitRateKey: 64000,
AVNumberOfChannelsKey: numChannels,
AVSampleRateKey: samplesPerSecond
]
chunkFile = try! AVAudioFile(forWriting: fileUrl, settings: settings)
chunkFileNumber += 1
chunkFrames = 0
}
override func viewDidLoad() {
super.viewDidLoad()
let input = engine.inputNode
let bus = 0
let inputFormat = input.inputFormat(forBus: bus)
input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
DispatchQueue.main.async {
self.writeBuffer(buffer)
}
}
try! engine.start()
}
}
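And for the chunking half: if the chunks are headerless (the .aac extension above should give you ADTS AAC), joining them really is just laying the bytes end to end, as described. A minimal sketch (the helper name and URLs are hypothetical; do not use this for containers like .caf or .m4a):
import Foundation

//Concatenate headerless audio chunks byte-for-byte, which is all `cat` would do.
func concatenateChunks(_ chunkURLs: [URL], into outputURL: URL) throws {
    FileManager.default.createFile(atPath: outputURL.path, contents: nil)
    let handle = try FileHandle(forWritingTo: outputURL)
    defer { handle.closeFile() }
    for url in chunkURLs {
        let chunkData = try Data(contentsOf: url)
        handle.write(chunkData)
    }
}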
I'd like to record some audio using AVAudioEngine and the user's microphone. I already have a working sample, but I just can't figure out how to specify the format of the output that I want...
My requirement is that I need the AVAudioPCMBuffer as I speak, which it currently does...
Would I need to add a separate node that does some transcoding? I can't find much documentation or many samples on that problem...
And I am also a noob when it comes to audio stuff. I know that I want NSData containing 16-bit PCM with a max sample rate of 16000 (8000 would be better).
Here's my working sample:
private var audioEngine = AVAudioEngine()
func startRecording() {
let format = audioEngine.inputNode!.inputFormatForBus(bus)
audioEngine.inputNode!.installTapOnBus(bus, bufferSize: 1024, format: format) { (buffer: AVAudioPCMBuffer, time:AVAudioTime) -> Void in
let audioFormat = buffer.format
print("\(audioFormat)")
}
audioEngine.prepare()
do {
try audioEngine.start()
} catch { /* Imagine some super awesome error handling here */ }
}
If I change the format to, let's say,
let format = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatInt16, sampleRate: 8000.0, channels: 1, interleaved: false)
then it will produce an error saying that the sample rate needs to be the same as the hwInput...
Any help is very much appreciated!!!
EDIT: I just found AVAudioConverter but I need to be compatible with iOS8 as well...
You cannot change the audio format directly on the input or output nodes. In the case of the microphone, the format will always be 44.1 kHz, 1 channel, 32-bit. To do so, you need to insert a mixer in between. Then when you connect inputNode > changeformatMixer > mainEngineMixer, you can specify the details of the format you want.
Something like:
var inputNode = audioEngine.inputNode
var downMixer = AVAudioMixerNode()
//I think the engine's I/O nodes are already attached to it by default, so we attach only the downMixer here:
audioEngine.attachNode(downMixer)
//You can tap the downMixer to intercept the audio and do something with it:
downMixer.installTapOnBus(0, bufferSize: 2048, format: downMixer.outputFormatForBus(0)) { //originally 1024
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
print("downMixer Tap")
print("Downmixer Tap Format: " + downMixer.outputFormatForBus(0).description) //buffer.audioBufferList.debugDescription
}
//let's get the input audio format right as it is
let format = inputNode.inputFormatForBus(0)
//I initialize a 16KHz format I need:
let format16KHzMono = AVAudioFormat.init(commonFormat: AVAudioCommonFormat.PCMFormatInt16, sampleRate: 16000.0, channels: 1, interleaved: true)
//connect the nodes inside the engine:
//INPUT NODE --format-> downMixer --16Kformat--> mainMixer
//as you can see I'm downsampling the default 44.1kHz we get from the input to the 16kHz I want
audioEngine.connect(inputNode, to: downMixer, format: format)//use default input format
audioEngine.connect(downMixer, to: audioEngine.outputNode, format: format16KHzMono)//use new audio format
//run the engine
audioEngine.prepare()
try! audioEngine.start()
I would recommend using an open framework such as EZAudio instead, though.
The only thing I found that worked to change the sampling rate was
AVAudioSession.sharedInstance().setPreferredSampleRate(...)
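For context, a hedged sketch of what that request looks like in practice (the 16000 here is just an example; the system is free to ignore the request, so check the session's sampleRate afterwards):
import AVFoundation

do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setPreferredSampleRate(16000) //example value, not guaranteed
    try session.setActive(true)
    print("Hardware sample rate is now \(session.sampleRate)")
} catch {
    print("Audio session error: \(error)")
}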
You can tap off engine.inputNode and use the input node's output format:
engine.inputNode.installTap(onBus: 0, bufferSize: 2048,
format: engine.inputNode.outputFormat(forBus: 0))
Unfortunately, there is no guarantee that you will get the sample rate that you want, although it seems like 8000, 12000, 16000, 22050, 44100 all worked.
The following did NOT work:
Setting my custom format in a tap off engine.inputNode. (Exception)
Adding a mixer with my custom format and tapping that. (Exception)
Adding a mixer, connecting it with the inputNode's format, connecting the mixer to the main mixer with my custom format, then removing the input of the outputNode so as not to send the audio to the speaker and get instant feedback. (Worked, but got all zeros)
Not using my custom format at all in the AVAudioEngine, and using AVAudioConverter to convert from the hardware rate in my tap. (Length of the buffer was not set, no way to tell if results were correct)
This was with iOS 12.3.1.
In order to change the sample rate of the input node, you have to first connect the input node to a mixer node and specify a new format in the parameter.
let input = avAudioEngine.inputNode
let mainMixer = avAudioEngine.mainMixerNode
let newAudioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
avAudioEngine.connect(input, to: mainMixer, format: newAudioFormat)
Now you can call installTap function on input node with the newAudioFormat.
One more thing I'd like to point out: since the launch of the iPhone 12, the default sample rate of the input node is no longer 44100; it has been bumped up to 48000.
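Rather than hard-coding 44100 or 48000, you can ask the node at runtime, e.g.:
//Query the actual hardware rate instead of assuming it:
let hardwareFormat = avAudioEngine.inputNode.outputFormat(forBus: 0)
print("Input sample rate: \(hardwareFormat.sampleRate)") //44100 on older devices, 48000 on newer ones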
You cannot change the configuration of the input node. Instead, create a mixer node with the format that you want, attach it to the engine, connect it to the input node, and then connect the mainMixer to the node you just created. Now you can install a tap on this node to get PCM data.
Note that for some strange reason, you don't have a lot of choice for the sample rate! At least not on iOS 9.1: use the standard 11025, 22050 or 44100. Any other sample rate will fail!
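A minimal sketch of that wiring (the 22050 Hz mono format is just an example, picked from the rates noted above; assumes an audio session already configured for recording):
import AVFoundation

let engine = AVAudioEngine()
let tapMixer = AVAudioMixerNode()
engine.attach(tapMixer)

//Example format, one of the rates noted above.
let desiredFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 22050,
                                  channels: 1,
                                  interleaved: false)!

//input -> tapMixer in the input's own format, tapMixer -> mainMixer in the desired format.
engine.connect(engine.inputNode, to: tapMixer,
               format: engine.inputNode.outputFormat(forBus: 0))
engine.connect(tapMixer, to: engine.mainMixerNode, format: desiredFormat)
engine.mainMixerNode.outputVolume = 0 //don't monitor the mic through the speaker

tapMixer.installTap(onBus: 0, bufferSize: 1024,
                    format: tapMixer.outputFormat(forBus: 0)) { buffer, _ in
    //buffer arrives in the tapMixer's output format
}

do {
    try engine.start()
} catch {
    print("Could not start engine: \(error)")
}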
If you just need to change the sample rate and channel count, I recommend using the low-level API. You do not need a mixer or converter. Below you can find the Apple document about low-level recording. If you want, you can convert it to an Objective-C class and add a protocol.
Audio Queue Services Programming Guide
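For reference, the Audio Queue route looks roughly like this. A hedged sketch, assuming 16 kHz mono 16-bit LPCM; see the guide above for proper buffer management and error checking:
import AudioToolbox

var format = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

//The callback receives filled buffers; process the samples, then re-enqueue the buffer.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    //buffer.pointee.mAudioData / mAudioDataByteSize hold the recorded PCM here
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

if let queue = queue {
    //Allocate and enqueue a few buffers (16000 bytes ≈ 0.5 s at 16 kHz mono Int16), then start.
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 16000, &buffer)
        if let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}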
If your goal is simply to end up with AVAudioPCMBuffers that contain audio in your desired format, you can convert the buffers returned in the tap block using AVAudioConverter. This way, you actually don't need to know or care what the format of the inputNode is.
class MyBufferRecorder {
private let audioEngine:AVAudioEngine = AVAudioEngine()
private var inputNode:AVAudioInputNode!
private let audioQueue:DispatchQueue = DispatchQueue(label: "Audio Queue 5000")
private var isRecording:Bool = false
func startRecording() {
if (isRecording) {
return
}
isRecording = true
// must convert (unknown until runtime) input format to our desired output format
inputNode = audioEngine.inputNode
let inputFormat:AVAudioFormat! = inputNode.outputFormat(forBus: 0)
// 9600 is somewhat arbitrary... min seems to be 4800, max 19200... it doesn't matter what we set
// because we don't re-use this value -- we query the buffer returned in the tap block for its true length.
// Using [weak self] in the tap block is probably a better idea, but it results in weird warnings for now
inputNode.installTap(onBus: 0, bufferSize: AVAudioFrameCount(9600), format: inputFormat) { (buffer, time) in
// not sure if this is necessary
if (!self.isRecording) {
print("\nDEBUG - rejecting callback, not recording")
return }
// not really sure if/why this needs to be async
self.audioQueue.async {
// Convert recorded buffer to our preferred format
let convertedPCMBuffer = AudioUtils.convertPCMBuffer(bufferToConvert: buffer, fromFormat: inputFormat, toFormat: AudioUtils.desiredFormat)
// do something with converted buffer
}
}
do {
// important not to start engine before installing tap
try audioEngine.start()
} catch {
print("\nDEBUG - couldn't start engine!")
return
}
}
func stopRecording() {
print("\nDEBUG - recording stopped")
isRecording = false
inputNode.removeTap(onBus: 0)
audioEngine.stop()
}
}
Separate class:
import Foundation
import AVFoundation
// assumes we want 16bit, mono, 44100hz
// change to what you want
class AudioUtils {
static let desiredFormat:AVAudioFormat! = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: Double(44100), channels: 1, interleaved: false)
// PCM <--> PCM
static func convertPCMBuffer(bufferToConvert: AVAudioPCMBuffer, fromFormat: AVAudioFormat, toFormat: AVAudioFormat) -> AVAudioPCMBuffer {
let convertedPCMBuffer = AVAudioPCMBuffer(pcmFormat: toFormat, frameCapacity: AVAudioFrameCount(bufferToConvert.frameLength))
var error: NSError? = nil
let inputBlock:AVAudioConverterInputBlock = {inNumPackets, outStatus in
outStatus.pointee = AVAudioConverterInputStatus.haveData
return bufferToConvert
}
let formatConverter:AVAudioConverter = AVAudioConverter(from:fromFormat, to: toFormat)!
formatConverter.convert(to: convertedPCMBuffer!, error: &error, withInputFrom: inputBlock)
if error != nil {
print("\nDEBUG - " + error!.localizedDescription)
}
return convertedPCMBuffer!
}
}
This is by no means production-ready code -- I'm also learning iOS audio... so please, please let me know any errors, best practices, or dangerous things going on in that code and I'll keep this answer updated.
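Since the original question asked for NSData containing 16-bit PCM, here is one hedged way to pull the raw bytes out of the converted buffer (valid for mono or interleaved Int16 formats; the helper name is mine):
import AVFoundation

//Extract raw 16-bit PCM bytes from a converted buffer (mono or interleaved Int16).
func pcm16Data(from buffer: AVAudioPCMBuffer) -> Data? {
    guard let channelData = buffer.int16ChannelData else { return nil }
    let byteCount = Int(buffer.frameLength) *
        Int(buffer.format.streamDescription.pointee.mBytesPerFrame)
    return Data(bytes: channelData[0], count: byteCount)
}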