I have an Android project that reads a short[] array of PCM data from the microphone buffer for live analysis. I need to port this functionality to iOS with Swift. On Android it is very simple and looks like this:
import android.media.AudioFormat;
import android.media.AudioRecord;
...
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, someSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, AudioRecord.getMinBufferSize(...));
recorder.startRecording();
Later I read the buffer with:
recorder.read(data, offset, length); //data is short[]
(That's what I'm looking for.)
Documentation: https://developer.android.com/reference/android/media/AudioRecord.html
I'm very new to Swift and iOS. I've read a lot of documentation about AudioToolkit, ...Core and whatever. All I found were C++/Obj-C and bridging-header solutions. That's much too advanced and outdated for me.
For now, I can record PCM data to a CAF file with AVFoundation:
settings = [
    AVLinearPCMBitDepthKey: 16 as NSNumber,
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVLinearPCMIsBigEndianKey: 0 as NSNumber,
    AVLinearPCMIsFloatKey: 0 as NSNumber,
    AVSampleRateKey: 12000.0,
    AVNumberOfChannelsKey: 1 as NSNumber,
]
...
recorder = try AVAudioRecorder(URL: someURL, settings: settings)
recorder.delegate = self
recorder.record()
But that's not what I'm looking for (or is it?). Is there an elegant way to achieve the Android read functionality described above? I need to get a sample array from the microphone buffer. Or do I need to do the reading on the recorded CAF file?
Thanks a lot! Please help me with simple explanations or code examples. iOS terminology is not mine yet ;-)
If you don't mind floating-point samples and 48 kHz, you can quickly get audio data from the microphone like so:
let engine = AVAudioEngine() // instance variable
func setup() {
    let input = engine.inputNode
    let bus = 0
    // Tap the input node; the callback delivers AVAudioPCMBuffers as audio arrives.
    input.installTap(onBus: bus, bufferSize: 512, format: input.inputFormat(forBus: bus)) { (buffer, time) in
        let samples = buffer.floatChannelData![0]
        // audio callback, samples in samples[0]...samples[Int(buffer.frameLength) - 1]
    }
    try! engine.start()
}
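If you specifically need 16-bit integer samples like the Android short[] above, one option is to convert the float samples inside the tap callback. A minimal sketch, reusing the engine instance from the snippet above and assuming a mono input; the scaling and clamping are illustrative, not the only way to do it:
func setupInt16Tap() {
    let input = engine.inputNode          // engine: the AVAudioEngine instance above
    let format = input.inputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 512, format: format) { buffer, _ in
        guard let channel = buffer.floatChannelData?[0] else { return }
        let frameCount = Int(buffer.frameLength)
        // Convert [-1.0, 1.0] floats to Int16, roughly what AudioRecord.read() hands you on Android.
        var samples = [Int16](repeating: 0, count: frameCount)
        for i in 0..<frameCount {
            let clamped = max(-1.0, min(1.0, channel[i]))
            samples[i] = Int16(clamped * Float(Int16.max))
        }
        // samples now holds 16-bit PCM for live analysis
    }
    try! engine.start()
}
An AVAudioConverter could do the same conversion for you, but for simple analysis the manual loop is usually enough.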
Related
I'm developing an iOS application and I'm quite new to iOS development. So far I have implemented an H.264 decoder for a network stream using VideoToolbox, which was quite hard.
Now I need to play an audio stream that comes from the network, but with no file involved, just a raw AAC stream read directly from the socket. This stream comes from the output of an ffmpeg instance.
The problem is that I don't know how to start with this; there seems to be little information about the topic. I have already tried AVAudioPlayer but got only silence. I think I first need to decompress the packets from the stream, just like with the H.264 decoder.
I have also tried AVAudioEngine and AVAudioPlayerNode, but no success, same as with AVAudioPlayer. Can someone give me some guidance? Maybe AudioToolbox? AudioQueue?
Thank you very much for the help :)
Edit:
I'm playing around with AVAudioCompressedBuffer and getting no errors using AVAudioEngine and AVAudioNode. But I don't know what this output means:
inBuffer: <AVAudioCompressedBuffer@0x6040004039f0: 0/1024 bytes>
Does this mean that the buffer is empty? I have been trying to feed this buffer in several ways, but it always returns something like 0/1024. I think I'm not doing this right:
compressedBuffer.mutableAudioBufferList.pointee = audioBufferList
Any idea?
Thank you!
Edit 2:
I'm editing to show my code for decompressing the buffer. Maybe someone can point me in the right direction.
Note: the packet ingested by this function is actually passed without the ADTS header (9 bytes), but I have also tried passing it with the header.
func decodeCompressedPacket(packet: Data) -> AVAudioPCMBuffer {
    var packetCopy = packet
    var streamDescription: AudioStreamBasicDescription = AudioStreamBasicDescription.init(mSampleRate: 44100, mFormatID: kAudioFormatMPEG4AAC, mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue), mBytesPerPacket: 0, mFramesPerPacket: 1024, mBytesPerFrame: 0, mChannelsPerFrame: 1, mBitsPerChannel: 0, mReserved: 0)
    let audioFormat = AVAudioFormat.init(streamDescription: &streamDescription)
    let compressedBuffer = AVAudioCompressedBuffer.init(format: audioFormat!, packetCapacity: 1, maximumPacketSize: 1024)

    print("packetCopy count: \(packetCopy.count)")

    var audioBuffer: AudioBuffer = AudioBuffer.init(mNumberChannels: 1, mDataByteSize: UInt32(packetCopy.count), mData: &packetCopy)
    var audioBufferList: AudioBufferList = AudioBufferList.init(mNumberBuffers: 1, mBuffers: audioBuffer)
    var mNumberBuffers = 1
    var packetSize = packetCopy.count

    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers, &audioBuffer, MemoryLayout<AudioBuffer>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers.mDataByteSize, &packetSize, MemoryLayout<Int>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mNumberBuffers, &mNumberBuffers, MemoryLayout<UInt32>.size)
    // compressedBuffer.mutableAudioBufferList.pointee = audioBufferList

    var bufferPointer = compressedBuffer.data

    for byte in packetCopy {
        memset(compressedBuffer.mutableAudioBufferList[0].mBuffers.mData, Int32(byte), MemoryLayout<UInt8>.size)
    }

    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mNumberChannels)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mDataByteSize)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mData)")

    var uncompressedBuffer = uncompress(inBuffer: compressedBuffer)
    print("uncompressedBuffer: \(uncompressedBuffer)")
    return uncompressedBuffer
}
So you are right in thinking you will (most likely) need to decompress the packets received from the stream. The idea is to get them into raw PCM format so they can be sent directly to the audio output. This way you could also apply any DSP / audio manipulation you want to the stream.
As you mentioned, you will probably need to look in the AudioQueue direction, and the Apple docs provide a good example of streaming audio in real time, although it is in Obj-C (in this case I think it may be a good idea to carry it out in Obj-C). This is probably the best place to get started (interfacing Obj-C with Swift is super simple).
Looking at it again in Swift, there is the class AVAudioCompressedBuffer, which seems to handle AAC for your case (you would not need to decode the AAC if you get this to work); however, there is no direct method for setting the buffer, as it is intended to be just a storage container, I believe. Here's a working example of someone using AVAudioCompressedBuffer along with an AVAudioFile (maybe you could buffer everything into files on background threads? I think it would be too much I/O overhead).
However, if you tackle this in Obj-C, there is a post on how to set an AVAudioPCMBuffer (maybe it works with AVAudioCompressedBuffer?) directly through memset (kind of disgusting, but at the same time lovely to me as an embedded programmer):
// make a silent stereo buffer
AVAudioChannelLayout *chLayout = [[AVAudioChannelLayout alloc] initWithLayoutTag:kAudioChannelLayoutTag_Stereo];
AVAudioFormat *chFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                           sampleRate:44100.0
                                                          interleaved:NO
                                                        channelLayout:chLayout];

AVAudioPCMBuffer *thePCMBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:chFormat frameCapacity:1024];
thePCMBuffer.frameLength = thePCMBuffer.frameCapacity;
for (AVAudioChannelCount ch = 0; ch < chFormat.channelCount; ++ch) {
    memset(thePCMBuffer.floatChannelData[ch], 0, thePCMBuffer.frameLength * chFormat.streamDescription->mBytesPerFrame);
}
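For completeness, here is a hedged Swift sketch of the AVAudioCompressedBuffer route discussed above: copy one raw AAC packet into the buffer through its data pointer, describe the packet, then decode to PCM with AVAudioConverter. The formats, the 1024-frame packet size, and the function name are assumptions based on the ASBD in the question, not something I have validated against your stream:
import AVFoundation

func decodeAACPacket(_ packet: Data) -> AVAudioPCMBuffer? {
    // Assumed AAC-LC mono 44.1 kHz input, float PCM output.
    var asbd = AudioStreamBasicDescription(mSampleRate: 44100, mFormatID: kAudioFormatMPEG4AAC,
                                           mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
                                           mBytesPerPacket: 0, mFramesPerPacket: 1024, mBytesPerFrame: 0,
                                           mChannelsPerFrame: 1, mBitsPerChannel: 0, mReserved: 0)
    guard let aacFormat = AVAudioFormat(streamDescription: &asbd),
          let pcmFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1),
          let converter = AVAudioConverter(from: aacFormat, to: pcmFormat) else { return nil }

    // Fill the compressed buffer: copy the packet bytes and describe the single packet.
    let compressed = AVAudioCompressedBuffer(format: aacFormat, packetCapacity: 1, maximumPacketSize: packet.count)
    packet.withUnsafeBytes { raw in
        compressed.data.copyMemory(from: raw.baseAddress!, byteCount: packet.count)
    }
    compressed.byteLength = UInt32(packet.count)   // settable on iOS 11+
    compressed.packetCount = 1
    compressed.packetDescriptions?.pointee = AudioStreamPacketDescription(mStartOffset: 0,
                                                                          mVariableFramesInPacket: 0,
                                                                          mDataByteSize: UInt32(packet.count))

    // Decode: hand the converter the single compressed buffer, then report "no data right now".
    let pcm = AVAudioPCMBuffer(pcmFormat: pcmFormat, frameCapacity: 1024)!
    var fed = false
    var error: NSError?
    let status = converter.convert(to: pcm, error: &error) { _, outStatus in
        if fed {
            outStatus.pointee = .noDataNow
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return compressed
    }
    return (status != .error && error == nil) ? pcm : nil
}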
I know this is a lot to take in, and in no way does it seem like a simple solution, but I think the Obj-C AudioQueue technique would be my first stop!
Hope this helps!
I am making an app which needs to stream audio to a server. What I want to do is to divide the recorded audio into chunks and upload them while recording.
I used two recorders to do that, but it didn't work well; I can hear the difference between the chunks (it stops for a couple of milliseconds).
How can I do this?
Your problem can be broken into two pieces: recording and chunking (and uploading, but who cares).
For recording from the microphone and writing to the file, you can get started quickly with AVAudioEngine and AVAudioFile. See below for a sample, which records chunks at the device's default input sample rate (you will probably want to rate-convert that; see the sketch after the sample).
When you talk about the "difference between the chunks", you are referring to the ability to divide your audio data into pieces in such a way that when you concatenate them you don't hear discontinuities. For example, LPCM audio data can be divided into chunks at the sample level, but the LPCM bitrate is high, so you're more likely to use a packetised format like ADPCM (called ima4 on iOS?), mp3 or AAC. These formats can only be divided on packet boundaries, e.g. every 64, 576 or 1024 samples, say. If your chunks are written without a header (usual for mp3 and AAC, not sure about ima4), then concatenation is trivial: simply lay the chunks end to end, exactly as the cat command line tool would. Sadly, on iOS there is no mp3 encoder, so that leaves AAC as the likely format for you, but that depends on your playback requirements. iOS devices and Macs can definitely play it back.
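To make the concatenation point concrete, a tiny sketch (assuming headerless / ADTS AAC chunk files like the ones the sample below produces; the names are illustrative):
import Foundation

// Lay chunk files end to end, like `cat chunk-0.aac chunk-1.aac ... > whole.aac`.
func concatenateChunks(_ chunkURLs: [URL], to outputURL: URL) throws {
    FileManager.default.createFile(atPath: outputURL.path, contents: nil)
    let output = try FileHandle(forWritingTo: outputURL)
    defer { output.closeFile() }
    for url in chunkURLs {
        let data = try Data(contentsOf: url)
        output.write(data)
    }
}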
import UIKit
import AVFoundation

class ViewController: UIViewController {
    let engine = AVAudioEngine()

    struct K {
        static let secondsPerChunk: Float64 = 10
    }

    var chunkFile: AVAudioFile! = nil
    var outputFramesPerSecond: Float64 = 0  // aka input sample rate
    var chunkFrames: AVAudioFrameCount = 0
    var chunkFileNumber: Int = 0

    func writeBuffer(_ buffer: AVAudioPCMBuffer) {
        let samplesPerSecond = buffer.format.sampleRate

        if chunkFile == nil {
            createNewChunkFile(numChannels: buffer.format.channelCount, samplesPerSecond: samplesPerSecond)
        }

        try! chunkFile.write(from: buffer)
        chunkFrames += buffer.frameLength

        if chunkFrames > AVAudioFrameCount(K.secondsPerChunk * samplesPerSecond) {
            chunkFile = nil // close file
        }
    }

    func createNewChunkFile(numChannels: AVAudioChannelCount, samplesPerSecond: Float64) {
        let fileUrl = NSURL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("chunk-\(chunkFileNumber).aac")!
        print("writing chunk to \(fileUrl)")

        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVEncoderBitRateKey: 64000,
            AVNumberOfChannelsKey: numChannels,
            AVSampleRateKey: samplesPerSecond
        ]

        chunkFile = try! AVAudioFile(forWriting: fileUrl, settings: settings)

        chunkFileNumber += 1
        chunkFrames = 0
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        let input = engine.inputNode
        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)

        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            DispatchQueue.main.async {
                self.writeBuffer(buffer)
            }
        }

        try! engine.start()
    }
}
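On the rate conversion mentioned above: a minimal sketch of downsampling each tap buffer with AVAudioConverter before handing it to writeBuffer. The 16 kHz mono target is only an example; it stays in the engine's standard float format so the chunk file's processing format still matches:
// Illustrative target format; pick whatever your server expects.
let targetFormat = AVAudioFormat(standardFormatWithSampleRate: 16000, channels: 1)!

func downsample(_ buffer: AVAudioPCMBuffer, with converter: AVAudioConverter) -> AVAudioPCMBuffer? {
    let ratio = targetFormat.sampleRate / buffer.format.sampleRate
    let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
    guard let out = AVAudioPCMBuffer(pcmFormat: targetFormat, frameCapacity: capacity) else { return nil }

    var fed = false
    var error: NSError?
    let status = converter.convert(to: out, error: &error) { _, outStatus in
        if fed {
            outStatus.pointee = .noDataNow   // only this one buffer per call
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return buffer
    }
    return (status != .error && error == nil) ? out : nil
}

// Usage inside the tap (create the converter once, outside the callback):
// let converter = AVAudioConverter(from: inputFormat, to: targetFormat)!
// if let small = downsample(buffer, with: converter) {
//     DispatchQueue.main.async { self.writeBuffer(small) }
// }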
I'm trying to use AVAudioEngine to record sounds from the microphone together with various sound-effect files to an AVAudioFile.
I create an AVAudioFile like this:
let settings = self.engine.mainMixerNode.outputFormatForBus(0).settings
try self.audioFile = AVAudioFile(forWriting: self.audioURL, settings: settings, commonFormat: .PCMFormatFloat32, interleaved: false)
I install a tap on the audio engine's mainMixerNode, where I write the buffer to the file:
self.engine.mainMixerNode.installTapOnBus(0, bufferSize: 4096, format: self.engine.mainMixerNode.outputFormatForBus(0)) { (buffer, time) -> Void in
    do {
        try self.audioFile?.writeFromBuffer(buffer)
    } catch let error as NSError {
        NSLog("Error writing %@", error.localizedDescription)
    }
}
I'm using self.engine.mainMixerNode.outputFormatForBus(0).settings when creating the audio file, since Apple states that "The buffer format MUST match the file's processing format, which is why outputFormatForBus: was used when creating the AVAudioFile object above". In the documentation for installTapOnBus they also say: "The tap and connection formats (if non-nil) on the specified bus should be identical".
However, this gives me a very large, uncompressed audio file. I want to save the file as .m4a but don't understand where to specify the settings I want to use:
[
    AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatMPEG4AAC),
    AVSampleRateKey: NSNumber(double: 32000.0), // 44100.0
    AVNumberOfChannelsKey: NSNumber(int: 1),
    AVEncoderBitRatePerChannelKey: NSNumber(int: 16),
    AVEncoderAudioQualityKey: NSNumber(int: Int32(AVAudioQuality.High.rawValue))
]
If I pass in these settings instead when creating the audio file, the app crashes when I record.
Any suggestions or ideas on how to solve this?
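A commonly suggested direction, sketched here as an untested assumption rather than a confirmed fix: take the sample rate and channel count for the AAC settings from the mixer itself, so the file's processing format matches the tap buffers, and give the file a .m4a URL. The function and outputURL names are illustrative.
import AVFoundation

// engine is your running AVAudioEngine; outputURL is an assumed URL ending in ".m4a".
func startWritingM4A(from engine: AVAudioEngine, to outputURL: URL) throws {
    // Let the mixer dictate sample rate / channel count so the tap buffers
    // match the file's processing format, while the file itself is AAC.
    let mixerFormat = engine.mainMixerNode.outputFormat(forBus: 0)

    let aacSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: mixerFormat.sampleRate,
        AVNumberOfChannelsKey: mixerFormat.channelCount,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]

    let audioFile = try AVAudioFile(forWriting: outputURL,
                                    settings: aacSettings,
                                    commonFormat: .pcmFormatFloat32,
                                    interleaved: false)

    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: mixerFormat) { buffer, _ in
        do {
            try audioFile.write(from: buffer)   // AVAudioFile encodes PCM to AAC as it writes
        } catch {
            print("write failed: \(error)")
        }
    }
}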
When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches / artifacts on playback while testing. The audio does not glitch at all in the iOS Simulator; however, there is audible distortion on playback when I run the app on an actual iOS device. The distortion occurs randomly (the triggered sound sometimes sounds great, while other times it sounds distorted).
I've tried using different audio files and file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same and I'm stuck wondering what could be going wrong.
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)
let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
func startEngine() {
    var error: NSError?
    file.readIntoBuffer(buffer, error: &error)
    audioEngine.attachNode(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
    audioEngine.prepare()

    func start() {
        var error: NSError?
        audioEngine.startAndReturnError(&error)
    }
    start()
}
startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0
func play() {
    func scheduleBuffer() {
        playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        playerNode.prepareWithFrameCount(frameLength)
    }

    if playerNode.playing == false {
        scheduleBuffer()
        let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
        playerNode.playAtTime(time)
    } else {
        scheduleBuffer()
    }
}
// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
    play()
}
Update:
OK, I think I've identified the source of the distortion: the volume of the node(s) is too high at the output and causes clipping. After adding these two lines in my startEngine function, the distortion no longer occurred:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output; my audio file itself does not clip. I'm guessing it might be a result of the way AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing clipping? I'm still looking for a solid understanding of why this occurs. If anyone is willing/able to provide any clarification about this, that would be fantastic!
Not sure if this is the problem you experienced in 2015; it may be the same issue that @suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the simulator's. On macOS it is 44100, and on iOS devices (late-model ones) it is 48000.
So when you fill your buffer with 44100 samples on a 48000 device, you get 3900 samples of silence. When played back, it doesn't sound like silence; it sounds like a glitch.
I used the mainMixer format when connecting my playerNode and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code:
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))

let pcmBuffer = AVAudioPCMBuffer(pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
                                 frameCapacity: AVAudioFrameCount(bufferSize))
AVSpeechSynthesizer has a fairly simple API, which doesn't have built-in support for saving to an audio file.
I'm wondering if there's a way around this: perhaps recording the output as it's played silently, for playback later? Or something more efficient.
This is finally possible: as of iOS 13, AVSpeechSynthesizer has write(_:toBufferCallback:):
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "test 123")
utterance.voice = AVSpeechSynthesisVoice(language: "en")
var output: AVAudioFile?
synthesizer.write(utterance) { (buffer: AVAudioBuffer) in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer else {
        fatalError("unknown buffer type: \(buffer)")
    }
    if pcmBuffer.frameLength == 0 {
        // done
    } else {
        // append buffer to file
        if output == nil {
            output = try! AVAudioFile(
                forWriting: URL(fileURLWithPath: "test.caf"),
                settings: pcmBuffer.format.settings,
                commonFormat: .pcmFormatInt16,
                interleaved: false)
        }
        try! output?.write(from: pcmBuffer)
    }
}
As of now, AVSpeechSynthesizer does not support this. There is no way to get the audio file using AVSpeechSynthesizer. I tried this a few weeks ago for one of my apps and found out that it is not possible; also, nothing has changed for AVSpeechSynthesizer in iOS 8.
I too thought of recording the sound as it is being played, but there are many flaws with that approach: the user might be using headphones, the system volume might be low or muted, it might pick up other external sound, so it's not advisable to go that way.
You can use OS X to prepare AIFF files (or, maybe, some OS X-based service) via the NSSpeechSynthesizer method
startSpeakingString:toURL:
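For reference, a minimal macOS-only sketch of that approach (the output path is illustrative):
import AppKit

// macOS only: render speech straight to an AIFF file instead of the speakers.
let synthesizer = NSSpeechSynthesizer()
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("speech.aiff")
_ = synthesizer.startSpeaking("test 123", to: outputURL)
// Rendering is asynchronous; implement NSSpeechSynthesizerDelegate's
// speechSynthesizer(_:didFinishSpeaking:) to know when the file is complete.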