Converting audio samples to .pcmFormatInt16 shows 0s (iOS, Swift, AVAudioEngine)

I am a beginner working with sound and AVAudioEngine in iOS, and I'm developing an application that captures audio samples as buffers and analyzes them. The sample rate must be 8 kHz with 16-bit integer PCM data, but when I try to record from the inputNode and convert the data to 8 kHz, the buffer contains only 0s. However, when I set the commonFormat to .pcmFormatFloat32 it works fine.
My Code:
let inputNode = audioEngine.inputNode
let downMixer = AVAudioMixerNode()
let main = audioEngine.mainMixerNode
let format = inputNode.inputFormat(forBus: 0)
let format16KHzMono = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 8000, channels: 1, interleaved: true)
audioEngine.attach(downMixer)
downMixer.installTap(onBus: 0, bufferSize: 640, format: format16KHzMono) { (buffer, time) -> Void in
    print(buffer.description)
    if let channel1Buffer = buffer.int16ChannelData?[0] {
        for i in 0 ..< Int(buffer.frameLength) {
            print(channel1Buffer[i]) // prints 0s :(
        }
    }
}
audioEngine.connect(inputNode, to: downMixer, format: format)
audioEngine.connect(downMixer, to: main, format: format16KHzMono)
audioEngine.prepare()
try! audioEngine.start()
Thanks
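For what it's worth, AVAudioMixerNode mixes in Float32 internally, which is consistent with an Int16-format tap coming back silent. One workaround, sketched here as an assumption rather than a confirmed fix, is to tap the mixer in its native float format and convert each buffer to 8 kHz Int16 with AVAudioConverter (the names below mirror the code above; the tap and converter setup are illustrative):

```swift
import AVFoundation

// Sketch: tap in the mixer's native Float32 format and convert each
// buffer to 8 kHz Int16. Assumes audioEngine and downMixer are set up
// and connected as in the question.
let floatFormat = downMixer.outputFormat(forBus: 0) // native Float32
let int16Format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                sampleRate: 8000, channels: 1, interleaved: true)!
let converter = AVAudioConverter(from: floatFormat, to: int16Format)!

downMixer.installTap(onBus: 0, bufferSize: 640, format: floatFormat) { buffer, _ in
    // Size the output for the sample-rate ratio (e.g. 44100 -> 8000).
    let ratio = 8000.0 / floatFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
    guard let converted = AVAudioPCMBuffer(pcmFormat: int16Format,
                                           frameCapacity: capacity) else { return }

    // Hand the tap buffer to the converter exactly once per callback.
    var consumed = false
    var error: NSError?
    converter.convert(to: converted, error: &error) { _, outStatus in
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
    }

    if error == nil, let samples = converted.int16ChannelData?[0] {
        for i in 0 ..< Int(converted.frameLength) {
            print(samples[i]) // Int16 samples at 8 kHz
        }
    }
}
```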

Related

Record from the iPhone microphone and convert to µ-law format for streaming

I want to record from the iPhone microphone and convert the data to µ-law format for streaming. I guess the tap delivers PCM data, but I got noise.
What audio format is the installTap buffer? How can I get µ-law data?
I can get µ-law from AVAudioRecorder, but I don't want a file.
Changing the format settings to AVFormatIDKey = kAudioFormatULaw causes a crash.
func testMicrophoneRecording1() throws {
    let tapNode: AVAudioNode = mixerNode
    let format = tapNode.outputFormat(forBus: 0)
    tapNode.installTap(onBus: 0, bufferSize: 1024, format: format, block: { (buffer, time) in
        let d = buffer.toNSData() as Data
        let ulaw_data = convert_pcm_(to_ulaw: d)
        sendUlawDataToDevice(data: ulaw_data)
    })
    try engine.start()
}
and the connections are:
func makeConnections() {
    let inputNode = engine.inputNode
    let inputFormat = inputNode.outputFormat(forBus: 0)
    engine.connect(inputNode, to: mixerNode, format: inputFormat)

    let mainMixerNode = engine.mainMixerNode
    let mixerFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
    engine.connect(mixerNode, to: mainMixerNode, format: mixerFormat)
}
I got PCM from the microphone and converted it to µ-law following this example:
https://github.com/Epskampie/ios-coreaudio-example
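The tap buffer is linear PCM in whatever format you pass to installTap (here the mixer's Float32 output format). One way to get µ-law bytes, sketched here as an assumption rather than a tested recipe (pcmFormat stands in for your tap format), is to run each buffer through an AVAudioConverter whose destination is built from a kAudioFormatULaw stream description:

```swift
import AVFoundation

// Assumed: pcmFormat is the Float32 format used for the tap.
// Build an 8 kHz mono µ-law format from an AudioStreamBasicDescription.
var asbd = AudioStreamBasicDescription(
    mSampleRate: 8000,
    mFormatID: kAudioFormatULaw,
    mFormatFlags: 0,
    mBytesPerPacket: 1,   // µ-law: 1 byte per frame
    mFramesPerPacket: 1,
    mBytesPerFrame: 1,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 8,
    mReserved: 0)
let ulawFormat = AVAudioFormat(streamDescription: &asbd)!
let converter = AVAudioConverter(from: pcmFormat, to: ulawFormat)!

func ulawData(from buffer: AVAudioPCMBuffer) -> Data? {
    // Compressed output goes in an AVAudioCompressedBuffer.
    let out = AVAudioCompressedBuffer(format: ulawFormat,
                                      packetCapacity: buffer.frameLength,
                                      maximumPacketSize: 1)
    var error: NSError?
    converter.convert(to: out, error: &error) { _, outStatus in
        outStatus.pointee = .haveData
        return buffer
    }
    guard error == nil else { return nil }
    return Data(bytes: out.data, count: Int(out.byteLength))
}
```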

Preventing playback while recording using AVAudioEngine on watchOS

I used AVAudioEngine to gather PCM data from the microphone on iOS and it worked fine; however, when I tried moving the project to watchOS, I got feedback while recording. How would I stop playback from the speakers while recording?
var audioEngine = AVAudioEngine()
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default)
try AVAudioSession.sharedInstance().setActive(true)
let input = audioEngine.inputNode
let format = input.inputFormat(forBus: 0)
audioEngine.connect(input, to: audioEngine.mainMixerNode, format: format)
try! audioEngine.start()

let mixer = audioEngine.mainMixerNode
let tapFormat = mixer.outputFormat(forBus: 0)
let sampleRate = tapFormat.sampleRate
let fft_size = 2048
mixer.installTap(onBus: 0, bufferSize: UInt32(fft_size), format: tapFormat,
                 block: { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    // Processing
})
For anyone else who runs into this: I fixed it by removing the connection from the inputNode to the mainMixerNode and installing the tap directly on the inputNode. I guess the way I was doing it before created a feedback loop where it played back what it was recording. I'm not sure why this only happens on watchOS and not on iPhone; perhaps the iPhone was playing back through the ear speaker rather than the one next to the mic. Fixed code:
var audioEngine = AVAudioEngine()
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default)
try AVAudioSession.sharedInstance().setActive(true)
try! audioEngine.start()

let input = audioEngine.inputNode
let format = input.outputFormat(forBus: 0) // tap the input node's own format; the mixer is no longer involved
let sampleRate = format.sampleRate
let fft_size = 2048
input.installTap(onBus: 0, bufferSize: UInt32(fft_size), format: format,
                 block: { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    // Processing
})

Change encoder format version

I have, for the past week, been trying to take audio from the microphone (on iOS), downsample it, and write it to an .aac file.
I've finally gotten to the point where it's almost working:
let inputNode = audioEngine.inputNode
let inputFormat = inputNode.outputFormat(forBus: 0)
let bufferSize = UInt32(4096)
//let sampleRate = 44100.0
let sampleRate = 8000
let bitRate = sampleRate * 16
let fileUrl = url(appending: "NewRecording.aac")
print("Write to \(fileUrl)")
do {
    outputFile = try AVAudioFile(forWriting: fileUrl,
                                 settings: [
                                     AVFormatIDKey: kAudioFormatMPEG4AAC,
                                     AVSampleRateKey: sampleRate,
                                     AVEncoderBitRateKey: bitRate,
                                     AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
                                     AVNumberOfChannelsKey: 1],
                                 commonFormat: .pcmFormatFloat32,
                                 interleaved: false)
} catch let error {
    print("Failed to create audio file for \(fileUrl): \(error)")
    return
}
recordButton.setImage(RecordingStyleKit.imageOfMicrophone(fill: .red), for: [])

// Downsample the audio to 8 kHz
let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: Double(sampleRate), channels: 1, interleaved: false)!
let converter = AVAudioConverter(from: inputFormat, to: fmt)!

inputNode.installTap(onBus: 0, bufferSize: AVAudioFrameCount(bufferSize), format: inputFormat) { (buffer, time) in
    let inputCallback: AVAudioConverterInputBlock = { inNumPackets, outStatus in
        outStatus.pointee = .haveData
        return buffer
    }
    let convertedBuffer = AVAudioPCMBuffer(pcmFormat: fmt,
                                           frameCapacity: AVAudioFrameCount(fmt.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate))!
    var error: NSError? = nil
    let status = converter.convert(to: convertedBuffer, error: &error, withInputFrom: inputCallback)
    assert(status != .error)
    if let outputFile = self.outputFile {
        do {
            try outputFile.write(from: convertedBuffer)
        } catch let error {
            print("Write failed: \(error)")
        }
    }
}
audioEngine.prepare()
do {
    try audioEngine.start()
} catch {
    print(error.localizedDescription)
}
The problem is that the resulting file MUST be in MPEG ADTS, AAC, v4 LC, 8 kHz, monaural format, but the code above only generates MPEG ADTS, AAC, v2 LC, 8 kHz, monaural.
That is, it MUST be v4, not v2 (I have no choice).
(This result comes from running file {name} on the command line to dump its properties. I also use MediaInfo for additional information.)
I've been trying to figure out if there is some way to provide a hint or setting to AVAudioFile that will change the LC (Low Complexity) version from 2 to 4.
I've been scanning through the docs and examples but can't find any suggestions.

Connecting AVAudioMixerNode to AVAudioEngine

I use AVAudioMixerNode to change the audio format. This entry helped me a lot. The code below gives me the data I want, but I hear my own voice on the phone's speaker. How can I prevent it?
func startAudioEngine() {
    engine = AVAudioEngine()
    guard let engine = engine, let input = engine.inputNode else {
        // #TODO: error out
        return
    }
    let downMixer = AVAudioMixerNode()
    // I think the engine's I/O nodes are already attached to it by default, so we attach only the downMixer here:
    engine.attach(downMixer)
    // You can tap the downMixer to intercept the audio and do something with it:
    downMixer.installTap(onBus: 0, bufferSize: 2048, format: downMixer.outputFormat(forBus: 0), block: // originally 1024
        { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            // I get audio data here
        }
    )
    // Get the input audio format right as it is:
    let format = input.inputFormat(forBus: 0)
    // Initialize the 16 kHz format I need:
    let format16KHzMono = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 11025.0, channels: 1, interleaved: true)
    // Connect the nodes inside the engine:
    // INPUT NODE --format--> downMixer --16Kformat--> mainMixer
    // As you can see, I'm downsampling the default 44.1 kHz input to the 16 kHz I want
    engine.connect(input, to: downMixer, format: format)                       // use default input format
    engine.connect(downMixer, to: engine.outputNode, format: format16KHzMono)  // use new audio format
    engine.prepare()
    do {
        try engine.start()
    } catch {
        // #TODO: error out
    }
}
You can hear your microphone recording through your speakers because your microphone is connected to downMixer, which is connected to engine.outputNode. You could probably just mute the downMixer's output if you aren't using it with other inputs:
downMixer.outputVolume = 0.0
I did it like this to change the format to 48000 Hz / 16 bits per sample / 2 channels and save it to a wave file:
let outputAudioFileFormat: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 48000,
    AVNumberOfChannelsKey: 2,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
let audioRecordingFormat: AVAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 48000, channels: 2, interleaved: true)!
do {
    try file = AVAudioFile(forWriting: url, settings: outputAudioFileFormat, commonFormat: .pcmFormatInt16, interleaved: true)
    let recordingSession = AVAudioSession.sharedInstance()
    try recordingSession.setPreferredInput(input)
    try recordingSession.setPreferredSampleRate(audioRecordingFormat.sampleRate)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioRecordingFormat, block: self.bufferAvailable)
    engine.connect(engine.inputNode, to: engine.outputNode, format: audioRecordingFormat) // configure graph
} catch {
    debugPrint("Could not initialize the audio file: \(error)")
}
And the tap's handler function:
func bufferAvailable(buffer: AVAudioPCMBuffer, time: AVAudioTime) {
    do {
        try self.file?.write(from: buffer)
        if self.onBufferAvailable != nil {
            DispatchQueue.main.async {
                self.onBufferAvailable!(buffer) // outside function used for analyzing and displaying a wave meter
            }
        }
    } catch {
        self.stopEngine()
        DispatchQueue.main.async {
            self.onRecordEnd(false)
        }
    }
}
The stopEngine function is below; you should also call it when you want to stop the recording:
private func stopEngine() {
    self.engine.inputNode.removeTap(onBus: 0)
    self.engine.stop()
}

Capture audio samples with a specific sample rate like Android in iOS Swift

I am a beginner in sound processing and AVAudioEngine in iOS, and I'm developing an application that captures audio samples as a buffer and analyzes them. The sample rate must be 8 kHz, encoded as 16-bit PCM, but the default inputNode in AVAudioEngine runs at 44.1 kHz.
In Android, the process is quite simple:
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        8000, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
and then start the reading function for the buffer.
I searched a lot, but I didn't find any similar example. Instead, all the examples I encountered capture the samples at the default node's sample rate (44.1 kHz), like:
let input = audioEngine.inputNode
let inputFormat = input.inputFormat(forBus: 0)
input.installTap(onBus: 0, bufferSize: 640, format: inputFormat) { (buffer, time) -> Void in
    print(inputFormat)
    if let channel1Buffer = buffer.floatChannelData?[0] {
        for i in 0 ..< Int(buffer.frameLength) {
            print(channel1Buffer[i])
        }
    }
}
try! audioEngine.start()
So I would like to capture audio samples using AVAudioEngine at an 8 kHz sample rate with 16-bit PCM encoding.
Edit:
I reached a solution that transforms the input to 8 kHz:
let inputNode = audioEngine.inputNode
let downMixer = AVAudioMixerNode()
let main = audioEngine.mainMixerNode
let format = inputNode.inputFormat(forBus: 0)
let format16KHzMono = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 8000, channels: 1, interleaved: true)
audioEngine.attach(downMixer)
downMixer.installTap(onBus: 0, bufferSize: 640, format: format16KHzMono) { (buffer, time) -> Void in
    print(buffer.description)
    if let channel1Buffer = buffer.int16ChannelData?[0] {
        for i in 0 ..< Int(buffer.frameLength) {
            print(channel1Buffer[i])
        }
    }
}
audioEngine.connect(inputNode, to: downMixer, format: format)
audioEngine.connect(downMixer, to: main, format: format16KHzMono)
audioEngine.prepare()
try! audioEngine.start()
However, when I use .pcmFormatInt16 it doesn't work; when I use .pcmFormatFloat32 it works fine!
Have you checked the settings-based initializer? Note that AVFormatIDKey takes an AudioFormatID (kAudioFormatLinearPCM for PCM), and the bit depth for linear PCM is set with AVLinearPCMBitDepthKey:
let format16KHzMono = AVAudioFormat(settings: [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 8000.0
])
