I'm implementing a speech recognition module for an app. It works fine, but there are some additional things I need to do. For example, I need to know whether a user is speaking or shouting, which I believe I can determine from the frequency or amplitude of the sound. Here is how I implement it:
let audioEngine = AVAudioEngine()
let speechRecognizer: SFSpeechRecognizer? = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()
var recognitionTask: SFSpeechRecognitionTask?
func recordAndRecognizeSpeech() {
let node = audioEngine.inputNode
let recordingFormat = node.outputFormat(forBus: 0)
node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in
self.request.append(buffer)
}
audioEngine.prepare()
do {
try audioEngine.start()
} catch {
return print(error)
}
guard let myRecognizer = SFSpeechRecognizer() else {
return
}
if !myRecognizer.isAvailable {
return
}
recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { (result, error) in
// Handle speech recognition results here
})
}
This works fine for the speech recognition, but how can I get the frequency or amplitude value of the sound?
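For context, the kind of per-buffer amplitude estimate I have in mind looks roughly like this (a rough, untested sketch; the helper name is just illustrative, and getting actual frequency content would additionally need an FFT, e.g. via Accelerate's vDSP):
import AVFoundation
// Rough sketch (untested): estimate the loudness of one tapped buffer via RMS.
// Assumes channel 0 is representative of the input.
func rmsLevel(of buffer: AVAudioPCMBuffer) -> Float {
    guard let channelData = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return 0 }
    let frameCount = Int(buffer.frameLength)
    var sum: Float = 0
    for i in 0..<frameCount {
        let sample = channelData[i]
        sum += sample * sample
    }
    return (sum / Float(frameCount)).squareRoot() // larger values ~ louder input (e.g. shouting)
}
The idea would be to call something like this inside the existing installTap closure, next to self.request.append(buffer), and compare the result against a threshold to tell speaking from shouting.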
I am currently using the Microsoft Azure Cognitive Services Speech SDK to play text to speech.
I am able to get the data from the stream, which is provided in the following format (reference):
.audio16Khz32KBitRateMonoMp3
This is set like this:
private let inputFormat = AVAudioFormat(
commonFormat: .pcmFormatFloat32,
sampleRate: 16000,
channels: 1,
interleaved: false
)!
I'm using AVAudioEngine & AVAudioPlayerNode:
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
override func viewDidLoad() {
super.viewDidLoad()
let mainMixer = engine.mainMixerNode
engine.attach(player)
engine.connect(player, to: mainMixer, format: inputFormat)
try! engine.start()
}
I am able to play this back with some success using the following:
func playAudio(dialogue: String, audioPlayer: AVAudioPlayerNode, then completion: @escaping () -> Void) {
audioAsset = nil
try? FileManager.default.removeItem(at: recordingPath)
FileManager.default.createFile(atPath: recordingPath.path, contents: nil, attributes: nil)
do {
let configuration = try SPXSpeechConfiguration(subscription: Microsoft.key, region: Microsoft.region)
configuration.setSpeechSynthesisOutputFormat(.audio16Khz32KBitRateMonoMp3)
let synthesizer = try SPXSpeechSynthesizer(speechConfiguration: configuration, audioConfiguration: nil)
let speechResult = try synthesizer.startSpeakingSsml(dialogue)
let stream = try SPXAudioDataStream(from: speechResult)
guard
let mutableFile = FileHandle(forWritingAtPath: recordingPath.path),
let streamData = NSMutableData(capacity:Int(bufferCapacity))
else {
fatalError()
}
while stream.read(streamData, length:bufferCapacity) > 0 {
mutableFile.write(streamData as Data)
mutableFile.seekToEndOfFile()
do {
let buffer = try readFileIntoBuffer(audioUrl: recordingPath)
audioPlayer.scheduleBuffer(buffer, at: currentBufferTime(buffer: buffer)) { [weak self] in
guard let self = self else { return }
if let audioAsset = self.audioAsset, audioPlayer.currentTime >= CMTimeGetSeconds(audioAsset.duration) {
DispatchQueue.main.async {
audioPlayer.stop()
}
completion()
}
}
audioPlayer.play()
} catch {
print("Unable To Play Azure Buffer Stream \(error)")
}
}
print("Did Complete Azure Buffer Rendering To File")
audioAsset = AVURLAsset.init(url: recordingPath, options: nil)
mutableFile.closeFile()
} catch {
print("Unable To Run Azure Vocder \(error)")
}
}
My buffer-related helper functions are as follows:
func currentBufferTime(buffer: AVAudioPCMBuffer) -> AVAudioTime {
let framecount = Double(buffer.frameLength)
let samplerate = buffer.format.sampleRate
let position = TimeInterval(framecount / samplerate)
return AVAudioTime(sampleTime: AVAudioFramePosition(position), atRate: 1)
}
func readFileIntoBuffer(audioUrl: URL) throws -> AVAudioPCMBuffer {
let audioFile = try AVAudioFile(forReading: audioUrl)
let audioFileFormat = audioFile.processingFormat
let audioFileSize = UInt32(audioFile.length)
let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFileFormat, frameCapacity: audioFileSize)!
try audioFile.read(into: audioBuffer)
return audioBuffer
}
The issue is that this is not performant: the CPU sits at around 100% for a significant amount of time while the function is running.
As such, my question is: what is a more optimal way of reading the data into a PCM buffer?
I have looked at many examples and there doesn't seem to be anything that works. For example:
func toPCMBuffer(format: AVAudioFormat, data: NSData) -> AVAudioPCMBuffer? {
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(data.count) / format.streamDescription.pointee.mBytesPerFrame)
guard let buffer = buffer else { return nil }
buffer.frameLength = buffer.frameCapacity
let channels = UnsafeBufferPointer(start: buffer.int32ChannelData, count: Int(buffer.format.channelCount))
data.getBytes(UnsafeMutableRawPointer(channels[0]) , length: data.count)
return buffer
}
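For reference, a variant of the above using floatChannelData instead of int32ChannelData is as close as I have gotten. This is only a sketch and assumes the data is already raw, non-interleaved Float32 PCM matching the format (which the MP3 stream above is not, so it would need decoding first):
import AVFoundation
// Sketch (untested): copy raw Float32 PCM bytes into an AVAudioPCMBuffer.
// Assumes `format` is .pcmFormatFloat32, mono and non-interleaved.
func toPCMBuffer(format: AVAudioFormat, data: Data) -> AVAudioPCMBuffer? {
    let bytesPerFrame = Int(format.streamDescription.pointee.mBytesPerFrame)
    guard bytesPerFrame > 0 else { return nil }
    let frameCount = AVAudioFrameCount(data.count / bytesPerFrame)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount),
          let channel = buffer.floatChannelData?[0] else { return nil }
    buffer.frameLength = frameCount
    data.withUnsafeBytes { (rawBuffer: UnsafeRawBufferPointer) in
        guard let source = rawBuffer.baseAddress else { return }
        memcpy(channel, source, Int(frameCount) * bytesPerFrame)
    }
    return buffer
}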
I'm trying to install a tap on the output audio that my app plays. I have no issue catching buffers from the microphone input, but when it comes to catching the sound that goes through the speaker, the earpiece, or whatever the output device is, I don't succeed. Am I missing something?
In my example I'm trying to catch the audio buffer from an audio file that an AVPlayer is playing. But let's pretend I don't have direct access to the AVPlayer instance.
The goal is to perform speech recognition on an audio stream.
func catchAudioBuffers() throws {
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: .allowBluetooth)
try audioSession.setActive(true)
let outputNode = audioEngine.outputNode
let recordingFormat = outputNode.outputFormat(forBus: 0)
outputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
// PROCESS AUDIO BUFFER
}
audioEngine.prepare()
try audioEngine.start()
// For example I am playing an audio conversation with an AVPlayer and a local file.
player.playSound()
}
This code results in a:
AVAEInternal.h:76 required condition is false: [AVAudioIONodeImpl.mm:1057:SetOutputFormat: (_isInput)]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _isInput'
I was facing the same problem, and after two days of brainstorming I found the following.
Apple says that for AVAudioOutputNode the tap format must be specified as nil. I'm not sure how important it is, but in my case, which finally worked, the format was nil.
You need to start recording, and don't forget to stop it.
Removing the tap is really important; otherwise you will end up with a file that you can't open.
Try to save the file with the same audio settings that you used in the source file.
Here's my code that finally worked. It was partly taken from this question: Saving Audio After Effect in iOS.
func playSound() {
let rate: Float? = effect.speed
let pitch: Float? = effect.pitch
let echo: Bool? = effect.echo
let reverb: Bool? = effect.reverb
// initialize audio engine components
audioEngine = AVAudioEngine()
// node for playing audio
audioPlayerNode = AVAudioPlayerNode()
audioEngine.attach(audioPlayerNode)
// node for adjusting rate/pitch
let changeRatePitchNode = AVAudioUnitTimePitch()
if let pitch = pitch {
changeRatePitchNode.pitch = pitch
}
if let rate = rate {
changeRatePitchNode.rate = rate
}
audioEngine.attach(changeRatePitchNode)
// node for echo
let echoNode = AVAudioUnitDistortion()
echoNode.loadFactoryPreset(.multiEcho1)
audioEngine.attach(echoNode)
// node for reverb
let reverbNode = AVAudioUnitReverb()
reverbNode.loadFactoryPreset(.cathedral)
reverbNode.wetDryMix = 50
audioEngine.attach(reverbNode)
// connect nodes
if echo == true && reverb == true {
connectAudioNodes(audioPlayerNode, changeRatePitchNode, echoNode, reverbNode, audioEngine.mainMixerNode, audioEngine.outputNode)
} else if echo == true {
connectAudioNodes(audioPlayerNode, changeRatePitchNode, echoNode, audioEngine.mainMixerNode, audioEngine.outputNode)
} else if reverb == true {
connectAudioNodes(audioPlayerNode, changeRatePitchNode, reverbNode, audioEngine.mainMixerNode, audioEngine.outputNode)
} else {
connectAudioNodes(audioPlayerNode, changeRatePitchNode, audioEngine.mainMixerNode, audioEngine.outputNode)
}
// schedule to play and start the engine!
audioPlayerNode.stop()
audioPlayerNode.scheduleFile(audioFile, at: nil) {
var delayInSeconds: Double = 0
if let lastRenderTime = self.audioPlayerNode.lastRenderTime, let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {
if let rate = rate {
delayInSeconds = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate) / Double(rate)
} else {
delayInSeconds = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate)
}
}
// schedule a stop timer for when audio finishes playing
self.stopTimer = Timer(timeInterval: delayInSeconds, target: self, selector: #selector(EditViewController.stopAudio), userInfo: nil, repeats: false)
RunLoop.main.add(self.stopTimer!, forMode: RunLoop.Mode.default)
}
do {
try audioEngine.start()
} catch {
showAlert(Alerts.AudioEngineError, message: String(describing: error))
return
}
//Try to save
let dirPaths: String = (NSSearchPathForDirectoriesInDomains(.libraryDirectory, .userDomainMask, true)[0]) + "/sounds/"
let tmpFileUrl = URL(fileURLWithPath: dirPaths + "effected.caf")
//Save the tmpFileUrl into a global variable so we don't lose it (not important if you want to do something else)
filteredOutputURL = tmpFileUrl
do{
print(dirPaths)
let settings = [AVSampleRateKey : NSNumber(value: Float(44100.0)),
AVFormatIDKey : NSNumber(value: Int32(kAudioFormatMPEG4AAC)),
AVNumberOfChannelsKey : NSNumber(value: 1),
AVEncoderAudioQualityKey : NSNumber(value: Int32(AVAudioQuality.medium.rawValue))]
self.newAudio = try! AVAudioFile(forWriting: tmpFileUrl as URL, settings: settings)
let length = self.audioFile.length
audioEngine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) {
(buffer: AVAudioPCMBuffer?, time: AVAudioTime!) -> Void in
//Let us know when to stop saving the file, otherwise it will keep saving indefinitely
if (self.newAudio.length) <= length {
do{
try self.newAudio.write(from: buffer!)
} catch _{
print("Problem Writing Buffer")
}
} else {
//if we don't remove the tap, it will keep on tapping indefinitely
self.audioEngine.mainMixerNode.removeTap(onBus: 0)
}
}
}
// play the recording!
audioPlayerNode.play()
}
@objc func stopAudio() {
if let audioPlayerNode = audioPlayerNode {
let engine = audioEngine
audioPlayerNode.stop()
engine?.mainMixerNode.removeTap(onBus: 0)
}
if let stopTimer = stopTimer {
stopTimer.invalidate()
}
configureUI(.notPlaying)
if let audioEngine = audioEngine {
audioEngine.stop()
audioEngine.reset()
}
isPlaying = false
}
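Since the original question was about running speech recognition on the output rather than saving it to a file, the same nil-format tap on the main mixer can also feed an SFSpeechAudioBufferRecognitionRequest. This is only a rough sketch, assuming the engine is already running and speech recognition has been authorized:
import Speech
// Sketch (untested): recognize speech from whatever the engine sends to the output.
let request = SFSpeechAudioBufferRecognitionRequest()
request.shouldReportPartialResults = true
audioEngine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
    request.append(buffer)
}
SFSpeechRecognizer()?.recognitionTask(with: request) { result, error in
    if let result = result {
        print(result.bestTranscription.formattedString)
    }
}
Remember to remove this tap as well when you are done, just like the file-writing one above.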
A newbie to Swift here! I am trying to implement an app that converts speech to text using the speech recognizer.
Problem
SFSpeechRecognizer().isAvailable is false
private let request = SFSpeechAudioBufferRecognitionRequest()
private var task: SFSpeechRecognitionTask?
private let engine = AVAudioEngine()
func recognize() {
guard let node = engine.inputNode else {
return
}
let recordingFormat = node.outputFormat(forBus: 0)
node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
self.request.append(buffer)
}
engine.prepare()
do {
try engine.start()
} catch {
return print(error)
}
guard let systemRecognizer = SFSpeechRecognizer() else {
return
}
if !systemRecognizer.isAvailable {
self.log(.debug, msg: "Entered this condition and stopped!")
return
}
}
Question
I am not sure why it stops in the simulator. Does the microphone work in the iPhone simulator?
Update
I tried testing with an audio file using the code below:
let audioFile = Bundle.main.url(forResource: "create_activity", withExtension: "m4a", subdirectory: "Sample Recordings")
let recognitionRequest = SFSpeechURLRecognitionRequest(url: audioFile!)
I am getting an error which says: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
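For completeness, this is roughly how I kick off the URL-based request (a sketch; it assumes speech recognition authorization has already been granted):
// Sketch: run the URL-based request and log whatever comes back.
let recognizer = SFSpeechRecognizer()
recognizer?.recognitionTask(with: recognitionRequest) { result, error in
    if let result = result {
        print(result.bestTranscription.formattedString)
    } else if let error = error {
        print("Recognition error: \(error)") // this is where the 1101 error appears
    }
}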
It looks like the simulator has gained access to the microphone with iOS 11.
Unfortunately I was not able to find any documentation confirming that, but I can confirm the functionality with the following code sample. It works perfectly fine on the iOS 11 simulator, but does nothing on the iOS 10 simulator (or earlier).
import UIKit
import Speech
class ViewController: UIViewController {
private var recognizer = SFSpeechRecognizer()
private var request = SFSpeechAudioBufferRecognitionRequest()
private let engine = AVAudioEngine()
override func viewDidLoad() {
super.viewDidLoad()
requestPermissions()
}
private func requestPermissions() {
//
// Do not forget to add `NSMicrophoneUsageDescription` and `NSSpeechRecognitionUsageDescription` to `Info.plist`
//
// Request recording permission
AVAudioSession.sharedInstance().requestRecordPermission { allowed in
if allowed {
// Request speech recognition authorization
SFSpeechRecognizer.requestAuthorization { status in
switch status {
case .authorized: self.prepareSpeechRecognition()
case .notDetermined, .denied, .restricted: print("SFSpeechRecognizer authorization status: \(status).")
}
}
} else {
print("AVAudioSession record permission: \(allowed).")
}
}
}
private func prepareSpeechRecognition() {
// Check if recognizer is available (has failable initializer)
guard let recognizer = recognizer else {
print("SFSpeechRecognizer not supported.")
return
}
// Prepare recognition task
recognizer.recognitionTask(with: request) { (result, error) in
if let result = result {
print("SFSpeechRecognizer result: \(result.bestTranscription.formattedString)")
} else {
print("SFSpeechRecognizer error: \(String(describing: error))")
}
}
// Install tap to audio engine input node
let inputNode = engine.inputNode
let busNumber = 0
let recordingFormat = inputNode.outputFormat(forBus: busNumber)
inputNode.installTap(onBus: busNumber, bufferSize: 1024, format: recordingFormat) { buffer, time in
self.request.append(buffer);
}
// Prepare and start audio engine
engine.prepare()
do {
try engine.start()
} catch {
return print(error)
}
}
}
Do not forget to add NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription to Info.plist.
I must be overlooking something, but when I try to combine voice synthesis and voice recognition in Swift, I get bad results ("Could not get attribute 'LocalURL': Error Domain=MobileAssetError Code=1 "Unable to copy asset attributes" UserInfo={NSDescription=Unable to copy asset attributes}"), and the end result is that I can still do speech to text, but text to speech is broken until the app is restarted.
let identifier = "\(Locale.current.languageCode!)_\(Locale.current.regionCode!)" // e.g. en_US
speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: identifier))!
if audioEngine.isRunning {
audioEngine.stop() // will also stop playing music.
recognitionRequest?.endAudio()
speechButton.isEnabled = false
} else {
recordSpeech() // here we do steps 1 .. 12
}
// recordSpeech() :
if recognitionTask != nil { // Step 1
recognitionTask?.cancel()
recognitionTask = nil
}
let audioSession = AVAudioSession.sharedInstance() // Step 2
do {
try audioSession.setCategory(AVAudioSessionCategoryRecord)
try audioSession.setMode(AVAudioSessionModeMeasurement)
try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
} catch {
print("audioSession properties weren't set because of an error.")
}
recognitionRequest = SFSpeechAudioBufferRecognitionRequest() // Step 3
guard let inputNode = audioEngine.inputNode else {
fatalError("Audio engine has no input node")
} // Step 4
guard let recognitionRequest = recognitionRequest else {
fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
} // Step 5
recognitionRequest.shouldReportPartialResults = true // Step 6
recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in // Step 7
var isFinal = false // Step 8
if result != nil {
print(result?.bestTranscription.formattedString as Any)
isFinal = (result?.isFinal)!
if (isFinal) {
if (result != nil) {
self.speechOutput.text = self.speechOutput.text + "\n" + (result?.bestTranscription.formattedString)!
}
}
}
if error != nil || isFinal { // Step 10
self.audioEngine.stop()
inputNode.removeTap(onBus: 0)
self.recognitionRequest = nil
self.recognitionTask = nil
self.speechButton.isEnabled = true
}
})
let recordingFormat = inputNode.outputFormat(forBus: 0) // Step 11
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
self.recognitionRequest?.append(buffer)
}
audioEngine.prepare() // Step 12
do {
try audioEngine.start()
} catch {
print("audioEngine couldn't start because of an error.")
}
I used this tutorial as the basis for my code:
http://www.appcoda.com/siri-speech-framework/
func say(_ something : String, lang : String ) {
let synth = AVSpeechSynthesizer()
synth.delegate = self
print(something) // debug code, works fine
let identifier = "\(Locale.current.languageCode!)-\(Locale.current.regionCode!)"
let utterance = AVSpeechUtterance(string: something)
utterance.voice = AVSpeechSynthesisVoice(language: identifier)
synth.speak(utterance)
}
So if I use the "say" method on it's own it works well, if I combine the two, after doing speech recognition, the synthesizer does not work anymore. Any hints into the direction of the solution? I suppose something is not being gracefully restored to it's prior state, but I can't seem to figure out what.
Grrr...
This is the solution, sorry about not looking well enough, costed me a lot of time though.
func say(_ something : String, lang : String ) {
let audioSession = AVAudioSession.sharedInstance()
do {
// this is the solution:
try audioSession.setCategory(AVAudioSessionCategoryPlayback)
try audioSession.setMode(AVAudioSessionModeDefault)
// the recognizer uses AVAudioSessionCategoryRecord
// so we want to set it to AVAudioSessionCategoryPlayback
// again before we can say something
} catch {
print("audioSession properties weren't set because of an error.")
}
synth = AVSpeechSynthesizer()
synth.delegate = self
print(something)
let identifier = "\(Locale.current.languageCode!)-\(Locale.current.regionCode!)"
let utterance = AVSpeechUtterance(string: something)
utterance.voice = AVSpeechSynthesisVoice(language: identifier)
synth.speak(utterance)
}
I'm really excited about the new AVAudioEngine. It seems like a good API wrapper around Audio Units. Unfortunately the documentation is so far nonexistent, and I'm having problems getting a simple graph to work.
Using the following simple code to set up an audio engine graph, the tap block is never called. It mimics some of the sample code floating around the web, though those also did not work.
let inputNode = audioEngine.inputNode
var error: NSError?
let bus = 0
inputNode.installTapOnBus(bus, bufferSize: 2048, format: inputNode.inputFormatForBus(bus)) {
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
println("sfdljk")
}
audioEngine.prepare()
if audioEngine.startAndReturnError(&error) {
println("started audio")
} else {
if let engineStartError = error {
println("error starting audio: \(engineStartError.localizedDescription)")
}
}
All I'm looking for is the raw pcm buffer for analysis. I don't need any effects or output. According to the WWDC talk "502 Audio Engine in Practice", this setup should work.
Now if you want to capture data from the input node, you can install a node tap and we've talked about that.
But what's interesting about this particular example is, if I wanted to work with just the input node, say just capture data from the microphone and maybe examine it, analyze it in real time or maybe write it out to file, I can directly install a tap on the input node.
And the tap will do the work of pulling the input node for data, stuffing it in buffers and then returning that back to the application.
Once you have that data you can do whatever you need to do with it.
Here are some links I tried:
http://hondrouthoughts.blogspot.com/2014/09/avfoundation-audio-monitoring.html
http://jamiebullock.com/post/89243252529/live-coding-audio-with-swift-playgrounds (SIGABRT in playground on startAndReturnError)
Edit: This is the implementation based on Thorsten Karrer's suggestion. It unfortunately does not work.
class AudioProcessor {
let audioEngine = AVAudioEngine()
init(){
let inputNode = audioEngine.inputNode
let bus = 0
var error: NSError?
inputNode.installTapOnBus(bus, bufferSize: 2048, format:inputNode.inputFormatForBus(bus)) {
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
println("sfdljk")
}
audioEngine.prepare()
audioEngine.startAndReturnError(nil)
println("started audio")
}
}
It might be the case that your AVAudioEngine is going out of scope and is released by ARC ("If you liked it then you should have put retain on it...").
The following code (engine is moved to an ivar and thus sticks around) fires the tap:
class AppDelegate: NSObject, NSApplicationDelegate {
let audioEngine = AVAudioEngine()
func applicationDidFinishLaunching(aNotification: NSNotification) {
let inputNode = audioEngine.inputNode
let bus = 0
inputNode.installTapOnBus(bus, bufferSize: 2048, format: inputNode.inputFormatForBus(bus)) {
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
println("sfdljk")
}
audioEngine.prepare()
audioEngine.startAndReturnError(nil)
}
}
(I removed the error handling for brevity)
UPDATED: I have implemented a complete working example of recording mic input, applying some effects (reverb, delay, distortion) at runtime, and saving all of these effects to an output file.
var engine = AVAudioEngine()
var distortion = AVAudioUnitDistortion()
var reverb = AVAudioUnitReverb()
var audioBuffer = AVAudioPCMBuffer()
var outputFile = AVAudioFile()
var delay = AVAudioUnitDelay()
//Initialize the audio engine
func initializeAudioEngine() {
engine.stop()
engine.reset()
engine = AVAudioEngine()
isRealTime = true
do {
try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord)
let ioBufferDuration = 128.0 / 44100.0
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)
} catch {
assertionFailure("AVAudioSession setup error: \(error)")
}
let fileUrl = URLFor("/NewRecording.caf")
print(fileUrl)
do {
try outputFile = AVAudioFile(forWriting: fileUrl!, settings: engine.mainMixerNode.outputFormatForBus(0).settings)
}
catch {
}
let input = engine.inputNode!
let format = input.inputFormatForBus(0)
//settings for reverb
reverb.loadFactoryPreset(.MediumChamber)
reverb.wetDryMix = 40 //0-100 range
engine.attachNode(reverb)
delay.delayTime = 0.2 // 0-2 range
engine.attachNode(delay)
//settings for distortion
distortion.loadFactoryPreset(.DrumsBitBrush)
distortion.wetDryMix = 20 //0-100 range
engine.attachNode(distortion)
engine.connect(input, to: reverb, format: format)
engine.connect(reverb, to: distortion, format: format)
engine.connect(distortion, to: delay, format: format)
engine.connect(delay, to: engine.mainMixerNode, format: format)
assert(engine.inputNode != nil)
isReverbOn = false
try! engine.start()
}
//Now the recording function:
func startRecording() {
let mixer = engine.mainMixerNode
let format = mixer.outputFormatForBus(0)
mixer.installTapOnBus(0, bufferSize: 1024, format: format, block:
{ (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
print(NSString(string: "writing"))
do{
try self.outputFile.writeFromBuffer(buffer)
}
catch {
print(NSString(string: "Write failed"));
}
})
}
func stopRecording() {
engine.mainMixerNode.removeTapOnBus(0)
engine.stop()
}
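For reference, a minimal usage sketch of the three functions above (assuming they all live on the same object):
// Sketch: typical call order for the functions above.
initializeAudioEngine()   // set up the session, effects chain and output file
startRecording()          // install the mixer tap and start writing buffers to the file
// ... record / play for a while ...
stopRecording()           // remove the tap and stop the engine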
I hope this might help you. Thanks!
The above answer didn't work for me, but the following did. I'm installing a tap on a mixer node.
mMixerNode?.installTapOnBus(0, bufferSize: 4096, format: mMixerNode?.outputFormatForBus(0), block: {
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
NSLog("tapped")
})
Nice topic.
Hi brodney, in your topic I found my solution. Here is a similar topic: Generate AVAudioPCMBuffer with AVAudioRecorder.
See the WWDC 2014 lecture "502 - AVAudioEngine in Practice": capturing the microphone is covered at around 20 min, and creating a buffer with a tap at around 21:50.
Here is the Swift 3 code:
@IBAction func button01Pressed(_ sender: Any) {
let inputNode = audioEngine.inputNode
let bus = 0
inputNode?.installTap(onBus: bus, bufferSize: 2048, format: inputNode?.inputFormat(forBus: bus)) {
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
let theLength = Int(buffer.frameLength)
print("theLength = \(theLength)")
var samplesAsDoubles:[Double] = []
for i in 0 ..< Int(buffer.frameLength)
{
let theSample = Double((buffer.floatChannelData?.pointee[i])!)
samplesAsDoubles.append( theSample )
}
print("samplesAsDoubles.count = \(samplesAsDoubles.count)")
}
audioEngine.prepare()
try! audioEngine.start()
}
To stop the audio:
func stopAudio()
{
let inputNode = audioEngine.inputNode
let bus = 0
inputNode?.removeTap(onBus: bus)
self.audioEngine.stop()
}