I'm trying to use AVAudioEngine instead of AVAudioPlayer because I need to do some per-packet processing as the audio is playing, but before I can get that far, I need to convert the 16-bit 8 kHz mono audio data to stereo so that AVAudioEngine will play it. This is my (incomplete) attempt to do it. I'm currently stuck on how to make AVAudioConverter do the mono-to-stereo conversion. If I don't use the AVAudioConverter, the iOS runtime complains that the input format doesn't match the output format. If I do use it (as below), the runtime doesn't complain, but the audio does not play back properly (likely because I'm not doing the mono-to-stereo conversion correctly). Any assistance is appreciated!
private func loadAudioData(audioData: Data?) {
    // Load audio data into player
    guard let audio = audioData else { return }
    do {
        let inputAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: Double(sampleRate), channels: 1, interleaved: false)
        let outputAudioFormat = self.audioEngine.mainMixerNode.outputFormat(forBus: 0)
        if inputAudioFormat != nil {
            let inputStreamDescription = inputAudioFormat?.streamDescription.pointee
            let outputStreamDescription = outputAudioFormat.streamDescription.pointee
            let count = UInt32(audio.count)
            if inputStreamDescription != nil && count > 0 {
                if let ibpf = inputStreamDescription?.mBytesPerFrame {
                    let inputFrameCapacity = count / ibpf
                    let outputFrameCapacity = count / outputStreamDescription.mBytesPerFrame
                    self.pcmInputBuffer = AVAudioPCMBuffer(pcmFormat: inputAudioFormat!, frameCapacity: inputFrameCapacity)
                    self.pcmOutputBuffer = AVAudioPCMBuffer(pcmFormat: outputAudioFormat, frameCapacity: outputFrameCapacity)
                    if let input = self.pcmInputBuffer, let output = self.pcmOutputBuffer {
                        self.pcmConverter = AVAudioConverter(from: inputAudioFormat!, to: outputAudioFormat)
                        input.frameLength = input.frameCapacity
                        let b = UnsafeMutableBufferPointer(start: input.int16ChannelData?[0], count: input.stride * Int(inputFrameCapacity))
                        let bytesCopied = audio.copyBytes(to: b)
                        assert(bytesCopied == count)
                        audioEngine.attach(playerNode)
                        audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: nil)
                        self.pcmConverter?.convert(to: output, error: nil) { packets, status in
                            status.pointee = .haveData
                            return self.pcmInputBuffer // I know this is wrong, but I'm not sure how to do it correctly
                        }
                        try audioEngine.start()
                    }
                }
            }
        }
    }
}
Speculative, incorrect answer
How about pcmConverter?.channelMap = [0, 0]?
Actual answer
You don't need to use the audio converter's channel map, because mono-to-stereo AVAudioConverters seem to duplicate the mono channel by default. The main problems were that outputFrameCapacity was wrong, and that you were using mainMixerNode's outputFormat before calling audioEngine.prepare() or starting the engine.
Assuming sampleRate = 8000, an amended solution looks like this:
private func loadAudioData(audioData: Data?) throws {
    // Load audio data into player
    guard let audio = audioData else { return }
    do {
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: nil)
        audioEngine.prepare() // https://stackoverflow.com/a/70392017/22147

        let outputAudioFormat = self.audioEngine.mainMixerNode.outputFormat(forBus: 0)
        guard let inputAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: Double(sampleRate), channels: 1, interleaved: false) else { return }

        let inputStreamDescription = inputAudioFormat.streamDescription.pointee
        let outputStreamDescription = outputAudioFormat.streamDescription.pointee
        let count = UInt32(audio.count)
        if count > 0 {
            let ibpf = inputStreamDescription.mBytesPerFrame
            let inputFrameCapacity = count / ibpf
            let outputFrameCapacity = Float64(inputFrameCapacity) * outputStreamDescription.mSampleRate / inputStreamDescription.mSampleRate
            self.pcmInputBuffer = AVAudioPCMBuffer(pcmFormat: inputAudioFormat, frameCapacity: inputFrameCapacity)
            self.pcmOutputBuffer = AVAudioPCMBuffer(pcmFormat: outputAudioFormat, frameCapacity: AVAudioFrameCount(outputFrameCapacity))
            if let input = self.pcmInputBuffer, let output = self.pcmOutputBuffer {
                self.pcmConverter = AVAudioConverter(from: inputAudioFormat, to: outputAudioFormat)
                input.frameLength = input.frameCapacity
                let b = UnsafeMutableBufferPointer(start: input.int16ChannelData?[0], count: input.stride * Int(inputFrameCapacity))
                let bytesCopied = audio.copyBytes(to: b)
                assert(bytesCopied == count)
                self.pcmConverter?.convert(to: output, error: nil) { packets, status in
                    status.pointee = .haveData
                    return self.pcmInputBuffer // works for this single conversion; see the note after this snippet
                }
                try audioEngine.start()
                self.playerNode.scheduleBuffer(output, completionHandler: nil)
                self.playerNode.play()
            }
        }
    }
}
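One note on the converter input block: returning the same buffer unconditionally happens to work here because a single convert(to:error:withInputFrom:) call fills the output buffer in one pass. For repeated or streaming conversions, the block should hand each input buffer over exactly once. A minimal sketch of that pattern, using the same buffers as above:

var suppliedInput = false
self.pcmConverter?.convert(to: output, error: nil) { packets, status in
    if suppliedInput {
        // Input already handed over; tell the converter there is no more data right now.
        status.pointee = .noDataNow
        return nil
    }
    suppliedInput = true
    status.pointee = .haveData
    return self.pcmInputBuffer
}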
I am building an app that needs to perform analysis on the audio it receives from the microphone in real time. In my app, I also need to play a beep sound and start recording audio at the same time; in other words, I can't play the beep sound and then start recording. This introduces the problem of hearing the beep sound in my recording (this might be because I am playing the beep sound through the speaker, but unfortunately I cannot compromise in this regard either).

Since the beep sound is just a tone of about 2350 Hz, I was wondering how I could exclude that range of frequencies (say from 2300 Hz to 2400 Hz) from my recordings and prevent it from influencing my audio samples. After doing some googling I came up with what I think might be the solution: a band-stop filter. According to Wikipedia, "a band-stop filter or band-rejection filter is a filter that passes most frequencies unaltered, but attenuates those in a specific range to very low levels". This seems like what I need to exclude frequencies from 2300 Hz to 2400 Hz in my recordings (or at least for the first second of the recording, while the beep sound is playing).

My question is: how would I implement this with AVAudioEngine? Is there a way I can turn off the filter after the first second of the recording, when the beep sound is done playing, without stopping the recording?
Since I am new to working with audio with AVAudioEngine (I've always just stuck to the higher levels of AVFoundation), I followed this tutorial to help me create a class to handle all the messy stuff. This is what my code looks like:
class Recorder {
    enum RecordingState {
        case recording, paused, stopped
    }

    private var engine: AVAudioEngine!
    private var mixerNode: AVAudioMixerNode!
    private var state: RecordingState = .stopped
    private var audioPlayer = AVAudioPlayerNode()

    init() {
        setupSession()
        setupEngine()
    }

    fileprivate func setupSession() {
        let session = AVAudioSession.sharedInstance()
        // The original tutorial sets the category to .record
        //try? session.setCategory(.record)
        try? session.setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker])
        try? session.setActive(true, options: .notifyOthersOnDeactivation)
    }

    fileprivate func setupEngine() {
        engine = AVAudioEngine()
        mixerNode = AVAudioMixerNode()
        // Set volume to 0 to avoid audio feedback while recording.
        mixerNode.volume = 0
        engine.attach(mixerNode)
        // Attach the audio player node
        engine.attach(audioPlayer)
        makeConnections()
        // Prepare the engine in advance, in order for the system to allocate the necessary resources.
        engine.prepare()
    }

    fileprivate func makeConnections() {
        let inputNode = engine.inputNode
        let inputFormat = inputNode.outputFormat(forBus: 0)
        engine.connect(inputNode, to: mixerNode, format: inputFormat)

        let mainMixerNode = engine.mainMixerNode
        let mixerFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: inputFormat.sampleRate, channels: 1, interleaved: false)
        engine.connect(mixerNode, to: mainMixerNode, format: mixerFormat)

        // AudioPlayer connection
        let path = Bundle.main.path(forResource: "beep.mp3", ofType: nil)!
        let url = URL(fileURLWithPath: path)
        let file = try! AVAudioFile(forReading: url)
        engine.connect(audioPlayer, to: mainMixerNode, format: nil)
        audioPlayer.scheduleFile(file, at: nil)
    }

    //MARK: Start Recording Function
    func startRecording() throws {
        print("Start Recording!")
        let tapNode: AVAudioNode = mixerNode
        let format = tapNode.outputFormat(forBus: 0)
        let documentURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        // AVAudioFile uses the Core Audio Format (CAF) to write to disk,
        // so we're using the caf file extension.
        let file = try AVAudioFile(forWriting: documentURL.appendingPathComponent("recording.caf"), settings: format.settings)
        tapNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: { (buffer, time) in
            try? file.write(from: buffer)
            print(buffer.description)
            print(buffer.stride)
            let floatArray = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count: Int(buffer.frameLength)))
        })
        try engine.start()
        audioPlayer.play()
        state = .recording
    }

    //MARK: Other recording functions
    func resumeRecording() throws {
        try engine.start()
        state = .recording
    }

    func pauseRecording() {
        engine.pause()
        state = .paused
    }

    func stopRecording() {
        // Remove existing taps on nodes
        mixerNode.removeTap(onBus: 0)
        engine.stop()
        state = .stopped
    }
}
AVAudioUnitEQ supports a band-stop filter.
Perhaps something like:
// Create an instance of AVAudioUnitEQ and connect it to the engine's main mixer
let eq = AVAudioUnitEQ(numberOfBands: 1)
engine.attach(eq)
engine.connect(eq, to: engine.mainMixerNode, format: nil)
engine.connect(player, to: eq, format: nil)
eq.bands[0].frequency = 2350
eq.bands[0].filterType = .bandStop
eq.bands[0].bypass = false
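To address the second part of the question: band parameters can be changed while the engine is running, so the filter can be switched off after the beep finishes without interrupting the recording. A minimal sketch, assuming eq is kept in a property and the beep lasts one second:

// Hypothetical: bypass the band-stop filter one second after recording starts,
// leaving the engine, the tap, and the recording untouched.
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
    eq.bands[0].bypass = true
}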
A slightly more complete answer, linked to an IBAction; in this example, I use .parametric for the filter type, with more bands than required, to give a broader insight into how to use it:
@IBAction func PlayWithEQ(_ sender: Any) {
    self.engine.stop()
    self.engine = AVAudioEngine()

    let player = AVAudioPlayerNode()
    let url = Bundle.main.url(forResource: "yoursong", withExtension: "m4a")!
    let f = try! AVAudioFile(forReading: url)
    self.engine.attach(player)

    // adding eq effect node
    let effect = AVAudioUnitEQ(numberOfBands: 4)
    let bands = effect.bands
    let freq = [125, 250, 2350, 8000]
    for i in 0...(bands.count - 1) {
        bands[i].frequency = Float(freq[i])
    }

    bands[0].gain = 0.0
    bands[0].filterType = .parametric
    bands[0].bandwidth = 1

    bands[1].gain = 0.0
    bands[1].filterType = .parametric
    bands[1].bandwidth = 0.5

    // filter of interest, rejecting 2350 Hz (adjust bandwidth as needed)
    bands[2].gain = -60.0
    bands[2].filterType = .parametric
    bands[2].bandwidth = 1

    bands[3].gain = 0.0
    bands[3].filterType = .parametric
    bands[3].bandwidth = 1

    self.engine.attach(effect)
    self.engine.connect(player, to: effect, format: f.processingFormat)

    let mixer = self.engine.mainMixerNode
    self.engine.connect(effect, to: mixer, format: f.processingFormat)

    player.scheduleFile(f, at: nil) {
        delay(0.05) {
            if self.engine.isRunning {
                self.engine.stop()
            }
        }
    }

    self.engine.prepare()
    try! self.engine.start()
    player.play()
}
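Note that delay in the completion handler above is not an AVFoundation API; the snippet assumes a small GCD helper along these lines:

func delay(_ seconds: Double, closure: @escaping () -> Void) {
    DispatchQueue.main.asyncAfter(deadline: .now() + seconds, execute: closure)
}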
I am recording sound through the audio engine into a file named my_file.caf, and I am trying to make another file with its phase inverted, so that it cancels the voice when both are played in mono.
But when I do some operations and calculations, it reverses the sine wave, which also plays the sound backwards.
do {
    let inFile: AVAudioFile = try AVAudioFile(forReading: URLFor(filename: "my_file.caf")!)
    let format: AVAudioFormat = inFile.processingFormat
    let frameCount: AVAudioFrameCount = UInt32(inFile.length)

    let outSettings = [AVNumberOfChannelsKey: format.channelCount,
                       AVSampleRateKey: format.sampleRate,
                       AVLinearPCMBitDepthKey: 16,
                       AVFormatIDKey: kAudioFormatMPEG4AAC] as [String: Any]
    let outFile: AVAudioFile = try AVAudioFile(forWriting: URLFor(filename: "my_file1.caf")!, settings: outSettings)

    let forwardBuffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
    let reverseBuffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!

    try inFile.read(into: forwardBuffer)
    let frameLength = forwardBuffer.frameLength
    reverseBuffer.frameLength = frameLength
    let audioStride = forwardBuffer.stride

    for channelIdx in 0..<forwardBuffer.format.channelCount {
        let forwardChannelData = forwardBuffer.floatChannelData?.advanced(by: Int(channelIdx)).pointee
        let reverseChannelData = reverseBuffer.floatChannelData?.advanced(by: Int(channelIdx)).pointee
        var reverseIdx: Int = 0
        // Walk frames from last to first (frameLength - 1 down to 0, so the
        // read never goes one frame past the end of the buffer).
        for idx in stride(from: Int(frameLength) - 1, through: 0, by: -1) {
            memcpy(reverseChannelData?.advanced(by: reverseIdx * audioStride), forwardChannelData?.advanced(by: idx * audioStride), MemoryLayout<Float>.size)
            reverseIdx += 1
        }
    }
    try outFile.write(from: reverseBuffer)
} catch let error {
    print(error.localizedDescription)
}
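If the goal is phase inversion rather than time reversal, negate each sample and keep the order unchanged. A minimal sketch of the channel loop rewritten that way, reusing the buffer names from above and Accelerate's vDSP_vneg (a plain loop writing -sample works just as well):

import Accelerate

for channelIdx in 0..<Int(forwardBuffer.format.channelCount) {
    guard let src = forwardBuffer.floatChannelData?[channelIdx],
          let dst = reverseBuffer.floatChannelData?[channelIdx] else { continue }
    // dst[i] = -src[i] for every frame: same order, inverted phase
    vDSP_vneg(src, 1, dst, 1, vDSP_Length(frameLength))
}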
I have this flow now: I record audio with AVAudioEngine, send it to an audio processing library, and get an audio buffer back. Then I very much want to write it to a wav file, but I'm totally confused about how to do that in Swift.
I've tried this snippet from another Stack Overflow answer (load a pcm into a AVAudioPCMBuffer), but it writes an empty, corrupted file.
// get data from library
var len: CLong = 0
let res: UnsafePointer<Double> = getData(CLong(), &len)
let bufferPointer: UnsafeBufferPointer = UnsafeBufferPointer(start: res, count: len)

// transform it to Data
let arrayDouble = Array(bufferPointer)
let arrayFloats = arrayDouble.map { Float($0) }
let data = try Data(buffer: bufferPointer)

// attempt to write in file
do {
    let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 16000, channels: 2, interleaved: false)
    var buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(data.count))
    buffer.floatChannelData!.pointee.withMemoryRebound(to: UInt8.self, capacity: data.count) {
        let stream = OutputStream(toBuffer: $0, capacity: data.count)
        stream.open()
        _ = data.withUnsafeBytes {
            stream.write($0, maxLength: data.count)
        }
        stream.close()
    }
    // settings are from AudioEngine.inputNode!.inputFormat(forBus: 0).settings
    var audioFile = try AVAudioFile(forWriting: url, settings: settings)
    try audioFile.write(from: buffer)
} catch let error as NSError {
    print("ERROR HERE", error.localizedDescription)
}
So, I guess I'm doing this transform of floatChannelData wrong, or maybe everything wrong. Any suggestions or pointers on where to read about it would be great!
With a great colleague's help we've managed to get it to work. Apparently, an AVAudioPCMBuffer also needs to be notified of its new size (frameLength) after being filled.
I was also using totally wrong formats.
Here is the code:
let SAMPLE_RATE = Float64(16000.0)

let outputFormatSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVLinearPCMBitDepthKey: 32,
    AVLinearPCMIsFloatKey: true,
    // AVLinearPCMIsBigEndianKey: false,
    AVSampleRateKey: SAMPLE_RATE,
    AVNumberOfChannelsKey: 1
] as [String: Any]

let audioFile = try? AVAudioFile(forWriting: url, settings: outputFormatSettings, commonFormat: AVAudioCommonFormat.pcmFormatFloat32, interleaved: true)

let bufferFormat = AVAudioFormat(settings: outputFormatSettings)
let outputBuffer = AVAudioPCMBuffer(pcmFormat: bufferFormat, frameCapacity: AVAudioFrameCount(buff.count))

// I had my samples in doubles, so convert then write
for i in 0..<buff.count {
    outputBuffer.floatChannelData!.pointee[i] = Float(buff[i])
}
outputBuffer.frameLength = AVAudioFrameCount(buff.count)

do {
    try audioFile?.write(from: outputBuffer)
} catch let error as NSError {
    print("error:", error.localizedDescription)
}
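For reference, buff and url are assumed to already exist in the snippet above. A hypothetical setup for testing it could look like this, writing one second of a 440 Hz sine at the same 16 kHz sample rate:

// Hypothetical test input: url and buff as the snippet above expects them
let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("tone.wav")
let buff: [Double] = (0..<16000).map { sin(2.0 * Double.pi * 440.0 * Double($0) / 16000.0) }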
Update for Swift 5
This is an update for writing an array of floats to a wav audio file in Swift 5. The function can be used as follows: saveWav([channel1, channel2])
func saveWav(_ buf: [[Float]]) {
    if let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 2, interleaved: false) {
        let pcmBuf = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(buf[0].count))
        memcpy(pcmBuf?.floatChannelData?[0], buf[0], 4 * buf[0].count)
        memcpy(pcmBuf?.floatChannelData?[1], buf[1], 4 * buf[1].count)
        pcmBuf?.frameLength = UInt32(buf[0].count)

        let fileManager = FileManager.default
        do {
            let documentDirectory = try fileManager.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
            try FileManager.default.createDirectory(atPath: documentDirectory.path, withIntermediateDirectories: true, attributes: nil)
            let fileURL = documentDirectory.appendingPathComponent("out.wav")
            print(fileURL.path)
            let audioFile = try AVAudioFile(forWriting: fileURL, settings: format.settings)
            try audioFile.write(from: pcmBuf!)
        } catch {
            print(error)
        }
    }
}
To make sure the above function works properly, use the following snippet, which converts an audio file to an array of floats and saves it back to an audio file with saveWav:
do {
    guard let url = Bundle.main.url(forResource: "audio_example", withExtension: "wav") else { return }
    let file = try AVAudioFile(forReading: url)
    if let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: file.fileFormat.sampleRate, channels: file.fileFormat.channelCount, interleaved: false),
       let buf = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length)) {
        try file.read(into: buf)
        guard let floatChannelData = buf.floatChannelData else { return }
        let frameLength = Int(buf.frameLength)
        // we convert the audio, using the audio pcm buffer, to arrays of floats with two channels
        let channel1 = Array(UnsafeBufferPointer(start: floatChannelData[0], count: frameLength))
        let channel2 = Array(UnsafeBufferPointer(start: floatChannelData[1], count: frameLength))
        // we save the audio back using the saveWav function
        saveWav([channel1, channel2])
    }
} catch {
    print("Audio Error: \(error)")
}
I am developing an application so that people can record and change their voices through the app and share it. Basically I've tried so many things, and now it's time to ask you for help. Here is my play function, which plays the recorded audio file and adds effects to it.
private func playAudio(pitch: Float, rate: Float, reverb: Float, echo: Float) {
    // Initialize variables
    audioEngine = AVAudioEngine()
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attachNode(audioPlayerNode)

    // Setting the pitch
    let pitchEffect = AVAudioUnitTimePitch()
    pitchEffect.pitch = pitch
    audioEngine.attachNode(pitchEffect)

    // Setting the playback-rate
    let playbackRateEffect = AVAudioUnitVarispeed()
    playbackRateEffect.rate = rate
    audioEngine.attachNode(playbackRateEffect)

    // Setting the reverb effect
    let reverbEffect = AVAudioUnitReverb()
    reverbEffect.loadFactoryPreset(AVAudioUnitReverbPreset.Cathedral)
    reverbEffect.wetDryMix = reverb
    audioEngine.attachNode(reverbEffect)

    // Setting the echo effect on a specific interval
    let echoEffect = AVAudioUnitDelay()
    echoEffect.delayTime = NSTimeInterval(echo)
    audioEngine.attachNode(echoEffect)

    // Chain all these up, ending with the output
    audioEngine.connect(audioPlayerNode, to: playbackRateEffect, format: nil)
    audioEngine.connect(playbackRateEffect, to: pitchEffect, format: nil)
    audioEngine.connect(pitchEffect, to: reverbEffect, format: nil)
    audioEngine.connect(reverbEffect, to: echoEffect, format: nil)
    audioEngine.connect(echoEffect, to: audioEngine.outputNode, format: nil)

    audioPlayerNode.stop()

    let length = 4000
    let buffer = AVAudioPCMBuffer(PCMFormat: audioPlayerNode.outputFormatForBus(0), frameCapacity: AVAudioFrameCount(length))
    buffer.frameLength = AVAudioFrameCount(length)

    try! audioEngine.start()

    let dirPaths: AnyObject = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
    let tmpFileUrl: NSURL = NSURL.fileURLWithPath(dirPaths.stringByAppendingPathComponent("effectedSound.m4a"))

    do {
        print(dirPaths)
        let settings = [AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatMPEG4AAC), AVSampleRateKey: NSNumber(integer: 44100), AVNumberOfChannelsKey: NSNumber(integer: 2)]
        self.newAudio = try AVAudioFile(forWriting: tmpFileUrl, settings: settings)
        audioEngine.outputNode.installTapOnBus(0, bufferSize: AVAudioFrameCount(self.player!.duration), format: self.audioPlayerNode.outputFormatForBus(0)) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) in
            print(self.newAudio.length)
            print("=====================")
            print(self.audioFile.length)
            print("**************************")
            if self.newAudio.length < self.audioFile.length {
                do {
                    //print(buffer)
                    try self.newAudio.writeFromBuffer(buffer)
                } catch _ {
                    print("Problem Writing Buffer")
                }
            } else {
                self.audioPlayerNode.removeTapOnBus(0)
            }
        }
    } catch _ {
        print("Problem")
    }

    audioPlayerNode.play()
}
I guess the problem is that I installTapOnBus on audioPlayerNode, but the effected audio is on audioEngine.outputNode. However, when I tried to installTapOnBus on audioEngine.outputNode, it gave me an error. I've also tried to connect the effects to audioEngine.mixerNode, but that wasn't a solution either. So, do you have any experience with saving an effected audio file? How can I get this effected audio?
Any help is appreciated.
Thank you
Here is my solution to the question:
func playAndRecord(pitch: Float, rate: Float, reverb: Float, echo: Float) {
    // Initialize variables
    // These are global variables. If you want, you can just init them here instead (let audioEngine = etc.)
    audioEngine = AVAudioEngine()
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attachNode(audioPlayerNode)

    playerB = AVAudioPlayerNode()
    audioEngine.attachNode(playerB)

    // Setting the pitch
    let pitchEffect = AVAudioUnitTimePitch()
    pitchEffect.pitch = pitch
    audioEngine.attachNode(pitchEffect)

    // Setting the playback-rate
    let playbackRateEffect = AVAudioUnitVarispeed()
    playbackRateEffect.rate = rate
    audioEngine.attachNode(playbackRateEffect)

    // Setting the reverb effect
    let reverbEffect = AVAudioUnitReverb()
    reverbEffect.loadFactoryPreset(AVAudioUnitReverbPreset.Cathedral)
    reverbEffect.wetDryMix = reverb
    audioEngine.attachNode(reverbEffect)

    // Setting the echo effect on a specific interval
    let echoEffect = AVAudioUnitDelay()
    echoEffect.delayTime = NSTimeInterval(echo)
    audioEngine.attachNode(echoEffect)

    // Chain all these up, ending with the output
    audioEngine.connect(audioPlayerNode, to: playbackRateEffect, format: nil)
    audioEngine.connect(playbackRateEffect, to: pitchEffect, format: nil)
    audioEngine.connect(pitchEffect, to: reverbEffect, format: nil)
    audioEngine.connect(reverbEffect, to: echoEffect, format: nil)
    audioEngine.connect(echoEffect, to: audioEngine.mainMixerNode, format: nil)

    // Good practice to stop before starting
    audioPlayerNode.stop()

    // Play the audio file
    // this player is also a global AVAudioPlayer variable
    if player != nil {
        player?.stop()
    }

    // audioFile here is our original audio
    audioPlayerNode.scheduleFile(audioFile, atTime: nil, completionHandler: {
        print("Complete")
    })
    try! audioEngine.start()

    let dirPaths: AnyObject = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
    let tmpFileUrl: NSURL = NSURL.fileURLWithPath(dirPaths.stringByAppendingPathComponent("effectedSound2.m4a"))
    // Save tmpFileUrl into a global variable so as not to lose it (not important if you want to do something else)
    filteredOutputURL = tmpFileUrl

    do {
        print(dirPaths)
        self.newAudio = try! AVAudioFile(forWriting: tmpFileUrl, settings: [
            AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatAppleLossless),
            AVEncoderAudioQualityKey: AVAudioQuality.Low.rawValue,
            AVEncoderBitRateKey: 320000,
            AVNumberOfChannelsKey: 2,
            AVSampleRateKey: 44100.0
        ])

        let length = self.audioFile.length
        audioEngine.mainMixerNode.installTapOnBus(0, bufferSize: 1024, format: self.audioEngine.mainMixerNode.inputFormatForBus(0)) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            print(self.newAudio.length)
            print("=====================")
            print(length)
            print("**************************")
            if self.newAudio.length < length { // Lets us know when to stop saving the file; otherwise it would save infinitely
                do {
                    //print(buffer)
                    try self.newAudio.writeFromBuffer(buffer)
                } catch _ {
                    print("Problem Writing Buffer")
                }
            } else {
                self.audioEngine.mainMixerNode.removeTapOnBus(0) // If we don't remove it, it will keep on tapping infinitely
                // DO WHAT YOU WANT TO DO HERE WITH THE EFFECTED AUDIO
            }
        }
    } catch _ {
        print("Problem")
    }

    audioPlayerNode.play()
}
This doesn't seem to be hooked up correctly. I'm just learning all this myself, but I found that the effects are correctly added when you connect them to a mixer node. Also, you'll want to tap the mixer, not the engine's output node. I've just copied your code and made a few modifications to take this into account.
private func playAudio(pitch: Float, rate: Float, reverb: Float, echo: Float) {
    // Initialize variables
    audioEngine = AVAudioEngine()
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attachNode(audioPlayerNode)

    // Setting the pitch
    let pitchEffect = AVAudioUnitTimePitch()
    pitchEffect.pitch = pitch
    audioEngine.attachNode(pitchEffect)

    // Setting the playback-rate
    let playbackRateEffect = AVAudioUnitVarispeed()
    playbackRateEffect.rate = rate
    audioEngine.attachNode(playbackRateEffect)

    // Setting the reverb effect
    let reverbEffect = AVAudioUnitReverb()
    reverbEffect.loadFactoryPreset(AVAudioUnitReverbPreset.Cathedral)
    reverbEffect.wetDryMix = reverb
    audioEngine.attachNode(reverbEffect)

    // Setting the echo effect on a specific interval
    let echoEffect = AVAudioUnitDelay()
    echoEffect.delayTime = NSTimeInterval(echo)
    audioEngine.attachNode(echoEffect)

    // Set up a mixer node
    let audioMixer = AVAudioMixerNode()
    audioEngine.attachNode(audioMixer)

    // Chain all these up, ending with the output
    audioEngine.connect(audioPlayerNode, to: playbackRateEffect, format: nil)
    audioEngine.connect(playbackRateEffect, to: pitchEffect, format: nil)
    audioEngine.connect(pitchEffect, to: reverbEffect, format: nil)
    audioEngine.connect(reverbEffect, to: echoEffect, format: nil)
    audioEngine.connect(echoEffect, to: audioMixer, format: nil)
    audioEngine.connect(audioMixer, to: audioEngine.outputNode, format: nil)

    audioPlayerNode.stop()

    let length = 4000
    let buffer = AVAudioPCMBuffer(PCMFormat: audioPlayerNode.outputFormatForBus(0), frameCapacity: AVAudioFrameCount(length))
    buffer.frameLength = AVAudioFrameCount(length)

    try! audioEngine.start()

    let dirPaths: AnyObject = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
    let tmpFileUrl: NSURL = NSURL.fileURLWithPath(dirPaths.stringByAppendingPathComponent("effectedSound.m4a"))

    do {
        print(dirPaths)
        let settings = [AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatMPEG4AAC), AVSampleRateKey: NSNumber(integer: 44100), AVNumberOfChannelsKey: NSNumber(integer: 2)]
        self.newAudio = try AVAudioFile(forWriting: tmpFileUrl, settings: settings)
        // audioMixer is a local constant here, so no self. prefix
        audioMixer.installTapOnBus(0, bufferSize: AVAudioFrameCount(self.player!.duration), format: audioMixer.outputFormatForBus(0)) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) in
            print(self.newAudio.length)
            print("=====================")
            print(self.audioFile.length)
            print("**************************")
            if self.newAudio.length < self.audioFile.length {
                do {
                    //print(buffer)
                    try self.newAudio.writeFromBuffer(buffer)
                } catch _ {
                    print("Problem Writing Buffer")
                }
            } else {
                audioMixer.removeTapOnBus(0)
            }
        }
    } catch _ {
        print("Problem")
    }

    audioPlayerNode.play()
}
I also had trouble getting the file formatted properly. I finally got it working when I changed the path of the output file from m4a to caf. One other suggestion is not to pass nil for the format parameter; I used audioFile.processingFormat instead. I hope this helps. My audio effects/mixing is functional, although I did not chain my effects. So feel free to ask questions.
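Concretely, the two changes described above would look something like this in the question's code (a sketch, keeping the same era of Swift syntax):

let tmpFileUrl: NSURL = NSURL.fileURLWithPath(dirPaths.stringByAppendingPathComponent("effectedSound.caf"))
self.newAudio = try AVAudioFile(forWriting: tmpFileUrl, settings: self.audioFile.processingFormat.settings)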
Just change the unsignedInt parameter from kAudioFormatMPEG4AAC to kAudioFormatLinearPCM, and also change the file type to .caf. It will surely be helpful, my friend.
For anyone who has the problem of having to play the audio file TWICE before it saves, I just added the following line at the respective place and it solved my problem.
It might help someone in the future.
P.S.: I used the EXACT same code as the accepted answer above, just added this one line, and it solved my problem.
//Do what you want to do here with effected Audio
self.newAudio = try! AVAudioFile(forReading: tmpFileUrl)
With this approach we can adjust the voice in certain ways, such as aliens, men, old people, robots, or children, and it also has a playback counter:
var delayInSecond: Double = 0
if let lastRenderTime = self.audioPlayerNode.lastRenderTime,
   let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {
    if let rate = rate {
        delayInSecond = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate) / Double(rate)
    } else {
        delayInSecond = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate)
    }

    // schedule a stop timer for when the audio finishes playing
    self.stopTimer = Timer(timeInterval: delayInSecond, target: self, selector: #selector(stopPlay), userInfo: nil, repeats: true)
    RunLoop.main.add(self.stopTimer, forMode: .default)
}
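For completeness, the stopPlay selector referenced above is not shown; a hypothetical implementation could be as simple as:

@objc func stopPlay() {
    // invalidate the timer so the repeating schedule fires only once
    stopTimer.invalidate()
    audioPlayerNode.stop()
}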
I got this after I added
self.newAudio = try! AVAudioFile(forReading: tmpFileUrl)
It returns an error like this:
Error Domain=com.apple.coreaudio.avfaudio Code=1685348671 "(null)" UserInfo={failed call=ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)}