// My code is below
do {
    file = try AKAudioFile(readFileName: "Sound1.mp3", baseDir: .resources)
    // file = try AKAudioFile(forReading: SingletonClass.sharedInstance.recordedURLs[SingletonClass.sharedInstance.recordedURL]!)
    // AKSettings.defaultToSpeaker = true
} catch {
    print("Failed to load audio file: \(error)")
}
do {
    player = try AKAudioPlayer(file: file)
} catch {
    print("Failed to create player: \(error)")
}
let lfoAmplitude = 1_000
let lfoRate = 1.0 / 3.428
let resonance = 0.9        // the original discarded this value (`_ = 0.9`); it was presumably meant for the filter
let cutoffFrequency = 1_000 // assumed value; the original never defines cutoffFrequency
// filter section effect below
filterSectionEffect = AKOperationEffect(player) { input, _ in // the original passed `tracker`, which is not defined anywhere; `player` seems intended
    let lfo = AKOperation.sineWave(frequency: lfoRate, amplitude: lfoAmplitude)
    return input.moogLadderFilter(cutoffFrequency: lfo + cutoffFrequency,
                                  resonance: resonance)
}
AudioKit.output = filterSectionEffect // note the capitalization: `AudioKit`, not `Audiokit`
do {
    try AudioKit.start()
} catch {
    print("AudioKit did not start: \(error)")
}
Whenever I play the audio using a button that calls player.play, the audio plays properly. With headphones connected it also plays properly, but as soon as I disconnect the headphones, I see the error:
It happens the same way for both wired and Bluetooth headphones.
My app is stuck because of this issue, and it happens only with AKOperationEffect. Any help would be appreciated.
The comment from Kunal Verma that this is fixed is correct, but just for completeness, here is the commit that fixed it:
https://github.com/AudioKit/AudioKit/commit/ffac4acbe93553764f6095011e9bf5d71fdc88c2
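For anyone stuck on an older AudioKit 4 release before that fix, a workaround is to watch for route changes yourself and restart the engine when the old output device disappears. This is only a sketch: the notification and reason keys are standard AVFoundation API, but the restart-on-disconnect logic is an assumption about what the app needs.

```swift
import AVFoundation
import AudioKit

// Hypothetical helper: restart AudioKit when the output route changes
// (e.g. headphones unplugged). Sketch only; error handling is minimal.
class RouteChangeHandler {
    init() {
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(routeChanged(_:)),
            name: AVAudioSession.routeChangeNotification,
            object: nil)
    }

    @objc private func routeChanged(_ note: Notification) {
        guard let info = note.userInfo,
              let raw = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: raw),
              reason == .oldDeviceUnavailable else { return }
        // Headphones were disconnected: stop and restart the engine so its
        // internal connections are rebuilt for the new route.
        do {
            try AudioKit.stop()
            try AudioKit.start()
        } catch {
            print("AudioKit restart failed: \(error)")
        }
    }
}
```

Route-change notifications are delivered on a background thread, so anything that touches UI from the handler should be dispatched to the main queue.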
EDIT #2: OK, I missed something big here, but I still have a problem. The reason the sound is soft and I have to amplify it is that it is coming from the earpiece, not the speaker. When I add the option .defaultToSpeaker to the setCategory I get no sound at all.
So, this is the real problem, when I set the category to .playbackAndRecord and the option to .defaultToSpeaker, why do I get no sound at all on a real phone? In addition to no sound, I did not receive input from the mic either. The sound is fine in the simulator.
EDIT #3: I began observing route changes and my code reports the following when the .defaultToSpeaker option is included.
2020-12-26 12:17:56.212366-0700 SST[13807:3950195] Current route:
2020-12-26 12:17:56.213275-0700 SST[13807:3950195] <AVAudioSessionRouteDescription: 0x2816af8e0,
inputs = (
"<AVAudioSessionPortDescription: 0x2816af900, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x2816af990, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>
The output is set to Speaker. Is it significant that the selectedDataSource is (null)? Before the .defaultToSpeaker option was added this reported output set to Receiver, also with selectedDataSource = (null), so I would guess not.
EDIT: I added the code to set the Audio Session category. The new code is shown below. So far it seems to have no effect. If I leave it in or comment it out, I don't see any difference. I also have code (that I deleted here for simplicity) that modifies the microphone pattern. That too had no discernible effect. Perhaps though, that is to be expected?
In addition to the symptoms below, if I use Settings/Bluetooth to select the AirPods, then I get no output from the app at all, even after I remove the AirPods.
What am I missing here?
/EDIT
After getting this to work well on the simulator, I moved to debugging on my 11 Pro Max. When playing notes on the MandolinString, the sound from the (11 Pro Max or an 8) simulator is loud and clear. On the real phone, the sound is barely audible and from the speaker only. It does not go to the attached audio speaker, be that a HomePod or AirPods. Is this a v5 bug? Do I need to do something with the output?
A second, less important issue is that when I instantiate this object, the MandolinString triggers without me calling anything. The extra fader and the reset of the gain from 0 to 1 after a delay suppress this sound.
private let engine = AudioEngine()
private let mic : AudioEngine.InputNode
private let micAmp : Fader
private let mixer1 : Mixer
private let mixer2 : Mixer
private let silence : Fader
private let stringAmp : Fader
private let pluckedString : MandolinString  // assigned in init below, but never declared in the original
private let audioSession = AVAudioSession.sharedInstance()  // referenced in init below, but never declared in the original
private var pitchTap : PitchTap
private var tockAmp : Fader
private var metro = Timer()
private let sampler = MIDISampler(name: "click")
private let startTime = NSDate.timeIntervalSinceReferenceDate
private var ampThreshold: AUValue = 0.12
private var ampJumpSize: AUValue = 0.05
private var samplePause = 0
private var trackingNotStarted = true
private var tracking = false
private var ampPrev: AUValue = 0.0
private var freqArray: [AUValue] = []
init() {
// Set up mic input and pitchtap
mic = engine.input!
micAmp = Fader(mic, gain: 1.0)
mixer1 = Mixer(micAmp)
silence = Fader(mixer1, gain: 0)
mixer2 = Mixer(silence)
pitchTap = PitchTap(mixer1, handler: {_ , _ in })
// All sound is fed into mixer2
// Mic input is faded to zero
// Now add String sound to Mixer2 with a Fader
pluckedString = MandolinString()
stringAmp = Fader(pluckedString, gain: 4.0)
mixer2.addInput(stringAmp)
// Create a sound for the metronome (tock), add as input to mixer2
try! sampler.loadWav("Click")
tockAmp = Fader(sampler, gain: 1.0)
mixer2.addInput(tockAmp)
engine.output = mixer2
self.pitchTap = PitchTap(micAmp,
handler:
{ freq, amp in
if (self.samplePause <= 0 && self.tracking) {
self.samplePause = 0
self.sample(freq: freq[0], amp: amp[0])
}
})
do {
//try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement)
try audioSession.setCategory(AVAudioSession.Category.playAndRecord)
//, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
try audioSession.setActive(true)
} catch let error as NSError {
print("Unable to create AudioSession: \(error.localizedDescription)")
}
do {
try engine.start()
akStartSucceeded = true
} catch {
akStartSucceeded = false
}
} // init
Xcode 12, iOS 14, SPM. Everything is up to date.
Most likely this is not an AudioKit issue per se; it has to do with AVAudioSession. You probably need to set the session on the device to default to the speaker. AudioKit 5 does less automatic session management than version 4, opting to make fewer assumptions and leave control with the developer.
The answer was indeed to add code for AVAudioSession. However, it did not work where I first put it. It only worked for me when I put it in the app delegate's didFinishLaunchingWithOptions. I found this in the AudioKit Cookbook. This works:
class AppDelegate: UIResponder, UIApplicationDelegate {
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
// Override point for customization after application launch.
#if os(iOS)
self.audioSetup()
#endif
return true
}
#if os(iOS)
func audioSetup() {
let session = AVAudioSession.sharedInstance()
do {
Settings.bufferLength = .short
try session.setPreferredIOBufferDuration(Settings.bufferLength.duration)
try session.setCategory(.playAndRecord,
options: [.defaultToSpeaker, .mixWithOthers])
try session.setActive(true)
} catch let err {
print(err)
}
// Other AudioSession stuff here
do {
try session.setActive(true)
} catch let err {
print(err)
}
}
#endif
}
I'm trying to create an app with Swift.
I integrated speech-to-text and text-to-speech correctly: my app works perfectly. You can find my project here.
After speech-to-text, the app makes an HTTP request to a server (sending the recognized text), and the response (a string, e.g. "Ok, I'll show you something") is spoken aloud by text-to-speech. But there is a big issue I can't solve.
When the app speaks the text, the voice is too quiet, as if it were in the background, as if something more important were playing over it (when actually nothing is).
Debugging, I discovered that the issue starts when audioEngine (AVAudioEngine) is used inside the function recordAndRecognizeSpeech(). Running the app without this function and speaking a random text works like a charm.
So, in my opinion, when the app speaks the text it thinks the audio engine is still active, so the volume is very low.
But before speaking the text, I called these functions (look inside the ac function, line 96):
audioEngine.stop()
audioEngine.reset()
How can I solve this issue?
EDIT:
I found a partial solution. Now before the app plays the text vocally my code is:
audioEngine.inputNode.removeTap(onBus: 0)
audioEngine.stop()
audioEngine.reset()
recognitionTask?.cancel()
isRecording = false
microphoneButton.setTitle("Avvia..", for: UIControl.State.normal);
do {
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSession.Category.ambient)
try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
} catch {
print(error)
}
make_request(msg: self.speech_result.text!)
The .setCategory call works and the volume is back to the default level. But when I try to call recordAndRecognizeSpeech() again, the app throws this exception:
VAEInternal.h:70:_AVAE_Check: required condition is false: [AVAudioIONodeImpl.mm:910:SetOutputFormat: (IsFormatSampleRateAndChannelCountValid(hwFormat))]
This exception is caused by .setCategory(AVAudioSession.Category.ambient); it should be .playAndRecord, but with that value the volume drops again.
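One pattern that may avoid both problems is to never switch away from .playAndRecord at all: the hwFormat assertion fires because the input node's format becomes invalid once the category stops allowing recording, while the low volume comes from output being routed to the receiver. A minimal sketch (the API calls are standard AVFoundation; whether this fits the asker's exact flow is an assumption):

```swift
import AVFoundation

// Sketch: keep the category at .playAndRecord (so the mic tap's format
// stays valid for the next recognition pass) and force output to the
// loud speaker instead of the quiet receiver.
func prepareForPlayback() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord,
                                mode: .default,
                                options: [.defaultToSpeaker])
        try session.overrideOutputAudioPort(.speaker)
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
    }
}
```

With this, there is no .ambient round-trip, so restarting recognition should not hit the SetOutputFormat assertion.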
Try this one.
let speaker = AVSpeechSynthesizer()
func say(text: String, language: String) {
// Start audio session
let audioSession = AVAudioSession.sharedInstance()
do {
try audioSession.setCategory(AVAudioSession.Category.playAndRecord)
try audioSession.setMode(AVAudioSession.Mode.default)
try audioSession.setActive(true)
try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
} catch {
return
}
if speaker.isSpeaking {
    speaker.stopSpeaking(at: .immediate)
} else {
    let myUtterance = AVSpeechUtterance(string: text) // declared locally; the original never declared myUtterance
    myUtterance.rate = AVSpeechUtteranceDefaultSpeechRate
    myUtterance.voice = AVSpeechSynthesisVoice(language: language)
    myUtterance.pitchMultiplier = 1
    myUtterance.volume = 1 // volume is clamped to 0...1, so the original's 2 had no extra effect
    DispatchQueue.main.async {
        self.speaker.speak(myUtterance)
    }
}
}
Try this. Set the rate to control how fast the text is spoken.
var speedd = AVSpeechSynthesizer()
var voicert = AVSpeechUtterance()
voicert = AVSpeechUtterance(string: "Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon")
voicert.voice = AVSpeechSynthesisVoice(language: "en-US")
voicert.rate = 0.5
speedd.speak(voicert)
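For reference, the 0.5 above is not arbitrary: rate is a normalized value bracketed by two AVFoundation constants, and the default happens to be 0.5. A small sketch of the available range:

```swift
import AVFoundation

// AVSpeechUtterance rate is normalized between min and max constants;
// the default constant corresponds to normal speaking speed.
let utterance = AVSpeechUtterance(string: "Testing speech rate")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate      // normal speed
// utterance.rate = AVSpeechUtteranceMinimumSpeechRate   // slowest
// utterance.rate = AVSpeechUtteranceMaximumSpeechRate   // fastest
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
AVSpeechSynthesizer().speak(utterance)
```

Values outside the min/max range are clamped, so there is no benefit to setting a rate above the maximum constant.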
I’m having a problem using AKMIDISampler in my project when there’s an initialized AKMicrophone. Along with correctly playing the woodblock sample when “play” is called on the sampler, the first time “play” is called a constant sine wave starts playing - it never stops.
I’ve replicated the problem in the smallest amount of code below. Issue happens when the class is initialized then playTestSample() is called.
Note that if the AKMicrophone related code is all muted the AKMIDISampler plays fine and the sine wave that currently haunts my dreams doesn’t happen.
(I’ve tried switching to AKSampler() just to see if the problem exists there too, but I haven’t been able to get any sound out of that.)
FYI: I have “App plays audio or streams audio/video using AirPlay” in the “Required background modes” in Info.plist, which is known to fix another sine wave issue.
Thank you very much for any assistance.
Btw: AudioKit rocks and has been a massive help on this project! :^)
AK 4.5.4
Xcode 10.1
import Foundation
import AudioKit
class AudioKitTESTManager {
var mixer = AKMixer()
var sampler = AKMIDISampler()
var mic = AKMicrophone()
var micMixer = AKMixer()
var micBooster = AKBooster()
init() {
mixer = AKMixer(sampler, micBooster)
do {
let woodblock = try AKAudioFile(readFileName: RhythmGameConfig.woodblockSoundName)
try sampler.loadAudioFiles([woodblock])
} catch {
print("Error loading audio files into sampler")
}
micMixer = AKMixer(mic)
micBooster = AKBooster(micMixer)
micBooster.gain = 0.0
AudioKit.output = mixer
AKSettings.playbackWhileMuted = true
AKSettings.defaultToSpeaker = true
AKSettings.sampleRate = 44100
do {
print("Attempting to start AudioKit")
try AudioKit.start()
} catch {
AKLog("AudioKit did not start!")
}
}
func playTestSample() {
// You hear the sample and a continuous sine wave starts playing through the samplerMixer
try? sampler.play(noteNumber: 60, velocity: 90, channel: 1)
}
}
Wheeew. I believe I've found a solution. Maybe it will help out someone else?
It seems that loading the files into the sampler AFTER AudioKit.start() fixes the Sine Wave of Terror!
//..
do {
print("Attempting to start AudioKit")
try AudioKit.start()
} catch {
AKLog("AudioKit did not start!")
}
do {
let woodblock = try AKAudioFile(readFileName: RhythmGameConfig.woodblockSoundName)
try sampler.loadAudioFiles([woodblock])
} catch {
print("Error loading audio files into sampler")
}
I have an app that uses samplers to play loops. I am in the process of converting my app from using AVAudioEngine to AudioKit. My app now works well except for this: Approximately every 1-3 minutes, my app receives two .AVAudioEngineConfigurationChange notifications in a row. There is no apparent pattern to its repetition and this happens on both my iPhone 6s and new iPad.
Here is my init code for my "conductor" singleton:
init() {
//sampler array
//sampler array is cycled through as user changes sounds
samplerArray = [sampler0, sampler1, sampler2, sampler3]
//start by loading samplers with default preset
for sampler in samplerArray {
//get the sampler preset
let presetPath = Bundle.main.path(forResource: currentSound, ofType: "aupreset")
let presetURL = NSURL.fileURL(withPath: presetPath!)
do {
try sampler.samplerUnit.loadPreset(at: presetURL)
print("rrob: loaded sample")
} catch {
print("rrob: failed to load sample")
}
}
//signal chain
samplerMixer = AKMixer(samplerArray)
filter = AKMoogLadder(samplerMixer)
reverb = AKCostelloReverb(filter)
reverbMixer = AKDryWetMixer(filter, reverb, balance: 0.3)
outputMixer = AKMixer(reverbMixer)
AudioKit.output = outputMixer
//AKSettings.enableRouteChangeHandling = false
AKSettings.playbackWhileMuted = true
do {
try AKSettings.setSession(category: AKSettings.SessionCategory.playback, with: AVAudioSessionCategoryOptions.mixWithOthers)
} catch {
print("rrob: failed to set audio session")
}
//AudioBus recommended buffer length
AKSettings.bufferLength = .medium
AudioKit.start()
print("rrob: did init autoEngine")
}
Any AudioKit experts have ideas for where I can start troubleshooting? Happy to provide more info. Thanks.
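A reasonable first troubleshooting step is to observe the notification yourself and log the session state each time it fires, so the changes can be correlated with route or category changes. This is only a diagnostic sketch using standard AVFoundation API, not a fix:

```swift
import AVFoundation

// Sketch: log every engine configuration change along with the current
// route, to see what the system thinks changed each time.
NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: nil,
    queue: .main) { _ in
        let session = AVAudioSession.sharedInstance()
        print("Engine configuration changed")
        print("  route: \(session.currentRoute)")
        print("  sample rate: \(session.sampleRate)")
}
```

If the logged route and sample rate are identical before and after each notification, the culprit is more likely another app or a daemon toggling the shared session than anything in the signal chain above.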
When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches/artifacts on playback while testing. The audio does not glitch at all in the iOS simulator, but there is audible distortion on playback when I run the app on an actual iOS device. The distortion occurs randomly: the triggered sound sometimes sounds great, while other times it sounds distorted.
I've tried using different audio files, file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same and I'm stuck wondering what could be going wrong..
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)
let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
func startEngine() {
var error: NSError?
file.readIntoBuffer(buffer, error: &error)
audioEngine.attachNode(playerNode)
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
audioEngine.prepare()
func start() {
var error: NSError?
audioEngine.startAndReturnError(&error)
}
start()
}
startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0
func play() {
func scheduleBuffer() {
playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
playerNode.prepareWithFrameCount(frameLength)
}
if playerNode.playing == false {
scheduleBuffer()
let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
playerNode.playAtTime(time)
}
else {
scheduleBuffer()
}
}
// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
play()
}
Update:
Ok I think I've identified the source of the distortion: the volume of the node(s) is too great at output and causes clipping. By adding these two lines in my startEngine function, the distortion no longer occurred:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output: my audio file itself does not clip. I'm guessing that it might be a result of the way that AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing output clipping? I'm still looking for a solid understanding of why this occurs. If anyone is willing and able to provide any clarification, that would be fantastic!
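One way to test the clipping theory is to measure the buffer's own peak level: if the file already peaks near full scale, an interrupting buffer overlapping the old one (or any added gain downstream) can push the summed output past 1.0. A quick diagnostic sketch using the standard AVAudioPCMBuffer API:

```swift
import AVFoundation

// Sketch: scan an AVAudioPCMBuffer for its peak absolute sample value.
// A peak near 1.0 leaves no headroom before the mixer output clips.
func peakLevel(of buffer: AVAudioPCMBuffer) -> Float {
    guard let channels = buffer.floatChannelData else { return 0 }
    var peak: Float = 0
    for ch in 0..<Int(buffer.format.channelCount) {
        let samples = channels[ch]
        for i in 0..<Int(buffer.frameLength) {
            peak = max(peak, abs(samples[i]))
        }
    }
    return peak
}
```

Printing `peakLevel(of: buffer)` after `readIntoBuffer` would show how much headroom the source actually has, which makes the 0.8 volume workaround either clearly necessary or clearly a red herring.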
Not sure if this is the problem you experienced in 2015; it may be the same issue that @suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the simulator. On macOS it is 44100, and on recent iOS devices it is 48000.
So when you fill your buffer with 44100 samples on a 48000 device, you get 3900 samples of silence. When played back it doesn't sound like silence, it sounds like a glitch.
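A quick check for this mismatch is to compare the hardware rate against the file's rate before filling any buffers. A minimal sketch, assuming `file` is the AVAudioFile from the question:

```swift
import AVFoundation

// Sketch: detect the 44100-vs-48000 mismatch described above before it
// turns into trailing silence (heard as a glitch) in a filled buffer.
let session = AVAudioSession.sharedInstance()
let hwRate = session.sampleRate                  // e.g. 48000 on recent iPhones
let fileRate = file.processingFormat.sampleRate  // e.g. 44100 for a CD-rate file
if hwRate != fileRate {
    print("Sample-rate mismatch: hardware \(hwRate) vs file \(fileRate)")
}
```

Letting the engine's mixer do the rate conversion, as shown below, avoids the problem entirely.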
I used the mainMixer format when connecting my playerNode and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code.
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))
let pcmBuffer = AVAudioPCMBuffer(pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
                                 frameCapacity: AVAudioFrameCount(bufferSize))