Why doesn't Core Haptics play? (CHHapticPatternPlayer) - ios

I am getting no haptic output while testing the code below on my physical device (iPhone XR). I followed Apple Developer's "Playing a Single-tap Haptic Pattern" article and double-checked against various other articles on the internet, and I am confident I have implemented it correctly. Moreover, no errors are caught when running. What could be the reason my device is not outputting the haptic pattern?
Side Notes:
I can confirm that my phone does produce haptics for other apps, so it does not appear to be an issue with my physical device.
I also implement an AVAudioPlayer separately; I doubt that would interfere, but I thought I'd mention it just in case.
Any help would be much appreciated--thanks!
var hapticEngine: CHHapticEngine!
var hapticPlayer: CHHapticPatternPlayer!

override func viewDidLoad() {
    super.viewDidLoad()

    // Create and configure a haptic engine.
    do {
        hapticEngine = try CHHapticEngine()
    } catch let error {
        fatalError("Engine Creation Error: \(error)")
    }

    // The reset handler provides an opportunity to restart the engine.
    hapticEngine.resetHandler = {
        print("Reset Handler: Restarting the engine.")
        do {
            // Try restarting the engine.
            try self.hapticEngine.start()
            // Register any custom resources you had registered, using registerAudioResource.
            // Recreate all haptic pattern players you had created, using createPlayer.
        } catch let error {
            fatalError("Failed to restart the engine: \(error.localizedDescription)")
        }
    }

    // The stopped handler alerts engine stoppage.
    hapticEngine.stoppedHandler = { reason in
        print("Stop Handler: The engine stopped for reason: \(reason.rawValue)")
        switch reason {
        case .audioSessionInterrupt: print("Audio session interrupt")
        case .applicationSuspended: print("Application suspended")
        case .idleTimeout: print("Idle timeout")
        case .systemError: print("System error")
        @unknown default:
            print("Unknown error")
        }
    }

    // Create haptic dictionary
    let hapticDict = [
        CHHapticPattern.Key.pattern: [
            [
                CHHapticPattern.Key.event: [
                    CHHapticPattern.Key.eventType: CHHapticEvent.EventType.hapticTransient,
                    CHHapticPattern.Key.time: 0.001,
                    CHHapticPattern.Key.eventDuration: 1.0
                ]
            ]
        ]
    ]

    // Create haptic pattern from haptic dictionary, then add it to the haptic player
    do {
        let pattern = try CHHapticPattern(dictionary: hapticDict)
        hapticPlayer = try hapticEngine.makePlayer(with: pattern)
    } catch let error {
        print("Failed to create hapticPlayer: \(error.localizedDescription)")
    }
    // ...
}

// ...

func playHaptic() {
    //audioPlayer?.play()

    // Start Haptic Engine
    do {
        try hapticEngine.start()
    } catch let error {
        print("Haptic Engine could not start: \(error.localizedDescription)")
    }

    // Start Haptic Player
    do {
        try hapticPlayer.start(atTime: 0.0)
        print("Why")
    } catch let error {
        print("Haptic Player could not start: \(error.localizedDescription)")
    }

    // Stop Haptic Engine
    hapticEngine.stop(completionHandler: nil)
}

The problem is this line:
hapticEngine.stop(completionHandler: nil)
Delete it and all will be well. You're stopping your "sound" the very instant it is trying to get started. That doesn't make much sense.
(Of course I am also assuming that somewhere there is code that actually calls your playHaptic method. You didn't show that code, but I'm just guessing that you probably remembered to include some. Having made sure that happens, I ran your code with the stop line commented out, and I definitely felt and heard the "pop" of the taptic tap.)
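If you do want the engine to stop after the haptic plays, a minimal sketch (assuming the same hapticEngine and hapticPlayer properties as above) is to let the engine stop itself once all of its players have finished, using notifyWhenPlayersFinished:
func playHaptic() {
    do {
        try hapticEngine.start()
        try hapticPlayer.start(atTime: CHHapticTimeImmediate)
        // Ask the engine to stop only after every active pattern player has finished.
        hapticEngine.notifyWhenPlayersFinished { _ in
            return .stopEngine
        }
    } catch {
        print("Haptic playback failed: \(error.localizedDescription)")
    }
}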

Related

AudioKit v5: How does one choose the microphone?

I am trying to update my project to AudioKit v5, using SPM. As far as I can see in the current documentation, you instantiate the microphone by attaching it to the audio engine input.
However, I am missing what used to be AudioKit.inputDevices (and then AKManager.inputDevices). I used to be able to select my microphone of choice.
How does one select a specific microphone using AudioKit v5 on iOS?
As of November 6, 2020, you need to make sure you are using the v5-develop branch, since v5-main still does not support hardware with 48K sample rate.
Here is code that allows you to choose the microphone according to its debug description:
// AudioKit engine and node definitions
let engine = AudioEngine()
var mic: AudioEngine.InputNode!
var boost: Fader!
var mixer: Mixer!

// Choose device for microphone
if let inputs = AudioEngine.inputDevices {
    // print(inputs) // Uncomment to see the possible inputs
    let micSelection: String = "Front" // On a 2020 iPad Pro you can also choose "Back" or "Top"
    var chosenMic: Int = 0
    var micTypeCounter: Int = 0
    for microphones in inputs {
        let micType: String = "\(microphones)"
        if micType.range(of: micSelection) != nil {
            chosenMic = micTypeCounter
        }
        // If we find a wired mic, prefer it
        if micType.range(of: "Wired") != nil {
            chosenMic = micTypeCounter
            break
        }
        // If we find a USB mic (newer devices), prefer it
        if micType.range(of: "USB") != nil {
            chosenMic = micTypeCounter
            break
        }
        micTypeCounter += 1
    }
    do {
        try AudioEngine.setInputDevice(inputs[chosenMic])
    } catch {
        print("Could not set audio inputs: \(error)")
    }
    mic = engine.input
}

Settings.sampleRate = mic.avAudioNode.inputFormat(forBus: 0).sampleRate // This is essential for 48 kHz hardware

// Start AudioKit
if !engine.avEngine.isRunning {
    do {
        boost = Fader(mic)
        // Set boost values here, or leave it for silence
        // Connect mic or boost to any other audio nodes you need

        // Set AudioKit's output
        mixer = Mixer(boost) // You can add any other nodes to the mixer
        engine.output = mixer

        // Additional settings
        Settings.audioInputEnabled = true

        // Start engine
        try engine.avEngine.start()
        try Settings.setSession(category: .playAndRecord)
    } catch {
        print("Could not start AudioKit: \(error)")
    }
}
It is advisable to add a notification observer for audio route changes in viewDidLoad:
// Notification for monitoring audio route changes
NotificationCenter.default.addObserver(
    self,
    selector: #selector(audioRouteChanged(notification:)),
    name: AVAudioSession.routeChangeNotification,
    object: nil)
This will call:
@objc func audioRouteChanged(notification: Notification) {
    // Replicate the code for choosing the microphone here (the first `if let` block)
}
EDIT: To clarify, the reason for the selective use of break in the loop is to create a hierarchy of selected inputs, if more than one is present. You may change the order of the inputs detected at your discretion, or add break to other parts of the loop.
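As a hedged illustration only (the helper name and the "Front" default are mine, not part of the answer), the selection hierarchy can be factored into one method that both viewDidLoad and audioRouteChanged(notification:) call:
// Sketch: prefer Wired/USB inputs if present, otherwise fall back to the named built-in mic.
func selectPreferredMicrophone(containing name: String = "Front") {
    guard let inputs = AudioEngine.inputDevices, let fallback = inputs.first else { return }
    let preferred = inputs.first { "\($0)".contains("Wired") || "\($0)".contains("USB") }
        ?? inputs.first { "\($0)".contains(name) }
        ?? fallback
    do {
        try AudioEngine.setInputDevice(preferred)
        mic = engine.input
    } catch {
        print("Could not set audio input: \(error)")
    }
}
Calling this from both places keeps the wired/USB-over-built-in priority in a single spot.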
The same applies to AudioKit 4, though the APIs have changed. It seems you should write:
guard let inputs = AKManager.inputDevices else {
    print("NO AK INPUT devices")
    return false
}

iOS - AudioKit Crashes when receiving a phone call

AudioKit 4.9.3
iOS 11+
I am working on a project where the user records on the device using the microphone, and it continues to record even if the app is in the background. This works fine, but when receiving a phone call I get an AudioKit error. I assume it has something to do with the phone call taking over the mic. Here is the error:
[avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1544:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561017449
AudioKit+StartStop.swift:restartEngineAfterRouteChange(_:):198:error restarting engine after route change
Basically, everything that I have recorded up until that point is lost.
Here is my AudioKit setup code:
func configureAudioKit() {
    AKSettings.audioInputEnabled = true
    AKSettings.defaultToSpeaker = true
    do {
        try audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.mixWithOthers)
        try audioSession.setActive(true)
        audioSession.requestRecordPermission({ allowed in
            DispatchQueue.main.async {
                if allowed {
                    print("Audio recording session allowed")
                    self.configureAudioKitSession()
                } else {
                    print("Audio recording session not allowed")
                }
            }
        })
    } catch let error {
        print("Audio recording session not allowed: \(error.localizedDescription)")
    }
}
func configureAudioKitSession() {
    isMicPresent = AVAudioSession.sharedInstance().isInputAvailable
    if !isMicPresent {
        return
    }
    print("mic present and configuring audio session")
    mic = AKMicrophone()
    do {
        let _ = try AKNodeRecorder(node: mic)
        let recorderGain = AKBooster(mic, gain: 0)
        AudioKit.output = recorderGain
        //try AudioKit.start()
    } catch let error {
        print("configure audioKit error: ", error)
    }
}
And here is the code that runs when tapping the record button:
do {
    audioRecorder = try AVAudioRecorder(url: actualRecordingPath, settings: audioSettings)
    audioRecorder?.record()
    //print("Recording: \(isRecording)")
    do {
        try AudioKit.start()
    } catch let error {
        print("Cannot start AudioKit", error.localizedDescription)
    }
} catch let error {
    print("Cannot create AVAudioRecorder", error.localizedDescription)
}
Current audio settings:
private let audioSettings = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2,
    AVEncoderAudioQualityKey: AVAudioQuality.medium.rawValue
]
What can I do to ensure that I can get a proper recording, even when receiving a phone call? The error happens as soon as you receive the call - whether you choose to answer it or decline.
Any thoughts?
I've done work in this area; I'm afraid you cannot access the microphone(s) while a phone call or a VoIP call is in progress.
This is a basic privacy measure that is enforced by iOS for self-evident reasons.
AudioKit handles only the basic route change handling for an audio playback app. We've found that when an app becomes sufficiently complex, the framework can't effectively predetermine the appropriate course of action when interruptions occur. So, I would suggest turning off AudioKit's route change handling and responding to the notifications yourself.
Also, I would suggest putting the AudioKit activation code behind a button or some other explicit user action.
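A minimal sketch of that do-it-yourself handling (the observer wiring and handler name are mine, not AudioKit API): observe AVAudioSession.interruptionNotification and decide what to do with the recorder when a call begins and ends.
// Register once, e.g. from configureAudioKit()
NotificationCenter.default.addObserver(self,
                                       selector: #selector(handleInterruption(_:)),
                                       name: AVAudioSession.interruptionNotification,
                                       object: AVAudioSession.sharedInstance())

@objc func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
    switch type {
    case .began:
        // The call has taken the mic; stop the recorder cleanly so the file written so far is kept.
        audioRecorder?.stop()
    case .ended:
        let optionsValue = (info[AVAudioSessionInterruptionOptionKey] as? UInt) ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            // Reactivate the session and restart AudioKit / a new recording here.
        }
    @unknown default:
        break
    }
}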

App doesn't route audio to headphones (initially)

I have a VOIP app implemented using the Sinch SDK and CallKit. Everything works fine, apart from when the device has headphones plugged in. In the latter case, when the call starts, audio is still routed through the main speaker of the device. If I unplug the headphones and plug them back in during the call, audio is then correctly routed to the headphones.
All I am doing is:
func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    guard let c = self.currentCall else {
        action.fail()
        return
    }
    c.answer()
    self.communicationClient.audioController().configureAudioSessionForCallKitCall()
    action.fulfill()
}
Shouldn't this be taken care of automatically by the OS?
It seems like the Sinch SDK overrides the output audio port. Try to run this code just after the audio session has been configured:
do {
    try AVAudioSession.sharedInstance().overrideOutputAudioPort(.none)
} catch {
    print("OverrideOutputAudioPort failed: \(error)")
}
If that doesn't work, try configuring the audio session yourself instead of relying on the Sinch SDK, if you can. Replace the configureAudioSessionForCallKitCall call with something like this:
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)
} catch {
    print("Unable to activate audio session: \(error)")
}

"__CFRunLoopModeFindSourceForMachPort returned NULL" messages when using AVAudioPlayer

We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
The uses of AVFoundation stuff are the following. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
The message doesn't appear that often: over the course of a 5-10 minute game with possibly thousands of sound effects, we see it maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if none is available. That probably has more overhead than needed, since we have to have callbacks to observe when a player node finishes whatever it's playing and set it back to a stopped state. We'll try switching to a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and do playerNode.scheduleBuffer(buffer, ...). And playerNode.play() if it's not currently playing.
I may update this with some more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to use AVAudioPlayerNode going through the mixer. But anyway, pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less stuff sitting around since there's no duplication of the sound effects in-memory like we had before. There's a tiny bit of lag, but we haven't even tried putting some stuff on a background thread yet, and maybe not having to search for and constantly start/stop players would even eliminate it without having to worry about that.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}
class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}
class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}
class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}
class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}

iOS 11 setPreferredInput to HFP not working while Podcast playing

My app allows use of HFP (Hands-Free Profile) for its spoken prompts (like a navigation app).
The function below, which sets up audio before text-to-speech or AVAudioPlayer playback, has worked fairly well since iOS 9.x.
I didn't test it against a playing podcast very often, so I'm not sure when things broke. The function works perfectly if music is streaming from the phone over Bluetooth A2DP, or if music is playing on the FM radio in the car (i.e. it will interrupt the radio, play the prompt, and then resume the radio). But it will NOT work if I'm streaming a podcast: the podcast pauses, silence comes out for the prompt, and then the podcast resumes.
I recently checked Waze, Google Maps and Apple Maps (all of which also offer this HFP option).
Waze is broken (but again, I don't test against podcasts often).
Google Maps still works perfectly.
Apple Maps is just weird: the option for HFP is greyed out when streaming, and when it tries to pause and play it also fails.
But again, Google Maps works, so it can be done.
When I call setPreferredInput with the HFP route, my route change handler (also shown below) is NOT called if a podcast is streaming. If music is streaming, my route change handler is called and audio from my app comes over HFP correctly.
Background or foreground doesn't matter.
Any suggestions to solve this would be greatly appreciated.
func setupSound(_ activate: Bool)
{
    if !Settings.sharedInstance.soundOn && !Settings.sharedInstance.voiceOn
    {
        return
    }

    var avopts: AVAudioSessionCategoryOptions = [
        .mixWithOthers,
        .duckOthers,
        .interruptSpokenAudioAndMixWithOthers,
        .allowBluetooth
    ]
    if #available(iOS 10.0, *)
    {
        avopts.insert(AVAudioSessionCategoryOptions.allowBluetoothA2DP)
        avopts.insert(AVAudioSessionCategoryOptions.allowAirPlay)
    }

    var HFP = false
    if Settings.sharedInstance.HFPOn && callCenter.currentCalls == nil
    {
        do
        {
            if #available(iOS 10.0, *)
            {
                try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, mode: AVAudioSessionModeSpokenAudio, options: avopts)
            }
            else
            {
                try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, with: avopts)
                try AVAudioSession.sharedInstance().setMode(AVAudioSessionModeSpokenAudio)
            }
        }
        catch
        {
            Settings.vprint("Failed Setup HFP Bluetooth Audio, ", error.localizedDescription)
        }

        if let inputs = AVAudioSession.sharedInstance().availableInputs
        {
            for route in inputs
            {
                if route.portType == AVAudioSessionPortBluetoothHFP
                {
                    HFP = true
                    do
                    {
                        try AVAudioSession.sharedInstance().setPreferredInput(route)
                        HFP = true
                        break
                    }
                    catch
                    {
                        Settings.vprint("Failed Set Route HFP Bluetooth Audio, ", error.localizedDescription)
                    }
                }
            }
        }
    }

    lastHFPStatus = HFP

    if !HFP
    {
        var avopts: AVAudioSessionCategoryOptions = [
            .mixWithOthers
        ]
        if Settings.sharedInstance.duckingSoundOn
        {
            avopts.insert(.duckOthers)
            avopts.insert(.interruptSpokenAudioAndMixWithOthers)
        }
        if Settings.sharedInstance.speakerOnlyOn
        {
            avopts.insert(.defaultToSpeaker)
            do
            {
                try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, with: avopts)
            }
            catch
            {
                Settings.vprint("Failed setCategory, ", error.localizedDescription)
            }
        }
        else
        {
            do
            {
                try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, with: avopts)
            }
            catch
            {
                Settings.vprint("Failed setCategory, ", error.localizedDescription)
            }
        }
    }

    do
    {
        try AVAudioSession.sharedInstance().setActive(activate, with: .notifyOthersOnDeactivation)
        if ((Settings.sharedInstance.debugMask & 2) != 0)
        {
            Settings.vprint(activate)
        }
    }
    catch
    {
        Settings.vprint("Could not setActive, ", error.localizedDescription)
    }
}
@objc func handleRouteChange(notification: Notification)
{
    Settings.vprint(notification)
    let currentRoute = AVAudioSession.sharedInstance().currentRoute
    for route in currentRoute.outputs
    {
        Settings.vprint("Change Output: ", route.portName)
    }
    for route in currentRoute.inputs
    {
        Settings.vprint("Change Input: ", route.portName)
    }
}
