Audio from haptic engine only playing through speakers - iOS

I'm working on an app that uses CoreHaptics to play a synchronised pattern of vibrations and audio.
The problem is that the audio only plays through the iPhone's speakers (as long as the mute switch is off). As soon as I connect my AirPods to the phone, the audio stops playing, but the haptics continue.
My code looks something like this:
let engine = try CHHapticEngine()
...
var events = [CHHapticEvent]()
...
let volume: Float = 1
let decay: Float = 0.5
let sustained: Float = 0.5
let audioParameters = [
    CHHapticEventParameter(parameterID: .audioVolume, value: volume),
    CHHapticEventParameter(parameterID: .decayTime, value: decay),
    CHHapticEventParameter(parameterID: .sustained, value: sustained)
]
let breathingTimes = pacer.breathingTimeInSeconds
let combinedTimes = breathingTimes.inhale + breathingTimes.exhale
let audioEvent = CHHapticEvent(
    audioResourceID: selectedAudio,
    parameters: audioParameters,
    relativeTime: 0,
    duration: combinedTimes
)
events.append(audioEvent)
...
let pattern = try CHHapticPattern(events: events, parameterCurves: [])
let player = try engine.makeAdvancedPlayer(with: pattern)
...
try player.start(atTime: CHHapticTimeImmediate)
My idea of activating an audio session before the player starts, to signal to the system that audio is being played, also didn't change the outcome:
try AVAudioSession.sharedInstance().setActive(true)
Is there a way to route the audio from CoreHaptics to an output other than the built-in speakers?
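One variant of that idea, as a sketch only (I haven't verified that it changes the routing): create the engine on top of an explicitly configured session via the CHHapticEngine(audioSession:) initializer, so the haptic engine's audio runs in a session whose category you control. A side effect of the .playback category is that playback ignores the mute switch:

// Sketch: configure an explicit session first, then hand it to the haptic engine.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)
let engine = try CHHapticEngine(audioSession: session)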

Related

AVFoundation positional audio not working on iOS device

I'm trying to play positional audio in a Swift iOS app using AVAudioEngine and AVAudioEnvironmentNode. I can play the audio fine and hear it spatialized, shifting between both outputs in stereo, but only in the simulator. When I run the same app on an iPhone, the audio plays in both ears rather than panning as the source is moved around. Is there some special configuration I need to do, like manually handling the device's audio output?
I initialize the audio engine and player like so:
let audioEngine = AVAudioEngine()
let audioEnv = AVAudioEnvironmentNode()
audioEngine.attach(audioEnv)
audioEngine.connect(
    audioEnv,
    to: audioEngine.mainMixerNode,
    format: audioEnv.outputFormat(forBus: 0)
)
try audioEngine.start()

let player = AVAudioPlayerNode()
audioEngine.attach(player)
audioEngine.connect(
    player,
    to: audioEnv,
    format: AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
)
player.scheduleFile(...)
player.play()
My source files are mono channel .wav.
At some point in the future, I change the position of the player:
player.position = AVAudio3DPoint(x: 5, y: 0, z: 0)
This should play only (or mostly) in one ear. When run in the iOS simulator, it does exactly what I expect. However, on an actual device it just plays evenly in both ears no matter what player.position is set to. I suspect it has to do with the configuration of audioEngine.
Thoughts?
Try setting:
audioEnv.renderingAlgorithm = .HRTFHQ // or .HRTF
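For what it's worth: as far as I know, the environment node's default rendering algorithm is plain equal-power panning, which won't binauralize a mono source on a real device; the HRTF variants actually render the 3D position into the stereo output. Also note that AVAudioEnvironmentNode only spatializes inputs connected with a mono format, which the channels: 1 connection above already satisfies.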

Out-of-sync sound and picture between two players

I am building a karaoke app with the ability to sing with video, so here is my problem:
I am recording the user's video (video only, from the front camera) while applying voice filters with AudioKit to a separate audio recording.
On playback, I want to play the video and the audio in sync, but I haven't succeeded: the video and audio end up out of sync.
I am using AKPlayer for the audio, so I can apply the voice effects, and VLCKit for playing the user's video.
do {
    //MARK: VLCKit part of the video setup
    Vlc_VideoPlayer = VLCMediaPlayer()
    Vlc_VideoPlayer.media = VLCMedia(url: recordVideoURL)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "time", options: [], context: nil)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "remainingTime", options: [], context: nil)
    Vlc_VideoPlayer.drawable = self.CameraView

    //MARK: AudioKit with AKPlayer setup
    file = try AKAudioFile(forReading: recordVoiceURL)
    player = AKPlayer(audioFile: file)
    self.player.preroll()
    delay = AKVariableDelay(player)
    delay.rampTime = 0.5
    delayMixer = AKDryWetMixer(player, delay)
    reverb = AKCostelloReverb(delayMixer)
    reverbMixer = AKDryWetMixer(delayMixer, reverb)
    booster = AKBooster(reverbMixer)
    tracker = AKAmplitudeTracker(booster)
    AudioKit.output = tracker
    try AudioKit.start()
} catch {
    print(error)
}
self.startPlayers()
Now the startPlayers function:
func startPlayers() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        if AudioKit.engine.isRunning {
            self.Vlc_VideoPlayer.audio.isMuted = true
            self.Vlc_VideoPlayer.play()
            self.player.isLooping = false
            self.player.play()
        } else {
            self.startPlayers()
        }
    }
}
I don't know anything about the VLC player, but with the built-in AVPlayer there is an option to sync to a clock:
var time: TimeInterval = 1 // 1 second in the future
videoPlayer.masterClock = CMClockGetHostTimeClock()
let hostTime = mach_absolute_time()
let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
let cmVTime = CMTimeMakeWithSeconds(time, preferredTimescale: videoPlayer.currentTime().timescale)
let futureTime = CMTimeAdd(cmHostTime, cmVTime)
videoPlayer.setRate(1, time: CMTime.invalid, atHostTime: futureTime)
AKPlayer supports syncing to the mach_absolute_time() host time using its scheduling functions. As you have it above, the two will start close together, but there is no guarantee of sync.
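For illustration, a sketch of scheduling AKPlayer against the same host time (the play(at:) signature is from AudioKit 4.x; double-check it against your version):

// Sketch: derive one shared host time and hand it to AKPlayer as well.
let startDelay: TimeInterval = 1
let startHostTime = mach_absolute_time() + AVAudioTime.hostTime(forSeconds: startDelay)
player.play(at: AVAudioTime(hostTime: startHostTime))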
Even so, starting two separate players will only line up out of pure luck; unless you have a means to synchronize playback after it has started, it will not be perfect. Ideally, you should play the audio with VLC as well, to make use of its internal synchronization tools.
To iterate on what you have right now, I would suggest starting playback with VLC, pausing once the first frame has been decoded, then starting your audio and resuming VLC playback as soon as the first audio sample has been decoded. This will still not be perfect, but probably better; see the sketch below.
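A rough sketch of that hand-off, using the KVO observation on "time" that is already set up above (didSyncStart is a flag I'm introducing, and the exact VLCKit timing behavior should be verified):

// Sketch: pause VLC on its first decoded frame, start the audio, then resume.
override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    guard keyPath == "time", !didSyncStart, Vlc_VideoPlayer.time.intValue > 0 else { return }
    didSyncStart = true
    Vlc_VideoPlayer.pause()
    self.player.play()        // start the AKPlayer audio
    Vlc_VideoPlayer.play()    // resume the video right after
}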

Build a simple Equalizer

I would like to make a 5-band audio equalizer (60Hz, 230Hz, 910Hz, 4kHz, 14kHz) using AVAudioEngine. I would like the user to input the gain per band through a vertical slider and have the playing audio adjust accordingly. I tried using AVAudioUnitEQ to do this, but I hear no difference when playing the audio. I tried hardcoding values to specify the gain at each frequency, but it still does not work. Here is the code I have:
var audioEngine: AVAudioEngine = AVAudioEngine()
var equalizer: AVAudioUnitEQ!
var audioPlayerNode: AVAudioPlayerNode = AVAudioPlayerNode()
var audioFile: AVAudioFile!

// in viewDidLoad():
equalizer = AVAudioUnitEQ(numberOfBands: 5)
audioEngine.attach(audioPlayerNode)
audioEngine.attach(equalizer)
let bands = equalizer.bands
let freqs = [60, 230, 910, 4000, 14000]
audioEngine.connect(audioPlayerNode, to: equalizer, format: nil)
audioEngine.connect(equalizer, to: audioEngine.outputNode, format: nil)
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
bands[0].gain = -10.0
bands[0].filterType = .lowShelf
bands[1].gain = -10.0
bands[1].filterType = .lowShelf
bands[2].gain = -10.0
bands[2].filterType = .lowShelf
bands[3].gain = 10.0
bands[3].filterType = .highShelf
bands[4].gain = 10.0
bands[4].filterType = .highShelf

do {
    if let filepath = Bundle.main.path(forResource: "song", ofType: "mp3") {
        let filepathURL = NSURL.fileURL(withPath: filepath)
        audioFile = try AVAudioFile(forReading: filepathURL)
        audioEngine.prepare()
        try audioEngine.start()
        audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        audioPlayerNode.play()
    }
} catch _ {}
Since the low frequencies have a gain of -10 and the high frequencies have a gain of 10, there should be a very noticeable difference when playing any media. However, when the media starts playing, it sounds the same as if played without any equalizer attached.
I'm not sure why this is happening, but I tried several things to debug. I thought it might be the order of the calls, so I tried moving audioEngine.connect until after all of the bands were adjusted, but that did not make a difference either.
I tried this same code using an AVAudioUnitTimePitch, and it worked perfectly, so I am dumbfounded as to why it does not work with AVAudioUnitEQ.
I do not want to use any third-party libraries or CocoaPods for this project; I would like to do it using AVFoundation alone.
Any help would be greatly appreciated!
Thanks in advance.
Looking through the AVAudioUnitEQFilterParameters documentation, I noticed that I had adjusted every parameter except bypass, and it seems that changing this flag fixed everything!
So, I believe the main issue here is that each AVAudioUnitEQ band must be explicitly un-bypassed, rather than relying on the system-provided default values.
So, I changed

for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}

to

for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
    bands[i].bypass = false
    bands[i].filterType = .parametric
}
and everything started working. Furthermore, to make an effective equalizer that lets the user modify individual frequencies, the filterType for each band should be set to .parametric.
I am still unsure what I should set the bandwidth to, but I can probably check online for that or just tweak it until the sound matches a different equalizer application.
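For the slider hookup, a minimal sketch (the tag scheme and dB range are my assumptions, not from the post); per the AVAudioUnitEQFilterParameters documentation, bandwidth is specified in octaves:

// Sketch: one vertical UISlider per band, tagged 0...4, range set to -24...24 (dB).
@IBAction func bandGainChanged(_ sender: UISlider) {
    let band = equalizer.bands[sender.tag]
    band.gain = sender.value    // gain is in decibels
    band.bandwidth = 1.0        // width in octaves; tune by ear
}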

Playing scheduled audio in the background

I am having a really difficult time playing audio in the background of my app. The app is a timer that counts down and plays bells, and everything worked when driven by the timer originally. Since you cannot run a timer for more than about 3 minutes in the background, I need to play the bells another way.
The user has the ability to choose bells and set the times for these bells to play (e.g. play a bell immediately, after 5 minutes, repeat another bell every 10 minutes, etc.).
So far I have tried using notifications via DispatchQueue.main, and this works fine if the user does not pause the timer. If they re-enter the app and pause, though, I cannot seem to cancel or pause that queue in any way.
Next I tried using AVAudioEngine and created a set of nodes. These play while the app is in the foreground but seem to stop upon backgrounding. Additionally, when I pause the engine and resume later, it won't pause the sequence properly: it will squish the bells into playing one after the other, or not at all.
If anyone has any ideas of how to solve my issue, that would be great. Technically I could remove everything from the engine and recreate it from the paused time when the user pauses/resumes, but this seems quite costly, and it also doesn't solve the problem of the audio stopping in the background. I have the required background mode 'App plays audio or streams audio/video using AirPlay', and it is also checked under Background Modes in Capabilities.
Below is a sample of how I tried to set up the audio engine. The registerAndPlaySound method is called several more times to create the chain of nodes (or is this done incorrectly?). The code is kind of messy at the moment because I have been trying many ways to get this to work.
func setupSounds() {
    if attached {
        engine.detach(player)
    }
    engine.attach(player)
    attached = true
    let mixer = engine.mainMixerNode
    engine.connect(player, to: mixer, format: mixer.outputFormat(forBus: 0))
    var bell = ""
    do {
        try engine.start()
    } catch {
        return
    }
    if currentSession.bellObject?.startBell != nil {
        bell = (currentSession.bellObject?.startBell)!
        guard let url = Bundle.main.url(forResource: bell, withExtension: "mp3") else {
            return
        }
        registerAndPlaySound(url: url, delay: warmUpTime)
    }
}
func registerAndPlaySound(url: URL, delay: Double) {
    do {
        let file = try AVAudioFile(forReading: url)
        let format = file.processingFormat
        let capacity = file.length
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(capacity)) else {
            return
        }
        do {
            try file.read(into: buffer)
        } catch {
            return
        }
        let sampleRate = buffer.format.sampleRate
        let sampleTime = sampleRate * delay
        let futureTime = AVAudioTime(sampleTime: AVAudioFramePosition(sampleTime), atRate: sampleRate)
        player.scheduleBuffer(buffer, at: futureTime, options: [], completionHandler: nil)
        player.play()
    } catch {
        return
    }
}
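One thing that is easy to miss (the background-modes checkbox alone does not cover it): the engine only keeps running in the background if the app also has an active AVAudioSession with the .playback category. A minimal sketch, to be run before starting the engine:

// Sketch: without an active .playback session, the audio background mode
// has no effect and the engine is suspended on backgrounding.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)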

Audible glitches on buffer playback via AVAudioPlayerNode in iOS (Swift) - working in simulator, but not on device

When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches/artifacts on playback while testing. The audio does not glitch at all in the iOS simulator; however, there is audible distortion on playback when I run the app on an actual iOS device. The distortion occurs randomly: the triggered sound will sometimes sound great, while other times it sounds distorted.
I've tried using different audio files and file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same, and I'm stuck wondering what could be going wrong.
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app, and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)

let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

func startEngine() {
    var error: NSError?
    file.readIntoBuffer(buffer, error: &error)
    audioEngine.attachNode(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
    audioEngine.prepare()
    func start() {
        var error: NSError?
        audioEngine.startAndReturnError(&error)
    }
    start()
}
startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0

func play() {
    func scheduleBuffer() {
        playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        playerNode.prepareWithFrameCount(frameLength)
    }
    if playerNode.playing == false {
        scheduleBuffer()
        let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
        playerNode.playAtTime(time)
    } else {
        scheduleBuffer()
    }
}

// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
    play()
}
Update:
Ok, I think I've identified the source of the distortion: the volume of the node(s) was too great at output, causing clipping. By adding these two lines in my startEngine function, the distortion no longer occurred:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output; my audio file itself does not clip. I'm guessing that it might be a result of the way AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing clipping? I'm still looking for a solid understanding of why this occurs, so if anyone is willing/able to provide any clarification, that would be fantastic!
Not sure if this is the problem you experienced in 2015; it may be the same issue that @suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the simulator's. On macOS it is 44100, and on late-model iOS devices it is 48000.
So when you fill your buffer with 44100 samples per second on a 48000 device, you get 3900 samples of silence for every second of audio. When played back, it doesn't sound like silence; it sounds like a glitch.
I used the mainMixer format when connecting my playerNode, and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code:
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))

let pcmBuffer = AVAudioPCMBuffer(
    pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
    frameCapacity: AVAudioFrameCount(bufferSize)
)
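To confirm the mismatch on a device, a quick check (assuming an audioEngine is already set up):

// Sketch: compare the hardware rate with the mixer's output format.
let hardwareRate = AVAudioSession.sharedInstance().sampleRate                  // typically 48000 on recent iPhones
let mixerRate = audioEngine.mainMixerNode.outputFormat(forBus: 0).sampleRate
print("hardware: \(hardwareRate) Hz, mixer: \(mixerRate) Hz")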
