I am having a really difficult time playing audio in the background of my app. The app is a countdown timer that plays bells, and originally everything worked by driving the bells from the timer. Since a timer cannot keep running for more than about 3 minutes in the background, I need to play the bells another way.
The user has the ability to choose bells and set the times at which they play (e.g. play a bell immediately, after 5 minutes, repeat another bell every 10 minutes, etc.).
So far I have tried triggering the bells with blocks queued on DispatchQueue.main, and this works fine as long as the user does not pause the timer. If they re-enter the app and pause, though, I cannot seem to cancel or pause those queued blocks in any way.
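For context, the kind of scheduling I mean looks roughly like this (a simplified sketch, not my exact code; bellDelay, bellName and playBell are just placeholders), and a closure queued this way gives me nothing I can cancel later:
// Simplified sketch: queue each bell relative to "now".
// asyncAfter returns no handle, so once this is queued there is nothing to cancel or reschedule.
DispatchQueue.main.asyncAfter(deadline: .now() + bellDelay) {
    self.playBell(named: bellName)
}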
Next I tried AVAudioEngine and created a set of nodes. These play while the app is in the foreground but seem to stop as soon as the app is backgrounded. Additionally, when I pause the engine and resume later, the scheduled sequence does not pause properly: the bells get squashed together and play one right after another, or not at all.
If anyone has any ideas of how to solve my issue, that would be great. Technically I could try removing everything from the engine and recreating it from the paused time whenever the user pauses/resumes, but this seems quite costly, and it also doesn't solve the problem of the audio stopping in the background. I have the required background mode 'App plays audio or streams audio/video using AirPlay', and it is also checked under Background Modes in Capabilities.
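For completeness, here is a minimal sketch of the audio session setup that background playback also relies on (not shown in the code below; the category and option names are standard AVFoundation API, written with the current SDK's spelling):
import AVFoundation

// Minimal sketch: the .playback category, together with the audio background mode
// already enabled in capabilities, is what lets the engine keep rendering in the background.
func activatePlaybackSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default, options: [])
        try session.setActive(true)
    } catch {
        print("Could not activate the audio session: \(error)")
    }
}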
Below is a sample of how I tried to set up the audio engine. The registerAndPlaySound method is called several more times to create the chain of nodes (or is this done incorrectly?). The code is a bit messy at the moment because I have been trying many different approaches to get this working.
func setupSounds() {
    if attached {
        engine.detach(player)
    }
    engine.attach(player)
    attached = true

    let mixer = engine.mainMixerNode
    engine.connect(player, to: mixer, format: mixer.outputFormat(forBus: 0))

    do {
        try engine.start()
    } catch {
        return
    }

    if let bell = currentSession.bellObject?.startBell {
        guard let url = Bundle.main.url(forResource: bell, withExtension: "mp3") else {
            return
        }
        registerAndPlaySound(url: url, delay: warmUpTime)
    }
}
func registerAndPlaySound(url: URL, delay: Double) {
    do {
        let file = try AVAudioFile(forReading: url)
        let format = file.processingFormat
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: AVAudioFrameCount(file.length)) else {
            return
        }
        try file.read(into: buffer)

        // Schedule the buffer "delay" seconds into the player's sample timeline.
        let sampleRate = buffer.format.sampleRate
        let sampleTime = AVAudioFramePosition(sampleRate * delay)
        let futureTime = AVAudioTime(sampleTime: sampleTime, atRate: sampleRate)

        player.scheduleBuffer(buffer, at: futureTime, options: [], completionHandler: nil)
        player.play()
    } catch {
        return
    }
}
Related
I am currently in the process of creating a music application that will allow the user to change the pitch and speed at which a sound is played. I am using buttons as the sound initiators, and every time a button is pressed, the corresponding sound starts playing.
This is my current code:
audioPlayer.stop()
engine.stop()
pitchControl.pitch = 0.0
engine = AVAudioEngine()
audioPlayer.volume = 1.0
let precursor = audioFiles[audioFileLinks[sender.tag]]
let path = Bundle.main.path(forResource: precursor, ofType: "mp3")!
let url = NSURL.fileURL(withPath: path)
let file = try? AVAudioFile(forReading: url)
let buffer = AVAudioPCMBuffer(pcmFormat: file!.processingFormat, frameCapacity: AVAudioFrameCount(file!.length))
do {
    try file!.read(into: buffer!)
} catch _ {}
engine.attach(audioPlayer)
engine.attach(pitchControl)
engine.attach(speedControl)
engine.connect(audioPlayer, to: speedControl, format: buffer?.format)
engine.connect(speedControl, to: pitchControl, format: buffer?.format)
engine.connect(pitchControl, to: engine.mainMixerNode, format: buffer?.format)
engine.prepare()
do {
    try engine.start()
} catch _ {}
audioPlayer.play()
The point of this code is to pick which audio track to play based on the tag of the button pressed by the user. The issue I am having is that every time I press a button, it plays the sound but keeps playing it indefinitely, even though the actual .mp3 file is only 2 seconds long. I am not sure what I am doing wrong; I suspect I am just not setting some value correctly.
All help is appreciated! Thank you in advance!
I am building a karaoke app with the ability to sing along with video, so here is my problem:
I am recording the user's video (video only, from the front camera) while applying voice filters with AudioKit to a separate audio recording.
Now on playback I want to play the video and the audio in sync, but I have not succeeded: the video and audio end up out of sync.
I am using AKPlayer for the audio so I can apply the voice effects, and VLCKit for playing the user's video.
do {
    //MARK: VLCKit part of the video setup
    Vlc_VideoPlayer = VLCMediaPlayer()
    Vlc_VideoPlayer.media = VLCMedia(url: recordVideoURL)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "time", options: [], context: nil)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "remainingTime", options: [], context: nil)
    Vlc_VideoPlayer.drawable = self.CameraView

    //MARK: AudioKit with AKPlayer setup
    file = try AKAudioFile(forReading: recordVoiceURL)
    player = AKPlayer(audioFile: file)
    self.player.preroll()
    delay = AKVariableDelay(player)
    delay.rampTime = 0.5
    delayMixer = AKDryWetMixer(player, delay)
    reverb = AKCostelloReverb(delayMixer)
    reverbMixer = AKDryWetMixer(delayMixer, reverb)
    booster = AKBooster(reverbMixer)
    tracker = AKAmplitudeTracker(booster)
    AudioKit.output = tracker
    try AudioKit.start()
} catch {
    print(error)
}
self.startPlayers()
Now the startPlayers function:
func startPlayers() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        if AudioKit.engine.isRunning {
            self.Vlc_VideoPlayer.audio.isMuted = true
            self.Vlc_VideoPlayer.play()
            self.player.isLooping = false
            self.player.play()
        } else {
            self.startPlayers()
        }
    }
}
I don't know anything about the VLC player, but with the built-in AVPlayer there is an option to sync to a clock:
var time: TimeInterval = 1 // 1 second in the future
videoPlayer.masterClock = CMClockGetHostTimeClock()
let hostTime = mach_absolute_time()
let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
let cmVTime = CMTimeMakeWithSeconds(time, preferredTimescale: videoPlayer.currentTime().timescale)
let futureTime = CMTimeAdd(cmHostTime, cmVTime)
videoPlayer.setRate(1, time: CMTime.invalid, atHostTime: futureTime)
AKPlayer then supports syncing to the mach_absolute_time() host time using its scheduling functions. With what you have above, the two will start close together, but there is no guarantee of any sync.
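For example, something along these lines would reference both players to the same host time (a sketch only; it assumes AKPlayer's play(at:) overload that takes an AVAudioTime):
let startDelay: TimeInterval = 1 // start both players 1 second from now

// One shared host-time reference for audio and video.
let futureHostTime = mach_absolute_time() + AVAudioTime.hostTime(forSeconds: startDelay)

// Audio: AKPlayer wraps AVAudioPlayerNode scheduling, so it can start at that host time...
player.play(at: AVAudioTime(hostTime: futureHostTime))

// ...and the video player is started against the same host time,
// e.g. with AVPlayer.setRate(_:time:atHostTime:) as in the snippet above.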
Trying to start two players will work out of pure luck, and unless you have a means to synchronize playback after it has started, it will not be perfect. Ideally, you should play the audio with VLC as well to make use of its internal synchronization tools.
To iterate on what you have right now, I would suggest starting playback with VLC until it has decoded the first frame, pausing, starting your audio, and continuing playback with VLC as soon as the first audio sample has been decoded. This will still not be perfect, but probably better.
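Something along these lines, reusing the KVO observation on "time" that you already register, would be a starting point (an untested sketch; treating the first time change as "first frame decoded" is an assumption on my part):
var didSyncStart = false

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    // The first "time" change is used as a rough signal that VLC has started decoding.
    guard keyPath == "time", !didSyncStart else { return }
    didSyncStart = true

    Vlc_VideoPlayer.pause()   // hold the first decoded frame
    player.play()             // start the AKPlayer audio
    Vlc_VideoPlayer.play()    // resume the video once the audio is rolling
}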
I'm trying to build an iOS application using Swift 4 that would play different sounds. Part of what I need to do is to limit the duration of the sound file played depending on some settings.
What I'd like to know is if it's possible to set the duration of the sound prior to playing it so it can stop after the set duration. If so, how?
I'm currently using AVFoundation's AVAudioPlayer, but I don't know if it can do this. My current code is shown below:
// resName set depending on settings
url = Bundle.main.url(forResource: resName, withExtension: "mp3")
do {
    audioPlayer = try AVAudioPlayer(contentsOf: url!)
    audioPlayer.prepareToPlay()
    audioPlayer.currentTime = 0
} catch let error as NSError {
    print(error.debugDescription)
}
audioPlayer.play()
Thanks in advance for the help :)
Well, since you are in control of playing the sounds, one way to deal with it would be to run a Timer when you start playing, and have it stop playback after the given time period:
let timeLimit = 1.6 // get it from settings
player.play()
Timer.scheduledTimer(withTimeInterval: timeLimit, repeats: false) { (timer) in
    player.stop()
}
And in case you need to cancel it, you can keep a reference to the timer and cancel it using timer.invalidate().
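For example (a sketch; stopTimer is just an illustrative property name, and player is the same player as above):
var stopTimer: Timer?

func play(withLimit timeLimit: TimeInterval) {
    player.play()
    stopTimer = Timer.scheduledTimer(withTimeInterval: timeLimit, repeats: false) { _ in
        self.player.stop()
    }
}

func cancelLimit() {
    stopTimer?.invalidate()
    stopTimer = nil
}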
As an audio engineer, I recommend editing your audio files to be exactly how you want them, to avoid any unwanted artifacts that can come from cutting them off in code.
I'm using AVAudioEngine to play sound files and record the output to a file. I have lots of sound effect files, each played by tapping a button. To play a file I create an AVAudioPlayerNode and connect it to the engine. After the file has played, I try to disconnect/detach the node in the completionHandler closure to free up memory. If I don't remove nodes, I will just keep adding new nodes and new connections to the engine.
However, the next time I try to play a file, the app just freezes. Here's my code:
let url = NSBundle.mainBundle().URLForResource(name, withExtension: "wav")!
let audioPlayerFile = try AVAudioFile(forReading: url)
let playerNode = AVAudioPlayerNode()
self.engine.attachNode(playerNode)
self.engine.connect(playerNode, to: self.engine.mainMixerNode, format: audioPlayerFile.processingFormat)
playerNode.scheduleFile(audioPlayerFile, atTime: nil, completionHandler: { () -> Void in
    self.engine.disconnectNodeOutput(playerNode)
    self.engine.detachNode(playerNode)
})
playerNode.play()
What's the best way to implement a functionality like this, i.e. playing multiple files, potentially overlapping each other, while recording?
I've had a similar problem. I've found that detaching the node on a background thread keeps the app from freezing.
playerNode.scheduleBuffer(buffer, atTime: nil, options: nil) { () -> Void in
    self.detach(playerNode)
}
And here's the method:
func detach(player: AVAudioPlayerNode) {
    let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
    dispatch_async(dispatch_get_global_queue(priority, 0)) { () -> Void in
        self.engine.detachNode(player)
    }
}
However, I'm still unsure this is a solution per se, because these background threads might be spinning infinitely without my knowledge.
When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches/artifacts on playback while testing. The audio does not glitch at all in the iOS simulator, but there is audible distortion on playback when I run the app on an actual iOS device. The distortion occurs randomly (the triggered sound sometimes sounds great, while other times it sounds distorted).
I've tried using different audio files, file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same and I'm stuck wondering what could be going wrong.
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)
let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
func startEngine() {
    var error: NSError?
    file.readIntoBuffer(buffer, error: &error)
    audioEngine.attachNode(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
    audioEngine.prepare()

    func start() {
        var error: NSError?
        audioEngine.startAndReturnError(&error)
    }
    start()
}
startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0
func play() {
    func scheduleBuffer() {
        playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        playerNode.prepareWithFrameCount(frameLength)
    }

    if playerNode.playing == false {
        scheduleBuffer()
        let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
        playerNode.playAtTime(time)
    } else {
        scheduleBuffer()
    }
}
// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
    play()
}
Update:
OK, I think I've identified the source of the distortion: the volume of the node(s) is too high at the output and causes clipping. After adding these two lines to my startEngine function, the distortion no longer occurred:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output, since my audio file itself does not clip. I'm guessing it might be a result of the way AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing clipping? I'm still looking for a solid understanding of why this occurs, so if anyone is willing/able to provide any clarification about this, that would be fantastic!
Not sure if this is the problem you experienced in 2015; it may be the same issue that @suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the one in the simulator. On macOS it is 44100 Hz, and on (late-model) iOS devices it is 48000 Hz.
So when you fill your buffer with 44100 samples per second on a 48000 Hz device, you get 3900 frames of silence for every second of audio. When played back, that doesn't sound like silence; it sounds like a glitch.
I used the mainMixer format when connecting my playerNode and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code.
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))

let pcmBuffer = AVAudioPCMBuffer(pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
                                 frameCapacity: AVAudioFrameCount(bufferSize))