I'm streaming music from SoundCloud using their streaming APIs, which in turn use Apple's AudioToolbox framework. You can find the git repository here.
The app was streaming fine on iOS 5 and below. Now, with iOS 6, I'm getting EXC_BAD_ACCESS any time an AudioQueue is disposed via AudioQueueDispose. I've tried commenting out this line; sure enough it doesn't crash anymore, but obviously my audio streams then keep playing and never get dealloc'd.
I'm not really sure what could be causing this. Is this a bug that needs to be reported to Apple? Or is it some new feature in iOS 6 that inadvertently causes the audio queue to be referenced somewhere after it has been disposed? Has anyone noticed behavior like this?
AudioQueueDispose works on iOS 6 devices without fail if you pass true as the second parameter, so the queue is stopped and disposed immediately. But the problem is that the same thing is not working on iOS 6.1 devices. Can anybody help me with this issue? Thanks in advance.
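For reference, a minimal Swift sketch of that teardown (it assumes audioQueue is an AudioQueueRef created earlier, e.g. with AudioQueueNewOutput, and started with AudioQueueStart):

    import AudioToolbox

    // Minimal sketch: `audioQueue` is assumed to be an AudioQueueRef created
    // and started elsewhere in the streaming code.
    func tearDown(_ audioQueue: AudioQueueRef) {
        // Stop immediately (true = don't wait for queued buffers to finish).
        AudioQueueStop(audioQueue, true)

        // Dispose of the queue. Passing true disposes immediately; passing
        // false waits until any queued buffers have been played.
        let status = AudioQueueDispose(audioQueue, true)
        if status != noErr {
            print("AudioQueueDispose failed with OSStatus \(status)")
        }
    }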
Situation
I have a component playing audio in my NativeScript app (recently updated to NS 7). Since the app should continue to play audio when it is in the background or the device is locked, I need a way to control the audio. iOS comes with the NowPlayingView for this.
Problem
Since I cannot find a link to this feature in the NativeScript documentation, or an NS plugin for it, I am wondering if and how I could use this feature in my app. I have not yet dug into "native land" and have primarily been using NativeScript's own features, so I do not have any first steps to guide me in the right direction for implementing such a feature, if it is possible at all.
Question
Maybe you could help me with some rough tips, or even some code snippets if you have already implemented the NowPlayingView?
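For context, the lock-screen "Now Playing" controls appear to be driven by the native MPNowPlayingInfoCenter and MPRemoteCommandCenter classes. Below is a rough Swift sketch of that native side (the metadata values are placeholders); from NativeScript these classes would presumably have to be reached through the iOS runtime bridge or a plugin:

    import MediaPlayer

    // Sketch of the native APIs behind the lock-screen "Now Playing" controls.
    // Title, artist and duration values here are illustrative placeholders.
    func configureNowPlaying() {
        // Publish metadata shown on the lock screen / Control Center.
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "Track title",
            MPMediaItemPropertyArtist: "Artist name",
            MPMediaItemPropertyPlaybackDuration: 240.0,
            MPNowPlayingInfoPropertyElapsedPlaybackTime: 0.0,
            MPNowPlayingInfoPropertyPlaybackRate: 1.0
        ]

        // Respond to the remote play/pause commands.
        let commandCenter = MPRemoteCommandCenter.shared()
        _ = commandCenter.playCommand.addTarget { _ in
            // resume playback here
            return .success
        }
        _ = commandCenter.pauseCommand.addTarget { _ in
            // pause playback here
            return .success
        }
    }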
I have a legacy video streaming library that seems to be broken on iPhone 11 (and newer) devices. After digging into the problem, it seems that the code that configures the AVAudioSession is failing because the call to AVAudioSession.sharedInstance.setPreferredSampleRate(44100) does not change the actual sample rate, which stays at 48000 Hz.
Has anyone else faced this? I know that the method says preferred and it is not guaranteed that it will change the audio sample rate, but it was working on all previous devices.
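For context, a minimal sketch of the kind of session setup involved (the category, mode, and values are illustrative), followed by a check of the rate the hardware actually granted:

    import AVFoundation

    // Sketch: request 44.1 kHz and then check what the hardware actually gave us.
    func configureSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord, mode: .default)
            // "Preferred" is only a hint; the hardware may ignore it.
            try session.setPreferredSampleRate(44100)
            try session.setActive(true)
        } catch {
            print("AVAudioSession setup failed: \(error)")
        }

        // On newer hardware (e.g. iPhone 11) this may still report 48000.
        print("Actual sample rate: \(session.sampleRate) Hz")
    }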
There are previous topics about waking up an application from the background with BLE advertisement (e.g. How to wake up iOS app with bluetooth signal (BLE), Android / iOS - BLE - wake up a terminated application when a BLE device connects).
However, my question is not about that, since we had this feature working fine up to and including iOS 9.2.
As of iOS 9.3 the feature no longer works the way it did before: an application terminated by the user swiping it away is not woken up. Nothing changed on the BLE advertisement originator.
After rechecking various parameters and rereading the Apple documentation, nothing springs to mind. The Apple documentation does not mention any change either, unless we missed something.
Have other people noticed this issue? Are you aware of a solution?
We have written to Apple and are awaiting an answer, but maybe somebody here has the right tip.
Many thanks in advance for the attention.
UPDATE: After more testing, it seems that the wake-up only fails when the user swipes the application away; in iOS 9.2 it worked even in that case.
Our initial testing was more manual and gave us the impression that there was a broader underlying issue. However, we are not sure why this change took place without any notification from Apple. (The text above was amended based on this update.)
UPDATE 2: This issue is not present anymore in iOS 10.
It turns out after a reply from Apple that this is a (new) intended behaviour.
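For context, a rough sketch of the kind of background-capable central setup involved (the service UUID and restore identifier are placeholders, and the "bluetooth-central" background mode must be declared in Info.plist). This is only to illustrate the setup; per Apple's reply above, relaunch after the user swipes the app away is simply no longer intended behaviour:

    import CoreBluetooth

    // Rough sketch of a background-capable BLE central. The service UUID and
    // restore identifier are placeholders.
    final class BLEWaker: NSObject, CBCentralManagerDelegate {
        private var central: CBCentralManager!
        private let serviceUUID = CBUUID(string: "180D") // placeholder service

        override init() {
            super.init()
            central = CBCentralManager(
                delegate: self,
                queue: nil,
                options: [CBCentralManagerOptionRestoreIdentifierKey: "com.example.ble-central"]
            )
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            guard central.state == .poweredOn else { return }
            // Background scanning requires explicit service UUIDs.
            central.scanForPeripherals(withServices: [serviceUUID], options: nil)
        }

        // Called when iOS relaunches the app to restore the central's state.
        func centralManager(_ central: CBCentralManager, willRestoreState dict: [String: Any]) {
            // Reattach to any peripherals iOS preserved for us.
        }
    }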
didEnterRegion/didExitRegion events stopped working from the background after update to 9.3
I have an app that has been working almost perfectly for the last 6 months, and after the update these events are no longer getting called. I started this app over last year when I heard iOS 9 was coming out, and when I couldn't get the old one working, I started a new one using Swift instead.
After some time and a LOT of driving in and out of my region, I got the app to work more reliably than it ever had before. I have several devices using the app, and when all of them updated to iOS 9.3/9.3.1, the app stopped calling the didEnterRegion/didExitRegion events completely.
I have a ticket open with Apple, but I am getting a lot of pushback about the service, and claims that 9.3 didn't change background location at all.
My devices use AT&T or Verizon, we have tried Wi-Fi Assist on and off, and I even wiped a system, formatted the HD, and installed El Capitan and Xcode 7.3, and it hasn't helped.
I also found an issue with the Devices tool: when you download the container and open the package, the Documents folder is empty. I'm not getting a warm fuzzy feeling for Apple right now, and I am sure someone is feeling my MEGA MIND WEDGIE at this moment.
Help....
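For reference, a minimal sketch of the kind of region-monitoring setup in use (the coordinates, radius, and identifier are placeholders; "Always" authorization and the usual Info.plist usage strings are assumed):

    import CoreLocation

    // Minimal region-monitoring sketch. Coordinates, radius and identifier are
    // placeholders; background events need "Always" authorization.
    final class RegionWatcher: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
            manager.requestAlwaysAuthorization()

            let region = CLCircularRegion(
                center: CLLocationCoordinate2D(latitude: 37.33, longitude: -122.03),
                radius: 200,
                identifier: "home"
            )
            region.notifyOnEntry = true
            region.notifyOnExit = true
            manager.startMonitoring(for: region)
        }

        func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
            print("Entered \(region.identifier)")
        }

        func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
            print("Exited \(region.identifier)")
        }
    }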
I have a site where every time the JavaScript executes document.getElementById('audioID').load(); or document.getElementById('audioID').play(); it causes my iPad/iPod running iOS 8 in standalone mode to suddenly crash and exit to the home screen. The same site running in the normal Safari browser on iOS 8 works perfectly fine. I could not reproduce this issue on iOS 7 either.
This issue seems similar to the following Stack Overflow question, which appears to describe an iOS 8 bug: Why HTML5 video doesn't play in IOS 8 WebApp(webview)?, except that my issue deals with audio instead of video, and it is not just failing to play the audio but crashing the standalone window.
Has anyone else experienced this, or does anyone know what exactly could be causing standalone mode to crash?
[UPDATE]
It appears that the combination of a submit button and an attempt to play audio in iOS 8's standalone mode causes the crash. I have created a quick gist that demos this bug here: https://gist.github.com/macmadill/262d65ad1c02936fca4b
[UPDATE]
I retested this bug on 3 different iPads, here are my results:
iOS 8.1.2 - standalone mode crashed
iOS 8.3 - no issue
iOS 9.2.1 - no issue
I ran across the same issue. As a slightly more complex workaround, it turns out that using the Web Audio API worked even when the "web app" was saved to the iOS home screen. See the following:
https://developer.apple.com/library/safari/documentation/AudioVideo/Conceptual/Using_HTML5_Audio_Video/PlayingandSynthesizingSounds/PlayingandSynthesizingSounds.html#//apple_ref/doc/uid/TP40009523-CH6-SW1
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext
http://codetheory.in/solve-your-game-audio-problems-on-ios-and-android-with-web-audio-api/
http://www.html5rocks.com/en/tutorials/webaudio/intro/
Some of the examples use a deprecated API. For example:
noteOn(x) is now start(x).
createGainNode() is now createGain().
The only solution is to use the Web Audio API.
I found out that
https://github.com/goldfire/howler.js/
is a great wrapper that makes it easy to use.
Good luck.
Since upgrading to Xcode 6 and iOS 8, I've noticed serious issues with AVSpeechSynthesizer. Prior to the upgrade it worked perfectly, but now several issues have arisen.
Speech utterances are playing at a much faster rate than they were prior to the upgrade.
When I queue up 2 speech utterances, it simply skips over the first utterance and plays the second one first. (This only occurs on the first run of the speech synthesizer. The second run and on works properly.)
Please, any help would be greatly appreciated. Thanks in advance.
For the second issue, see this answer: AVSpeechUtterance - Swift - initializing with a phrase.
In my case, iOS 8 also did not properly support languages other than the phone language plus English.
Update, Dec 2014: Xcode 6.2 beta 2 resolved the TTS issues in the simulator and (maybe) the TTS rate issue.
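For the rate problem specifically, one thing that may be worth trying (this is my own guess, not a confirmed fix) is to pin each utterance's rate explicitly instead of relying on the default. A minimal Swift sketch, with illustrative phrases and an illustrative rate:

    import AVFoundation

    // Rough sketch of queueing two utterances and setting the rate explicitly
    // (the phrases and the chosen rate are illustrative).
    let synthesizer = AVSpeechSynthesizer()

    let first = AVSpeechUtterance(string: "First sentence.")
    let second = AVSpeechUtterance(string: "Second sentence.")

    for utterance in [first, second] {
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        // Pick a rate between the framework's minimum and default rates
        // rather than relying on the iOS 8 default.
        utterance.rate = (AVSpeechUtteranceMinimumSpeechRate + AVSpeechUtteranceDefaultSpeechRate) / 2
        synthesizer.speak(utterance) // utterances play in the order they are enqueued
    }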
It looks to me as though a user can only hear the voice if they have specifically downloaded it in their accessibility settings.
What I have not been able to do is work out how to tell which voices they have downloaded.
I have discovered a horrible hack to make voices that have not been specifically downloaded play.
To do so I had to have two synthesizers running and get one to run through all the voices saying something. Then the other synthesizer could use any of the voices. As I say, this is a horrible hack, and I cannot guarantee its reliability. In addition, it may stop working in a future version of iOS 8.
In my own apps I have chosen to make a library and get it to cycle through all the voices. Where a voice takes more than zero time to say a phrase, it is a "good" voice and I offer it to the user. This has the advantage that it is likely to be robust against changes in the iOS version.
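A rough Swift sketch of that probing idea (the phrase and the timing threshold are illustrative, and I cannot vouch for it beyond what is described above):

    import AVFoundation

    // Sketch of the "cycle through the voices" idea: speak a short phrase with
    // each reported voice and measure how long it takes. Voices that take more
    // than a negligible time are treated as usable. The threshold is illustrative.
    final class VoiceProber: NSObject, AVSpeechSynthesizerDelegate {
        private let synthesizer = AVSpeechSynthesizer()
        private var startTimes: [AVSpeechUtterance: Date] = [:]
        private(set) var usableLanguages: [String] = []

        override init() {
            super.init()
            synthesizer.delegate = self
        }

        // Queue one short utterance per reported voice; they play back serially.
        func probeAllVoices() {
            for voice in AVSpeechSynthesisVoice.speechVoices() {
                let utterance = AVSpeechUtterance(string: "test")
                utterance.voice = voice
                synthesizer.speak(utterance)
            }
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                               didStart utterance: AVSpeechUtterance) {
            startTimes[utterance] = Date()
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                               didFinish utterance: AVSpeechUtterance) {
            guard let language = utterance.voice?.language,
                  let start = startTimes[utterance] else { return }
            // The "more than zero time" heuristic described above.
            if Date().timeIntervalSince(start) > 0.1 {
                usableLanguages.append(language)
            }
        }
    }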