I have a legacy video streaming library that appears to be broken on iPhone 11 (and later) devices. After digging into the problem, it seems the code that initializes the AVAudioSession is failing because the call to AVAudioSession.sharedInstance().setPreferredSampleRate(44100) does not change the actual sample rate, which stays at 48000 Hz.
Has anyone else faced this? I know the method name says "preferred" and a change is not guaranteed, but it worked on all previous devices.
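For reference, a minimal sketch of the pattern involved (not the poster's actual code): request 44.1 kHz, then read back what the hardware actually granted, since on iPhone 11 and later the built-in route typically stays at 48 kHz.

```swift
import AVFoundation

// Sketch: request 44.1 kHz, but always verify what the hardware granted.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(44100)
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}

// The preference is only a hint; on newer hardware the route may stay at 48 kHz,
// so downstream code must be prepared to resample (e.g. via AVAudioConverter)
// instead of assuming 44.1 kHz.
let actualRate = session.sampleRate
if actualRate != 44100 {
    print("Hardware kept \(actualRate) Hz; configure a converter here.")
}
```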
Issue description
AVSampleBufferDisplayLayer seems to hang on iOS 12.3.1 (iOS 12.2 and later are also affected) after
a reboot. It looks like after about 5 minutes everything works fine again.
The issue is not reproducible on iOS 11.
In our production code we don't use AVAssetReader, so please ignore any minor issues
with it.
I can make the application hang in AVSampleBufferDisplayLayer init, enqueue(_:), and
requestMediaDataWhenReady(on:using:).
Please advise how we should implement AVSampleBufferDisplayLayer correctly
(examples are welcome), especially when we have to maintain and exchange
many display layers at once.
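Since examples were requested, here is a minimal sketch of the commonly recommended enqueue pattern: feed the layer only from its own callback, on a single serial queue, and flush on seek. `nextSampleBuffer()` is a hypothetical stand-in for whatever demuxer produces CMSampleBuffers.

```swift
import AVFoundation

// Sketch: only enqueue from the layer's callback, on one serial queue.
let layer = AVSampleBufferDisplayLayer()
let queue = DispatchQueue(label: "video.enqueue")

layer.requestMediaDataWhenReady(on: queue) {
    while layer.isReadyForMoreMediaData {
        // nextSampleBuffer() is a hypothetical helper: your demuxer/decoder.
        guard let sample = nextSampleBuffer() else {
            layer.stopRequestingMediaData()   // stop when the source runs dry
            return
        }
        layer.enqueue(sample)
    }
}

// On seek: flush the layer before enqueueing samples from the new position.
// layer.flush()
```

Whether this avoids the post-reboot hang on iOS 12.3.1 is exactly what the issue is about; the sketch just shows the documented usage pattern.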
How to reproduce the issue:
Clone repo: https://github.com/mackode/AVSampleBufferDisplayLayer_Hanging
Download:
Tears of Steel segment
Put tearsofsteel_4k.mov_1918x852_2000.mp4 in the Resources/ directory
Open AVSampleBufferDisplayLayer_BlackScreen.xcodeproj
Take an iOS 12.3.1 device and reboot it
Start debugging AVSampleBufferDisplayLayer_BlackScreen
The video should start playing; then move the slider (seek) once or a few more times
Observed result:
The video usually hangs in AVSampleBufferDisplayLayer methods
Expected result:
Video seeks and continues to play
Can someone help me solve this mystery?
My configuration
iPhone X
iOS 12
Problem
Since iOS 11/12, my app's audio has a periodic crackling/popping sound that gets worse (more noticeable) the louder or more constant the audio is.
Troubleshooting
I played an 800 Hz sine wave from djay2 through AudioBus into my app and saved my app's output to a file.
Loading my app's output into Audacity, I can see that the crackling occurs every 14,112 samples, i.e. every 0.320 seconds.
Has anyone got any idea where I should start looking? Changing the internal configuration of my app between 44.1 kHz and 48 kHz appears to make no difference. I thought it might be due to downsampling from the hardware sample rate.
Toggling Inter-App Audio Sync on/off within AudioBus appears to have some effect (1 in every ~8 toggles will stop the crackling).
I assume this is because the AudioSession is being restarted, or something similar.
Has anyone got any idea what might be causing this or has experienced this before?
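A quick sanity check on the numbers in the post supports the downsampling suspicion (the 48 kHz hardware rate is an assumption carried over from the other posts here):

```python
# A glitch every 14,112 samples, 0.320 s apart, implies the audio path is
# running in the 44.1 kHz domain: 14112 / 0.320 = 44100.
samples_between_glitches = 14_112
period_s = 0.320

implied_rate = samples_between_glitches / period_s
print(round(implied_rate))  # 44100

# If the hardware is actually running at 48 kHz, the same 0.320 s period is a
# different sample count, consistent with a 44.1 -> 48 kHz conversion artifact.
print(round(48_000 * period_s))  # 15360
```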
Thanks,
Anthony
I have a site where, every time JavaScript executes document.getElementById('audioID').load(); or document.getElementById('audioID').play();, my iPad/iPod running iOS 8 in standalone mode suddenly crashes and exits to the home screen. The same site works perfectly fine in the normal Safari browser on iOS 8. I could not reproduce this issue on iOS 7 either.
This issue seems similar to the following Stack Overflow question, which appears to describe an iOS 8 bug: Why HTML5 video doesn't play in IOS 8 WebApp(webview)?, except that my issue deals with audio instead of video, and it is not just failing to play the audio but crashing the standalone window.
Has anyone else experienced this, or does anyone know what exactly could be causing standalone mode to crash?
[UPDATE]
It appears that the combination of a submit button and an attempt to play audio in iOS 8's standalone mode causes the crash. I have created a quick gist that demos this bug here: https://gist.github.com/macmadill/262d65ad1c02936fca4b
[UPDATE]
I retested this bug on 3 different iPads, here are my results:
iOS 8.1.2 - standalone mode crashed
iOS 8.3 - no issue
iOS 9.2.1 - no issue
I ran across the same issue. As a slightly complex workaround, it turns out that using the Web Audio API worked even when the "web app" was saved to the iOS home screen. See the following:
https://developer.apple.com/library/safari/documentation/AudioVideo/Conceptual/Using_HTML5_Audio_Video/PlayingandSynthesizingSounds/PlayingandSynthesizingSounds.html#//apple_ref/doc/uid/TP40009523-CH6-SW1
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext
http://codetheory.in/solve-your-game-audio-problems-on-ios-and-android-with-web-audio-api/
http://www.html5rocks.com/en/tutorials/webaudio/intro/
Some of the examples use a deprecated API. For example:
noteOn(x) is now start(x).
createGainNode() is now createGain().
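To tie those renames together, here is a minimal sketch of the Web Audio workaround using the current (non-deprecated) API names; pass it a real AudioContext in the browser. On iOS the context must be created/resumed from a user gesture.

```javascript
// Plays a short beep through an oscillator -> gain -> destination graph.
// Uses start()/stop() and createGain(), not the deprecated noteOn()/createGainNode().
function createBeep(ctx, freq = 440, durationSec = 0.2) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();            // was createGainNode()
  osc.frequency.value = freq;
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start(0);                             // was noteOn(0)
  osc.stop(ctx.currentTime + durationSec);  // was noteOff(x)
  return osc;
}

// In the browser (must be triggered from a user gesture on iOS):
// const ctx = new (window.AudioContext || window.webkitAudioContext)();
// createBeep(ctx, 800, 0.5);
```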
The only solution is to use the Web Audio API.
I found out that
https://github.com/goldfire/howler.js/
is a great wrapper that makes it easy to use.
Good luck
Since upgrading to Xcode 6 and iOS 8, I've noticed serious issues with AVSpeechSynthesizer. Prior to the upgrade it worked perfectly, but now several issues have arisen.
Speech utterances play at a much faster rate than they did prior to the upgrade.
When I queue up two speech utterances, it simply skips over the first utterance and plays the second one first. (This only occurs on the first run of the speech synthesizer; from the second run onward it works properly.)
Please, any help would be greatly appreciated. Thanks in advance.
For the second issue, see this answer for AVSpeechUtterance - Swift - initializing with a phrase.
As for me, iOS 8 also did not properly support languages other than the phone language plus English.
Update, Dec 2014: Xcode 6.2 beta 2 resolved the TTS issues in the simulator and (maybe) the TTS rate issue.
It looks to me as though a user can only hear a voice if they have specifically downloaded it in their accessibility settings.
What I have not been able to do is work out how to tell which voices they have downloaded.
I have discovered a horrible hack to make voices that have not been specifically downloaded play.
To do so I had to have two synthesizers running and get one to run through all the voices saying something. Then the other synthesizer could use any of the voices. As I say, this is a horrible hack and I cannot guarantee its reliability. In addition, it may stop working in a future version of iOS 8.
In my own apps I have chosen to make a library that cycles through all the voices. Where a voice takes more than zero time to say a phrase, it is a "good" voice and I offer it to the user. This has the advantage that it is likely to be robust against changes in the iOS version.
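A less fragile alternative worth trying (a sketch, not a guaranteed fix for the iOS 8 behavior described above): ask the system directly which voices are installed via AVSpeechSynthesisVoice.speechVoices(), which has existed since iOS 7, though what it reports for not-yet-downloaded voices may vary by iOS version.

```swift
import AVFoundation

// Enumerate the voices the synthesizer reports as available.
let voices = AVSpeechSynthesisVoice.speechVoices()
for voice in voices {
    print(voice.language)
}

// Prefer a voice matching the current language, with a hedged fallback.
let utterance = AVSpeechUtterance(string: "Hello")
utterance.voice = voices.first { $0.language == AVSpeechSynthesisVoice.currentLanguageCode() }
    ?? AVSpeechSynthesisVoice(language: "en-US")
```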
I'm streaming music from SoundCloud using their streaming APIs, which in turn use Apple's AudioToolbox framework. You can find the Git repository here.
The app streamed fine on iOS 5 and below. Now, with iOS 6, I'm getting EXC_BAD_ACCESS any time an AudioQueue is disposed via AudioQueueDispose. I've tried commenting out this line; sure enough, it doesn't crash anymore, but then my audio streams keep playing and never get deallocated.
I'm not really sure what could be causing this. Is this a bug that needs to be reported to Apple? Or some new behavior in iOS 6 that inadvertently causes the audio queue to be referenced after it has been disposed? Has anyone noticed behavior like this?
AudioQueueDispose works on iOS 6 devices without fail if you pass true as the second parameter, which disposes the queue immediately. But the same thing is not working on iOS 6.1 devices. Can anybody help me with this issue? Thanks in advance.
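For clarity, a minimal sketch of the teardown being discussed (the second parameter of both calls is `inImmediate`; passing false instead lets already-queued buffers drain first):

```swift
import AudioToolbox

// Sketch: stop the queue, then dispose it immediately, so no callbacks
// can fire against a dead queue.
func tearDown(queue: AudioQueueRef) {
    AudioQueueStop(queue, true)       // true = stop immediately
    AudioQueueDispose(queue, true)    // true = dispose immediately
}
```

Whether this avoids the iOS 6.1 crash the answer mentions is unconfirmed; the sketch only shows the documented call order.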