I am currently using AVSpeechSynthesizer for text-to-speech. The audio session category used for playback is AVAudioSessionCategoryPlayback, and the AVAudioSession is set active (YES).
When playback starts, I see [TTS] TTSPlaybackCreate unable to initialize dynamics: -3000 in the Xcode console. When I pause the playback I get [TTS] _BeginSpeaking: couldn't begin playback.
My main issue is that MPRemoteCommandCenter doesn't get updated to the paused state when the TTS is stopped.
For the stop functionality, I am using this code:
BOOL speechStopped = [self.ttsSpeechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
if (!speechStopped) {
    [self.ttsSpeechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryWord];
}
I had AirPlay connected to an AirPlay station.
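For reference, my understanding is that the lock-screen / MPRemoteCommandCenter controls mirror what is in MPNowPlayingInfoCenter, so a paused state is normally signalled by zeroing the playback rate there. A rough Swift sketch of that idea (the helper name and elapsed-time parameter are purely illustrative):

import MediaPlayer

// Illustrative helper: reflect a paused state to the lock screen / Control Center
// by setting the now-playing playback rate to 0 after stopping the synthesizer.
func markNowPlayingAsPaused(elapsed: TimeInterval) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyPlaybackRate] = 0.0
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}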
I had a similar issue after updating my phone to the latest iOS version.
I spent a lot of time trying to understand why my app had stopped speaking via text-to-speech even though everything had worked before and the code seemed fine.
Siri was speaking aloud fine, and the sound in other apps worked as well.
My code produced no error message; the device log showed the following:
Error (730) / LearnByHeart.iOS(TTSSpeechBundle): TTSPlaybackCreate unable to initialize dynamics: -3000
Rebooting the phone did not help.
Funny as it is, everything got resolved by turning the physical mute (ring/silent) switch off and back on.
Hope this saves someone a day.
Hi, this might be a duplicate of this question, but I did not find the right answer there. I have developed a push-to-talk app like Zello using React Native. I want to play the audio message automatically, without any user interaction, when the app is in the background or killed. Whenever a user sends a real-time audio message, I send a push notification to the iOS app; after receiving the notification I invoke a function which establishes a socket connection with the WebRTC server and then joins the room in which the WebRTC audio broadcast is happening. I have verified that the socket connects and the room join succeeds after the push notification arrives while the app is in the background, but I cannot hear any audio and I receive the following error message.
AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
Then I set the category to AVAudioSessionCategoryPlayback and that error went away, but I still could not hear any audio.
The app works fine when it is in the foreground. I am using React Native and this happens only in the iOS app. In Xcode I have enabled push notifications, background fetch, background AirPlay, etc. Any help is appreciated.
You need to enable your app's Background Modes capability (Audio and AirPlay).
To enable this, select your iOS target in Xcode and go to the Signing & Capabilities tab.
Add Background Modes and check the Audio, AirPlay, and Picture in Picture option.
Also, set your AVAudioSession category to AVAudioSessionCategoryPlayback and set it active.
Example:
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playback, options: AVAudioSession.CategoryOptions.mixWithOthers)
    NSLog("Playback OK")
    try AVAudioSession.sharedInstance().setActive(true)
    NSLog("Session is Active")
} catch {
    NSLog("ERROR: CANNOT PLAY MUSIC IN BACKGROUND. Message from code: \"\(error)\"")
}
My app (made with Flutter, but this should not matter) has something like a timer functionality that makes a tick sound at regular intervals (between 10 s and 3 min). I have the Audio, AirPlay, and Picture in Picture background mode activated and the following in my Info.plist:
<key>UIBackgroundModes</key>
<array>
<string>audio</string>
</array>
but the audio still stops when the app runs in the background.
This occurs when running the app in profile mode; when I run in debug mode, the audio continues in the background.
What can I do to keep the audio playing in the background?
There is a relevant note in the audio_service 0.18.0 README which can help here:
Note that the audio background mode permits an app to run in the background only for the purpose of playing audio. The OS may kill your process if it sits idly without playing audio, for example, by using a timer to sleep for a few seconds. If your app needs to pause for a few seconds between audio tracks, consider playing a silent audio track to create that effect rather than using an idle timer.
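To illustrate the silent-track idea from that note, here is a rough Swift sketch of the native side (the file name "silence.wav" is just a placeholder for a short silent asset you would ship with the app):

import AVFoundation

// Sketch: keep the audio session busy between ticks by looping a silent file,
// so the OS does not suspend the app for sitting idle.
final class SilenceKeepAlive {
    private var player: AVAudioPlayer?

    func start() throws {
        // "silence.wav" is a placeholder name for a short silent asset.
        guard let url = Bundle.main.url(forResource: "silence", withExtension: "wav") else { return }
        player = try AVAudioPlayer(contentsOf: url)
        player?.numberOfLoops = -1   // loop indefinitely
        player?.volume = 0
        player?.play()
    }

    func stop() {
        player?.stop()
    }
}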
Well, unless you did this in native code (Swift/Objective-C), your code is running inside the Flutter engine - probably with some Dart Timer.periodic.
The Flutter engine may be killed off at any point in time when the app is in the background. On Android this can even happen when simply switching to the camera and back to the app afterwards. On iOS usually after some fixed time or on high system load.
In this regard, Flutter (and most other cross-platform toolkits) is very different from native apps.
You can start with the official documentation here: https://flutter.dev/docs/development/packages-and-plugins/background-processes
This may be a good article: https://medium.com/vrt-digital-studio/flutter-workmanager-81e0cfbd6f6e
I don't know enough about iOS, but I think there is no easy way to schedule execution at the small intervals you require. On Android, something like the AlarmManager can be used.
You can try writing the scheduling code natively and schedule it from the app via a MethodChannel when the period is set.
You can look at these libraries:
https://pub.dev/packages/workmanager (probably can't wake up at the small intervals you need)
https://pub.dev/packages/android_alarm_manager_plus (only for Android)
https://pub.dev/packages/audio_service (may give you some idea on how to achieve background execution on iOS)
Edit:
After reading more about enabling background audio on iOS, it seems to me that this only works when using
an AVAudioSession, which you are probably not using. To get this working you need some native code. The audio_service package uses such a session. You can try scheduling with Dart code and playing the sound via the audio_service package. Sounds like it could work, but I have no experience with this package.
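If you do go down the native route, a very rough Swift sketch of what the iOS side could look like (the channel name "app/background_audio" and method name "activateSession" are made up for illustration):

import Flutter
import AVFoundation

// Sketch: expose a MethodChannel that activates a playback audio session
// natively when the Dart side asks for it (e.g. before playing a tick).
func registerBackgroundAudioChannel(controller: FlutterViewController) {
    let channel = FlutterMethodChannel(name: "app/background_audio",
                                       binaryMessenger: controller.binaryMessenger)
    channel.setMethodCallHandler { call, result in
        guard call.method == "activateSession" else {
            result(FlutterMethodNotImplemented)
            return
        }
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, options: [])
            try AVAudioSession.sharedInstance().setActive(true)
            result(nil)
        } catch {
            result(FlutterError(code: "audio_session", message: "\(error)", details: nil))
        }
    }
}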
Please pay attention to the answer from #RyanHeise – he is correct about using an audio session: in the background you should play either sound or silence. As soon as audio is paused, the app can be suspended.
Also, an important note: when the app enters the background, scheduled timers will pause. That's why you might think it stopped working. Do not rely on Timer-based scheduling in the background – rely on events from the system.
I uploaded an archive to the App Store and am getting a crash when I try to play an intro sound. I am using AVAudioEngine to play the sound. When I compile and run the code through Xcode everything works fine. When I upload to TestFlight and run my app as an internal tester, the app crashes. The crash report is:
If I use AVAudioPlayer to play that sound, it's fine. I can't understand what the problem with AVAudioEngine is. Any advice?
I faced the same exception, but only in the release build of my app and only on an iPhone 7.
The exception seems to occur at the point where the audio session category changes.
In my case, it was changing from AVAudioSessionCategorySoloAmbient to AVAudioSessionCategoryPlayAndRecord with AVAudioSessionCategoryOptions.defaultToSpeaker.
I found a workaround that works, at least for me.
The following article
https://forums.developer.apple.com/thread/65656
explains that this kind of exception occurs when multiple input audio units are initialized.
To prevent a second input audio unit from being initialized,
I added the following calls before changing the audio session category:
AudioOutputUnitStop((engine.inputNode?.audioUnit)!)
AudioUnitUninitialize((engine.inputNode?.audioUnit)!)
engine is the instance of AVAudioEngine.
I hope it will help you guys!
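Putting the pieces together, a rough Swift sketch of the workaround described above (engine is assumed to be your existing AVAudioEngine instance; modern SDKs expose inputNode as non-optional):

import AVFoundation
import AudioToolbox

// Sketch: stop and uninitialize the input audio unit before switching the
// session category, so that a second input audio unit is not initialized.
func switchToPlayAndRecord(engine: AVAudioEngine) throws {
    if let inputUnit = engine.inputNode.audioUnit {
        AudioOutputUnitStop(inputUnit)
        AudioUnitUninitialize(inputUnit)
    }
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .defaultToSpeaker)
    try AVAudioSession.sharedInstance().setActive(true)
}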
I have little hope that anyone has encountered this, but there seems to be a system-wide bug with AVPlayer. I've tested with my app along with other apps, including Pandora.
Namely, and randomly: if you successfully open the app and begin playing, then put the app in the background, perform a variety of actions (such as playing media in other apps or making a phone call), and then come back to the app, the AVPlayers don't play.
I've replicated this many times, though inconsistently, with Pandora and the app that I'm working on.
I've logged my code and have not found any errors in playback. It just doesn't play.
Has anyone else experienced this strange issue? I have spent countless days on it and am now desperate.
Looks like the AVFoundation media server is crashing and resetting, causing this problem.
The solution:
https://developer.apple.com/library/ios/qa/qa1749/_index.html
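In short, QA1749 says to listen for the media-services-were-reset notification and rebuild your players when it fires. A minimal Swift sketch of that (the observer token name is illustrative):

import AVFoundation

// Sketch: when the media server resets, existing players become invalid.
// Recreate them and reactivate the session in response to this notification.
let resetObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.mediaServicesWereResetNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { _ in
    // Dispose of and recreate your AVPlayer / AVAudioEngine objects here,
    // then reactivate the audio session.
    try? AVAudioSession.sharedInstance().setActive(true)
}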
In my iOS app on all previous versions of the OS, we record audio occasionally, then sleep for a while, then record again, and cycle forever (the sleep is to preserve battery). This worked fine up to iOS 7, even when the app was in the background. Now, when the app is in the background, the call to AudioQueueStart fails to start recording with error -16981. I can't find this error code in the documentation or on the web, and if I turn it into an NSError it says "The operation couldn't be completed. (OSStatus error -16981.)", which isn't all that helpful.
I have a theory that Apple is closing a hole here, the idea being: why would you want to start recording from a background process unless you are spying? Well, with the user's consent (signed and paid for!), that's exactly what we are doing.
So, can anyone confirm or deny that this is expected, or suggest what I might be able to do about it? It's a bit of a killer for our app. I have filed it as a bug with Apple and will try to report progress here.
UPDATE: 3rd October 2013
Although the previous answer seemed to work for a while, it has now stopped working with error -12985, which is because another app has turned on audio. This is of course why I need to use the mixing flag.
UPDATE:
iOS 7.0.3 (and later) seem to have fixed this issue completely.
After playing with different audio session properties, I found that the -16981 error occurs when kAudioSessionProperty_OverrideCategoryMixWithOthers is enabled (TRUE). As soon as I set it to 0, AudioQueueStart() executes successfully. So, before starting the audio session, try:
UInt32 allowMixing = 0;
OSStatus status = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers,
                                          sizeof(allowMixing),
                                          &allowMixing);
Clearly, this is a behavior change in iOS 7. As mentioned before, the documentation does not list the -16981 error code.