When I try to run an XNA application on a Windows 7 machine that has an audio device but no speakers plugged in, I get the following error message:
Could not find a suitable audio device. Verify that a sound card is
installed, and check the driver properties to make sure it is not
disabled
Is there a way to catch this error and ignore it? I don't really care whether the player has sound or not; the game should still run in this case.
In theory it should throw a NoAudioHardwareException.
So try doing something with audio (SoundEffect.MasterVolume comes to mind as a possibility, since it is a static property) and see if you can catch the exception. If you do catch one, simply do no further audio work.
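As a sketch of that probe (the class and property names around the try/catch are my own, not from the original post):

```csharp
using Microsoft.Xna.Framework.Audio;

public static class AudioProbe
{
    // Set once at startup; check it before every audio call thereafter.
    public static bool AudioAvailable { get; private set; }

    public static void Probe()
    {
        try
        {
            // Reading the static property forces the audio device to
            // initialize, which throws if no usable device is present.
            float _ = SoundEffect.MasterVolume;
            AudioAvailable = true;
        }
        catch (NoAudioHardwareException)
        {
            AudioAvailable = false;
        }
    }
}
```

Call AudioProbe.Probe() once during game initialization, then guard all SoundEffect/Song calls with AudioProbe.AudioAvailable.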
AudioKit, macOS:
When I do mixer.addInput(myAudioPlayer), the program outputs this message:
2021-09-16 11:41:44.578038+0200 ShowTime[16140:1611137] throwing -10878
... numerous times.
Do you know what -10878 is, and how to fix it?
I would also be interested in knowing what "ShowTime[16140:1611137]" means. Can I use these numbers to track where my program fails?
Thanks.
This happens independently of AudioKit.
It has to do with AVAudioEngine or some lower-level component that AVAudioEngine uses.
I can verify that it happens specifically when I connect an AVAudioPlayerNode to the engine's mainMixerNode. If I connect the player to the outputNode directly instead, it doesn't happen... but I also suspect that it's harmless -- it also happens in known production code and in Apple's code samples.
I only see this "bug" when using a simulator running iOS 15.2. It doesn't occur on my real device (iOS 14.4), OR on a simulator running 14.4.
This means it is either a bug or simply "log noise" introduced some time between iOS 14.4 and 15.2. I haven't tested any versions in between.
PS - I don't see the "ShowTime[####:####]" part of the log myself, so that part is probably coming from AudioKit wrapping the log with NSLog.
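For reference, a minimal sketch of the two wirings described above (variable names are mine) -- connecting the player through the main mixer, which triggers the repeated -10878 log on an iOS 15.2 simulator, versus connecting it straight to the output node:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

let format = engine.outputNode.outputFormat(forBus: 0)

// Variant 1: through the main mixer -- works, but emits
// "throwing -10878" repeatedly on iOS 15.2 simulators.
engine.connect(player, to: engine.mainMixerNode, format: format)

// Variant 2: straight to the output node -- no log noise, but you
// lose the implicit mixing the main mixer provides.
// engine.connect(player, to: engine.outputNode, format: format)

do {
    try engine.start()
    player.play()
} catch {
    print("Engine failed to start: \(error)")
}
```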
I am currently using AVSpeechSynthesizer for Text to Speech. Category used for the playback is AVAudioSessionCategoryPlayback and AVAudioSession is set to Active YES.
At the start of playback, I get [TTS] TTSPlaybackCreate unable to initialize dynamics: -3000 in the Xcode console. When I pause the playback I get [TTS] _BeginSpeaking: couldn't begin playback.
My major issue is that MPRemoteCommandCenter doesn't get updated to the paused state when TTS stops.
For the stop functionality, I am using this code:
BOOL speechStopped = [self.ttsSpeechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
if (!speechStopped) {
    [self.ttsSpeechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryWord];
}
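I don't have a fix for the -3000 log, but for keeping the lock-screen controls in sync with speech, one approach (a sketch -- the class name and the idea of driving it from delegate callbacks are my assumptions, not from the original post) is to update MPNowPlayingInfoCenter's playback rate from the AVSpeechSynthesizerDelegate callbacks:

```swift
import AVFoundation
import MediaPlayer

final class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    // Reflect the real playback state in the now-playing info, so the
    // remote command center shows the correct play/pause state.
    private func setPlaybackRate(_ rate: Double) {
        var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
        info[MPNowPlayingInfoPropertyPlaybackRate] = rate
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        setPlaybackRate(1.0)
    }
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didPause utterance: AVSpeechUtterance) {
        setPlaybackRate(0.0)
    }
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        setPlaybackRate(0.0)
    }
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        setPlaybackRate(0.0)
    }
}
```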
I had AirPlay connected to an AirPlay station.
I had a similar issue after updating iOS to the latest version on my phone.
I spent a lot of time trying to understand why my app had stopped talking using TextToSpeech, while everything had worked before and the code seemed fine.
Siri was talking aloud fine, and the sound in other apps worked as well.
Mine gave no error message in the code, and the following appeared in the device log:
Error (730) / LearnByHeart.iOS(TTSSpeechBundle): TTSPlaybackCreate unable to initialize dynamics: -3000
Rebooting the phone did not help.
Funnily enough, it was all resolved by flipping the physical mute switch off and back on.
Hope this saves someone a day.
I uploaded an archive to the App Store and am getting a crash when I try to play an intro sound. I'm using AVAudioEngine to play the sound. When I compile and run the code through Xcode, everything works fine. When I upload to TestFlight and run my app as an internal tester, it crashes. The crash report is:
If I use AVAudioPlayer to play that sound, it's fine. I can't understand what the problem with AVAudioEngine is. Any advice?
I faced the same exception, but only in the release build of my app and only on an iPhone 7.
The exception seems to occur at the point where the audio session category changes.
In my case, changing from AVAudioSessionCategorySoloAmbient to AVAudioSessionCategoryPlayAndRecord with AVAudioSessionCategoryOptions.defaultToSpeaker.
I found a workaround that works, at least for me.
The following article
https://forums.developer.apple.com/thread/65656
suggests that this kind of exception occurs when multiple input audio units are initialized.
To prevent a second input audio unit from being initialized, I added the following code before changing the audio session category:
AudioOutputUnitStop((engine.inputNode?.audioUnit)!)
AudioUnitUninitialize((engine.inputNode?.audioUnit)!)
engine is the instance of AVAudioEngine.
I hope it will help you guys!
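For reference, the whole sequence might look like this on a recent SDK, where inputNode is no longer optional (the function name and the setActive call are my assumptions, not from the original answer):

```swift
import AVFoundation
import AudioToolbox

// Tear down the engine's input audio unit before switching the session
// category, so a second input audio unit is never initialized -- the
// suspected cause of the exception.
func switchToPlayAndRecord(engine: AVAudioEngine) throws {
    if let unit = engine.inputNode.audioUnit {
        AudioOutputUnitStop(unit)
        AudioUnitUninitialize(unit)
    }
    try AVAudioSession.sharedInstance().setCategory(
        .playAndRecord,
        mode: .default,
        options: [.defaultToSpeaker])
    try AVAudioSession.sharedInstance().setActive(true)
}
```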
I'm implementing the Chromecast feature in an app of mine, but I've been having a hard time figuring out a solution to my problem:
One specific stream doesn't start on Chromecast, even though it works fine in iOS's default media player. I've tried using the debug string at mychromecastip:9222 but had no success.
I also checked the MIME type, but it seems to be the same as my other working streams.
Any ideas on how to attack this problem?
You need to explain what you mean by "tried that and had no success". In the most recent build, you must run your own app first and then attach the debugger to port 9222; if you attach it before connecting your app, it will not work. In addition, make sure you have turned on logging (see the Debugging section here) for details.
In my iOS app, on all previous versions of the OS, we record audio occasionally, then sleep for a while, then record again, cycling forever (the sleep is to preserve battery). This worked fine up to iOS 7, even when the app was in the background. Now, when the app is in the background, the call to AudioQueueStart fails to start recording with error -16981. I can't find this error code in the documentation or on the web, and if I turn it into an NSError, it says "The operation couldn't be completed. (OSStatus error -16981.)", which isn't all that helpful.
I have a theory that Apple is closing a hole here; the idea being: why would you want to start recording from a background process unless you are spying? Well, with the user's consent (signed and paid for!), that's exactly what we are doing.
So: can anyone confirm or deny that this is expected behavior, or suggest what I might be able to do about it? It's a bit of a killer for our app. I have filed it as a bug with Apple and will try to report progress here.
UPDATE: 3rd October 2013
Although the previous answer seemed to work for a while, it has now stopped working with error -12985, which occurs because another app has audio running. This is of course why I need to use the mixing flag.
UPDATE:
iOS 7.0.3 (and later) seems to have fixed this issue completely.
After playing with different audio session properties, I found that the -16981 error occurs when kAudioSessionProperty_OverrideCategoryMixWithOthers is enabled (TRUE). As soon as I set it to 0, AudioQueueStart() executes successfully. So, before starting the audio session, try:
UInt32 allowMixing = 0;  // disable mixing with other apps' audio
OSStatus status = AudioSessionSetProperty (
    kAudioSessionProperty_OverrideCategoryMixWithOthers,
    sizeof (allowMixing),
    &allowMixing);
Clearly, this is a behavior change in iOS 7. As mentioned before, the documentation does not list the -16981 error code.