Sometimes the execution of
AudioOutputUnitStop(inputUnit)
results in the app freezing for about 10-15 seconds, with the following console message:
WARNING: [0x3b58918c] AURemoteIO.cpp:1225: Stop: AURemoteIO::Stop: error 0x10004003 calling TerminateOwnIOThread
This call is made by the Novocaine library; specifically, it happens inside the [Novocaine pause] method, which I invoke to stop playback of an audio file (https://github.com/alexbw/novocaine/blob/master/Novocaine/Novocaine.m).
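For context, this is roughly how I trigger the pause; a minimal sketch assuming the standard Novocaine setup (the button handler name is mine):
#import "Novocaine.h"

// Simplified: [Novocaine audioManager] is Novocaine's shared instance; -pause is
// the call that ends up in AudioOutputUnitStop(inputUnit) and occasionally freezes.
- (IBAction)stopPlayback:(id)sender
{
    [[Novocaine audioManager] pause];
}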
Any hint is really appreciated!
Thanks,
DAN
I have a music-making app that uses AUSampler & AURemoteIO units to play back user-defined notes. I'm having an issue where after some use, the call to AudioUnitRender on the sampler never returns, hanging the audio thread and silencing the audio output. The CPU usage also shoots up at this point, as the audio thread continuously spits out error messages to the device console (not the debugger output):
May 11 11:45:12 <device name> mediaserverd(CoreAudio)[2296] <Notice>: HALS_IOContext.cpp:1496:IOWorkLoop_HandleOverload: HALS_IOContext::IOWorkLoop_HandleOverload: Overload on context 96 current time: 11788974 deadline: 11788958
This message is being logged by _os_log_impl inside the AUSampler render (specifically VoiceEnvelope::GetRunFrameCount).
Does anyone have suggestions on why this may be happening and how to avoid it?
I discovered the problem: I was passing offsets greater than the size of the current buffer into MusicDeviceMIDIEvent, which caused the hang. This happened because of occasional jumps in the timestamp provided to the render callback. I fixed it by checking for, and ignoring, events whose offsets exceed the frame count of the current callback.
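A minimal sketch of that guard, assuming the scheduling code forwards note-ons to the AUSampler from the render callback (samplerUnit, noteNumber, velocity and noteOffsetFrames are illustrative names, not part of any API):
#import <AudioToolbox/AudioToolbox.h>

// Forward a note-on to the sampler only if its offset falls inside the current
// render buffer; out-of-range offsets (from timestamp jumps) caused the hang.
static void ScheduleNoteOn(AudioUnit samplerUnit,
                           UInt32 noteNumber,
                           UInt32 velocity,
                           UInt32 noteOffsetFrames,
                           UInt32 inNumberFrames)
{
    if (noteOffsetFrames >= inNumberFrames) {
        return; // drop the event; clamping to inNumberFrames - 1 also works
    }
    MusicDeviceMIDIEvent(samplerUnit,
                         0x90,              // note-on, channel 0
                         noteNumber,
                         velocity,
                         noteOffsetFrames);
}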
I am working on video recording using GPUImage. The error occurred on iOS 10.2.0.
According to the method call stacks, I found that it happens randomly when calling -[GPUImageMovieWriter startRecording].
I searched everywhere, but all the existing questions are about the case where the status is 3, not 4.
Of the AVAssetWriter statuses, you are running into writing (AVAssetWriterStatusWriting). This means you are calling [GPUImageMovieWriter startRecording] more than once. It is easily fixed by checking the status of the GPUImageMovieWriter and skipping the startRecording call if it is already writing.
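A minimal sketch of the guard, assuming your copy of GPUImageMovieWriter exposes its underlying assetWriter property (if it does not, keep a BOOL flag of your own instead):
#import "GPUImage.h"

// Only start recording if the underlying AVAssetWriter is not already writing.
- (void)startRecordingIfIdle:(GPUImageMovieWriter *)movieWriter
{
    if (movieWriter.assetWriter.status == AVAssetWriterStatusWriting) {
        return; // a previous startRecording call is still in effect
    }
    [movieWriter startRecording];
}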
When I use <AudioToolbox/AudioServices.h> to implement a recording feature, the recording sometimes fails. The reason is that AudioQueueStart returns -66681. The documentation says:
The audio queue has encountered a problem and cannot start
I found the documentation, but I have no idea what causes this.
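For reference, a simplified sketch of how I start the queue (_queue is my own AudioQueueRef, created earlier with AudioQueueNewInput; the setup code is omitted):
#import <AudioToolbox/AudioToolbox.h>

// Simplified: start the recording queue and log the OSStatus on failure.
- (void)startRecordingQueue
{
    OSStatus status = AudioQueueStart(_queue, NULL);
    if (status != noErr) {
        // -66681 is kAudioQueueErr_CannotStart.
        NSLog(@"AudioQueueStart failed: %d", (int)status);
    }
}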
I use UIAutomation and Instruments for my UI tests, and when I try to tap some letters, Instruments returns an error:
Script threw an uncaught JavaScript error: target.frontMostApp().keyboard() failed to tap 'V' on line 27
Part of the code:
passwordField.tap();
target.frontMostApp().keyboard().typeString("VEMO");
Does anyone have any ideas about it?
Thanks
There is an undocumented method on the UIAKeyboard object that will help you avoid this problem.
var keyboard = target.frontMostApp().keyboard();
// Add a small delay between key taps so typeString doesn't outrun the keyboard.
keyboard.setInterKeyDelay(0.1);
keyboard.typeString("VEMO");
You can push the delay up as high as you want, but I found that a delay of 0.1 seconds was enough to prevent the keyboard from failing.
I am using a demo app for the Dragon Dictation API. I have made no modifications to the demo app, so I don't think there is anything wrong with the app itself. It builds and runs on my phone. I tap the record button and talk to it; it then tries to connect to the server, but gives me an error saying it can't connect to the speech server.
The output says:
2013-08-10 13:54:11.582 Recognizer[655:907] set session Active 0
2013-08-10 13:54:11.803 Recognizer[655:907] sample rate = 44100.000000
2013-08-10 13:54:11.823 Recognizer[655:907] audio input route(iOS5 or above): MicrophoneBuiltIn
2013-08-10 13:54:11.828 Recognizer[655:907] audiosource = MicrophoneBuiltIn
2013-08-10 13:54:11.889 Recognizer[655:907] [NMSP_ERROR] check status Error: 696e6974 init -> line: 485
2013-08-10 13:54:11.979 Recognizer[655:907] Application windows are expected to have a root view controller at the end of application launch
2013-08-10 13:54:13.513 Recognizer[655:907] Recognizing type:'websearch' Language Code: 'en_US' using end-of-speech detection:2.
2013-08-10 13:54:14.517 Recognizer[655:907] Recording started.
2013-08-10 13:54:16.490 Recognizer[655:907] Recording finished.
2013-08-10 13:54:26.903 Recognizer[655:4103] [NMSP_ERROR] Connection timed out!
2013-08-10 13:54:27.167 Recognizer[655:907] Got error.
2013-08-10 13:54:27.170 Recognizer[655:907] Session id [(null)].
I have no clue what's going on here, and any help would be greatly appreciated.
If, when you try to record, it immediately says "Cancelled" and shows an error like "recorder is null" or "[NMSP_ERROR] check status Error: 696e6974 init -> line: 485", this probably means either something is wrong with your SpeechKit keys or the SpeechKit servers are down. Double-check your keys, and/or try again later.
Reference: http://www.raywenderlich.com/60870/building-ios-app-like-siri
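For reference, a sketch of the setup call from that SDK with placeholder credentials (the ID, host, port and application key all come from your Nuance developer account, so treat every value below as an assumption to be replaced):
#import <SpeechKit/SpeechKit.h>

// Placeholder application key; the real bytes come from your Nuance account.
const unsigned char SpeechKitApplicationKey[] = { 0x00 /* ... your key bytes ... */ };

- (void)setUpSpeechKit
{
    // All values below are placeholders from your Nuance developer portal.
    [SpeechKit setupWithID:@"YOUR_SPEECHKIT_APP_ID"
                      host:@"your.speechkit.host"
                      port:443
                    useSSL:NO
                  delegate:nil];
}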
In my case, the error was that I called the cancel: method on a nil SKRecognizer object.
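For anyone hitting the same thing, a trivial guard along these lines makes a never-created recognizer visible instead of silently doing nothing (recognizer is an illustrative property name, not part of the SDK):
// Messaging nil is a no-op in Objective-C, so log instead of cancelling blindly.
- (void)cancelRecognition
{
    if (self.recognizer) {
        [self.recognizer cancel];
    } else {
        NSLog(@"SKRecognizer is nil -- it was never created; check the SpeechKit setup");
    }
}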