I'm setting a preferred IO buffer duration of 0.0001 seconds using AVAudioSession, and the results I see in the Simulator don't make sense.
NSError *audioSessionError = nil;
[session setPreferredIOBufferDuration:self.bufferDuration error:&audioSessionError];
if (audioSessionError) {
    NSLog(@"Error %ld, %@",
          (long)audioSessionError.code, audioSessionError.localizedDescription);
}
The problem is that in my Audio Unit render callbacks, I always get 512 frames to process as the inNumberFrames argument.
On my device, setting the preferred buffer duration results in different buffer sizes. For example, if I set self.bufferDuration to 0.1 and pass that to the AVAudioSession, then I get 4096-frame inNumberFrames arguments in my render callbacks. On the Simulator, it is still 512.
I wanted to check whether this is normal behavior (I know many things don't work identically on the Simulator and a device), or whether my assumptions are wrong.
Note that the setPreferredIOBufferDuration setting is just a preference suggestion, not a hard setting. The OS is free at run time to choose the number of frames (the duration of the actual data times the sample rate) to send to audio callbacks, and even to change that number while an app is running audio in the foreground. The actual duration may vary between different iOS devices and Mac systems. The duration might also depend on the sample rate or format, the audio route, whether any other background app audio sessions are currently active, the audio settings used by the immediately prior app, and/or the OS X or iOS version and the version of the iOS Simulator.
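As a quick check (a minimal sketch, not part of the original answer), you can read back what the OS actually granted once the session is active:

AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredIOBufferDuration:0.005 error:nil];
[session setActive:YES error:nil];
// The granted duration and the current sample rate together imply
// the frame count you should expect in render callbacks.
NSLog(@"granted %f s (about %.0f frames at %.0f Hz)",
      session.IOBufferDuration,
      session.IOBufferDuration * session.sampleRate,
      session.sampleRate);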
For a requested buffer duration of 0.0053, I seem to get 512 frames on the iOS 9.2 Simulator, and 256 frames on an iPhone 6s (at 48 kHz, 0.0053 s is about 254 frames, which rounds to 256). Only the latter matches the request, and even that will not hold at all common sample rates. Some older iOS devices will not return a frame count below 256.
It is invalid to assume that inNumberFrames will correspond to an app's preferred buffer duration setting.
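The practical consequence is to size all per-callback work off inNumberFrames rather than a constant. A minimal sketch (MyState and NextSampleFromRingBuffer are hypothetical placeholders):

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    MyState *state = (MyState *)inRefCon;
    // Never assume a fixed 256/512/4096; the OS chose inNumberFrames.
    for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
        // Assumes a Float32 stream format for this sketch.
        Float32 *samples = (Float32 *)ioData->mBuffers[buf].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            samples[i] = NextSampleFromRingBuffer(state);  // hypothetical helper
        }
    }
    return noErr;
}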
Related
I have a legacy video-streaming library that seems to be broken on iPhone 11 (and later) devices. After digging into the problem, it appears that the code that initializes the AVAudioSession is failing because the call to AVAudioSession.sharedInstance.setPreferredSampleRate(44100) does not change the actual sample rate, which stays at 48000 Hz.
Has anyone else faced this? I know that the method says preferred and it is not guaranteed that it will change the audio sample rate, but it was working on all previous devices.
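One way to confirm what the hardware actually granted is to read the session's sampleRate after activation; a minimal Objective-C sketch (the resampling suggestion is an assumption, not confirmed behavior):

AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredSampleRate:44100.0 error:nil];
[session setActive:YES error:nil];
if (session.sampleRate != 44100.0) {
    // On newer hardware the session may stay at 48000 Hz; the app
    // would then have to resample rather than rely on the preference.
    NSLog(@"hardware sample rate is %.0f Hz", session.sampleRate);
}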
My configuration
iPhone X
iOS 12
Problem
Since iOS 11/12, my app's audio for some reason has a periodic crackling/popping sound that appears to get worse/more noticeable the louder or more constant the audio is.
Troubleshooting
I played an 800 Hz sine wave from djay2 through AudioBus into my app and saved my app's output to a file.
Loading my app's output into Audacity, I can see that the crackling occurs every 14,112 samples, or every 0.320 seconds (14,112 / 44,100 Hz = 0.320 s).
Has anyone got any idea where I should start looking? Changing the internal configuration of my app between 44.1 kHz and 48 kHz appears to make no difference. I thought it might have been due to downsampling from the hardware sample rate.
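If the suspicion is an implicit sample-rate conversion, one quick check (a sketch; kAppSampleRate is a hypothetical constant for the app's internal rate) is to compare the session's rate against the app's:

double hardwareRate = [AVAudioSession sharedInstance].sampleRate;
if (hardwareRate != kAppSampleRate) {
    // A mismatch means the OS is resampling somewhere in the chain,
    // which is one place periodic artifacts can originate.
    NSLog(@"app at %.0f Hz, hardware at %.0f Hz", kAppSampleRate, hardwareRate);
}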
Toggling Inter-App Audio Sync on/off within AudioBus appears to have some effect (1 in every ~8 toggles will stop the crackling).
I assume this is due to the AudioSession being restarted or something.
Has anyone got any idea what might be causing this or has experienced this before?
Thanks,
Anthony
I thought I had the whole Bluetooth state restoration thing working (acting as a central); then I tried the following:
1 - Peripheral sends single bluetooth packet every 1 second
2 - At a random time, I kill the app (either using the debugger's stop, or using another app to eat all the memory)
3 - Check if restoration took place
This worked about 80% of the time and the errors were seemingly random.
After two days I was able to reliably reproduce the problem:
1 - Peripheral sends two packets 1000 ms apart
2 - Kill the app programmatically (kill(getpid(), SIGKILL)) some delta after the first packet (100 ms, 200 ms, ...), which is some delta before the next packet (900 ms, 800 ms, ...)
|--PKT-------850ms------KILL--150ms--PKT--|
3 - Second packet arrives and wakes the app up
What I found is that if the time between app termination and the next packet arriving is greater than about 150 ms, restoration takes place as it should 100% of the time.
If that gap is less than about 150 ms, restoration takes place 100% of the time only if I open the app manually within 10 seconds; if I open the app manually after more than 10 seconds, it's as though no restoration took place. Also, once the app is killed, I can watch the Bluetooth symbol in the status bar, and after exactly 10 seconds the connection is dropped.
Testing took place on an iPhone 4s running 8.1 and then 8.2.
This seems like a bug... I can provide my code if that helps; however, I've stripped it back to the bare minimum delegate implementations. I've tried putting the central manager on a different queue, to no avail. This is a real issue for my product, as it relies on session-based background Bluetooth tracking. Any thoughts?
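For reference, a bare-minimum Objective-C sketch of the restoration plumbing involved (the restore identifier string is a hypothetical example):

// At startup, create the central with a restore identifier so iOS
// can relaunch the app for Bluetooth events after it is killed.
self.central = [[CBCentralManager alloc]
    initWithDelegate:self
               queue:nil
             options:@{ CBCentralManagerOptionRestoreIdentifierKey :
                            @"com.example.central" }];

// Called on relaunch, before centralManagerDidUpdateState:.
- (void)centralManager:(CBCentralManager *)central
      willRestoreState:(NSDictionary<NSString *, id> *)state {
    NSArray<CBPeripheral *> *restored =
        state[CBCentralManagerRestoredStatePeripheralsKey];
    for (CBPeripheral *peripheral in restored) {
        peripheral.delegate = self;  // re-attach delegates to restored peripherals
    }
}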
This seems to be resolved in iOS 9+.
I have successfully been running this AVAudioRecorder example in a regular Xcode-based project. I would like to record from a daemon running on a jailbroken device, so I tried to implement the same functionality in a simple command-line app.
In my simple test, I start the recording from main and sleep for a few seconds before stopping the recording. The result is always the same: a truncated .caf file of 4096 bytes (it looks like a correct header followed by null data). It makes no difference if I initiate the recording from a separately spawned thread. I tried both record() and recordForDuration().
All methods invoked return an OK result. If I invoke recording() on the AVAudioRecorder instance, it returns YES while recording.
Am I missing some fundamental initialization that a regular Xcode-based GUI app takes care of behind the scenes?
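A minimal command-line sketch, assuming the missing pieces are an active record-category AVAudioSession and a running run loop (the output path and settings are placeholders):

#import <AVFoundation/AVFoundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // A GUI app's lifecycle sets much of this up implicitly.
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setCategory:AVAudioSessionCategoryRecord error:nil];
        [session setActive:YES error:nil];

        NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM),
                                    AVSampleRateKey : @44100.0,
                                    AVNumberOfChannelsKey : @1 };
        NSError *error = nil;
        AVAudioRecorder *recorder =
            [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:@"/tmp/test.caf"]
                                        settings:settings
                                           error:&error];
        [recorder record];

        // Pump the run loop instead of sleep() so AVFoundation's
        // internal callbacks have a chance to fire.
        [[NSRunLoop currentRunLoop]
            runUntilDate:[NSDate dateWithTimeIntervalSinceNow:5.0]];

        [recorder stop];
    }
    return 0;
}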
There is little consensus on whether the iOS interface Apple provides for shutting off automatic gain control (AGC) is actually implemented. Does anyone know definitively whether it is possible to shut off AGC when recording audio on the iPad and, if the answer is yes, how?
If you are using iOS < 5, the answer is NO.
If you are using iOS >= 5 on an iPad 2, the answer is still NO.
If you are using iOS >= 5 on an iPhone 3GS, iPod touch (4th gen), or iPad (1st gen), the answer seems to be YES.
AGC is turned off when the audio session mode is changed to kAudioSessionMode_Measurement. Check the Audio Session Services Reference.
Input gain can be controlled as follows (see the sketch after the footnote below):
1) Set your audio session's mode to kAudioSessionMode_Measurement.
2) Make sure the device you are using has input gain available by checking the kAudioSessionProperty_InputGainAvailable property.
3) Set the kAudioSessionProperty_InputGainScalar property to your desired gain level (between 0.0 and 1.0).
*I haven't gotten my hands on the newest iPad yet, so I can't confirm.
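For completeness, a sketch of those three steps using the (now-deprecated) Audio Session C API named above; it assumes AudioSessionInitialize has already been called:

// 1) Switch the session to Measurement mode (disables AGC where supported).
UInt32 mode = kAudioSessionMode_Measurement;
AudioSessionSetProperty(kAudioSessionProperty_Mode, sizeof(mode), &mode);

// 2) Check whether this hardware exposes an adjustable input gain.
UInt32 gainAvailable = 0;
UInt32 size = sizeof(gainAvailable);
AudioSessionGetProperty(kAudioSessionProperty_InputGainAvailable,
                        &size, &gainAvailable);

// 3) If so, set the gain scalar to the desired level (0.0 to 1.0).
if (gainAvailable) {
    Float32 gain = 0.5f;  // example level
    AudioSessionSetProperty(kAudioSessionProperty_InputGainScalar,
                            sizeof(gain), &gain);
}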