I'm getting a crash with this code:
// ViewController.swift
import UIKit
import AVFoundation

class ViewController: UIViewController {
    var engine: AVAudioEngine!
    var EQNode: AVAudioUnitEQ!

    override func viewDidLoad() {
        super.viewDidLoad()

        engine = AVAudioEngine()
        EQNode = AVAudioUnitEQ(numberOfBands: 10)   // band count is arbitrary for this example
        engine.attach(EQNode)
        engine.reset()

        let Format = engine.inputNode.outputFormat(forBus: 0)
        print("channelcount:", engine.inputNode.inputFormat(forBus: 0).channelCount)

        // -----> CRASH: the app stops here on a real device
        engine.connect(engine.inputNode, to: EQNode, format: Format)
        engine.connect(EQNode, to: engine.mainMixerNode, format: Format)

        engine.prepare()
        print("done prepare")
        do {
            try engine.start()
        } catch {
            print(error)
        }
        print("done start")
    }
}
If I pass nil as the format instead, the app no longer crashes, but it doesn't work either.
All of this works perfectly fine on the Xcode simulator, with no errors. But on a real iOS device (I'm testing on a 2019 iPad) it crashes.
Some detail about my app: it takes live microphone input, adjusts it with an equalizer, and plays the equalized sound back in real time.
ERROR:
SelfhearEQ[3532:760180] [aurioc] AURemoteIO.cpp:1086:Initialize: failed: -10851
(enable 1, outf< 2 ch, 0 Hz, Float32, non-inter> inf< 1 ch, 44100 Hz, Float32>)
channelcount: 0
2019-10-22 18:01:29.891748+0700 SelfhearEQ[3532:760180] [aurioc] AURemoteIO.cpp:1086:Initialize: failed: -10851
(enable 1, outf< 2 ch, 0 Hz, Float32, non-inter> inf< 1 ch, 44100 Hz, Float32>)
2019-10-22 18:01:29.892326+0700 SelfhearEQ[3532:760180] [avae]
AVAEInternal.h:76 required condition is false: [AVAudioEngineGraph.mm:2127:_Connect: (IsFormatSampleRateAndChannelCountValid(format))]
2019-10-22 18:01:29.896270+0700 SelfhearEQ[3532:760180] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'
I found the answer: this error has nothing to do with the format itself.
See my other question, where it is fixed:
avaudioengine-connect-crash-on-hardware-not-simulator
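For anyone hitting the same crash: the channelcount: 0 in the log is the telltale sign that the input node has no usable hardware format (on a real device this is commonly a missing microphone permission or a playback-only session category; the linked question has the exact fix). A defensive guard before connecting, assuming the engine and EQNode from the code above, might look like this:

let inputFormat = engine.inputNode.outputFormat(forBus: 0)
guard inputFormat.channelCount > 0, inputFormat.sampleRate > 0 else {
    print("No usable input: check the microphone permission (NSMicrophoneUsageDescription) and the audio session category")
    return
}
engine.connect(engine.inputNode, to: EQNode, format: inputFormat)
engine.connect(EQNode, to: engine.mainMixerNode, format: inputFormat)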
Related
I am trying to use FFTTap with the iOS microphone in AudioKit. Here is the code:
fft = FFTTap(highPassFilter2!, bufferSize: 8192 * 4, fftValidBinCount: nil, handler: {data in})
I tried a bufferSize of 8192 * 4, which should let me detect frequencies at intervals of about 1.3 Hz
(the sample rate is 44100, and 44100 / (8192 * 4) = 1.34582519531, am I correct?),
but I always get an exception in AudioKit's BaseTap, around line 90:
private func handleTapBlock(buffer: AVAudioPCMBuffer, at time: AVAudioTime) {
    // Call on the main thread so the client doesn't have to worry
    // about thread safety.
    buffer.frameLength = bufferSize   // <<<<< crashes here
The log shows
libc++abi: terminating with uncaught exception of type NSException
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: length <= _imp->_frameCapacity'
terminating with uncaught exception of type NSException
It doesn't work, but I want to capture frequencies with a resolution finer than 2 Hz. Is there any way to do that?
Thanks very much.
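For reference, the bin-width arithmetic in the question checks out; the crash itself is a separate problem, since the assertion says the tap delivered a buffer whose frameCapacity is smaller than the requested bufferSize. A quick, purely illustrative sanity check of the resolution figure:

// One FFT bin spans sampleRate / fftSize Hz.
let sampleRate = 44_100.0
let fftSize = Double(8_192 * 4)
let binWidth = sampleRate / fftSize   // ≈ 1.346 Hz per bin, comfortably under 2 Hz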
Okay, I'm clearly missing some important piece here. I'm trying to do low-latency audio across the network, and my fundamental frames are 10 ms. I expected this to be no problem. My target is an iPhone X's speakers, so my hardware sample rate should be locked to 48000 Hz. I'm requesting 10 ms, which is a nice even divisor and should come out to 480, 960, 1920, or 3840 depending on how you want to slice frames/samples/bytes.
Yet, for the life of me, I absolutely cannot get iOS to do anything I regard as sane. I get a 10.667 ms buffer duration, which is ludicrous--iOS is going out of its way to give me buffer sizes that aren't integer multiples of the sample rate. Even worse, the frame is slightly LONG, which means I have to absorb not one but two packets of latency in order to fill that buffer. I can't get maximumFramesToRender to change at all, and the system is returning 0 as my sample rate even though it quite plainly is rendering at 48000 Hz.
I'm clearly missing something important--what is it? Did I forget to disconnect/connect something in order to get a direct hardware mapping? (My format is 1, which is pcmFormatFloat32--I would have expected pcmFormatInt16 or pcmFormatInt32 for a direct hardware mapping, so something in the OS is probably getting in the way.) Pointers are appreciated, and I'm happy to go read more. Or is AUAudioUnit simply half-baked, and do I need to go back to older, more useful APIs? Or did I completely miss the plot, and do low-latency audio folks use a whole different set of audio-management functions?
Thanks for the help--it's much appreciated.
Output from code:
2019-11-07 23:28:29.782786-0800 latencytest[3770:50382] Ready to receive user events
2019-11-07 23:28:34.727478-0800 latencytest[3770:50382] Start button pressed
2019-11-07 23:28:34.727745-0800 latencytest[3770:50382] Launching auxiliary thread
2019-11-07 23:28:34.729278-0800 latencytest[3770:50445] Thread main started
2019-11-07 23:28:35.006005-0800 latencytest[3770:50445] Sample rate: 0
2019-11-07 23:28:35.016935-0800 latencytest[3770:50445] Buffer duration: 0.010667
2019-11-07 23:28:35.016970-0800 latencytest[3770:50445] Number of output busses: 2
2019-11-07 23:28:35.016989-0800 latencytest[3770:50445] Max frames: 4096
2019-11-07 23:28:35.017010-0800 latencytest[3770:50445] Can perform output: 1
2019-11-07 23:28:35.017023-0800 latencytest[3770:50445] Output Enabled: 1
2019-11-07 23:28:35.017743-0800 latencytest[3770:50445] Bus channels: 2
2019-11-07 23:28:35.017864-0800 latencytest[3770:50445] Bus format: 1
2019-11-07 23:28:35.017962-0800 latencytest[3770:50445] Bus rate: 0
2019-11-07 23:28:35.018039-0800 latencytest[3770:50445] Sleeping 0
2019-11-07 23:28:35.018056-0800 latencytest[3770:50445] Buffer count: 2 4096
2019-11-07 23:28:36.023220-0800 latencytest[3770:50445] Sleeping 1
2019-11-07 23:28:36.023400-0800 latencytest[3770:50445] Buffer count: 190 389120
2019-11-07 23:28:37.028610-0800 latencytest[3770:50445] Sleeping 2
2019-11-07 23:28:37.028790-0800 latencytest[3770:50445] Buffer count: 378 774144
2019-11-07 23:28:38.033983-0800 latencytest[3770:50445] Sleeping 3
2019-11-07 23:28:38.034142-0800 latencytest[3770:50445] Buffer count: 566 1159168
2019-11-07 23:28:39.039333-0800 latencytest[3770:50445] Sleeping 4
2019-11-07 23:28:39.039534-0800 latencytest[3770:50445] Buffer count: 756 1548288
2019-11-07 23:28:40.041787-0800 latencytest[3770:50445] Sleeping 5
2019-11-07 23:28:40.041943-0800 latencytest[3770:50445] Buffer count: 944 1933312
2019-11-07 23:28:41.042878-0800 latencytest[3770:50445] Sleeping 6
2019-11-07 23:28:41.043037-0800 latencytest[3770:50445] Buffer count: 1132 2318336
2019-11-07 23:28:42.048219-0800 latencytest[3770:50445] Sleeping 7
2019-11-07 23:28:42.048375-0800 latencytest[3770:50445] Buffer count: 1320 2703360
2019-11-07 23:28:43.053613-0800 latencytest[3770:50445] Sleeping 8
2019-11-07 23:28:43.053771-0800 latencytest[3770:50445] Buffer count: 1508 3088384
2019-11-07 23:28:44.058961-0800 latencytest[3770:50445] Sleeping 9
2019-11-07 23:28:44.059119-0800 latencytest[3770:50445] Buffer count: 1696 3473408
Actual code:
import UIKit
import os.log
import Foundation
import AudioToolbox
import AVFoundation
class AuxiliaryWork: Thread {
    let II_SAMPLE_RATE = 48000
    var iiStopRequested: Int32 = 0; // Int32 is normally guaranteed to be atomic on most architectures
    var iiBufferFillCount: Int32 = 0;
    var iiBufferByteCount: Int32 = 0;

    func requestStop() {
        iiStopRequested = 1;
    }

    func myAVAudioSessionInterruptionNotificationHandler(notification: Notification) -> Void {
        os_log(OSLogType.info, "AVAudioSession Interrupted: %s", notification.debugDescription)
    }

    func myAudioUnitProvider(actionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>, timestamp: UnsafePointer<AudioTimeStamp>,
                             frameCount: AUAudioFrameCount, inputBusNumber: Int, inputData: UnsafeMutablePointer<AudioBufferList>) -> AUAudioUnitStatus {
        let ppInputData = UnsafeMutableAudioBufferListPointer(inputData)
        let iiNumBuffers = ppInputData.count
        if (iiNumBuffers > 0) {
            assert(iiNumBuffers == 2)
            for bbBuffer in ppInputData {
                assert(Int(bbBuffer.mDataByteSize) == 2048) // FIXME: This should be 960 or 1920 ...
                iiBufferFillCount += 1
                iiBufferByteCount += Int32(bbBuffer.mDataByteSize)
                memset(bbBuffer.mData, 0, Int(bbBuffer.mDataByteSize)) // Just send silence
            }
        } else {
            os_log(OSLogType.error, "Zero buffers from system")
            assert(iiNumBuffers != 0) // Force crash since os_log would cause an audio hiccup due to locks anyway
        }
        return noErr
    }
    override func main() {
        os_log(OSLogType.info, "Thread main started")

        #if os(iOS)
        let kOutputUnitSubType = kAudioUnitSubType_RemoteIO
        #else
        let kOutputUnitSubType = kAudioUnitSubType_HALOutput
        #endif

        let audioSession = AVAudioSession.sharedInstance() // FIXME: Causes the following message: No Factory registered for id
        try! audioSession.setCategory(AVAudioSession.Category.playback, options: [])
        try! audioSession.setMode(AVAudioSession.Mode.measurement)
        try! audioSession.setPreferredSampleRate(48000.0)
        try! audioSession.setPreferredIOBufferDuration(0.010)

        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: nil,
            queue: nil,
            using: myAVAudioSessionInterruptionNotificationHandler
        )

        let ioUnitDesc = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kOutputUnitSubType,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        let auUnit = try! AUAudioUnit(componentDescription: ioUnitDesc,
                                      options: AudioComponentInstantiationOptions())

        auUnit.outputProvider = myAudioUnitProvider;
        auUnit.maximumFramesToRender = 256

        try! audioSession.setActive(true)
        try! auUnit.allocateRenderResources() // Make sure audio unit has hardware resources--we could provide the buffers from the circular buffer if we want
        try! auUnit.startHardware()
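        // NOTE: sampleRate and ioBufferDuration are Doubles; logging them with "%d" is
        // most likely why they show up as 0 below. "%f" would print the real values.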
        os_log(OSLogType.info, "Sample rate: %d", audioSession.sampleRate);
        os_log(OSLogType.info, "Buffer duration: %f", audioSession.ioBufferDuration);
        os_log(OSLogType.info, "Number of output busses: %d", auUnit.outputBusses.count);
        os_log(OSLogType.info, "Max frames: %d", auUnit.maximumFramesToRender);
        os_log(OSLogType.info, "Can perform output: %d", auUnit.canPerformOutput)
        os_log(OSLogType.info, "Output Enabled: %d", auUnit.isOutputEnabled)
        //os_log(OSLogType.info, "Audio Format: %p", audioFormat)

        let bus0 = auUnit.outputBusses[0]
        os_log(OSLogType.info, "Bus channels: %d", bus0.format.channelCount)
        os_log(OSLogType.info, "Bus format: %d", bus0.format.commonFormat.rawValue)
        os_log(OSLogType.info, "Bus rate: %d", bus0.format.sampleRate)

        for ii in 0..<10 {
            if (iiStopRequested != 0) {
                os_log(OSLogType.info, "Manual stop requested");
                break;
            }
            os_log(OSLogType.info, "Sleeping %d", ii);
            os_log(OSLogType.info, "Buffer count: %d %d", iiBufferFillCount, iiBufferByteCount)
            Thread.sleep(forTimeInterval: 1.0);
        }

        auUnit.stopHardware()
    }
}
class FirstViewController: UIViewController {
    var thrAuxiliaryWork: AuxiliaryWork? = nil;

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func startButtonPressed(_ sender: Any) {
        os_log(OSLogType.error, "Start button pressed");
        os_log(OSLogType.error, "Launching auxiliary thread");
        thrAuxiliaryWork = AuxiliaryWork();
        thrAuxiliaryWork?.start();
    }

    @IBAction func stopButtonPressed(_ sender: Any) {
        os_log(OSLogType.error, "Stop button pressed");
        os_log(OSLogType.error, "Manually stopping auxiliary thread");
        thrAuxiliaryWork?.requestStop();
    }

    @IBAction func muteButtonPressed(_ sender: Any) {
        os_log(OSLogType.error, "Mute button pressed");
    }

    @IBAction func unmuteButtonPressed(_ sender: Any) {
        os_log(OSLogType.error, "Unmute button pressed");
    }
}
You cannot beat iOS silicon hardware into submission by assuming the API will do it for you. You have to do your own buffering if you want to abstract the hardware.
For the very best (lowest) latencies, your software will have to (potentially dynamically) adapt to the actual hardware capabilities, which can vary from device to device, and mode to mode.
The hardware sample rate appears to be either 44.1ksps (older iOS devices), 48ksps (newer arm64 iOS devices), or an integer multiple thereof (and potentially other rates when plugging in non-AirPod Bluetooth headsets, or external ADCs). The actual hardware DMA (or equivalent) buffers seem to always be a power of 2 in size, potentially down to 64 samples on newest devices. However various iOS power saving modes will increase the buffer size (by powers of 2) up to 4k samples, especially on older iOS devices. If you request a sample rate other than the hardware rate, the OS might resample the buffers to a different size than a power of 2, and this size can change from Audio Unit callback to subsequent callback if the resampling ratio isn't an exact integer.
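As a quick check against the log in the question (assuming the hardware really is running at 48 kHz, as the asker expects), the "ludicrous" 10.667 ms duration is exactly such a power-of-two buffer:

// 0.010667 s * 48000 Hz ≈ 512 frames, which is a power of two.
// The requested 10 ms would be 480 frames, which is not, so the hardware rounds up.
let grantedFrames = 0.010667 * 48_000.0   // ≈ 512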
Audio Units are the lowest level accessible via public API on iOS devices. Everything else is built on top, and thus potentially incurs greater latencies. For instance, if you use the Audio Queue API with non-hardware buffer sizes, the OS will internally use power-of-2 audio buffers to access the hardware, and chop them up or fractionally concatenate them to return or fetch Audio Queue buffers of non-hardware sizes. Slower and jittery.
Far from being half-baked, for a long time the iOS API was the only API usable on mobile phones and tablets for live low-latency music performance, but only by developing software matched to the hardware.
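A minimal sketch of that "adapt to the hardware" approach, using only AVAudioSession (the category, mode, and numbers here are illustrative, not the only valid choices):

import AVFoundation

func framesPerHardwareCallback() throws -> Int {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .measurement, options: [])
    try session.setPreferredSampleRate(48_000.0)
    try session.setPreferredIOBufferDuration(0.010)   // a request, not a guarantee
    try session.setActive(true)

    // Read back what the hardware actually granted and size your own buffering
    // from that, rather than assuming the requested 10 ms / 480 frames.
    let grantedDuration = session.ioBufferDuration    // e.g. 0.010667 s
    let hardwareRate = session.sampleRate             // e.g. 48000.0
    return Int((grantedDuration * hardwareRate).rounded())   // e.g. 512
}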
I am developing a VoIP application using PJSIP in Objective-C.
I want to integrate CallKit, but I get an error in configureAudioSession. I copied AudioController.h and AudioController.mm from Apple's Speakerbox sample into my project, and I added this code:
AudioController *audioController;

- (void)configureAudioSession {
    if (!audioController) {
        audioController = [[AudioController alloc] init];
    }
}

- (void)handleIncomingCallFrom:(NSString *)dest {
    CXCallUpdate *callUpdate = [[CXCallUpdate alloc] init];
    [callUpdate setLocalizedCallerName:dest];
    [callUpdate setHasVideo:NO];
    CXHandle *calleeHandle = [[CXHandle alloc] initWithType:CXHandleTypeGeneric value:dest];
    [callUpdate setRemoteHandle:calleeHandle];
    [provider reportNewIncomingCallWithUUID:[NSUUID UUID] update:callUpdate completion:^(NSError *error) {
        [self configureAudioSession];
    }];
}
The phone rings and I can handle the call, but it crashes whenever I answer. I receive this error:
AVAudioSession error activating: Error Domain=NSOSStatusErrorDomain Code=561017449 "(null)"
2017-03-09 18:17:48.830893 MyVoIPProject[1620:971182] [aurioc] 892: failed: '!pri' (enable 3, outf< 1 ch, 16000 Hz, Int16> inf< 1 ch, 16000 Hz, Int16>)
2017-03-09 18:17:48.841301 MyVoIPProject[1620:971182] [aurioc] 892: failed: '!pri' (enable 3, outf< 1 ch, 44100 Hz, Int16> inf< 1 ch, 44100 Hz, Int16>)
2017-03-09 18:17:48.850282 MyVoIPProject[1620:971182] [aurioc] 892: failed: '!pri' (enable 3, outf< 1 ch, 48000 Hz, Int16> inf< 1 ch, 48000 Hz, Int16>)
...
Can you tell me how to integrate CallKit?
This bug is caused by forgetting to add the microphone usage description to your Info.plist.
Reference: Apple's Speakerbox sample
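For reference, the Info.plist entry in question is the microphone usage description key; a minimal example (the string value here is only placeholder text):

<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is needed for voice calls.</string>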
iOS - AudioUnitInitialize returns error code 561017449
In the code below, AudioKit.start() crashes on my iPhone SE with iOS 10.1.1. It works fine on the simulator.
private func play(note: Int) {
    let pluckedString = AKPluckedString()
    AudioKit.output = pluckedString
    AudioKit.start() // <-- Crash here!
    let frequency = note.midiNoteToFrequency()
    pluckedString.trigger(frequency: frequency)
}
The console error log is
2016-12-04 10:51:45.765130 MyApp[1833:720319] [aurioc] 889: failed: -10851 (enable 2, outf< 2 ch, 0 Hz, Float32, non-inter> inf< 2 ch, 0 Hz, Float32, non-inter>)
2016-12-04 10:51:45.766519 MyApp[1833:720319] [aurioc] 889: failed: -10851 (enable 2, outf< 2 ch, 44100 Hz, Float32, non-inter> inf< 2 ch, 0 Hz, Float32, non-inter>)
2016-12-04 10:51:45.767008 MyApp[1833:720319] [aurioc] 889: failed: -10851 (enable 2, outf< 2 ch, 44100 Hz, Float32, non-inter> inf< 2 ch, 0 Hz, Float32, non-inter>)
2016-12-04 10:51:45.767982 MyApp[1833:720319] [central] 54: ERROR: [0x1b42d7c40] >avae> AVAudioEngineGraph.mm:2515: PerformCommand: error -10851
What have I missed? I can't find any documentation about extra setup needed for devices compared to the simulator. The AudioKit version is 3.5 and the Xcode version is 8.1.
I found the issue. I had a recording category set for the audio session. By making sure the audio session category isn't AVAudioSessionCategoryRecord during playback, my app no longer crashes.
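A minimal sketch of that fix, setting the session category before starting AudioKit (this uses AVAudioSession directly with the Swift 3 era spelling; AudioKit 3.x also has its own session helpers, so the exact call site in your project may differ):

do {
    // Any category other than the record-only one works here; playback is enough
    // since AKPluckedString only produces output.
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
AudioKit.output = pluckedString
AudioKit.start()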
I'm trying to make sure my app works on iOS by launching it in an iPhone simulator on my Mac. Running:
./gradlew ios:launchIPhoneSimulator
makes my app start in the simulator; the standard libGDX splash screen appears, then it shuts down and nothing more happens. But when I build it with ./gradlew ios:build ios:launchIPhoneSimulator I get these errors:
2015-06-12 17:39:36.226 IOSLauncher[730:16518] [debug] IOSApplication: iOS version: 8.3
2015-06-12 17:39:36.227 IOSLauncher[730:16518] [debug] IOSApplication: Running in 64-bit mode
2015-06-12 17:39:36.229 IOSLauncher[730:16518] [debug] IOSApplication: scale: 2.0
2015-06-12 17:39:36.347 IOSLauncher[730:16518] [debug] IOSApplication: Unscaled View: Portrait 375x667
2015-06-12 17:39:36.347 IOSLauncher[730:16518] [debug] IOSApplication: View: Portrait 750x1334
2015-06-12 17:39:36.348 IOSLauncher[730:16518] [debug] IOSGraphics: 750.0x1334.0, 2.0
2015-06-12 17:39:37.104 IOSLauncher[730:16518] [debug] IOSGraphics: Display: ppi=264, density=1.65
2015-06-12 17:39:37.631 IOSLauncher[730:16658] 17:39:37.614 ERROR: 98: Error '!obj' trying to fetch default input device's sample rate
2015-06-12 17:39:37.631 IOSLauncher[730:16658] 17:39:37.631 ERROR: 100: Error getting audio input device sample rate: '!obj'
2015-06-12 17:39:37.632 IOSLauncher[730:16658] 17:39:37.632 WARNING: 230: The input device is 0x0; '(null)'
2015-06-12 17:39:37.632 IOSLauncher[730:16658] 17:39:37.632 WARNING: 234: The output device is 0x26; 'AppleHDAEngineOutput:1B,0,1,2:0'
2015-06-12 17:39:37.632 IOSLauncher[730:16658] 17:39:37.632 ERROR: 296: error '!obj'
2015-06-12 17:39:37.632 IOSLauncher[730:16618] 17:39:37.632 ERROR: 296: error -66680
2015-06-12 17:39:37.633 IOSLauncher[730:16518] 17:39:37.632 ERROR: >aurioc> 806: failed: -10851 (enable 2, outf< 2 ch, 44100 Hz, Int16, inter> inf< 2 ch, 0 Hz, Int16, inter>)
2015-06-12 17:39:37.633 IOSLauncher[730:16618] 17:39:37.633 ERROR: 113: * * * NULL AQIONode object
2015-06-12 17:39:37.633 IOSLauncher[730:16518] OAL Error: +[ALWrapper openDevice:]: Could not open device (null)
2015-06-12 17:39:37.633 IOSLauncher[730:16518] OAL Error: -[ALDevice initWithDeviceSpecifier:]: : Failed to create OpenAL device (null)
2015-06-12 17:39:37.635 IOSLauncher[730:16518] OAL Error: +[ALWrapper closeDevice:]: Invalid Value (error code 0x0000a004)
2015-06-12 17:39:37.635 IOSLauncher[730:16618] 17:39:37.635 ERROR: 703: Can't make UISound Renderer
2015-06-12 17:39:37.636 IOSLauncher[730:16518] OAL Warning: -[OALAudioSession onAudioError:]: Received audio error notification, but last reset was 0.377221 seconds ago. Doing nothing.
2015-06-12 17:39:37.636 IOSLauncher[730:16518] OAL Error: -[OALSimpleAudio initWithSources:]: : Could not create OpenAL device
2015-06-12 17:39:37.656 IOSLauncher[730:16518] [error] IOSAudio: No OALSimpleAudio instance available, audio will not be availabe
2015-06-12 17:39:37.944 IOSLauncher[730:16518] [debug] IOSApplication: created
2015-06-12 17:39:39.155 IOSLauncher[730:16658] 17:39:39.155 ERROR: 296: error -66680
2015-06-12 17:39:39.156 IOSLauncher[730:16664] 17:39:39.156 ERROR: >aurioc> 806: failed: -10851 (enable 2, outf< 2 ch, 44100 Hz, Int16, inter> inf< 2 ch, 0 Hz, Int16, inter>)
2015-06-12 17:39:39.157 IOSLauncher[730:16664] OAL Error: +[ALWrapper openDevice:]: Could not open device (null)
2015-06-12 17:39:39.157 IOSLauncher[730:16664] OAL Error: -[ALDevice initWithDeviceSpecifier:]: : Failed to create OpenAL device (null)
2015-06-12 17:39:39.157 IOSLauncher[730:16664] OAL Error: +[ALWrapper closeDevice:]: Invalid Value (error code 0x0000a004)
2015-06-12 17:39:39.157 IOSLauncher[730:16664] OAL Warning: -[OALAudioSession onAudioError:]: Received audio error notification. Resetting audio session.
BUILD SUCCESSFUL
And here's my IOSLauncher, if it's of any help:
public class IOSLauncher extends IOSApplication.Delegate {
    @Override
    protected IOSApplication createApplication() {
        IOSApplicationConfiguration config = new IOSApplicationConfiguration();
        return new IOSApplication(new MainClass(null, null), config);
    }

    public static void main(String[] argv) {
        NSAutoreleasePool pool = new NSAutoreleasePool();
        UIApplication.main(argv, null, IOSLauncher.class);
        pool.close();
    }
}
Your simulator's audio output is borked. The OpenAL audio toolkit is failing to initialize, which is causing everything else to crash.