Sending Serial Data via AudioQueue in conjunction with Audio via Audio Out - iOS

Looking to send 9600 baud symbols generated from an AudioQueue, synchronized with audio, both output via the audio out port. If the serial data is carried at 19.2 kHz, is that effectively out of hearing range? I'm trying to keep the audio output clean, with no audible distortion from the serial data.
Thanks for any input.
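
A minimal sketch of one approach, assuming an on-off-keyed 19.2 kHz carrier mixed into the AudioQueue output buffer at a low level; nextAudioSample() and nextSerialBit() are placeholders for your own audio and serial-bit sources, not real APIs:

    import AudioToolbox
    import Foundation

    let sampleRate = 44_100.0
    let carrierHz = 19_200.0
    var phase = 0.0

    func nextAudioSample() -> Double { 0.0 }   // placeholder: program audio in -1...1
    func nextSerialBit() -> Bool { true }      // placeholder: current 9600-baud symbol

    // Fill an AudioQueue output buffer with program audio plus the keyed
    // carrier. 16-bit mono samples assumed.
    func fill(_ buffer: AudioQueueBufferRef) {
        let samples = buffer.pointee.mAudioData.assumingMemoryBound(to: Int16.self)
        let count = Int(buffer.pointee.mAudioDataBytesCapacity) / MemoryLayout<Int16>.size
        let step = 2.0 * Double.pi * carrierHz / sampleRate
        for i in 0..<count {
            // Keep the carrier low so the mix never clips into audible distortion.
            let carrier = nextSerialBit() ? 0.2 * sin(phase) : 0.0
            phase = (phase + step).truncatingRemainder(dividingBy: 2.0 * Double.pi)
            let mixed = max(-1.0, min(1.0, nextAudioSample() + carrier))
            samples[i] = Int16(mixed * Double(Int16.max))
        }
        buffer.pointee.mAudioDataByteSize = UInt32(count * MemoryLayout<Int16>.size)
    }

On the hearing question: at a 44.1 kHz sample rate, 19.2 kHz sits below the 22.05 kHz Nyquist limit and above the roughly 16-18 kHz upper limit of most adult hearing, but hard on/off keying spreads energy into audible frequencies as clicks; ramping the carrier amplitude at each symbol edge keeps that splatter down.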

Related

Use external audio stream as microphone input for Twilio Voice call

I want to use an incoming audio stream (microphone from an external device) as the microphone input for an outbound Twilio Voice call.
The external device serves as a softphone and does not currently support WebRTC. Instead, it currently sets up 2 separate connections to a server: 1 for outgoing audio (microphone) and 1 for incoming audio. Both connections (streams) are set up using gstreamer (gst-launch).
The server sets up a voice call and should somehow use the incoming audio stream as the microphone input for this call. I have already found that the Stream instruction can send the call's audio back to the external device.
Can anyone point me in the right direction, maybe suggest some SDK functionality?
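
One direction worth checking is Twilio Media Streams: starting the stream with a TwiML <Connect><Stream> makes the WebSocket bidirectional, and audio is pushed into the call as JSON "media" messages carrying base64-encoded 8 kHz mu-law. A hedged sketch of the sending side (Swift only to keep this page to one language; streamSid arrives in Twilio's "start" message, and transcoding the gstreamer stream to mu-law is assumed to happen upstream):

    import Foundation

    // Push one chunk of external-device audio into a Twilio call over a
    // bidirectional Media Stream. Twilio expects raw 8 kHz mu-law, base64-encoded.
    func sendAudioChunk(_ muLaw: Data, streamSid: String,
                        over socket: URLSessionWebSocketTask) throws {
        let message: [String: Any] = [
            "event": "media",
            "streamSid": streamSid,                       // from the "start" event
            "media": ["payload": muLaw.base64EncodedString()]
        ]
        let json = try JSONSerialization.data(withJSONObject: message)
        socket.send(.string(String(decoding: json, as: UTF8.self))) { error in
            if let error { print("media send failed: \(error)") }
        }
    }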

Real time Microphone memory stream capture

Is there any way to send a microphone audio stream to the service side in real time?
I am using a WCF service in the middle layer, where I convert audio to text using System.Speech. It works fine when I send a WAV file as a memory stream, but how is this possible in a live scenario using the microphone?
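
The usual fix is to stream small buffers as they are captured instead of posting one finished WAV. A sketch of the capture side, shown in Swift to match the rest of this page rather than the asker's C#; upload(_:) stands in for whatever transport feeds the WCF service:

    import AVFoundation

    // Tap the microphone and hand off each small PCM chunk as it arrives,
    // rather than waiting for a complete recording.
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    func upload(_ chunk: Data) { /* placeholder: send to the speech service */ }

    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        let channel = buffer.floatChannelData![0]
        let chunk = Data(bytes: channel,
                         count: Int(buffer.frameLength) * MemoryLayout<Float>.size)
        upload(chunk)
    }
    do { try engine.start() } catch { print("engine failed: \(error)") }

On the WCF side, SpeechRecognitionEngine.SetInputToAudioStream can read from a stream you keep appending received chunks to, though it is reportedly finicky with unbounded live streams, so the format and buffering need testing.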

iOS - Audio Output Endpoint Routing

Is there a way to create a virtual audio output device that would show up in the Music app's or Spotify's output options? Alternatively, is there a way to intercept the audio stream and then force audio output to something unused (say, an open headphone port)?
What I want to do is take the raw audio stream, encode/compress it via a codec, and then send it over BLE (not BT Classic). Ideally my "device", or service, would show up in the output options of the music/Spotify apps.
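
As far as I know, iOS gives third-party apps no way to register a virtual output device or to capture another app's (Music/Spotify) stream; only audio your own app produces is reachable. For that narrower case, a hedged sketch of the encode step using AVAudioConverter (the BLE transport is out of scope, and the packet sizing here is illustrative):

    import AVFoundation

    // Compress one PCM buffer this app produced into AAC packets small enough
    // to ship over BLE. Returns nil if the converter can't be built.
    func compressToAAC(_ pcm: AVAudioPCMBuffer) -> AVAudioCompressedBuffer? {
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: pcm.format.sampleRate,
            AVNumberOfChannelsKey: pcm.format.channelCount
        ]
        guard let aac = AVAudioFormat(settings: settings),
              let converter = AVAudioConverter(from: pcm.format, to: aac)
        else { return nil }

        let out = AVAudioCompressedBuffer(format: aac, packetCapacity: 8,
                                          maximumPacketSize: converter.maximumOutputPacketSize)
        var fed = false
        var error: NSError?
        let status = converter.convert(to: out, error: &error) { _, inputStatus in
            // Feed the single input buffer once, then report no more data for now.
            if fed { inputStatus.pointee = .noDataNow; return nil }
            fed = true
            inputStatus.pointee = .haveData
            return pcm
        }
        return (status == .error || error != nil) ? nil : out
    }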

difference between AudioQueue time and AudioQueue Device time

I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read using AVAssetReader and sent in packets to the client, which in turn feeds it to a ring buffer, which in turn populates the AudioQueue buffers and starts playing.
I was going over the AudioQueue docs, and there seem to be two different concepts of a timestamp related to the AudioQueue: Audio Queue Time and Audio Queue Device Time. I'm not sure how the two are related and when one should be used rather than (or in conjunction with) the other.
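
As an illustration, the two clocks can be read side by side: AudioQueueGetCurrentTime reports time on the queue's own timeline (it starts at zero when the queue starts and stops advancing when the queue pauses), while AudioQueueDeviceGetCurrentTime reports the hardware clock shared by everything playing on that device. A minimal sketch for an already-running queue:

    import AudioToolbox

    // Read both AudioQueue clocks for a running queue.
    func logClocks(of queue: AudioQueueRef) {
        var queueTime = AudioTimeStamp()
        var deviceTime = AudioTimeStamp()

        // Queue-relative time (pass an AudioQueueTimelineRef instead of nil to
        // also detect discontinuities such as overloads or route changes).
        AudioQueueGetCurrentTime(queue, nil, &queueTime, nil)
        // Hardware time at (nearly) the same instant.
        AudioQueueDeviceGetCurrentTime(queue, &deviceTime)

        print("queue mSampleTime:  \(queueTime.mSampleTime)")
        print("device mSampleTime: \(deviceTime.mSampleTime)")
    }

AudioQueueDeviceTranslateTime converts timestamps between the two timelines, which is the piece you'd likely lean on when turning an agreed start moment into a queue-timeline start time.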

iOS: Route audio-IN thru jack, audio-OUT thru inbuilt speaker

My project involves a magnetic card reader device that plugs into the headphone socket (i.e. it only uses the microphone input).
Can I get my project to output sound through the inbuilt speaker while simultaneously listening for input from the device?
Research suggests this is not possible:
iPhone audio playback: force through internal speaker?
Force iPhone to output through the speaker, while recording from headphone mic
Audio Session Services: kAudioSessionProperty_OverrideAudioRoute with different routes for input & output
The only workaround I can see is reconfiguring the audio session every time I wish to emit a sound.
Is this really the only option? And is it practical to do this? How long would it take for the audio session to reconfigure itself?
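
For what it's worth, a hedged sketch of the modern AVAudioSession equivalent of the deprecated kAudioSessionProperty_OverrideAudioRoute approach referenced above; whether the jack mic stays selected as input while the speaker override is active depends on the accessory and iOS version, so it needs verifying on hardware:

    import AVFoundation

    // Record and play simultaneously, forcing playback to the built-in
    // speaker rather than reconfiguring the whole session per sound.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
        try session.setActive(true)
        // Force the output route to the speaker on demand. Whether the headset
        // mic remains the input afterwards is device-dependent; test it.
        try session.overrideOutputAudioPort(.speaker)
    } catch {
        print("audio session setup failed: \(error)")
    }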
