Is there a way to create a virtual audio output device that would show up in the Music app's or Spotify's output options? Alternatively, is there a way to intercept the audio stream and force the audio output to something unused (say, an open headphone port)?
What I want to do is take the raw audio stream, encode/compress it with a codec, and then send it over BLE (not BT Classic). Ideally my "device", or service, would show up in the output options of the Music/Spotify apps.
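As far as I know, iOS has no public API for registering a virtual output device, so nothing a third-party app creates will appear in the Music or Spotify output picker; that list is the system's own routes (speaker, wired, Bluetooth, AirPlay). What is possible is tapping audio your own app renders and pushing it over BLE yourself. A minimal sketch of that narrower case, assuming a Core Bluetooth characteristic set up elsewhere and leaving the codec step as a placeholder:

```swift
import AVFoundation
import CoreBluetooth

/// Sketch: tap audio that *our own* app renders through AVAudioEngine and hand
/// the buffers to a BLE peripheral manager. This does not see the system-wide
/// output of Music or Spotify. `audioCharacteristic` and the (omitted) codec
/// step are placeholders.
final class BLEAudioForwarder {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    var peripheralManager: CBPeripheralManager?
    var audioCharacteristic: CBMutableCharacteristic?   // assumed to be set up elsewhere

    func start(fileURL: URL) throws {
        let file = try AVAudioFile(forReading: fileURL)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

        // Tap the mixer output: only audio rendered by this engine passes here.
        let format = engine.mainMixerNode.outputFormat(forBus: 0)
        engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            self.send(buffer)
        }

        try engine.start()
        player.scheduleFile(file, at: nil)
        player.play()
    }

    private func send(_ buffer: AVAudioPCMBuffer) {
        // A real implementation would first run the buffer through an encoder
        // (e.g. AVAudioConverter to a compressed format) and chunk the result
        // to fit the BLE MTU; raw floats are sent here only to keep the sketch short.
        guard let channelData = buffer.floatChannelData,
              let characteristic = audioCharacteristic else { return }
        let data = Data(bytes: channelData[0],
                        count: Int(buffer.frameLength) * MemoryLayout<Float>.size)
        _ = peripheralManager?.updateValue(data, for: characteristic,
                                           onSubscribedCentrals: nil)
    }
}
```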
Related
I want to use an incoming audio stream (microphone from an external device) as the microphone input for an outbound Twilio Voice call.
The external device serves as a softphone and does not currently support WebRTC. Instead, it currently sets up two separate connections to a server: one for outgoing audio (microphone) and one for incoming audio. Both connections (streams) are set up using gstreamer (gst-launch).
The server sets up a voice call and should somehow use the incoming audio stream as the microphone input for this call. I have already found that the Stream instruction can send the call's audio back to the external device.
Can anyone point me in the right direction, maybe suggest some SDK functionality?
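One direction worth checking is Twilio's bidirectional Media Streams: with a <Connect><Stream> in the TwiML, Twilio opens a WebSocket to your server and, on a bidirectional stream, accepts "media" messages back, which it plays into the call; the payload has to be base64-encoded 8 kHz mono mu-law audio. A minimal sketch of assembling such a message, shown in Swift only for consistency with the rest of this page; the streamSid comes from the stream's "start" event, and the WebSocket plumbing plus the mu-law transcoding of the gstreamer feed are assumed:

```swift
import Foundation

/// Sketch of building a Twilio Media Streams "media" message that injects
/// audio into a live call over the bidirectional <Connect><Stream> WebSocket.
/// Assumptions: `streamSid` was taken from the "start" event Twilio sends when
/// the stream opens, and `mulawChunk` is 8 kHz mono mu-law audio (e.g.
/// transcoded from the gstreamer microphone stream).
func makeMediaMessage(streamSid: String, mulawChunk: Data) throws -> Data {
    let message: [String: Any] = [
        "event": "media",
        "streamSid": streamSid,
        "media": [
            "payload": mulawChunk.base64EncodedString()
        ]
    ]
    return try JSONSerialization.data(withJSONObject: message)
}
```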
Is there any way to send a microphone audio stream to the service side in real time?
I am using a WCF service at the middle layer, where I convert audio to text using System.Speech. It works fine if I send a WAV file as a memory stream, but how can this be done in a live scenario using the microphone?
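The service side here is .NET, but the general pattern on the capture side is the same on any platform: instead of recording a complete WAV into a memory stream, tap the microphone in small buffers and push each chunk to the service as it arrives, so recognition can run on a rolling stream. A rough sketch of the capture-and-chunk side, shown in Swift with AVAudioEngine for consistency with the rest of this page; `uploadChunk` stands in for whatever streaming or chunked-upload contract your WCF service exposes:

```swift
import AVFoundation

/// Sketch: capture the microphone in small buffers and forward each chunk
/// as it arrives, instead of recording a whole WAV file first.
/// `uploadChunk` is a placeholder for whatever transport the service exposes.
final class LiveMicStreamer {
    private let engine = AVAudioEngine()

    func start(uploadChunk: @escaping (Data) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 2048, format: format) { buffer, _ in
            guard let channelData = buffer.floatChannelData else { return }
            // Each tap callback delivers roughly bufferSize frames; send them
            // immediately so the service can process a rolling stream.
            let data = Data(bytes: channelData[0],
                            count: Int(buffer.frameLength) * MemoryLayout<Float>.size)
            uploadChunk(data)
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```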
We need to record the device's audio output to wired or Bluetooth headset speakers. For example, if the user listens to a song while the Music app runs in the background, we want to record that song. Is it possible to implement output recording with AudioUnit? We use the PlayAndRecord category for a shared audio session with the AVAudioSessionCategoryOptionMixWithOthers option.
No, in general you cannot listen to the output of another app. There is Inter-App Audio and audio unit technology that enables this, but the connection must be arranged by both apps. It is not possible to just eavesdrop on whatever the user plays on the device.
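For the same reason, the session setup mentioned in the question only keeps the other app's audio playing while you record; what ends up in your recording is the microphone input, not the Music app's output stream. A minimal sketch of that configuration, in case it helps to see what it does and does not buy you:

```swift
import AVFoundation

func configureSharedSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .mixWithOthers keeps background audio (e.g. the Music app) playing while
    // this app records, but the recorded input is still the microphone, not
    // the other app's output.
    try session.setCategory(.playAndRecord, options: [.mixWithOthers])
    try session.setActive(true)
}
```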
We're looking to send some serial data out from the headphone jack, but would like to still be able to play audio from the speakers. Is it possible to send output to both? If so, is it possible to send different audio to each?
Not as far as I'm aware. You can get programmatic notification of when the routing has changed (i.e. when someone connects a headphone cable), but you cannot specify which device(s) to use for output.
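For reference, a sketch of that route-change notification using AVAudioSession: it tells you that the headphone route appeared or disappeared, but there is no per-stream routing, so you cannot send serial data to the jack and different audio to the speaker at the same time.

```swift
import AVFoundation

/// Keep a reference to the returned token for as long as you want to observe.
func observeRouteChanges() -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { notification in
        guard
            let info = notification.userInfo,
            let reasonValue = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
            let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue)
        else { return }

        switch reason {
        case .newDeviceAvailable:
            print("New route available (e.g. headphone cable connected)")
        case .oldDeviceUnavailable:
            print("Previous route gone (e.g. headphone cable removed)")
        default:
            print("Audio route changed for another reason")
        }
    }
}
```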
My project involves a magnetic card reader device that plugs into the headphone socket (i.e. it only uses the microphone).
Can I get my project to output sound through the built-in speaker while simultaneously listening for input from the device?
Research suggests this is not possible:
iPhone audio playback: force through internal speaker?
Force iPhone to output through the speaker, while recording from headphone mic
Audio Session Services: kAudioSessionProperty_OverrideAudioRoute with different routes for input & output
The only way around this that I can see is to change the audio session every time I wish to emit a sound.
Is this really the only option? And is it practical to do this? How long would it take for the audio session to reconfigure itself?
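For what it's worth, here is what that per-sound reconfiguration might look like with the modern AVAudioSession equivalent of kAudioSessionProperty_OverrideAudioRoute. Whether the headset (card reader) input keeps flowing while the speaker override is active, and how long each switch takes, are exactly the open questions, so treat this as something to measure rather than a confirmed solution:

```swift
import AVFoundation

/// Sketch of the "reconfigure each time I emit a sound" workaround.
enum SpeakerOverride {
    static func begin() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord)
        try session.setActive(true)
        // Force output to the built-in speaker while the headset stays plugged in.
        try session.overrideOutputAudioPort(.speaker)
    }

    static func end() throws {
        // Drop the override once the sound has finished playing, letting the
        // route fall back to the headset; the latency of this switch is the
        // practical cost the question asks about.
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(.none)
    }
}
```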