Is there any way to send a microphone audio stream to the service side in real time?
I am using a WCF service at the middle layer, where I am converting audio to text using System.Speech. It works fine if I send a WAV file as a memory stream, but how is this possible in a live scenario using the microphone?
I want to use an incoming audio stream (microphone from an external device) as the microphone input for an outbound Twilio Voice call.
The external device serves as a softphone and does not currently support WebRTC. Instead, it currently sets up two separate connections to a server: one for outgoing audio (microphone) and one for incoming audio. Both connections (streams) are set up using GStreamer (gst-launch).
The server sets up a voice call and should somehow use the incoming audio stream as the microphone input for this call. I have already found that the Stream instruction is able to send the call's audio back to the external device.
Can anyone point me in the right direction, maybe suggest some SDK functionality?
I have a device with a camera and I want to connect to it from my iPhone via Bluetooth. So the question is: is it possible to send a real-time video stream over Bluetooth using Swift/Objective-C?
It is not possible. Bluetooth's transmission speed is not high enough to stream video in real time to another device. If it were audio, it would potentially be a different story.
You can use Bluetooth to transfer a video file to another device, but not to stream it, as far as I know.
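If the camera device happened to be another Apple device, one option for the transfer-rather-than-stream route would be MultipeerConnectivity, which negotiates Bluetooth and/or peer-to-peer Wi-Fi on its own. A minimal sketch, assuming an `MCSession` and `MCPeerID` already established elsewhere (the function name and handshake are assumptions for illustration):

```swift
import MultipeerConnectivity

// Hedged sketch: ship a recorded video file to an already-connected peer.
// `session` and `peerID` are assumed to come from an MCNearbyServiceAdvertiser /
// MCNearbyServiceBrowser handshake performed elsewhere; the transport
// (Bluetooth and/or peer-to-peer Wi-Fi) is chosen by the framework.
func sendRecordedVideo(at fileURL: URL, over session: MCSession, to peerID: MCPeerID) {
    let progress = session.sendResource(at: fileURL,
                                        withName: fileURL.lastPathComponent,
                                        toPeer: peerID) { error in
        if let error = error {
            print("Transfer failed: \(error)")
        } else {
            print("Transfer complete")
        }
    }
    // `progress` could drive a UI progress bar while the file is transferred.
    _ = progress
}
```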
Is there a way to create a virtual audio output device that would show up in the Music app's or Spotify's output options? Alternatively, is there a way to intercept the audio stream and then force audio output to something unused (say, an open headphone port)?
What I want to do is take the raw audio stream, encode/compress it via a codec, and then send it over BLE (not Bluetooth Classic). Ideally my "device", or service, would show up in the output options of the Music/Spotify apps.
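For the codec step specifically, here is a minimal sketch of compressing a tapped PCM buffer to AAC with AVAudioConverter before handing the bytes to whatever BLE transport is used. The choice of AAC, the packet capacity, and the idea of pushing the result through a CBPeripheralManager characteristic are all assumptions for illustration, not a complete pipeline:

```swift
import AVFoundation

// Hedged sketch: compress one PCM buffer (e.g. from an audio tap) into AAC packets.
// Sample rate and channel count are taken from the input; packet capacity is arbitrary.
func encodeToAAC(_ pcmBuffer: AVAudioPCMBuffer) -> AVAudioCompressedBuffer? {
    let inputFormat = pcmBuffer.format

    var aacDescription = AudioStreamBasicDescription(
        mSampleRate: inputFormat.sampleRate,
        mFormatID: kAudioFormatMPEG4AAC,
        mFormatFlags: 0, mBytesPerPacket: 0, mFramesPerPacket: 0,
        mBytesPerFrame: 0, mChannelsPerFrame: inputFormat.channelCount,
        mBitsPerChannel: 0, mReserved: 0)
    guard let aacFormat = AVAudioFormat(streamDescription: &aacDescription),
          let converter = AVAudioConverter(from: inputFormat, to: aacFormat) else {
        return nil
    }

    let outBuffer = AVAudioCompressedBuffer(format: aacFormat,
                                            packetCapacity: 8,
                                            maximumPacketSize: converter.maximumOutputPacketSize)

    var error: NSError?
    var consumed = false
    let status = converter.convert(to: outBuffer, error: &error) { _, inputStatus in
        // Hand the PCM buffer to the converter exactly once, then report "no data right now".
        if consumed {
            inputStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        inputStatus.pointee = .haveData
        return pcmBuffer
    }
    guard status != .error else { return nil }

    // Note: the AAC encoder may need several input buffers before it emits its first packets.
    // The encoded bytes (outBuffer.data, outBuffer.byteLength) could then be chunked and sent
    // over BLE, e.g. via CBPeripheralManager.updateValue(_:for:onSubscribedCentrals:).
    return outBuffer
}
```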
Is it possible to have a common implementation of a Core Audio based audio driver bridge for iOS and OS X? Or is there a difference between the Core Audio API on iOS and the Core Audio API on OS X?
The audio bridge only needs to support the following methods:
Set desired sample rate
Set desired audio block size (in samples)
Start/Stop microphone stream
Start/Stop speaker stream
The application supplies two callback function pointers to the audio bridge, and the audio bridge sets everything up so that:
The speaker callback is called at regular intervals and is asked to return an audio block
The microphone callback is called at regular intervals and receives an audio block
I was told that it's not possible to have a single implementation that works on both iOS and OS X, as there are differences between the iOS Core Audio API and the OS X Core Audio API.
Is this true?
There are no significant differences between the Core Audio API on OS X and on iOS. However, there are significant differences in obtaining the correct Audio Unit to use for the microphone and the speaker. There are only two I/O units on iOS (RemoteIO and a voice-processing one for VoIP), but more, and potentially many more, on a Mac, plus the user might change the selection. There are also differences in some of the Audio Unit parameters (buffer size, sample rates, etc.) allowed/supported by the hardware.
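For illustration, a minimal sketch of how that platform-specific part might be isolated. This is an assumption about how such a bridge could be structured, not the poster's code: only the I/O unit subtype is chosen conditionally, while the component lookup, input enabling, and callback installation are shared.

```swift
import AudioToolbox

// Hedged sketch: the I/O unit subtype is the main platform-specific choice.
#if os(iOS)
let ioSubType = kAudioUnitSubType_RemoteIO     // the single hardware I/O unit on iOS
#else
let ioSubType = kAudioUnitSubType_HALOutput    // wraps a selectable output device on OS X
#endif

func makeIOUnit() -> AudioUnit? {
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: ioSubType,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &description) else { return nil }

    var unit: AudioUnit?
    guard AudioComponentInstanceNew(component, &unit) == noErr, let ioUnit = unit else { return nil }

    // Enable input (microphone) on bus 1; output (speaker) is enabled by default on bus 0.
    var enable: UInt32 = 1
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1,
                         &enable, UInt32(MemoryLayout<UInt32>.size))

    // From here, shared code could set the stream format (kAudioUnitProperty_StreamFormat) for
    // the desired sample rate, install the speaker callback with kAudioUnitProperty_SetRenderCallback
    // on bus 0 and the microphone callback with kAudioOutputUnitProperty_SetInputCallback, then call
    // AudioOutputUnitStart. The preferred block size is set through platform-specific means
    // (AVAudioSession on iOS, device properties on OS X).
    return ioUnit
}
```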
I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read using AVAssetReader and sent in packets to the client, which in turn feeds it into a ring buffer, which in turn populates the AudioQueue buffers and starts playback.
I was going over the AudioQueue docs, and there seem to be two different concepts of a timestamp related to the audio queue: Audio Queue Time and Audio Queue Device Time. I'm not sure how the two are related, or when one should be used rather than (or in conjunction with) the other.
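As far as I understand the docs, Audio Queue Time is a sample-time position on the queue's own timeline (AudioQueueGetCurrentTime), while Audio Queue Device Time is the timeline of the underlying hardware device (AudioQueueDeviceGetCurrentTime), which is also what the inStartTime parameter of AudioQueueStart expects when scheduling playback. A minimal sketch of querying both, assuming `queue` is an already-started AudioQueueRef:

```swift
import AudioToolbox

// Hedged sketch: read both timelines from a queue that is already playing.
func logQueueAndDeviceTime(for queue: AudioQueueRef) {
    // Audio Queue Time: position on the queue's own timeline, in samples,
    // counted from when the queue was started (it only advances while running).
    var queueTime = AudioTimeStamp()
    var discontinuity: DarwinBoolean = false
    AudioQueueGetCurrentTime(queue, nil, &queueTime, &discontinuity)

    // Audio Queue Device Time: the current time of the hardware device the queue
    // plays through; this is the timeline used when scheduling a delayed start.
    var deviceTime = AudioTimeStamp()
    AudioQueueDeviceGetCurrentTime(queue, &deviceTime)

    print("queue sample time:  \(queueTime.mSampleTime)")
    print("device sample time: \(deviceTime.mSampleTime), host time: \(deviceTime.mHostTime)")
}
```

For host/client sync, translating an agreed-upon future host time into the device timeline with AudioQueueDeviceTranslateTime and passing the result to AudioQueueStart seems to be the intended pattern, though I have not verified this end to end.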