I want to create a voice chat software using FMOD.
Now I can receive data from microphone and play it immediately.
It's also easy to send sound data to another computer on the network.
But I don't know how to play the sent data on the other computer with FMOD.
Can anyone help me?
When you receive the incoming sound data on the destination machine, you need to create a streaming buffer to play the audio. The simplest approach is to look at the usercreatedsound example that ships with FMOD. It shows how to create a custom stream buffer and use the pcmreadcallback to populate the sound with data as needed.
I want to use an incoming audio stream (microphone from an external device) as the microphone input for an outbound Twilio Voice call.
The external device serves as a softphone and does not currently support WebRTC. Instead, it sets up two separate connections to a server: one for outgoing audio (microphone) and one for incoming audio. Both connections (streams) are set up using gstreamer (gst-launch).
The server sets up a voice call and should somehow use the incoming audio stream as the microphone input for this call. I have already found that the Stream instruction can send the call's audio back to the external device.
Can anyone point me in the right direction, maybe suggest some SDK functionality?
I am creating a voice only (no video) chat application. I have created my own node.js/socket.io based server for signaling.
For WebRTC, I am using the following pod: https://cocoapods.org/pods/WebRTC
I have been successful in creating the peer connection, adding the local stream, setting the local/remote SDP, and sending/receiving ICE candidates. The "didAddStream" delegate method is also called successfully with audio tracks present, but I am stuck here. I don't know what I should do with the audio track. What should be the next step? How would I send/receive audio on both sides?
Also, if I integrate CallKit, what changes do I need to make?
I got stuck on this one too. You have to retain the RTCMediaStream object in order for the audio to play. You don't need to do anything with the RTCAudioTrack; it will play automatically. I simply assign it to a property so it gets retained. See my example here: https://github.com/redfearnk/WebRTCVideoChat/blob/master/WebRTCVideoChat/WebRTCClient.swift#L143
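If it helps, here is a minimal standalone sketch of the same idea, assuming the current Swift bridging of the pod's RTCPeerConnectionDelegate. The class and property names are my own, and the delegate methods that aren't relevant here are just stubbed out:

```swift
import Foundation
import WebRTC

final class AudioCallClient: NSObject, RTCPeerConnectionDelegate {

    // Strong reference: keeping the stream alive keeps its RTCAudioTrack
    // alive, which is all that is needed for the remote audio to play.
    private var remoteStream: RTCMediaStream?

    func peerConnection(_ peerConnection: RTCPeerConnection, didAdd stream: RTCMediaStream) {
        // No explicit "play" call; retaining the stream is enough.
        remoteStream = stream
    }

    func peerConnection(_ peerConnection: RTCPeerConnection, didRemove stream: RTCMediaStream) {
        remoteStream = nil
    }

    // Remaining required delegate methods, stubbed out for brevity.
    func peerConnection(_ peerConnection: RTCPeerConnection, didChange stateChanged: RTCSignalingState) {}
    func peerConnectionShouldNegotiate(_ peerConnection: RTCPeerConnection) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didChange newState: RTCIceConnectionState) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didChange newState: RTCIceGatheringState) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didGenerate candidate: RTCIceCandidate) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didRemove candidates: [RTCIceCandidate]) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didOpen dataChannel: RTCDataChannel) {}
}
```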
I have a CoreAudio based player that streams remote mp3s.
It uses NSURLConnection to retrieve the mp3 data -> uses AudioConverter to convert the stream into PCM -> and feeds the stream into an AUGraph to play audio.
The player works completely fine in my demo app (it only contains a play button), but when coupled with a project that already makes networking calls and updates the UI, the player fails to play audio past a few seconds.
Am I possibly experiencing a threading issue? What are some preventative approaches I can take, or things I can look into, to prevent this from happening?
You do not mention anything in your software architecture about buffering the data between receiving it via NSURLConnection and sending it to your player.
Data will arrive in chunks with inconsistent arrival rates.
Please see these answers I posted regarding buffering and network jitter: "Network jitter" and "Network jitter and buffering queue".
In a nutshell, you can't receive data and immediately send it to your player, because the next chunk may not arrive in time.
You also don't mention the rate at which the mp3 file is delivered. If it arrives very quickly over a fast connection, are you buffering all of the data received, or is it getting lost somewhere in your app? There is a chance that you are receiving too much data too fast and not properly buffering it.
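To illustrate what I mean by buffering, here is a rough sketch of an intermediate FIFO between the networking callback (producer) and the audio side (consumer). The names are illustrative, and for a real AUGraph render callback you would want a lock-free ring buffer rather than a lock, but the idea is the same:

```swift
import Foundation

/// Sketch of an intermediate buffer between NSURLConnection and the player.
final class StreamingBuffer {
    private var pending = Data()
    private let lock = NSLock()

    /// Called from the networking delegate whenever a chunk arrives.
    func append(_ chunk: Data) {
        lock.lock(); defer { lock.unlock() }
        pending.append(chunk)
    }

    /// Called from the audio side; returns up to `maxBytes`, or nil if the
    /// buffer has run dry (i.e. the network has not kept up).
    func read(maxBytes: Int) -> Data? {
        lock.lock(); defer { lock.unlock() }
        guard !pending.isEmpty else { return nil }
        let chunk = pending.prefix(maxBytes)
        pending.removeFirst(chunk.count)
        return Data(chunk)
    }

    /// How much data is queued; useful for deciding when enough has been
    /// buffered to start (or resume) playback.
    var bytesBuffered: Int {
        lock.lock(); defer { lock.unlock() }
        return pending.count
    }
}
```

The point is that the network writes into this at whatever rate the data happens to arrive, while the player only ever reads from it at the audio rate. If bytesBuffered keeps growing you are at least not losing data; if read returns nil, you know the connection is not keeping up.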
In my app I am streaming audio, and there is a period of 5-10 seconds (depending on the connection) where the buffer is loading; after this, my app starts to play the audio. When playback starts, this symbol comes up on the screen.
Here is an image of what I'm talking about: http://img27.imageshack.us/img27/3667/img0596.png
I want to change a label in my app when this symbol comes up on the screen, but I don't know which function lets me detect this.
The symbol is the "Play" button common to music devices. There is most likely an NSNotificationCenter message that can be listened for. Alternatively, depending on how you are buffering your sounds, there is probably a delegate that can notify a selector once playback has begun. Without more details I cannot give more specific advice. If I were in your position, I would take a very hard look at the API you are using; most likely several methods exist that either post notifications or send delegate messages reporting the state of the stream as well as playback. I have worked with streaming audio APIs where I was able to get the status of the buffer as well as many other messages from the stream object(s). These are just part of good design, so most likely they are there.
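Without knowing which API you are using, I can only sketch one possibility: if the stream happens to be driven by an AVPlayer, you can key-value observe its timeControlStatus and update your label the moment playback actually starts. The URL and the outlet here are placeholders:

```swift
import AVFoundation
import UIKit

final class StreamViewController: UIViewController {
    @IBOutlet private var statusLabel: UILabel!

    // Placeholder URL; use your actual stream address.
    private let player = AVPlayer(url: URL(string: "http://example.com/stream.mp3")!)
    private var statusObservation: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()

        // timeControlStatus flips to .playing once buffering is done and audio
        // is actually being rendered -- the same moment the symbol appears.
        statusObservation = player.observe(\.timeControlStatus, options: [.new]) { [weak self] player, _ in
            DispatchQueue.main.async {
                self?.statusLabel.text = (player.timeControlStatus == .playing) ? "Playing" : "Buffering..."
            }
        }
        player.play()
    }
}
```

If you are using a different player (an Audio Queue, an AUGraph, a third-party streamer), look for the equivalent state property, delegate callback, or notification; the pattern is the same.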
We're looking to send some serial data out from the headphone jack, but would like to still be able to play audio from the speakers. Is it possible to send output to both? If so, is it possible to send different audio to each?
Not as far as I'm aware. You can get programmatic notification of when the routing has changed (i.e. when someone connects a headphone cable), but you are unable to specify which device(s) to use for output.
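For completeness, the notification I mean is AVAudioSession's route change notification. A Swift sketch is below; note that it only tells you the route changed, it still does not let you address the headphone jack and the built-in speaker separately or send different audio to each:

```swift
import AVFoundation

/// Sketch: watch for audio route changes, e.g. something being plugged into
/// (or pulled out of) the headphone jack.
final class RouteChangeObserver: NSObject {
    override init() {
        super.init()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(routeChanged(_:)),
            name: AVAudioSession.routeChangeNotification,
            object: AVAudioSession.sharedInstance()
        )
    }

    @objc private func routeChanged(_ notification: Notification) {
        guard
            let info = notification.userInfo,
            let reasonValue = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
            let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue)
        else { return }

        switch reason {
        case .newDeviceAvailable:
            print("Something was plugged in (e.g. the headphone jack)")
        case .oldDeviceUnavailable:
            print("The previous output (e.g. headphones) went away")
        default:
            print("Audio route changed for another reason")
        }
    }

    deinit {
        NotificationCenter.default.removeObserver(self)
    }
}
```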