I'm building a voice-only iOS (Swift) app, and TokBox is my VoIP provider.
My app is simple: user1 talks to user2. However, I would like to get access to the audio stream in real time. I'm OK with either option:
1. The audio stream goes to my piece of code first, then I stream it back to TokBox.
2. The audio stream is forked to TokBox and to my code in parallel.
The only way I have been able to get my hands on the audio stream is by using their archiving capabilities, but that is too late (it's only available after the session ends).
Any ideas? Or maybe other providers that give me that option?
Option 1 can be done using the external/custom audio driver. Take a look at this example on how to use/implement it: https://github.com/opentok/opentok-ios-sdk-samples/tree/master/Custom-Audio-Driver
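To give a feel for the capture side, here is a minimal standalone sketch (plain AVFoundation, not the OpenTok API) of pulling live microphone buffers in Swift; the custom audio driver in the linked sample does essentially this inside the OpenTok pipeline. `MicTap` and `handleCapturedBuffer` are placeholder names for your own processing.

```swift
import AVFoundation

// Standalone illustration (not OpenTok-specific): deliver live microphone PCM
// buffers to your own code in real time.
final class MicTap {
    private let engine = AVAudioEngine()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Receive ~1024-frame buffers as they are captured.
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, time in
            self?.handleCapturedBuffer(buffer, at: time)
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }

    private func handleCapturedBuffer(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime) {
        // Placeholder: inspect or forward buffer.floatChannelData here.
    }
}
```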
Related
I have to be able to record an incoming video call to a file. The recording must be done in the desktop application, built with Electron. I'm using OpenVidu as the streaming platform. Is there any way to do that?
@Vasniktel Technically it could be possible to record the video client-side, as there are a number of WebRTC examples that record locally on the client, but this is not native to OpenVidu. However, recording in Electron is...
github.com/hokein/electron-screen-recorder
tutorialspoint.com/electron/… You could integrate recording separately alongside your OpenVidu app.
The main difference here is that you want to record an incoming call. While you likely won't be able to simply write the incoming WebRTC data to disk, you should be able to record the area of the app (the canvas) where the video player is rendered. You will be re-encoding the decoded, rendered video stream, but it shouldn't be too big a performance hit.
I'm trying to receive a live RTP audio stream on my iPhone, but I don't know where to start. I've been looking for samples but can't find any.
I have a Windows desktop app which captures audio from the selected audio interface and streams it as µ-law or A-law. This app works as an audio server that serves the stream to any incoming connection. I've already developed an Android app that receives that stream and works, so I want to replicate this functionality on iOS. On Android there is the "android.net.rtp" package to manage this and transmit or receive data streams over the network.
Is there any equivalent package for iOS to implement this? Could you give me any kind of reference or sample, or just tell me where to start?
You can look at the HTTPLiveStreaming library, but its protocol may not be a standard one. You can check my fork, aelam/HTTPLiveStreaming-1; I'm still working on it, but its output can already be played by ffplay. Give it a try.
Check the file rtp.c in FFmpeg; I think it will help out.
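If you end up receiving the RTP packets yourself (for example over a plain UDP socket), the fixed header is only 12 bytes and easy to parse by hand. A minimal sketch, assuming standard RFC 3550 packets without header extensions; the µ-law/A-law payload would still need to be decoded to PCM before playback.

```swift
import Foundation

// Parse the 12-byte fixed RTP header (RFC 3550) from a received UDP datagram.
// Header extensions and padding are not handled in this sketch.
struct RTPPacket {
    let version: UInt8
    let payloadType: UInt8
    let sequenceNumber: UInt16
    let timestamp: UInt32
    let ssrc: UInt32
    let payload: Data          // µ-law/A-law bytes, still to be decoded to PCM

    init?(datagram: Data) {
        guard datagram.count >= 12 else { return nil }
        let b = [UInt8](datagram)

        version        = b[0] >> 6
        let csrcCount  = Int(b[0] & 0x0F)
        payloadType    = b[1] & 0x7F
        sequenceNumber = UInt16(b[2]) << 8 | UInt16(b[3])
        timestamp      = UInt32(b[4]) << 24 | UInt32(b[5]) << 16 | UInt32(b[6]) << 8 | UInt32(b[7])
        ssrc           = UInt32(b[8]) << 24 | UInt32(b[9]) << 16 | UInt32(b[10]) << 8 | UInt32(b[11])

        let headerLength = 12 + csrcCount * 4   // fixed header plus CSRC list
        guard datagram.count >= headerLength else { return nil }
        payload = Data(datagram.dropFirst(headerLength))
    }
}
```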
I want to create an app which can play sound received via an RTP stream and capture sound from the microphone to send it out via an RTP stream. The problem is that I have no idea where to start. Can someone recommend a library for RTP streaming on iOS devices? I took a look at Live555, but I didn't understand how it works.
Thanks
I can send files, messages and even video, but no microphone streaming code seems to work as it should.
Files, messages and video have some kind of minimal structure that I can send and rebuild on the other side, but audio seems to have no simple structure to work with.
At this point I have already tried to use this project as a base and send the AudioBuffer with MCSession, but the code just crashes when I try to modify the buffer on the receiving side.
I also tried to modify this streaming program to use the microphone, but the file-oriented stream seems different from what is expected for real-time audio input.
Has anyone tried something like streaming real-time microphone data to another iOS device? Or do you have any advice or clue that I can work on?
Thank you in advance!
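One approach worth sketching is to open a byte stream to the peer with MCSession's startStream(withName:toPeer:) and write tapped microphone PCM into it. This is a rough sketch only: MicStreamer and the "mic" stream name are placeholders, both devices must agree on the sample format, and a real app would add framing (e.g. a length prefix) so the receiver can rebuild AVAudioPCMBuffers from the InputStream delivered via session(_:didReceive:withName:fromPeer:).

```swift
import AVFoundation
import MultipeerConnectivity

// Rough sketch: push tapped microphone PCM to a peer over an MCSession byte stream.
final class MicStreamer {
    private let engine = AVAudioEngine()
    private var outputStream: OutputStream?

    func start(session: MCSession, peer: MCPeerID) throws {
        let stream = try session.startStream(withName: "mic", toPeer: peer)
        stream.schedule(in: .main, forMode: .default)
        stream.open()
        outputStream = stream

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)   // the receiver must use the same format

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            self?.send(buffer)
        }
        try engine.start()
    }

    private func send(_ buffer: AVAudioPCMBuffer) {
        guard let stream = outputStream,
              let samples = buffer.floatChannelData?[0] else { return }
        let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
        // Write channel 0 as raw floats; add your own framing in a real app.
        samples.withMemoryRebound(to: UInt8.self, capacity: byteCount) { bytes in
            _ = stream.write(bytes, maxLength: byteCount)
        }
    }
}
```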
I'm in the process of building an app that uses the iPod Library Access API, so that I can play songs from iTunes in the app. However, I also need access to the audio data of whatever song is playing, like a music visualizer would.
So far, looking through the Audio Session documentation and the Core Audio documentation, I haven't found any officially supported means of achieving this. As I understand it, it would require accessing audio data from another audio session: with the iPod Library Access API, iTunes is technically playing in the background and therefore has a different audio session.
So basically, how can you access audio data from another audio session? Specifically, how do you get the audio data of songs played through the iPod Library Access API?
See the AudioTapProcessor sample app in Apple's iOS Developer Library (sign-on required) for the use of the AVFoundation MTAudioProcessingTap API.
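Note that the tap attaches to an AVPlayer item you create yourself, not to the Music app's own playback, so the usual route is to load the MPMediaItem's assetURL into an AVPlayer. Below is a condensed sketch of that approach, with no-op callbacks for the slots the sample doesn't need; treat the exact Swift signatures as something to verify against the MediaToolbox headers, and note that assetURL is nil for DRM-protected tracks.

```swift
import AVFoundation
import MediaPlayer
import MediaToolbox

// Sketch: tap the decoded PCM of a library song played through your own AVPlayer.
func makePlayer(for mediaItem: MPMediaItem) -> AVPlayer? {
    guard let url = mediaItem.assetURL else { return nil }   // nil for DRM-protected tracks
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .audio).first else { return nil }

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: { _, _, _ in },        // tap, clientInfo, tapStorageOut
        finalize: { _ in },
        prepare: { _, _, _ in },     // tap, maxFrames, processingFormat
        unprepare: { _ in },
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the decoded PCM for this chunk; bufferListInOut then holds the
            // samples (e.g. for a visualizer) before they are rendered.
            _ = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                   flagsOut, nil, numberFramesOut)
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                     kMTAudioProcessingTapCreationFlag_PostEffects,
                                     &tap) == noErr,
          let audioTap = tap else { return nil }

    // Attach the tap to the audio track via an audio mix.
    let inputParams = AVMutableAudioMixInputParameters(track: track)
    inputParams.audioTapProcessor = audioTap.takeRetainedValue()

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [inputParams]

    let item = AVPlayerItem(asset: asset)
    item.audioMix = audioMix
    return AVPlayer(playerItem: item)
}
```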