I want to use WebRTC's media layer with proprietary signaling on iOS. Is it possible to use only the WebRTC media layer from the Objective-C library that has been released for iOS (libjingle_peerconnection_objc.a)?
Yes.
The peer connection object provides the full WebRTC API, which by default does not include hardware capture, media rendering, or signaling. For a complete solution you will need to supply those three pieces yourself.
The AppRTCDemo code (webrtc.org) provides an implementation of audio and video capturers and renderers built on native iOS frameworks that you can reuse out of the box.
You could then replace its signaling (a GAE Channel) with your own. Use your signaling for the initial handshake (offer/answer) and for the media/data path setup (ICE candidate exchange), and the WebRTC part will be taken care of, as sketched below.
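To make the flow concrete, here is a minimal sketch of the offer side using the Swift wrapper around the ObjC API. Exact class and method names vary between WebRTC releases, and the STUN server, myDelegate (your RTCPeerConnectionDelegate), and sendViaMySignaling are placeholders for your own infrastructure.

    import WebRTC

    // Rough sketch only: build a peer connection and push SDP/ICE through your own signaling.
    let factory = RTCPeerConnectionFactory()
    let config = RTCConfiguration()
    config.iceServers = [RTCIceServer(urlStrings: ["stun:stun.example.com:3478"])] // placeholder server
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
    let pc = factory.peerConnection(with: config, constraints: constraints, delegate: myDelegate)

    pc.offer(for: constraints) { sdp, _ in
        guard let sdp = sdp else { return }
        pc.setLocalDescription(sdp) { _ in
            // Ship the offer SDP to the remote peer over your proprietary signaling channel.
            sendViaMySignaling(["type": "offer", "sdp": sdp.sdp])
        }
    }

    // In your RTCPeerConnectionDelegate, forward locally gathered ICE candidates the same way:
    // func peerConnection(_ pc: RTCPeerConnection, didGenerate candidate: RTCIceCandidate) {
    //     sendViaMySignaling(["type": "candidate", "candidate": candidate.sdp,
    //                         "sdpMLineIndex": candidate.sdpMLineIndex])
    // }

The remote side does the mirror image: set the received offer as the remote description, create an answer, and send it back over the same channel, exchanging ICE candidates as they arrive.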
If you want to replace only the media part in your proprietary solution, you can use WebRTC's VoiceEngine.
It is part of WebRTC's core, and the PeerConnection API is built on top of it. Be aware that what you get is the RTP sender/receiver plus voice processing; the security layer, NAT traversal, etc. have to be implemented by you.
I'm currently experiencing an intermittent issue with some VoIP WebRTC voice calls.
The symptom is that the outbound audio can sometimes fade in and out and sound extremely muffled, or even disappear momentarily. The two audio files referenced here show snippets from a good call and then a bad call, made very close together. The audio is captured server-side.
Good quality call - https://s3-eu-west-1.amazonaws.com/audio-samples-mlcl/Good.mp3
Poor quality call - https://s3-eu-west-1.amazonaws.com/audio-samples-mlcl/Poor.mp3
The tech stack consists of:
Electron application running on Mac/Windows
Electron wraps Chromium v66
WebRTC used within Chromium
Opus codec used from client to server
Wired network connection (stats show no packet loss; jitter, RTT, and delay are all very low)
SRTP used for media between client and TURN server (Coturn)
Coturn
Janus WebRTC Gateway
Freeswitch
Users are on high-quality headsets, and headsets from various manufacturers have been tested, connected to the Mac/Windows machines over USB.
Any ideas/help would be greatly appreciated.
This might be a result of automatic gain control. Try disabling it by passing autoGainControl: false in the audio constraints you give to getUserMedia. In Chrome/Electron the legacy googAutoGainControl and googAutoGainControl2 constraints might still work.
I need a multi-window app to share media streams. Is there any way to do that? In nw.js I can create a proof of concept where a MediaStream created in one window can be played in the other, but it appears I cannot do this in Electron. Am I correct?
I know for certain that it's possible with WebRTC to stream audio/video from a MediaStream to another window process. Been there, done that, based on the electron-peer-connection library (it makes the process quite easy, actually).
Unfortunately, there are a lot of limitations to consider if you take this approach (WebRTC will compress your audio with lossy compression, you'll have a big latency, an Electron bug currently causes the audio to become mono, things like that).
So this is fine for things like voice, but not for e.g. high-end native-quality audio processing.
Additionally, if your app is not a monster beast with insane performance requirements, you can also use the Web Audio API and a ScriptProcessorNode (AudioWorklet is still not available in Electron) to access audio sample data from the MediaStream directly, and send that over with standard Electron IPC.
You can then rebuild the MediaStream in the other window process using the Web Audio API and a MediaStreamAudioDestinationNode.
You should be able to communicate between windows using the ipc module by emitting events through the main process and adding listeners for them in the windows.
I am trying to implement adaptive bitrate with AVPlayer, but I don't know how to switch between the low and high streams. I am a bit confused and have a few questions:
Is it solely the server's responsibility to implement HLS, or does the client also have to do something about it, or does the client handle it automatically?
I am getting the following URLs from the server. Can someone tell me how to switch between them based on network speed, and what other steps are involved?
{
    "VideoStreamUrl": "http://50.7.149.74:1935/pitvlive/aplus3.stream/playlist.m3u8?",
    "VideoStreamUrlLow": "http://50.7.149.74:1935/pitvlive/aplus3_240p.stream/playlist.m3u8?",
    "VideoStreamUrlHD": null
}
AVPlayer supports HLS natively, so you shouldn't need to do anything extra to support this.
The framework will automatically switch between the low and high streams according to the currently available bandwidth, so you don't actually need to pick a stream yourself.
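For reference, a minimal sketch of handing the playlist URL from the question to AVPlayer; the preferredPeakBitRate line is optional and only shown in case you ever want to cap which variant gets selected:

    import AVFoundation

    // Hand the playlist URL (taken from the question) straight to AVPlayer.
    let url = URL(string: "http://50.7.149.74:1935/pitvlive/aplus3.stream/playlist.m3u8?")!
    let item = AVPlayerItem(url: url)
    // Optional: cap the bitrate AVPlayer is allowed to select (0 = no limit, the default).
    item.preferredPeakBitRate = 0
    let player = AVPlayer(playerItem: item)
    player.play()

One caveat: the automatic switching assumes the .m3u8 you pass in is a master playlist that lists the variant streams. If the low and high variants are only exposed as separate playlist URLs, AVPlayer will stay on whichever single stream you give it.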
Is there any way, using currently available SDKs/frameworks on Cocoa (Touch), to create a streaming solution where I would host my MP4 content on some server and stream it to my iOS client app?
I know how to write such a client, but the server side is a bit confusing to me.
AFAIK CloudKit is not suitable for that task because behind the scenes it keeps a synced local copy of the datastore, which is NOT what I want. I want to store media content remotely and stream it to the client so that it does not take up precious space on a poor 16 GB iPad mini.
Can I accomplish that server solution using Objective-C / Cocoa Touch at all?
Should I instead resort to Azure and C#?
It's not 100% clear why you would do anything like that.
If you have control over the server side, why don't you just set up a basic HTTP server, and on the client side use AVPlayer to fetch the MP4 and play it back to the user? It is very simple; a basic Apache setup would do the job.
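As a rough sketch of the client side (the URL is a placeholder for wherever your HTTP server exposes the file):

    import AVFoundation
    import AVKit

    // Placeholder URL: point this at the MP4 your HTTP server serves.
    let url = URL(string: "https://example.com/media/movie.mp4")!
    let playerController = AVPlayerViewController()
    playerController.player = AVPlayer(url: url)
    // From inside one of your view controllers: present and start playback.
    present(playerController, animated: true) {
        playerController.player?.play()
    }

AVPlayer fetches the file with HTTP range requests and starts playback before the whole download finishes, provided the MP4 is web-optimized (moov atom at the front), so it behaves as progressive streaming rather than a full download.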
If it is live media content you want to stream, then it is worth reading this guide as well:
https://developer.apple.com/Library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/StreamingMediaGuide.pdf
Edited after your comment:
If you would like to use AVPlayer as the player, then I think those two things don't fit together that well. AVPlayer needs to buffer different ranges ahead (for some container formats the second/third request reads the end of the stream). As far as I can see, CKFetchRecordsOperation (which you would use to fetch the content from the server) is not capable of seeking in the stream.
If you have your own player which doesn't require seeking, then you might be able to use CKFetchRecordsOperation's perRecordProgressBlock to feed your player with data.
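A sketch of what that operation could look like; the record ID and the "video" field name are made up for the example. Note that perRecordProgressBlock itself only reports a completion fraction, and the downloaded CKAsset (written to a temporary file) arrives in the per-record completion block:

    import CloudKit

    // Hypothetical record ID for the uploaded media record.
    let recordID = CKRecord.ID(recordName: "my-video-record")
    let fetch = CKFetchRecordsOperation(recordIDs: [recordID])

    // Reports a 0.0–1.0 download fraction for each record as it comes in.
    fetch.perRecordProgressBlock = { id, progress in
        print("\(id.recordName): \(Int(progress * 100))% downloaded")
    }

    // The record, including its CKAsset written to a temporary file, arrives here.
    fetch.perRecordCompletionBlock = { record, _, _ in
        if let asset = record?["video"] as? CKAsset, let fileURL = asset.fileURL {
            print("Asset available at \(fileURL)")  // hand this file to your player
        }
    }

    CKContainer.default().publicCloudDatabase.add(fetch)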
Yes, you could do that with CloudKit. First, it is not true that CloudKit keeps a local copy of the data. It is up to you what you do with the downloaded data. There isn't even any caching in CloudKit.
To do what you want to do, assuming the content is shared between users, you could upload it to CloudKit in the public database of your app. I think you could do this with the CloudKit web interface, but otherwise you could create a simple Mac app to manage the uploads.
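A rough sketch of such an upload from a small helper app; the record type "Video", the field name "video", and the local file path are made up for the example:

    import CloudKit

    // Hypothetical record type, field name, and local file path.
    let fileURL = URL(fileURLWithPath: "/path/to/movie.mp4")
    let record = CKRecord(recordType: "Video")
    record["video"] = CKAsset(fileURL: fileURL)

    // Save into the app's public database so every user of the app can fetch it.
    CKContainer.default().publicCloudDatabase.save(record) { saved, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else {
            print("Uploaded \(saved?.recordID.recordName ?? "")")
        }
    }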
The client app could then download the files. It couldn't stream them, though, as far as I know; it would have to download each file in full.
If you want a streaming solution, you would probably have to figure out how to split the files into small chunks and recombine them in the client app.
I'm not sure whether this document is up to date, but there is a paragraph, "Requirements for Apps", which requires the use of HTTP Live Streaming if you deliver any video exceeding 10 minutes or 5 MB.
I'm looking to use an existing video player library for iOS apps with HLS support so that I can implement a player with some very specific networking behavior, as opposed to letting Apple decide the size and timing of requests. It needs to be customizable enough to support new networking policies such that I can override the request sizes, change what files are requested, and read data from an existing local cache file. In short, I'm attempting to override the networking portion that actually calls out and fetches the segments such that I can feed in data from partial cache as well as make specific algorithmic changes to the timing and size of external HTTP requests.
I've tried AVFoundation's AVAssetResourceLoaderDelegate protocol but, unless there is something I'm not seeing, there doesn't seem to be a way to override the outgoing requests and just feed bytes to the media player.
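For reference, the delegate setup I experimented with looks roughly like the sketch below (the "cached" URL scheme and the cache lookup are placeholders). It lets me answer requests for custom-scheme URLs, but it doesn't give me control over the size or timing of the requests AVFoundation decides to make, which is what I'm after.

    import AVFoundation

    // Delegate that answers loading requests from a local cache (placeholder logic).
    final class CacheResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
        func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                            shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
            // AVFoundation decides the offset/length it wants; the delegate can only answer it.
            let bytes = Data()  // placeholder: read the requested range from the partial cache file
            loadingRequest.dataRequest?.respond(with: bytes)
            loadingRequest.finishLoading()
            return true
        }
    }

    // The delegate is only consulted for custom (non-http) schemes, hence the made-up "cached" scheme.
    let loader = CacheResourceLoader()  // keep a strong reference; the asset holds it weakly
    let asset = AVURLAsset(url: URL(string: "cached://example/playlist.m3u8")!)
    asset.resourceLoader.setDelegate(loader, queue: DispatchQueue(label: "resource-loader"))
    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))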
I've also looked into VLC, but unfortunately my current project is incompatible with a GPL license.
Ideally, there would be a way to directly feed bytes or complete segments to MPMoviePlayerController, but I can't find any way of accomplishing this in the API. The only method I'm aware of that works is running a local HTTP server, which I have been doing, but it seems overly complicated when all I'm really trying to do is override some internal networking code.
Any suggestions for another way of doing this?