Working with WebRTC on iOS

I am happy that I got video chat working with WebRTC on iOS by following the tutorial here:
http://ninjanetic.com/how-to-get-started-with-webrtc-and-ios-without-wasting-10-hours-of-your-life/
But I am not able to understand how this is peer-to-peer video chat when I am connecting to the appspot server (Google App Engine using the Channel API). Is it possible to remove this appspot dependency? I have my own client verification system, so I am confident I can maintain proper authentication of who is going to connect to whom.

The GAE channel is used for signaling. Signaling is not part of WebRTC, and you can use any signaling method you like:
"Exchange of information via signaling must have completed successfully before peer-to-peer streaming can begin."
You can find more information here and here
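For illustration, here is a minimal sketch of swapping the appspot channel for your own transport, assuming the GoogleWebRTC pod and a hypothetical WebSocket signaling endpoint (wss://example.com/signal); any transport your client verification system trusts will do:

    import Foundation
    import WebRTC  // GoogleWebRTC pod

    // Placeholder signaling endpoint; replace with your own server.
    let socket = URLSession.shared.webSocketTask(
        with: URL(string: "wss://example.com/signal")!)
    socket.resume()

    func sendOffer(from peerConnection: RTCPeerConnection) {
        let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                              optionalConstraints: nil)
        peerConnection.offer(for: constraints) { sdp, error in
            guard let sdp = sdp, error == nil else { return }
            peerConnection.setLocalDescription(sdp) { _ in
                // The offer SDP travels over your channel; the remote peer
                // sets it, answers, and returns the answer SDP the same way.
                socket.send(.string(sdp.sdp)) { sendError in
                    if let sendError = sendError {
                        print("signaling failed:", sendError)
                    }
                }
            }
        }
    }

Once the answer and ICE candidates have been exchanged the same way, the media flows peer to peer and the signaling server drops out of the path.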

Related

Socket.IO vs XMPP for a mobile chat app

I have to build a realtime chat app for iOS, which may later also have voice and video calling. I want a scalable and lightweight solution integrated with the backend, making sure the solution also supports calling in the future.
I'm not too sure whether Socket.IO supports voice and video calls; should I use that, or XMPP? Or any other similar solution?
As noted above, Socket.IO is a chat/messaging implementation built on WebSockets, while XMPP is a protocol.
I'd recommend using an XMPP chat server in this case.
For audio/video calls you will need to implement signaling via XMPP to establish the connection between the devices before the call.
You will also need a STUN/TURN/ICE server, plus a client-side implementation for passing media streams peer to peer if you choose the WebRTC peer-to-peer option (a minimal configuration sketch follows).
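As an illustration, pointing the GoogleWebRTC peer connection at STUN/TURN servers looks roughly like this; the Google STUN address is a real public server, while the TURN entry and credentials are placeholders:

    import WebRTC  // GoogleWebRTC pod

    let config = RTCConfiguration()
    config.iceServers = [
        // Public Google STUN server; fine for testing, less so for production.
        RTCIceServer(urlStrings: ["stun:stun.l.google.com:19302"]),
        // Placeholder TURN server; needed when peers sit behind strict NATs.
        RTCIceServer(urlStrings: ["turn:turn.example.com:3478"],
                     username: "user", credential: "secret")
    ]
    let factory = RTCPeerConnectionFactory()
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: nil)
    let peerConnection = factory.peerConnection(with: config,
                                                constraints: constraints,
                                                delegate: nil)

The offer/answer SDP produced by this connection is what your XMPP signaling (e.g. Jingle) would carry between the two devices.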
There is an easier way as well: you can use a ready-made XMPP-based server and SDK to build your app. For example, ConnectyCube provides such a service.
They have a ready backend and SDKs you can use for building chat and audio/video chat apps. They also run a TURN server already, so you do not need to worry about that part either.

Video Streaming and Broadcasting using WebRTC

I am very new to realtime protocols and I have some questions about how WebRTC works and how I can implement it. I am trying to create a one-to-many livestream like Facebook or Periscope, where one user broadcasts and other users join and watch the stream. I am using Swift on my client end.
My questions are:
How do I broadcast a video using WebRTC?
Is there an SDK for WebRTC in Swift/iOS?
I know the questions are very vague, but some guidance in the right direction would be great because I am not sure where to start.
You will need to use backend servers for that.
If you plan on broadcasting to multiple users directly from your mobile app, then stop: each viewer would need its own upstream peer connection, which a phone's CPU and bandwidth cannot sustain beyond a handful of peers.
You need to connect your mobile app to a backend media server, which can then broadcast the video to a larger audience.
There are several commercial and open source alternatives that enable you to do that. I'd check Red5Pro, Wowza, SwitchRTC, Jitsi, Janus and Kurento for this task.
For the client side, look at react-native-webrtc
You can find more tools for WebRTC developers here.
Regarding your question (2), there's also an SDK for iOS here and a neat getting-started page here (although it is about 2.5 years old, I haven't found anything better so far).
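To give a feel for the native client side (as an alternative to react-native-webrtc), here is a sketch of creating the local camera track you would send up to the media server, assuming the GoogleWebRTC pod:

    import AVFoundation
    import WebRTC  // GoogleWebRTC pod

    // Build a local video track from the camera; this track is what gets
    // attached to the peer connection toward the media server.
    let factory = RTCPeerConnectionFactory()
    let videoSource = factory.videoSource()
    let capturer = RTCCameraVideoCapturer(delegate: videoSource)
    let videoTrack = factory.videoTrack(with: videoSource, trackId: "video0")

    // Pick the front camera and its first supported format (simplified;
    // a real app should choose resolution and frame rate deliberately).
    if let device = RTCCameraVideoCapturer.captureDevices()
           .first(where: { $0.position == .front }),
       let format = RTCCameraVideoCapturer.supportedFormats(for: device).first {
        let fps = format.videoSupportedFrameRateRanges.first?.maxFrameRate ?? 30
        capturer.startCapture(with: device, format: format, fps: Int(fps))
    }

How the track then reaches the audience (one peer connection to the server, which fans out to viewers) depends on which media server you pick from the list above.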

How can I get Alexa working on my iOS app?

I have been checking out the Alexa Skills Kit the past few days. I have also been poring over the documentation for both the Skills Kit and the Voice Service, but I am having a little hiccup trying to understand the flow. I have implemented one of Amazon's sample skills (the favourite colour sample) in the developer console and also wrote a sample Lambda function to handle the type of response that will be delivered. It's working in the test simulator, and what's left is basically getting Lambda running through my iOS app. However, I have the impression that I don't have to use the Voice Service. Am I wrong? I am quite confused; it would be awesome if anybody with more clarity could shed some light on the matter. If I get Lambda working, I think it will accept requests that are in a particular format. Where do I have to send the encoded audio to get a JSON response to send to the Skills Kit? To the Alexa Voice Service?
Also, I am authenticating my app using Cognito and DynamoDB. If I were to use the Alexa Voice Service, it is mentioned that the user will also have to log in to Amazon. So do I still have to work with the Login with Amazon SDK? Or is there a workaround?
Based on the Amazon documentation, there are two ways to interact with Alexa (described below).
It sounds like you want to implement the app through the Companion App method.
As far as the JSON goes, I am currently resolving that issue myself (I will post an answer once I have it resolved).
Basically you have to use AVFoundation to capture audio from the iPhone and send two HTTPS messages to Alexa (one message with a JSON body, and a second message with the captured audio as the body), based on the documentation; a rough sketch follows the two methods below.
Companion App: You have a device (such as a smart speaker) that you want to add Alexa to, so you build in support for AVS. Now you need a way to authorize it and associate it with the user's account. This is the "companion app" approach: the companion app connects to your smart product and lets the user log in, authorize the speaker to use Alexa, and connect it to their Amazon account. (Runs on mobile or a website.)
AVS App: You don't have a device you need to authorize; instead, you want to speak to Alexa from within your Android/iPhone application. (Runs on Android or iPhone.)
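As a rough sketch of those two HTTPS parts, assuming the AVS v20160207 events endpoint, an access token already obtained via Login with Amazon, and audio captured elsewhere as 16 kHz 16-bit mono PCM (the event JSON is abbreviated; consult the AVS docs for the full required fields):

    import Foundation

    func sendToAVS(audioData: Data, accessToken: String) {
        let boundary = "BOUNDARY-\(UUID().uuidString)"
        var request = URLRequest(
            url: URL(string: "https://avs-alexa-na.amazon.com/v20160207/events")!)
        request.httpMethod = "POST"
        request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
        request.setValue("multipart/form-data; boundary=\(boundary)",
                         forHTTPHeaderField: "Content-Type")

        // Message 1: JSON metadata describing a Recognize event (abbreviated).
        let metadata = """
        {"event":{"header":{"namespace":"SpeechRecognizer","name":"Recognize",\
        "messageId":"msg-1","dialogRequestId":"dlg-1"},\
        "payload":{"profile":"NEAR_FIELD","format":"AUDIO_L16_RATE_16000_CHANNELS_1"}}}
        """

        var body = Data()
        body.append("--\(boundary)\r\nContent-Disposition: form-data; name=\"metadata\"\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n".data(using: .utf8)!)
        body.append(metadata.data(using: .utf8)!)
        // Message 2: the captured audio bytes.
        body.append("\r\n--\(boundary)\r\nContent-Disposition: form-data; name=\"audio\"\r\nContent-Type: application/octet-stream\r\n\r\n".data(using: .utf8)!)
        body.append(audioData)
        body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
        request.httpBody = body

        URLSession.shared.dataTask(with: request) { _, response, _ in
            // AVS answers with a multipart body containing directives and,
            // typically, synthesized speech to play back.
            print("AVS status:", (response as? HTTPURLResponse)?.statusCode ?? -1)
        }.resume()
    }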
You can find a Swift example on GitHub of how to implement an iOS AVS client:
https://github.com/chintan1891/iOS-Alexa

Integrating PubNub WebRTC SDK for iOS

I am stuck integrating the PubNub WebRTC SDK into an iOS application.
It is a JavaScript SDK; how do I integrate it with my iOS app?
Thanks in advance.
This does not directly address the Objective-C implementation, but it might help with understanding the overall solution and the role that PubNub plays.
Why PubNub? - Signaling
WebRTC is not a standalone API; it needs a signaling service to coordinate communication. Metadata needs to be sent between callers before a connection can be established. This metadata includes information such as:
Session control messages to open and close connections
Error messages
Codecs/Codec settings, bandwidth and media types
Keys to establish a secure connection
Network data such as host IP and port
Once signaling has taken place, video/audio/data is streamed directly between clients using WebRTC's PeerConnection API. This peer-to-peer direct connection allows you to stream high-bandwidth, robust data such as video. HTML5Rocks provides a great guide on all things WebRTC (no need to read it, as it is summarized below).
PubNub makes this signaling incredibly simple, and in addition, gives you the power to do so much more with your WebRTC applications.
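For a flavor of what that looks like natively, here is a minimal sketch using the current PubNub Swift SDK with placeholder keys and a hypothetical channel name ("webrtc-signal"); the SDK just shuttles the SDP/ICE strings, nothing here is WebRTC-specific:

    import PubNub

    // Placeholder keys and channel; substitute your own.
    let config = PubNubConfiguration(publishKey: "pub-key",
                                     subscribeKey: "sub-key",
                                     userId: "caller-1")
    let pubnub = PubNub(configuration: config)

    // Publish the local SDP offer so the remote peer can set it and answer.
    func sendOffer(sdp: String) {
        pubnub.publish(channel: "webrtc-signal",
                       message: ["type": "offer", "sdp": sdp]) { result in
            if case .failure(let error) = result {
                print("signaling failed:", error)
            }
        }
    }

    // Listen for the remote answer and ICE candidates on the same channel.
    let listener = SubscriptionListener()
    listener.didReceiveMessage = { message in
        print("signal received:", message.payload)
    }
    pubnub.add(listener)
    pubnub.subscribe(to: ["webrtc-signal"])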
What PubNub is Not
PubNub is not a media server for WebRTC. A signaling service specifies the ICE servers that the video chat can stream through. Public STUN servers provided by Google can be used, but they are not very reliable. STUN or TURN servers are required to get around firewalls, otherwise the chat will fail. Many services provide the "total package" of signaling and media servers in one; that is not PubNub. Our audience is people who want to build their own, more custom service.
XirSys
XirSys already has a WebRTC-PubNub demo using Rails on their GitHub. They host STUN and TURN servers catering to the needs of WebRTC.
Open Source
There are a few open source STUN and TURN server projects that can be downloaded and hosted with ease:
Amazon AWS VM: pre-made and ready to deploy
RFC5766 TURN: Google Code, TURN server
One-to-many: instructions on MCUs for one-to-many media servers, necessary for large group chats and streams with hundreds of users or more
So as you can see, we do not provide audio/video streaming services, but if you are building this solution, PubNub is a necessary piece to tie it all together as the signaling protocol.
AndroidRTC
And here is a PubNub AndroidRTC example by our interns.

WebRTC for iOS for VoIP communication

Is there any free WebRTC solution for iOS with an easy setup?
I tried to use http://www.webrtc.org/native-code/ios because our web end is already done with its web API, and I thought I might not have any other way to let calls go between web and iOS. But the iOS API's setup is very tedious and time-consuming (downloading the WebRTC checkout takes ages with nothing to show for it).
I searched around and found a few options like TokBox and QuickBlox, but they are not free.
Did you look at the RestComm iOS SDK? It supports WebRTC audio only right now, but we are working on adding video in the next few weeks. It also uses SIP as its signaling protocol.
https://github.com/Mobicents/restcomm-ios-sdk
http://www.telestax.com/how-to-integrate-the-restcomm-ios-client-sdk-in-your-app/
http://docs.telestax.com/restcomm-client-ios-sdk-quick-start/
Take a look at https://github.com/oney/RCTWebRTCDemo .
This is a React Native WebRTC project which works on iOS and Android and also has a signaling server example (but you can also use the online version for quick tests!).
WebRTC requires DTLS-SRTP, RTCP feedback (RTCP-FB), ICE, and a number of other recent standards, while typical VoIP standards are 10+ years old, so you need to set up a gateway that converts the signaling and transcodes the RTP.
With a WebRTC gateway, on the browser side you create an HTML5 application that connects to the gateway; the gateway communicates with your PBX, and your iOS client connects to your PBX, so the call can then be established between the browser and the iOS client app.
