Integrating PubNub WebRTC SDK for iOS

I am stuck integrating the PubNub WebRTC SDK into my iOS application.
It's a JavaScript SDK. How do I integrate it with my iOS app?
Thanks in advance.

This does not directly answer the Objective-C implementation question, but it might help with understanding the overall solution and the role that PubNub plays.
Why PubNub? - Signaling
WebRTC is not a standalone API; it needs a signaling service to coordinate communication. Metadata needs to be exchanged between callers before a connection can be established. This metadata includes information such as:
Session control messages to open and close connections
Error messages
Codecs/codec settings, bandwidth, and media types
Keys to establish a secure connection
Network data such as host IP and port
Once signaling has taken place, video/audio/data is streamed directly between clients using WebRTC's PeerConnection API. This direct peer-to-peer connection allows you to stream robust, high-bandwidth data such as video. HTML5Rocks provides a great guide on all things WebRTC (no need to read it, as it is summarized below).
PubNub makes this signaling incredibly simple, and in addition, gives you the power to do so much more with your WebRTC applications.
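As a minimal sketch, the signaling exchange looks roughly like this (assuming the PubNub JavaScript SDK's publish/subscribe API; the keys, channel name, and client ID are placeholders, and error handling is omitted):

```typescript
// Minimal WebRTC signaling sketch over PubNub (keys/channel are placeholders).
import PubNub from "pubnub";

const pubnub = new PubNub({
  publishKey: "your-pub-key",    // placeholder
  subscribeKey: "your-sub-key",  // placeholder
  uuid: "caller-1",              // unique per client
});

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // public Google STUN (see caveats below)
});

// Publish every locally gathered ICE candidate so the remote peer can add it.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    pubnub.publish({ channel: "webrtc-signal", message: { candidate: event.candidate.toJSON() } });
  }
};

// Apply whatever the remote peer publishes: an SDP description or an ICE candidate.
pubnub.addListener({
  message: async ({ message }) => {
    if (message.sdp) await pc.setRemoteDescription(message.sdp);
    if (message.candidate) await pc.addIceCandidate(message.candidate);
  },
});
pubnub.subscribe({ channels: ["webrtc-signal"] });

// Caller side: create an offer and publish it; the callee answers the same way.
async function startCall(): Promise<void> {
  await pc.setLocalDescription(await pc.createOffer());
  pubnub.publish({ channel: "webrtc-signal", message: { sdp: pc.localDescription } });
}
```

Once both sides have exchanged descriptions and candidates, the media flows peer to peer; PubNub only ever carries the small signaling payloads.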
What PubNub is Not
PubNub is not a media server for WebRTC. A signaling service specifies the ICE servers that the video chat can stream over. The public STUN servers provided by Google can be used, but they are not very reliable. STUN or TURN servers are required to get through firewalls and NATs; otherwise the chat will fail. Many services provide the "total package" of signaling and media servers in one; that is not PubNub. Our audience is developers who want to build their own, more custom service.
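For illustration, plugging STUN and TURN servers into a peer connection looks roughly like this (the TURN URL and credentials are placeholders you would get from your own server or a hosted provider):

```typescript
// ICE configuration sketch: STUN for public-address discovery, TURN as the
// relay fallback when a direct path is blocked by a firewall or symmetric NAT.
const config: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" }, // public Google STUN
    {
      urls: "turn:turn.example.com:3478",     // placeholder TURN server
      username: "webrtc-user",                // placeholder credentials
      credential: "secret",
    },
  ],
};
const pc = new RTCPeerConnection(config);
```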
XirSys
XirSys already has a WebRTC-PubNub demo using Rails on its GitHub. It hosts STUN and TURN servers catering to the needs of WebRTC.
Open Source
There are a few open source STUN and TURN server projects that can be downloaded and hosted with ease:
Amazon AWS VM: Pre-made and ready to deploy
RFC5766 TURN: A TURN server hosted on Google Code
One-to-many: Instructions on an MCU for one-to-many media servers; necessary for large group chats and streams with hundreds of users or more
So as you can see, we do not provide audio/video streaming services, but if you are building this kind of solution, PubNub is a necessary piece to tie it all together as the signaling protocol.
AndroidRTC
And here is a PubNub AndroidRTC example built by our interns.

Related

Socket.io vs xmpp for a mobile chat app

I have to build a realtime chat app for iOS, which may later also have voice and video calling. I want to use a scalable and lightweight solution integrated with the backend, making sure that the solution also supports calling in the future.
I'm not too sure whether Socket.IO supports voice and video calls; should I use that or XMPP? Or any other similar solution?
As noted above, Socket.IO is a realtime messaging library built on WebSockets, while XMPP is a protocol.
I'd recommend using an XMPP chat server in this case.
For audio/video calls, you will need to implement signaling via XMPP to establish a connection between the devices before the call.
Also, for audio/video chat you will need a STUN/TURN/ICE server, and if you choose the WebRTC peer-to-peer option you will need client-side code for passing the media streams between peers, as sketched below.
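For the WebRTC option, the client-side media plumbing looks roughly like this (browser API shown for brevity; the native mobile WebRTC SDKs mirror these calls, and the #remote element ID is a placeholder):

```typescript
// Attach local media to the peer connection and render the remote stream.
async function setUpMedia(pc: RTCPeerConnection): Promise<void> {
  const local = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  local.getTracks().forEach((track) => pc.addTrack(track, local));

  // Fires when the remote peer's tracks arrive after signaling completes.
  pc.ontrack = (event) => {
    const remoteVideo = document.querySelector<HTMLVideoElement>("#remote"); // placeholder element
    if (remoteVideo) remoteVideo.srcObject = event.streams[0];
  };
}
```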
There is an easier way as well. You can use a ready-made XMPP-based server and SDK to build your app; for example, ConnectyCube provides such a service.
They have a ready backend and SDKs you can use for building chat and audio/video chat apps. They also already have a TURN server, so you do not need to worry about that part either.

iOS WebRTC P2P Connection with ICE Server

I have an iOS app written in Swift, set up with the AppRTC code from here.
I have the app set up on two phones, and everything works when connecting to Google's http://appr.tc. I would like to take Google's AppRTC server out of the picture. When I set up both apps with ICE servers (STUN/TURN) but no RTC server, the apps are not able to connect to each other. They both log WARNING: Renegotiation needed but unimplemented. How can I have the two apps' WebRTC stacks communicate back and forth using only the ICE servers?
WebRTC needs a signaling server to exchange ICE credentials and candidates as well as DTLS fingerprints. The ICE servers are not a replacement for the signaling server.
See https://bloggeek.me/media-signaling-flows-look-like-webrtc/ or https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/ for two pretty good introductions to the topic.
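For what it's worth, the warning in the question is the unhandled negotiationneeded event: each time it fires, an offer must be created and delivered to the other phone over some channel you provide. A minimal sketch, where sendToPeer is a hypothetical stand-in for that missing signaling channel:

```typescript
// The signaling step that ICE servers cannot replace: shipping the SDP to the
// other peer over a transport you provide. sendToPeer is a hypothetical helper.
declare function sendToPeer(message: unknown): void;

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // ICE servers alone don't connect peers
});

// This is the event behind the "Renegotiation needed but unimplemented" warning.
pc.onnegotiationneeded = async () => {
  await pc.setLocalDescription(await pc.createOffer());
  sendToPeer({ sdp: pc.localDescription }); // the other phone must setRemoteDescription and answer
};
```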

Working with WebRTC on iOS

I am happy that I got Video Chat working with WebRTC on iOS by following the tutorial here:
http://ninjanetic.com/how-to-get-started-with-webrtc-and-ios-without-wasting-10-hours-of-your-life/
But I am not able to understand how it is peer-to-peer video chat when I am connecting to the appspot server (Google App Engine using Channel). Is it possible to remove this appspot dependency? I have my own client verification system, so I am pretty sure I can maintain proper authentication of who is going to connect to whom.
The GAE channel is used for signaling. Signaling is not part of WebRTC, and you can use any signaling method you like.
"Exchange of information via signaling must have completed successfully
before peer-to-peer streaming can begin"
You can find more information here and here
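Yes, the appspot dependency can be removed: since you already have your own client verification, any transport you control can carry the signaling. A minimal sketch, assuming a hypothetical wss://signal.example.com relay that forwards JSON between the two authenticated clients:

```typescript
// Swapping the appspot channel for your own WebSocket signaling relay.
const ws = new WebSocket("wss://signal.example.com"); // hypothetical endpoint
const pc = new RTCPeerConnection();

// Outgoing: forward locally gathered candidates to the relay.
pc.onicecandidate = (e) => {
  if (e.candidate) ws.send(JSON.stringify({ candidate: e.candidate }));
};

// Incoming: apply whatever the other (authenticated) client sent.
ws.onmessage = async (e) => {
  const msg = JSON.parse(e.data);
  if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
};
```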

What is the major scenario to use Socket.IO

I just wonder why, and for what kind of application or use case, we need Socket.IO.
I am the iOS developer of a well-known open-source project, socket.IO-objc.
Usually we use HTTP or HTTPS to communicate with a server. A socket aims to conduct real-time communication (it keeps one long-lived connection open rather than making a new request each time).
Libraries like Socket.IO are needed when we need real-time behavior in our app. Let me explain this in a little more detail. Assume you are developing a multiplayer game that two or more users can play simultaneously. In that case you won't be making HTTP or HTTPS calls, for several reasons: each request carries significant header overhead, and the request/response round trip is comparatively slow. In such scenarios we use sockets to send and receive data from the server. Sockets are fast and send only the data packets that are needed. With plain HTTP request/response, it is very hard to build a multiplayer game or any app that must interact with a server in real time.
Let's take another example: a chat application. When user A is typing, user B should see that A is typing (similar to Google Talk or Facebook Messenger). If you use HTTP calls for this, B will never see the other person's actual status because of the delay. With sockets, when user A types, their device sends a single small packet notifying the server, which delivers it to user B; this is nearly real-time and also reduces data transfer.
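A sketch of that typing indicator with the Socket.IO client (the server URL and the "typing" event name are placeholders; the server-side relay that adds the sender's name is not shown):

```typescript
// Hypothetical typing-indicator events over Socket.IO (socket.io-client v4 API).
import { io } from "socket.io-client";

const socket = io("https://chat.example.com"); // placeholder server URL

// User A: notify the server once when typing starts.
function onUserStartedTyping(room: string): void {
  socket.emit("typing", { room }); // "typing" is an app-defined event name
}

// User B: render the indicator when the server relays the event
// (the server adds the sender's name before broadcasting to the room).
socket.on("typing", ({ user }: { user: string }) => {
  console.log(`${user} is typing...`);
});
```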
I'm working on a chat application using Socket.IO as well, so it is tempting to replace everything with Socket.IO. This leaves me in doubt and curious: I totally agree that a real-time app like chat suits Socket.IO, but there is also round-trip communication (such as user login) that is more suitable for HTTP.
Socket.IO uses WebSockets to pass data among users who are all connected to a web server. With a WebSocket there is no per-request negotiation, and the connection remains open as long as the users concerned are registered for the service with the web server. As pointed out above, the payload is also significantly smaller than with HTTP/HTTPS.
Socket.IO is a JavaScript library for realtime web applications. It enables realtime, bi-directional communication between web clients and servers. It has two parts: a client-side library that runs in the browser, and a server-side library for Node.js. Both components have a nearly identical API.

Which connection class type should we use when connecting to a server to stream an audio file?

I need to connect to a server where an mp3 file resides and stream it. If I am not using an RTSP connection, what do you suggest I use? Can we use RTSP in this case?
Is it fine to use an HTTP connection? I used that, but it seems to take a long time to actually connect to the server. Can I improve performance by using some other connection class?
Thanks
HTTP over Wi-Fi, Direct TCP, or WAP2 is the best connection method for streaming audio if you want to reduce the number of intermediate proxies and carrier networks. Transports such as BES and BIS go through third-party infrastructure (enterprise in the case of BES and RIM in the case of BIS), which adds another point of failure in the path. Not only that, but I have asked RIM employees directly what their thoughts were on streaming media over BIS, and their short but sweet response was "don't". They don't want the extra traffic going over their network. I have heard on the BlackBerry forums that large HTTP transfers aren't very reliable over BIS anyway. Similarly, BES admins probably don't like apps that try to stream a lot of media through their servers.
That leaves Wi-Fi, WAP2, and Direct TCP. Wi-Fi is a no-brainer for devices that have it (and users who are connected), but remember that most CDMA devices don't have Wi-Fi, so only a small percentage of users may have it. WAP2 is nice in that it doesn't require manual APN configuration; however, in my experience not all carriers are set up for WAP2. So you may want to try that first and, if it doesn't work (i.e., no WAP2 service records, or connections over WAP2 fail), fall back to Direct TCP, as sketched below.
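That ordering amounts to a simple fallback chain. A sketch of the selection logic, where hasWifiCoverage, hasWap2ServiceRecords, and openHttpStream are hypothetical stand-ins for the platform-specific calls:

```typescript
// Ordered transport fallback: Wi-Fi first, then WAP2, then Direct TCP.
// All three helpers below are hypothetical stand-ins for platform APIs.
declare function hasWifiCoverage(): boolean;
declare function hasWap2ServiceRecords(): boolean;
declare function openHttpStream(url: string, transport: "wifi" | "wap2" | "tcp"): Promise<unknown>;

async function openAudioStream(url: string): Promise<unknown> {
  if (hasWifiCoverage()) return openHttpStream(url, "wifi");
  if (hasWap2ServiceRecords()) {
    try {
      return await openHttpStream(url, "wap2");
    } catch {
      // Some carriers aren't set up for WAP2; fall through to Direct TCP.
    }
  }
  return openHttpStream(url, "tcp"); // may require manual APN configuration
}
```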
