PubNub long polling vs WebSockets - mobile battery life - iOS

I recently began using PubNub in my iOS app, and am very happy with it. However, I have been looking at the other options available, such as Pusher and Realtime.co, which use WebSockets. PubNub, on the other hand, uses long polling. I have done my own small speed comparisons, and for my purposes they are all fast enough.
PubNub offers some nice features like message history and a list of everyone in the channel, so all else being equal I am leaning toward them. My question is: should I be concerned about battery life under heavy usage with a long-polling solution like PubNub? Would a WebSockets solution be significantly more power efficient?

PubNub on Mobile with Battery Saving
As a preface on battery performance and efficiency: PubNub is optimized for mobile devices on the go compared with alternative or self-hosted WebSocket solutions. PubNub offers a catch-up feature on mobile phones that automatically redelivers messages that were missed, which matters especially for devices moving between cell towers and switching between 3G/4G and WiFi. WebSockets are often not recommended for mobile because of reliability problems in these common scenarios, which is why PubNub selects the best transport for your device automatically; you don't have to decide what makes the most sense for a phone in transit.
Battery Savings Pattern with PubNub
PubNub uses an uncommonly long keep-alive connection, set to one hour, with a ping sent every 300 seconds (300,000 ms). This is long enough to provide the best mix of mobile performance and battery saving.
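As an illustration of this keep-alive pattern, here is a minimal Swift sketch using Apple's URLSessionWebSocketTask rather than PubNub's SDK; the endpoint URL is a placeholder, and the 300-second interval is the one quoted above:

```swift
import Foundation

// Sketch of a long keep-alive: a lightweight ping every 300 s keeps the
// connection open without waking the radio more often than necessary.
final class KeepAliveConnection {
    private let task: URLSessionWebSocketTask
    private var pingTimer: Timer?

    init(url: URL) {
        task = URLSession.shared.webSocketTask(with: url)
    }

    func connect() {
        task.resume()
        // Ping every 300 s, matching the interval described in the answer.
        pingTimer = Timer.scheduledTimer(withTimeInterval: 300, repeats: true) { [weak self] _ in
            self?.task.sendPing { error in
                if let error = error {
                    print("Ping failed, reconnect needed: \(error)")
                }
            }
        }
    }

    func disconnect() {
        pingTimer?.invalidate()
        task.cancel(with: .normalClosure, reason: nil)
    }
}
```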
Battery Saving Tips on Mobile
Keep messages as small as possible.
Send fewer messages, less frequently.
Connect to a single channel rather than two or more (a multiplexing sketch follows below).
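To illustrate the single-channel tip, here is a minimal Swift sketch (not PubNub-specific; the message type names are hypothetical) that multiplexes several logical streams over one channel by tagging each payload:

```swift
import Foundation

// Instead of subscribing to a "chat" and a "presence" channel separately,
// tag each payload with a type and route it client-side.
struct Envelope: Codable {
    let type: String   // e.g. "chat" or "presence" (hypothetical names)
    let body: String   // keep payloads as small as possible
}

func route(_ data: Data) {
    guard let envelope = try? JSONDecoder().decode(Envelope.self, from: data) else { return }
    switch envelope.type {
    case "chat":     print("chat: \(envelope.body)")
    case "presence": print("presence: \(envelope.body)")
    default:         break
    }
}
```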
Automatic Transport Detection
PubNub automatically selects the best transport for you when needed, especially on mobile devices. An interesting talk about WebSockets was given at KRTConf in Portland, Oregon in October 2012, which I recommend: https://speakerdeck.com/3rdeden/realtimeconf-dot-oct-dot-2012
Let me know if this was helpful.

I don't think this is correct. See http://eon.businesswire.com/news/eon/20120927005429/en/Kaazing/HTML/HTML5
I am the one who actually did the testing for Kaazing comparing WebSocket with regular HTTP-based message transfers. I saw a drastic decrease in battery consumption with WebSocket. Kaazing has additional technology beyond WebSocket to reduce battery consumption, but even if you don't use Kaazing, you will still see some battery-consumption efficiencies with WebSocket. I verified this myself by running actual tests of basic WebSocket versus HTTP, without any special battery-optimization algorithms.

Related

Differences between websockets and long polling for turn based game server

I am writing a server for an iOS game. The game is turn-based and the only time the server needs to push information to the client is to notify of the opponent's move.
I am curious if anyone could comment on the performance and ease of implementation differences between using WebSockets and long polling. Also, if I used WebSockets, should I only use it to receive information and send POST requests for everything else, or should all communication be through the WebSocket?
Additionally, is there anything extra to consider between WebSockets and long polling if I am interested in also making a web client?
For anyone else who may be wondering: it depends on how long the typical gaps are between events.
WebSocket: for gaps of more than a few tens of seconds, I don't think keeping a WebSocket open is particularly efficient (not to mention that, IIRC, it disconnects anyway when the app loses focus).
Long polling: this forces a trade-off between server load ("anything new yet? how about now? ...") and how quickly you learn that a change has occurred (see the sketch after this list).
Push notifications: While this may be technically more complex to implement, it really would be the best solution IMO, since:
the notification can be sent (and delivered) almost immediately after an event occurs
there is no standby server load (either from open WebSockets, or "how about now?" queries) - which is especially important as your user base grows
you can override what happens if a notification comes in while the user is in-app
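For reference, here is a minimal Swift long-polling loop of the kind described in the second option above; the endpoint URL and timeout are hypothetical, and the server is assumed to hold each request open until the opponent moves or a timeout elapses:

```swift
import Foundation

// Long polling: the request blocks server-side until there is news, then
// the client immediately re-issues it.
func pollForMove(session: URLSession = .shared) {
    let url = URL(string: "https://example.com/game/42/next-move")!
    var request = URLRequest(url: url)
    request.timeoutInterval = 60  // server is assumed to hold the request up to ~55 s

    session.dataTask(with: request) { data, _, error in
        if let data = data {
            print("Opponent moved: \(String(decoding: data, as: UTF8.self))")
        } else if let error = error {
            print("Poll ended: \(error)")  // timeout or network change
        }
        pollForMove(session: session)  // re-issue the poll right away
    }.resume()
}
```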
should I only use it to receive information and send POST requests for everything else
Yes: use WebSockets to receive real-time updates only, and REST APIs for BREAD (Browse, Read, Edit, Add, Delete) operations.
should all communication be through the WebSocket?
Short answer: no.
Check this article from PieSocket for more information about the best use cases for WebSockets. What Is WebSocket: Introduction And Usage
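A minimal Swift sketch of that split, with placeholder URLs: the WebSocket only receives real-time updates, and writes go over plain REST:

```swift
import Foundation

// Receive side: a WebSocket listening for pushed updates.
let socket = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/updates")!)
socket.resume()

func receiveLoop() {
    socket.receive { result in
        if case .success(let message) = result {
            print("Real-time update: \(message)")
            receiveLoop()  // keep listening
        }
    }
}
receiveLoop()

// Write side: moves are submitted over REST, not the socket.
var post = URLRequest(url: URL(string: "https://example.com/moves")!)
post.httpMethod = "POST"
post.httpBody = Data(#"{"move":"e2e4"}"#.utf8)
post.setValue("application/json", forHTTPHeaderField: "Content-Type")
URLSession.shared.dataTask(with: post) { _, response, _ in
    print("Move submitted: \(String(describing: response))")
}.resume()
```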

What technology do apps use for real time syncing of data?

A lot of apps such as Uber, Lyft, and GroupMe seem like they have real-time data getting pushed down from the server. Obviously they could be faking it by refreshing every n seconds. Another thought was that they could be opening TCP sockets? Or potentially other technologies that I am unaware of.
If programming an iOS app what is the industry standard for syncing data between the client and the server in real time, without user interaction such as swiping up?
WebSockets or polling are the general solutions. Push Notifications can also be used to trigger a poll in some cases.
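As a sketch of the "push triggers a poll" pattern mentioned above (the sync endpoint is a placeholder): a silent push (content-available: 1) wakes the app, which then fetches the latest state:

```swift
import UIKit

// A silent remote notification wakes the app in the background; the app
// then polls a placeholder endpoint for the new data.
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                     fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
        let url = URL(string: "https://example.com/sync")!
        URLSession.shared.dataTask(with: url) { data, _, _ in
            completionHandler(data != nil ? .newData : .failed)
        }.resume()
    }
}
```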
In addition to Neal's answer, check out the Rocket technique, which "leverages web standards like Server-Sent Events and JSON Patch."
AFRocketClient uses AFNetworking to support this on iOS/Mac, as long as the server supports these technologies.

Real time audio conversation iOS

I am designing an iOS app for a customer who wants real-time voice conversations between users (a sort of TeamSpeak), with minimal lag (max 50 ms). The lag must be low because the audio can also be live music played on instruments, so all users need to stay in sync. I need a server that will collect audio from every client and relay it to the others (so that they all hear the same sound at the same time).
HTTP is easy to manage/implement and easy to scale, but very low-performing, because an average HTTP request takes > 50 ms (on mid-level hardware), so I was thinking of TCP or UDP connections kept open between the clients and the server.
But I have some questions:
If I develop the server in Python (using Twisted, for example), how will it perform?
I can't develop the server in C++ because it is hard to develop and hard to scale.
Has anyone used Node.js (which is easy to scale) to manage TCP/UDP connections?
If I use HTTP, will it be fast enough with keep-alive? Usually an HTTP request takes > 50 ms (because opening and closing the connection is expensive), and I want the whole round trip to take less than that.
The server will be running on a Linux machine.
And finally: what type of compression would you suggest? I thought Ogg Vorbis would be nice, but if there is anything better (that can be used on iOS), I am open to changes.
Thank you,
Umar.
First off, you are not going to get sub-50 ms latency. Others have tried this. See, for example, http://ejamming.com/, a service that attempts to do what you are doing but has a musically noticeable delay over the line and is therefore, to many ears, completely unusable. They use special routing techniques to get the latency as low as possible, and last I heard their service doesn't work with some router configurations.
Secondly, the language you use on the server probably doesn't make much difference, as the delay from client to server will be worse than any delay caused by your service. But if I understand your service correctly, you are going to need a lot of servers (or server threads) just relaying audio data between clients or doing some sort of minimal mixing. That is a small amount of work per connection, but a lot of connections, so you need something that can handle that. I would lean towards something like Java, Scala, or maybe Go. I could be wrong, but I don't think this is a good use case for Node, which, as I understand it, does not do multithreading well at this time. Also, don't poo-poo C++; scalable services have been built in C++. You could also build the relay part of the service in C++ and the rest in whatever you like.
Third, when choosing a compression format, you'll have to pick one that can survive packet loss if you plan to use UDP, and I think UDP is the only way to go for this. I don't think Vorbis is up to this task, but I could be wrong. Off the top of my head, I'm not sure what works on the iPhone and is UDP-friendly, but I'm sure there are options. Speex is one example and is open source; I'm not sure whether its latency and quality meet your needs.
Finally, to be blunt, I think there are some other things you should research a bit more. E.g., DNS is usually cached locally and not looked up on every HTTP call (though this may depend on the system/library; at least most systems cache DNS locally). Also, there is no such protocol as TCP/UDP. There is TCP/IP (sometimes just called TCP) and UDP/IP (sometimes just called UDP), and you seem to refer to the two as if they were one. The difference is very important for what you are doing: for example, HTTP runs on top of TCP, not UDP, and UDP is considered "unreliable" but has less overhead, which makes it good for streaming.
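To make the TCP-vs-UDP distinction concrete, here is a hedged Swift sketch using Apple's Network framework; the host and port are placeholders. UDP gives you datagrams with no retransmission, which is what you want for live audio:

```swift
import Network

// UDP delivers datagrams with no delivery guarantee, which suits audio:
// a late packet is useless anyway, so there is no point retransmitting it.
let connection = NWConnection(host: "audio.example.com", port: 9999, using: .udp)
connection.start(queue: .global())

func sendAudioPacket(_ packet: Data) {
    connection.send(content: packet, completion: .contentProcessed { error in
        // No retransmission on UDP: a lost packet is simply lost, so the
        // codec must tolerate gaps (as the answer notes for Speex).
        if let error = error { print("Send failed: \(error)") }
    })
}
```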
As far as the server is concerned, the request itself is not a bottleneck. You have plenty of time to set up the connection, as that happens only at the beginning of the session, so the choice of protocol is not of much relevance there.
But consider that HTTP is a stateless protocol and not suitable for audio streaming. There are a couple of real-time streaming protocols to choose from; all of them work over TCP or UDP (i.e., raw sockets), and there are plenty of implementations.
In your case, the bottleneck for latency is not the server but the network itself. The connection between an iOS device and a wireless access point (AP) eats up about 40 ms if the AP is not misconfigured and the connection is good (ping your iPhone). In total, you'd have a minimum of 80 ms for the path iOS -> AP -> server -> AP -> iOS. And it is difficult to keep that latency stable. (The typical latency of AirPlay on my local network is about 300 ms.)
I think live music over iOS devices is not practical today. Try Skype between two iOS devices and see how close you can get to 50 ms. I'd bet no one can do significantly better as far as latency is concerned.
Update: New research result!
I have to revise my claims regarding the latency of the iDevice's WiFi connection. Apparently, when you first ping your device, latency is bad. But if I ping again no later than 200 ms after that, I see an average latency of 2-3 ms between the AP and the iDevice.
My interpretation is that if there is no communication between the AP and the iDevice for more than 200 ms, the iDevice's network adapter goes into a less responsive sleep mode, probably to save battery power.
So it seems, live music is within reach again... :-)
Update 2
The ping interval required to keep latency low apparently differs from device to device. The reported 200 ms is for a 3rd-generation iPad; for my iPhone 4 it's more like 50 ms.
While streaming audio you probably don't need to bother with this, as data is exchanged more frequently anyway. In my own case, I have sparse communication between an iDevice and a server, but low latency is crucial, so a keep-alive is the way to go (a sketch follows).
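A minimal sketch of such a keep-alive, assuming a hypothetical server endpoint: a tiny datagram on a short timer keeps the radio out of its power-save mode (the 50 ms interval is the iPhone 4 figure reported above; tune it per device):

```swift
import Foundation
import Network

// A one-byte datagram every 50 ms keeps the Wi-Fi adapter responsive,
// so round-trip latency stays in the low milliseconds.
let keepAlive = NWConnection(host: "server.example.com", port: 9998, using: .udp)
keepAlive.start(queue: .global())

let timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
    keepAlive.send(content: Data([0x00]), completion: .idempotent)
}
```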
Best, Peter

Large number of WebSocket connections

I am writing an application that keeps track of content pushed around between users of a certain task. I am thinking of using WebSockets to send down new content as they are available to all users who are currently using the app for that given task.
I am writing this on Rails and the client side app is on iOS (probably going to be in Android too). I'm afraid that this WebSocket solution might not scale well. I am after some advice and things to consider while making the decision to go with WebSockets vs. some kind of polling solution.
Would Ruby on Rails servers (like Heroku) support a large number of WebSockets open at the same time? Let's say a million connections, for argument's sake. Can anyone point me to any material on this?
Would it cost a lot more on server hosting if I architect it this way?
Is it even possible to maintain millions of WebSockets simultaneously? I feel like this may not be the best design decision.
This is my first try at a proper Rails API. Any advice is greatly appreciated. Thx.
A million connections over WebSockets using Ruby: I can't see that being realistic unless you use clustering to spread the connections across different instances and share the data processing.
The problem here is serializing and deserializing data.
You also have to research how often you will need to push data from the server to the client, and whether periodic checks using AJAX would beat holding a connection open the whole time, because holding a connection open without using it is a waste of resources. WebSockets are built on top of TCP, and connections are not "cheap": asking the OS over and over whether data is available is not a simple process, and with millions of connections it is nearly impossible without the most advanced technology in the world.
I have heard that Erlang is able to handle millions of connections, but I don't have details on it. Also, holding connections is one thing; processing data and interaction between connections is another. Check this too, because if you have heavy processing algorithms, you will definitely need to look into horizontal scaling via clustering solutions.
If you are implementing chat, use WebSockets.
If you are implementing one-way messages in real time, use server-sent events.
If you are implementing one-way messages sent every few hours or so, use APNS.
The saying goes: phone in hand, use WebSockets / server-sent events.
Phone in pocket, use APNS.
APNS will alleviate WiFi dips, TCP/IP socket hangs, and many other issues. Really useful. There is a chance that it may take a little time to get through, but then again, there is a chance that WebSockets will take a while too.
Recent versions of iOS let you send an APNS notification to the client without a popup, so the app can ask the server for more information. That, along with some backgrounding implementations, really improves things (see the sketch below).
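Here is a hedged Swift sketch of the hand/pocket switch described above (the socket URL is a placeholder): open the WebSocket while the app is active, drop it in the background and let APNS take over:

```swift
import UIKit

// Phone in hand: keep a WebSocket open for low-latency updates.
// Phone in pocket: tear it down and rely on APNS for delivery.
final class TransportSwitcher {
    private var socket: URLSessionWebSocketTask?
    private var tokens: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: UIApplication.didBecomeActiveNotification,
                                         object: nil, queue: .main) { [weak self] _ in
            // App came to the foreground: open the socket.
            let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/live")!)
            task.resume()
            self?.socket = task
        })
        tokens.append(center.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                         object: nil, queue: .main) { [weak self] _ in
            // App went to the background: close the socket; APNS takes over.
            self?.socket?.cancel(with: .goingAway, reason: nil)
            self?.socket = nil
        })
    }
}
```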
If possible, do not implement totally anonymous clients. It is very tricky to detect when a client reinstalls the app, so you'll end up sending duplicates to the client; you need to take that into account.
APNS looks trivial to implement in Ruby, but I'd suggest resisting the urge and using an existing gem/service that supports both Google and Apple. It is much trickier to implement than it may seem at first.
If you decide to stick with WebSockets, it may make sense to leverage WebSocket support in nginx, e.g. https://github.com/wandenberg/nginx-push-stream-module
ASIDE:
Using SMS where speed is critical is very expensive. Each phone number costs $1/month and can send at most 1 message per second, so a throughput of 100 messages per second requires 100 numbers = $100/month, plus per-message fees. Note that 50 numbers ($50/month) give you 50 messages/second: 100 messages then take 2 seconds, but 1k messages take 20 seconds.
Good luck

What is the performance overhead of Apache ActiveMQ vs. raw sockets?

We're looking to implement ActiveMQ to handle messaging between two of our servers, over a geographically diverse environment (Australia to the UK and back, via the internet).
I've been looking for some rough indicators of performance around the net, but so far I've had no luck.
My question: compared to a DIY TCP/SSL implementation of basic messaging, how would ActiveMQ perform? Similar systems of our own can send and receive messages across Australia in 100-150ms, over a SSL layer with an already established connection.
Also, does ActiveMQ persist its TLS/SSL connections, thus saving a substantial amount of time that would already be used in connection creation/teardown?
What I am hoping is that it will at least perform better than HTTPS, at a per-request level.
I am aware that performance can vary remarkably, depending on hardware, networks, code and so on. I'm just after something to start with.
I know the above is a little fuzzy - if you need any clarification please let me know and I will only be too happy to oblige.
Thank you.
What Tim means is that this is not an apples-to-apples comparison. If you are solely concerned with the performance of a single point-to-point connection to transfer data, a direct link will give you a good result (although DIY is still a dubious design decision). If you are building a system that requires the transfer of data and have more complex functional requirements, then a broker-based messaging platform like ActiveMQ comes into play.
You should consider broker-based messaging if you want:
a post-office style system where a producer sends a message, and knows that it will be consumed at some point, even if there is no consumer there at that time
to not care where the consumer of a message is, or how many of them there are
a guarantee that a message will be consumed, even if the consumer that first handles it dies mid-way through the process (transactions, redelivery)
many consumers, with a guarantee that a message will only be consumed once - queues
many consumers that will each react to a single message - topics
These patterns are pretty standard and apply to all off-the-shelf messaging products. As a general rule, DIY in this domain is a bad idea: messaging is complex (see http://www.ohloh.net/p/activemq/estimated_cost for an estimate of how long it would take you to do the same), and there are many existing implementations of various flavours (some without a broker) that are well used, commercially supported, and don't require you to maintain them. I would think very hard before going down to the TCP level for any sort of data transfer, as there is so much prior art.
