iOS sendAsynchronousRequest timeout

This is my situation:
I'm using sendAsynchronousRequest. I quickly realized that it has a default timeout of 60 seconds. My app is designed to wait for an opponent to start the game (it's a word game).
It could actually take hours before the opponent starts the game, which means the async request could be waiting for hours.
Is that bad? I can probably change the default timeout, but the question is whether this is bad design. The thing is, I wanted to avoid polling the server at intervals to find out whether the opponent has started the match.
If this is bad design, can somebody suggest an alternative?

Your best bet is to poll the server if you want to be up and running quickly and don't have a lot of resources (time/money).
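As a rough illustration, polling can be as simple as a tiny status endpoint that the client hits every few seconds. This sketch assumes a Rails-style backend (several of the related questions below are Rails apps); the Match model and its started? predicate are made-up names:

```ruby
# Hypothetical Rails controller for the polling approach.
class MatchesController < ApplicationController
  # GET /matches/:id/status (the iOS client polls this on a timer)
  def status
    match = Match.find(params[:id])
    render json: { started: match.started? }
  end
end
```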
If for some reason you need something more real-time, there is a lot of complexity involved in keeping an open socket to your server for communication, and you are best off using an existing framework like Pusher ($), PubNub ($), or socket.io (free, but you will have to handle the server side). If you want to create your own client/server notification system, you may want to check out SocketRocket from Square, which provides a client-side WebSocket implementation for iOS.

Related

Implementing autosave in a Rails app with WebSockets

I have a simple text editor and want to implement auto save so that any time a change is made to the text, it is immediately sent to the server.
There are two ways to do this:
1. Open a socket connection and send changes through the socket every second.
2. Set a 750ms keyboard-idle timer that sends changes any time the user has stopped typing for 750ms.
I understand WebSockets are appropriate when you don't want to poll the server for new data. But are they also appropriate when you want to constantly send data to the server?
Is 1 request/user/second over a WebSocket more performant, in general, than 1 request/user/second over a regular HTTP connection?
Update:
For the record, I looked into Google Docs, and it seems to use POST requests rather than WebSockets for autosave:
It fires after roughly a 150ms keyboard-idle timer and only sends incremental changes.
WebSocket is entirely appropriate for continuously sending small amounts of data to the server.
There are two main advantages:
You do not need to establish a connection each time you send data, which makes things faster (though this may not be all that important for your application).
You save on message size, since HTTP headers are much larger than the framing overhead of WebSocket messages.
(For more on this, see this thorough Stack Overflow answer.)
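To make the trade-off concrete, here is a minimal server-side sketch using the em-websocket gem; Document.apply_delta is a hypothetical persistence call. Each small autosave delta travels over one long-lived connection, with a few bytes of WebSocket framing instead of a full set of HTTP headers per save:

```ruby
# Sketch: accepting small incremental autosave payloads over a single
# long-lived WebSocket connection (em-websocket gem).
require "em-websocket"
require "json"

EM.run do
  EM::WebSocket.run(host: "0.0.0.0", port: 8080) do |ws|
    ws.onmessage do |msg|
      change = JSON.parse(msg)       # e.g. {"doc_id": 1, "delta": "..."}
      Document.apply_delta(change)   # hypothetical: persist the change
      ws.send({ ok: true }.to_json)  # tiny ack; no HTTP headers per save
    end
  end
end
```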

Why do we need simple_one_for_one?

Somebody told me that simple_one_for_one is very useful for chat applications, because each chat client is a server process (gen_server). Is this right?
I wonder why we need it. Why not create a single central server (gen_server) to handle all chat client communication? Is it because the number of chat clients could be so large that one server couldn't keep up and would slow the whole system down?
I also suspect that creating so many servers under simple_one_for_one may consume too many system resources. I'm new to OTP, so I'd really appreciate an explanation of this point.
Yes, the idea is that you would have a process (gen_server) per client.
This lets you isolate failure of one client from another.
If you had everyone in a single process, you would have to be very careful to handle all the things that might go wrong and crash your process (thus disconnecting all your clients).
With one process per client, you can code for the happy path and just let it crash when things go wrong. Worst case is you drop a single client.
Processes are fairly cheap (nothing like creating threads). On a modern machine you can have millions.
If your user base is in the many millions, I'm sure you'd end up with more than one server anyway. So something that can easily scale to the hundreds of thousands to low millions on a box is plenty.

Is long polling possible with a Rails application using EventMachine?

I'm writing a simple chat room application in Rails 3.1 - for learning purposes.
For starters I have all the needed models (messages, users, rooms, etc.) and things work great.
The clients poll the server every minute (for example) and get new messages if there are any.
I would like to change the simple polling to long polling, and I can't figure out whether this can be done within the same app or whether I have to create a separate push server for it.
I read a lot about EventMachine and switched my Rails app over to use it, since I wanted EventMachine's event-driven mechanics. I thought that the EventMachine channel would come in handy for this.
A client would connect and wait for a message in the chat room, receiving one only when it is sent to the room.
What I can't figure out is how can I share the EventMachine::Channel instance between all my client connections.
Is this approach even possible or am I going at it the wrong way?
If possible I would like a solution that can run as a single rails application hosted on Heroku.
Yeah, sure. I just wrote a demo using EventMachine. My case is players walking around a map, where other players should be able to see them move.
The demo works like this:
A client establishes a connection, reporting its own coordinates (generated randomly).
The server keeps an array of all clients' coordinates.
When a client moves, it sends its new coordinates to the server. The server then finds the players near it (from the array) and pushes the new coordinates to those clients.
I tested it with nearly 5,000 clients, with 20-30 players moving their positions each second, and the server process took less than 100MB of memory and 50-60% CPU usage (on a single core).
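On the original question of sharing a single EM::Channel instance across connections, a minimal sketch in plain EventMachine (the ChatConnection module and port are made up) could look like this: every connection subscribes to the one shared channel, and anything pushed to it fans out to all subscribers.

```ruby
# Sketch: one shared EM::Channel fanning messages out to all connections.
require "eventmachine"

ROOM = EM::Channel.new  # the single shared channel for the chat room

module ChatConnection
  def post_init
    # Subscribe this connection; the block runs for every pushed message.
    @sid = ROOM.subscribe { |msg| send_data(msg) }
  end

  def receive_data(data)
    ROOM.push(data)  # broadcast to every subscriber
  end

  def unbind
    ROOM.unsubscribe(@sid)  # clean up when the client disconnects
  end
end

EM.run { EM.start_server("0.0.0.0", 9000, ChatConnection) }
```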
In your case, I think you should probably try Faye too. It's based on EventMachine and is an appropriate solution for things like chat rooms.
Expanding on what I mentioned in the comment: check this blog post, which explains how to create a text-based chat app using EM and uses AMQP to broadcast the messages to the other users.
I think you can do the same, or use in-memory queues to share messages; this should definitely work on Heroku, since you wouldn't have a dependency on an external service such as RabbitMQ.
Here's a good discussion about different queue frameworks: ActiveMQ or RabbitMQ or ZeroMQ or
Rails will have streaming added in version 4.
For now, you can do streaming (long polling) as in this example, with Sinatra and Redis's Pub/Sub feature as the backend. You will have to add another action to handle user-sent messages, publishing them to Redis with the PUBLISH command. You should use an evented server like Thin or Puma.
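Sketched out (the chat_room channel and the routes are made up, and redis-rb's blocking SUBSCRIBE gets its own thread per open connection), that looks roughly like:

```ruby
# Sketch: streaming with Sinatra and Redis Pub/Sub under an evented
# server such as Thin.
require "sinatra"
require "redis"

set :server, :thin

get "/stream", provides: "text/event-stream" do
  stream(:keep_open) do |out|
    Thread.new do
      Redis.new.subscribe("chat_room") do |on|
        on.message { |_channel, msg| out << "data: #{msg}\n\n" }
      end
    end
  end
end

post "/messages" do
  # The extra action for user-sent messages: PUBLISH to the channel.
  Redis.new.publish("chat_room", params[:body])
  status 204
end
```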

A way to hand off a Rails connection to delayed_job

We have an existing API where a client asks our server for information that we have to fetch from another external server. When the external server takes a long time, say 10 seconds, it ties up a Rails Passenger instance for that whole 10 seconds.
Is there some way to pass the rendering of our reply to delayed_job so that I can free up the Rails instance?
NOTE: Ideally, we would just update our API and reply to our API client that we are busy and to try back again in a few seconds to see if we are ready. However, there are already thousands of clients out there and changing them is not practical at this time.
The usual way to handle this is to queue up the job and return immediately, then poll or use an async notification framework like Pusher or Faye to update the remote client. You definitely cannot pass the connection to DJ as you describe. Another avenue you might investigate is using EventMachine to handle it, à la http://railstips.org/blog/archives/2011/05/04/eventmachine-and-passenger/. A third alternative would be to precache the data from the remote web service, but whether that works is very dependent on what you're doing (authorization, for example, is not something you could do there).
The bottom line is that you're dealing with an architecture issue. If you absolutely have to talk to the remote service and output the results within the request cycle, there's not a lot you can do about it short of changing to a more evented backend like EventMachine or Node.js.
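A rough sketch of that queue-and-poll flow using delayed_job (the controller, the Lookup model, and its columns are all hypothetical names):

```ruby
# Hypothetical queue-and-poll flow: enqueue the slow external call,
# return immediately, and let the client poll for the result.
class LookupsController < ApplicationController
  # POST /lookups: returns right away, freeing the Passenger instance
  def create
    lookup = Lookup.create!(status: "pending", query: params[:query])
    lookup.delay.fetch_from_external_service  # delayed_job's delay proxy
    render json: { id: lookup.id, status: lookup.status }, status: :accepted
  end

  # GET /lookups/:id: the client polls until the job has finished
  def show
    lookup = Lookup.find(params[:id])
    render json: { status: lookup.status, result: lookup.result }
  end
end
```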

Is it worth using a daemon?

Hey guys, I have a program that uses AJAX to send a post to multiple social networks via their APIs, based on user form input. I was wondering whether this process (which doesn't take more than 2-3 seconds when I test it myself) is worth daemonizing with something like BackgroundRB. In other words, if this program were used by 100+ people, would the simple call to an action via AJAX slow the entire application down?
Yeah, I'd recommend using DelayedJob to accomplish this task. You want to avoid unnecessary long-running HTTP requests to your app. A DelayedJob worker connects to your database and makes the third-party connections without initiating any HTTP requests to your app.
I wouldn't recommend BackgroundRB.
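For illustration, handing the work to DelayedJob could look roughly like this (the controller and the SocialBroadcaster class are made-up names):

```ruby
# Hypothetical sketch: the AJAX-backed action enqueues the posting work
# and returns immediately instead of calling the social APIs inline.
class PostsController < ApplicationController
  def create
    post = Post.create!(params[:post])
    # SocialBroadcaster#deliver (made up) calls the networks' APIs;
    # delayed_job's delay proxy runs it in a worker, off the request cycle.
    SocialBroadcaster.new.delay.deliver(post.id)
    head :accepted
  end
end
```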
Short answer: you have to move this into the background; use delayed_job.
Longer answer:
The problem is that although it takes only 2-3 seconds, it completely locks the application server while it runs. So if you have, let's say, 5 Mongrel or Passenger app server instances running, and 5 people trigger this action within a 2-3 second interval, no other requests will be able to be processed.
So while it's OK to do it this way during development, in production it is a must to move this work into the background.
I wouldn't recommend BackgroundRB. For what you need, it seems delayed_job is the right fit.
You have several options for doing that:
bj
delayed_job
resque
