I have a simple text editor and want to implement auto save so that any time a change is made to the text, it is immediately sent to the server.
There are two ways to do this:
Open a socket connection and send changes through the socket every second.
Set a 750ms idle keyboard change timer that sends changes any time the user has stopped typing for 750ms.
I understand WebSockets are appropriate when you don't want to poll the server to check for new data. But are they also appropriate when you want to constantly send data to the server?
Is 1 request/user/second over a WebSocket more performant in general than 1 request/user/second over a regular HTTP connection?
Update:
For the record, I looked into Google Docs and it seems to use post requests and not websockets for autosave:
It fires with about a 150ms keyboard idle timer, and only sends incremental changes.
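That idle-timer approach is a classic debounce. Here is a minimal sketch in Python using threading.Timer; the delay value and the save callback are stand-ins for whatever your editor actually sends to the server:

```python
import threading

class AutosaveDebouncer:
    """Restart a short idle timer on every keystroke; fire the save
    callback only after the user has stopped typing for `delay` seconds."""

    def __init__(self, delay, save_callback):
        self.delay = delay
        self.save_callback = save_callback
        self._timer = None
        self._lock = threading.Lock()

    def on_change(self, text):
        # Cancel any pending save and schedule a new one, so rapid
        # keystrokes collapse into a single save once typing pauses.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self.save_callback,
                                          args=(text,))
            self._timer.start()
```

With a 150 ms delay, five quick keystrokes produce one save (of the final state) 150 ms after the last keystroke, rather than five round-trips.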
WebSocket is entirely appropriate for continuously sending small amounts of data to the server.
There are two main advantages:
You do not need to establish a connection each time you send data, which makes things faster (though this may not be all that important for your application).
You save on message size, since the HTTP headers are much larger than those of WebSocket messages.
(For more on this, see this thorough StackOverflow answer.)
Related
I have a Rails app that easily handles the traffic we currently experience, except once a day, when we receive a large number of pings within a few seconds from an external service's webhook reporting on past transactions. Currently this causes the app to time out due to a lack of available db connections, meaning we lose some of the webhooks and the site goes down for a few seconds. It's not important that the data contained in these webhooks be processed instantaneously, so I am looking for a good way to spread out the responses rather than do an expensive upgrade of db connection capacity just to handle these bursts.
Is it okay to just have the relevant controller method sleep for a small, random number of seconds before doing anything that would open a db connection to spread things out? Or is there a better way to do this?
Set up a background/async processing system like Sidekiq (or whatever Heroku offers). Modify your controller action to do nothing but shove the parameters into a background job and return "ok". Then process the job in the background.
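The enqueue-and-return pattern is language-agnostic. Here is a minimal Python analogue using an in-process queue and a single worker thread; `process_webhook` and the parameter shape are illustrative, and unlike Sidekiq (which persists jobs in Redis) this sketch loses queued jobs if the process dies:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def process_webhook(params):
    # Placeholder for the slow, db-touching work.
    results.append(params)

def worker():
    # Drain webhook payloads one at a time, so db connections are
    # used serially instead of all at once during a burst.
    while True:
        params = jobs.get()
        if params is None:          # shutdown sentinel
            break
        process_webhook(params)
        jobs.task_done()

def controller_action(params):
    # The web request does nothing but enqueue and return immediately,
    # so it never waits on a db connection.
    jobs.put(params)
    return "ok"

threading.Thread(target=worker, daemon=True).start()
```

The burst of webhooks is absorbed by the queue in microseconds, and the expensive processing proceeds at whatever rate your db can sustain.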
This is my situation:
I'm using sendAsynchronousRequest. I quickly realized that it has a default timeout of 60 seconds. My app is designed to wait for an opponent to start the game (it's a word game).
Actually, it could take hours before the opponent starts the game. Which means the async-request could be waiting for hours.
Is that bad? I mean, I can probably change the default timeout, but the question is whether this is a bad design. The thing is that I wanted to avoid polling the server at intervals to know whether the opponent has started the match or not.
If this is a bad design, can somebody suggest an alternative approach?
Your best bet is to poll the server if you want to be up and running quickly and don't have a lot of resources (time/money).
If for some reason you need more real-time behavior, there is a great deal of complexity involved in keeping an open socket to your server for communication, and you are best off using an existing framework like Pusher ($), PubNub ($), or socket.io (free, but you will have to handle the server side). If you want to create your own client/server notification system, you may want to check out SocketRocket from Square, which provides a client-side WebSocket implementation for iOS.
I am trying to do some load testing, and I was told that as parameters for testing I should include both the number of concurrent requests and the number of concurrent connections. I really don't understand how there can be multiple requests on a given connection. When a client requests a webpage from a server, it first opens a connection, sends a request, gets a response, and then closes the connection. What am I missing here?
UPDATE:
I meant to ask how it is possible for a single connection to have multiple requests concurrently (meaning simultaneously). Otherwise, what would be the point of measuring both concurrent requests and concurrent connections? Would counting both of them be helpful in knowing how many connections are idle at a time? I realize that a single connection can handle more than one request consecutively; sorry for the confusion.
HTTP supports a feature called pipelining, which allows the browser to send multiple requests to the server over a single connection without waiting for the responses. The server must support this. IIRC, the server has to send a specific response to the request that indicates "yeah, I'll answer this request, and you can go ahead and send other requests while you're waiting". Last time I looked (many years ago), Firefox was the only browser that supported pipelining and it was turned off by default.
It is also worth noting that even without pipelining, concurrent connections is not equal to concurrent requests, because you can have open connections that are currently idle (no requests pending).
A server may keep a single connection open to serve multiple requests. See http://en.wikipedia.org/wiki/HTTP_persistent_connection. It describes HTTP persistent (also called keep-alive) connections. The idea is that if you make multiple requests, it removes some of the overhead of setting up and tearing down a new connection.
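Persistent connections are easy to observe with Python's http.client, which keeps the TCP connection open across consecutive requests on one HTTPConnection object. The local test server below is just scaffolding for the demonstration; setting `protocol_version = "HTTP/1.1"` and sending Content-Length is what enables keep-alive on the server side:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enables persistent connections

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection object, several sequential requests over the same socket:
# no TCP (or TLS) handshake is repeated after the first request.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
for _ in range(3):
    conn.request("GET", "/")
    bodies.append(conn.getresponse().read())
conn.close()
server.shutdown()
```

This is also why concurrent connections and concurrent requests are measured separately in load testing: three requests here traverse a single connection.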
I am running a web2py server which handles some requests that may take anywhere from a few seconds to a few minutes to complete. Once a connection is made to the server and it is processing a request that takes 2-3 minutes, new connections to the server have to wait until the former request is completed.
I don't know if we can tweak some parameters in web2py for this. Is there any way out of this problem?
web2py does not lock the server when busy with a connection, but it does lock the user session, on purpose. That means other users can connect, but not the one that started the original request. In the action that takes time you can do:
session._unlock(response)
and this problem (if diagnosis is correct) will go away.
Anyway, it is not a good idea to have requests that take so long: the web server may kill your process, and it is not good for usability. You should have a db table where you queue such tasks and handle them in a background process (explained in the manual), then use Ajax or HTML5 websockets (web2py/gluon/contrib/comet_messaging.py) to check progress on the long-running task.
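The queue-table pattern can be sketched independently of web2py. A minimal illustration with sqlite3, where the table and column names are stand-ins for whatever you would define with web2py's DAL:

```python
import sqlite3

# In-memory db for the sketch; web2py would use its DAL instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE task_queue "
           "(id INTEGER PRIMARY KEY, payload TEXT, status TEXT)")

def enqueue(payload):
    # Called from the web request: record the task and return at once.
    cur = db.execute(
        "INSERT INTO task_queue (payload, status) VALUES (?, 'pending')",
        (payload,))
    db.commit()
    return cur.lastrowid

def run_one_task():
    # Called from the background process: claim and finish one task.
    row = db.execute("SELECT id, payload FROM task_queue "
                     "WHERE status = 'pending' ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    task_id, payload = row
    db.execute("UPDATE task_queue SET status = 'running' WHERE id = ?",
               (task_id,))
    db.commit()
    # ... the long-running work happens here ...
    db.execute("UPDATE task_queue SET status = 'done' WHERE id = ?",
               (task_id,))
    db.commit()
    return task_id

def check_progress(task_id):
    # What the Ajax (or websocket) endpoint would report to the browser.
    return db.execute("SELECT status FROM task_queue WHERE id = ?",
                      (task_id,)).fetchone()[0]
```

The web request finishes immediately after `enqueue`, and the browser polls `check_progress` until the status reaches "done".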
Please bring this up on the web2py mailing list and we will help with more concrete examples.
I wonder what is the best approach to handle the following scenario:
I have a server that is designed to handle only 10 connections at a time, during which the server is busy interacting with the clients. However, while the server is busy, there may be new clients who want to connect (as part of the next 10 connections that the server is going to accept). The server should only accept the new connections after it finishes with all previous 10 clients.
Now, I would like to have an automatic way for the pending clients to wait and connect to the server once it becomes available (i.e. finished with the previous 10 clients).
So far, I can think of two approaches:
1. Have a file watch on the client side, so that the client watches for a file written by the server. When the server finishes with 10 clients, it writes the file, and the pending clients know it's time to connect.
2. Make the pending clients try to connect to the server every 5-10 seconds or so until they succeed, and have the server return a message indicating whether it is ready.
Any other suggestion would be much welcome. Thanks.
Of the two options you describe, I am inclined toward the second, "pinging" the server. Having the server write a file that triggers another attempt on the client seems more complicated.
I would think that you should be able to have the client wait and simply send it a READY signal. Keep a running queue of connection requests (from Socket.Connection.EndPoint, I believe). When one socket completes, accept the next socket off the queue.
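That READY-signal idea can be sketched with plain sockets: accept every incoming client immediately, but only tell `capacity` of them at a time to proceed, holding the rest in a FIFO queue. The class and method names here are illustrative:

```python
import socket
import threading
from collections import deque

class BatchServer:
    """Accept every client right away, but send b"READY" to at most
    `capacity` of them at a time; the rest wait, connected, in a queue."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.pending = deque()
        self.active = 0
        self.lock = threading.Lock()
        self.sock = socket.socket()
        self.sock.bind(("127.0.0.1", 0))
        self.sock.listen(16)
        self.port = self.sock.getsockname()[1]

    def accept_loop(self, n):
        # Accept n clients total (for the sketch); real code loops forever.
        for _ in range(n):
            conn, _ = self.sock.accept()
            with self.lock:
                if self.active < self.capacity:
                    self.active += 1
                    conn.sendall(b"READY")   # admit immediately
                else:
                    self.pending.append(conn)  # hold until a slot frees

    def release(self):
        # Called when an active client finishes: admit the next waiter.
        with self.lock:
            self.active -= 1
            if self.pending:
                conn = self.pending.popleft()
                self.active += 1
                conn.sendall(b"READY")
```

On the client side, a blocking `recv` for the READY bytes replaces both the file watch and the periodic reconnect: the client just waits on the already-open socket.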