How to handle pending connections to a server that is designed to handle a limited number of connections at a time - network-programming

I wonder what is the best approach to handle the following scenario:
I have a server that is designed to handle only 10 connections at a time, during which the server is busy interacting with those clients. However, while the server is busy, new clients may want to connect (as part of the next 10 connections that the server will accept). The server should only accept the new connections after it has finished with all 10 previous clients.
Now, I would like to have an automatic way for the pending clients to wait and connect to the server once it becomes available (i.e. finished with the previous 10 clients).
So far, I can think of two approaches: 1. have a file watch on the client side, so that the client watches for a file written by the server; when the server finishes with its 10 clients, it writes the file, and the pending clients know it is time to connect; 2. make the pending clients try to connect to the server every 5-10 seconds or so until they succeed, with the server returning a message indicating whether it is ready.
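For illustration, approach 2 might look roughly like this on the client side (a minimal Python sketch; the host, port, and the READY/BUSY reply are assumptions about the protocol, not something the server above actually implements):

    import socket
    import time

    HOST, PORT = "server.example.com", 9000   # hypothetical server address
    RETRY_SECONDS = 7                         # poll every 5-10 seconds, as suggested

    def connect_when_ready():
        """Keep trying until the server reports it has a free slot."""
        while True:
            try:
                sock = socket.create_connection((HOST, PORT), timeout=5)
                greeting = sock.recv(16)      # assumed reply: b"READY" or b"BUSY"
                if greeting.startswith(b"READY"):
                    return sock               # server has a free slot, keep this socket
                sock.close()                  # still busy with its current 10 clients
            except OSError:
                pass                          # unreachable or refused; retry later
            time.sleep(RETRY_SECONDS)

    conn = connect_when_ready()
    conn.sendall(b"hello")                    # placeholder for the real interaction
    conn.close()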
Any other suggestion would be much welcome. Thanks.

Of the two options you provide, I am inclined toward the second option of "pinging" the server. I think it is more complicated to have the server write a file that the client watches to trigger another attempt.
I would think that you should be able to have the client wait and simply send it a READY signal. Keep a running queue of connection requests (from Socket.Connection.EndPoint, I believe). When one socket completes, accept the next socket off the queue.
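A rough server-side sketch of that queue idea, under the same assumptions (Python threads, a hypothetical READY greeting, and a placeholder client-handling body): at most 10 clients are served concurrently, and any extra connections are parked in a queue and greeted only when a slot frees up.

    import socket
    import threading
    from collections import deque

    MAX_ACTIVE = 10
    pending = deque()            # parked sockets waiting for a free slot
    lock = threading.Lock()
    active = 0

    def handle_client(conn):
        global active
        try:
            conn.sendall(b"READY\n")          # tell the waiting client to proceed
            conn.recv(1024)                   # placeholder for the real interaction
        finally:
            conn.close()
            with lock:
                if pending:
                    next_conn = pending.popleft()   # hand the slot straight to a queued client
                else:
                    next_conn = None
                    active -= 1                     # nobody waiting: free the slot
        if next_conn:
            threading.Thread(target=handle_client, args=(next_conn,), daemon=True).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 9000))
    server.listen(50)
    while True:
        conn, _addr = server.accept()
        with lock:
            start_now = active < MAX_ACTIVE
            if start_now:
                active += 1
            else:
                pending.append(conn)          # parked: the client just waits for READY
        if start_now:
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

With this variant the client simply blocks on its open socket until READY arrives, rather than retrying; either pattern fits the description above.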

Related

Implementing autosave in a Rails app with Websockets

I have a simple text editor and want to implement auto save so that any time a change is made to the text, it is immediately sent to the server.
There are two ways to do this:
Open a socket connection and send changes through the socket every second.
Set a 750ms idle keyboard change timer that sends changes any time the user has stopped typing for 750ms.
I understand websockets are appropriate when you don't want to poll to check the server for new data. But is it also appropriate for when you want to constantly send data to the server?
Is 1 request/user/second on a web socket more performant in general than 1 request/user/second on a regular http connection?
Update:
For the record, I looked into Google Docs and it seems to use post requests and not websockets for autosave:
It fires with about a 150ms keyboard idle timer, and only sends incremental changes.
WebSocket is entirely appropriate for continuously sending small amounts of data to the server.
There are two main advantages:
You do not need to establish a connection each time you send data, which makes things faster (though this may not be all that important for your application).
You save on message size, since the HTTP headers are much larger than those of WebSocket messages.
(For more on this, see this thorough StackOverflow answer.)
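As a rough illustration of combining the idle-timer approach from the question with a single persistent WebSocket connection, here is a hedged Python sketch (it assumes the third-party websockets package and a hypothetical ws://example.com/autosave endpoint; in the browser the same idea would be written in JavaScript):

    import asyncio
    import json
    import websockets   # third-party package, assumed installed

    IDLE_MS = 750       # flush once the user has been idle this long

    class Autosaver:
        def __init__(self, ws):
            self.ws = ws
            self.pending = None   # latest unsent incremental change
            self.timer = None     # handle for the scheduled flush

        def on_keystroke(self, change):
            """Record a change and (re)start the idle timer."""
            self.pending = change
            if self.timer:
                self.timer.cancel()            # user is still typing: push the flush back
            loop = asyncio.get_running_loop()
            self.timer = loop.call_later(
                IDLE_MS / 1000,
                lambda: asyncio.ensure_future(self.flush()))

        async def flush(self):
            """Send the accumulated change over the already-open socket."""
            if self.pending is not None:
                await self.ws.send(json.dumps(self.pending))
                self.pending = None

    async def main():
        # The connection is opened once and reused for every save, which is
        # where WebSocket avoids the per-request HTTP header overhead.
        async with websockets.connect("ws://example.com/autosave") as ws:
            saver = Autosaver(ws)
            saver.on_keystroke({"pos": 0, "insert": "H"})
            saver.on_keystroke({"pos": 1, "insert": "i"})   # resets the timer
            await asyncio.sleep(1.0)                        # let the idle timer fire

    asyncio.run(main())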

What is the difference between a concurrent connection and a concurrent request?

I am trying to do some load testing and I was told that as parameters for testing, I should include both the number of concurrent requests and the number of concurrent connections. I really don't understand how there can be multiple requests on a given connection. When a client requests a webpage from a server, it first opens a connection, sends a request, gets a response, and then closes the connection. What am I missing here?
UPDATE:
I meant to ask how it was possible for a single connection to have multiple requests concurrently (meaning simultaneously.) Otherwise, what would be the point of measuring both concurrent requests and concurrent connections? Would counting both of them be helpful in knowing how many connections are idle at a time? I realize that a single connection can handle more than one request consecutively, sorry for the confusion.
HTTP supports a feature called pipelining, which allows the browser to send multiple requests to the server over a single connection without waiting for the responses. The server must support this. IIRC, the server has to send a specific response to the request that indicates "yeah, I'll answer this request, and you can go ahead and send other requests while you're waiting". Last time I looked (many years ago), Firefox was the only browser that supported pipelining and it was turned off by default.
It is also worth noting that even without pipelining, concurrent connections is not equal to concurrent requests, because you can have open connections that are currently idle (no requests pending).
A server may keep a single connection open to serve multiple requests. See http://en.wikipedia.org/wiki/HTTP_persistent_connection. It describes HTTP persistent (also called keep-alive) connections. The idea is that if you make multiple requests, it removes some of the overhead of setting up and tearing down a new connection.
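A small sketch of the keep-alive idea, issuing several requests sequentially over one TCP connection with Python's standard library (this shows connection reuse, not true pipelining; example.com is just a stand-in host):

    import http.client

    conn = http.client.HTTPConnection("example.com")   # one TCP connection
    for path in ("/", "/about", "/contact"):
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()        # drain the response before reusing the connection
        print(path, resp.status, len(body))
    conn.close()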

PostgreSQL: Session Timeout?

I am looking for a way to control the session timeout of the PostgreSQL (9.0) client on Windows.
When does a session die? What happens to it after it dies?
How can I force a session to die? (For example, it is "locked" on some wrong, long-running query, and I want to force the server to release its resources.)
Thanks for it:
dd
Let me expand on this to make it clearer:
The database needs to know which sessions are dead.
Dead sessions must be released, because they only hold resources; if this is not done, we accumulate many locks, or we run out of available connections (reach the maximum).
Other databases (Firebird, EDB) define a timeout parameter for this.
When it is reached, the session is marked dead and the user's connection is aborted.
To avoid exhausting the timeout, the client must periodically do something that extends the period.
There are three ways to reach the timeout:
1.) the client program hangs, is frozen, or is closed;
2.) the network connection is broken;
3.) the client sends some very long query/stored procedure that does not finish.
If the timeout is not handled by the server, somebody's transaction, lock, etc. may stay alive for hours, and there is only one way to remove it: restart the database server service.
Other databases treat dead sessions as no longer able to interact with the server, so the client gets an error and has to restart the client software.
Some databases support returning to an "inactive" but "not dead" session, so the client can continue its work.
So, with this preface, I ask my question again:
How can I control the client's session timeout under PostgreSQL? A system variable, an SQL parameter, etc.?
How can I extend this time?
What happens if a long query exceeds the timeout?
When does the PostgreSQL server release the resources held by the client?
Thanks:
dd
I don't understand the first part of your question, but to kill a running session you can use pg_terminate_backend()
To cancel the query of a running session, use pg_cancel_backend()
Both functions are explained in the manual:
http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE
There are three ways to reach the timeout: 1.) the client program hangs, is frozen, or is closed; 2.) the network connection is broken; 3.) the client sends some very long query/stored procedure that does not finish.
For 2, the tcp_keepalives_* settings might be useful: http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
For 3, there is a statement_timeout setting: http://www.postgresql.org/docs/8.4/static/runtime-config-client.html but this will only terminate the statement, not the connection.
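For illustration, the functions and settings mentioned above could be driven like this from Python via psycopg2 (an assumed dependency; the connection string is hypothetical, and the same statements can be run directly in psql). Note that in 9.0 pg_stat_activity uses the column names procpid and current_query:

    import psycopg2   # third-party driver, assumed installed

    conn = psycopg2.connect("dbname=postgres user=postgres")   # hypothetical DSN
    conn.autocommit = True
    cur = conn.cursor()

    # Cap how long any statement in this session may run (case 3 above);
    # the statement is cancelled, but the connection stays open.
    cur.execute("SET statement_timeout = '5min'")

    # List the other sessions so a stuck one can be picked out.
    cur.execute("SELECT procpid, usename, current_query FROM pg_stat_activity")
    for pid, user, query in cur.fetchall():
        print(pid, user, query)

    # pg_cancel_backend(pid)    stops the running query but keeps the session;
    # pg_terminate_backend(pid) closes the whole connection.
    # cur.execute("SELECT pg_cancel_backend(%s)", (some_pid,))
    # cur.execute("SELECT pg_terminate_backend(%s)", (some_pid,))

    cur.close()
    conn.close()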

Web2py server problem

I am running a web2py server which handles some requests that may take from a few seconds to a few minutes to complete. Once a connection is made to the server and it is processing a request that takes about 2-3 minutes, new connections to the server have to wait until the earlier request is completed.
I don't know if we can tweak some parameters in web2py for this. Is there any way out of this problem?
web2py does not lock the server when busy with a connection, but it does lock the user session, on purpose. That means other users can connect, but not the one that started the original request. In the action that takes time you can do:
session._unlock(response)
and this problem (if the diagnosis is correct) will go away.
Anyway, it is not a good idea to have requests that take this long. The web server may kill your process, and it is not good for usability. You should have a db table where you queue such tasks and handle them in a background process (explained in the manual), then use ajax or HTML5 websockets (web2py/gluon/contrib/comet_messaging.py) to check progress on the long-running task.
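For illustration, the session unlock and the background-queue idea might look roughly like this as web2py controller actions (a hedged sketch: task_queue, do_expensive_work, and the field names are hypothetical, while db, session, request, and response are the usual web2py globals):

    def long_running_action():
        # Release the session lock first so the same user's other requests proceed.
        session._unlock(response)
        result = do_expensive_work()   # placeholder for the 2-3 minute job
        return dict(result=result)

    def enqueue():
        # Preferred approach: record the task and let a background process pick it up.
        task_id = db.task_queue.insert(status='pending', created_on=request.now)
        return dict(task_id=task_id)

    def check_progress():
        # Polled from the browser (ajax or comet_messaging) to report progress.
        task = db.task_queue(request.args(0, cast=int))
        return dict(status=task.status if task else 'unknown')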
Please bring this up on the web2py mailing list and we will help with more concrete examples.

concurrent application

I have used Erlang for the past five months and I have liked it. Now it is time for me to write a concurrent application that will interact with the YAWS web server and the Mnesia DBMS and work on a distributed system. Can anyone help me with a rough draft in Erlang?
I mean the application should have both a server end and a client end, where the server can accept subscriptions from clients, forward notifications from event processes to each of the subscribers, accept messages to add events and start the needed processes, and accept messages to cancel an event and subsequently kill the event processes. The client should be able to ask the server to add an event with all its details, ask the server to cancel an event, monitor the server (to know if it goes down), and shut down the event server if needed. The events requested from the server should contain a deadline.
Spend some time browsing github, you can find projects corresponding to your description:
http://www.google.ca/search?hl=en&biw=1405&bih=653&q=site%3Agithub.com+erlang+yaws+mnesia&aq=f&aqi=&aql=&oq=
