What is the best algorithm/technique to control client connections to the server? - network-programming

I have over 50 clients connected to one server (a low-end server running Windows 2003 Server). Every time there is a power failure or switch failure the clients disconnect from the server, while the server itself may stay up during these incidents (if a power backup is installed). When the clients come back they automatically detect the server and initiate a connection procedure, at which point the server starts dishing out the relevant data to them. It is at this point that some clients start freezing, because the server is not quick enough to dish out the data and so it blocks the rest of the clients.
I have implemented a crude method to control this client storm, but I was wondering whether anyone out there has better algorithms for this kind of task.
NB: I am using the Asta sockets components in a Delphi application, but I don't mind examples from other fields.

Similar to network collision-detection protocols, perhaps clients could wait a random period of time before initiating their connection at startup?

In addition to the random startup delay suggested by Bremen, implement some sort of "too busy; try again later" message in your protocol. Rejecting a client with a short message should not be a problem for 50, 100, or even 1000 clients. Have the clients respond by waiting a random delay and retrying, with exponential backoff.
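A minimal sketch of the combined idea (random startup delay plus randomized exponential backoff and a "busy" rejection). Indy's TIdTCPClient is used purely for concreteness since the Asta API is not shown here; the 'BUSY' greeting line, the port handling and the delay values are illustrative assumptions, not part of the original protocol.

    uses
      Windows, SysUtils, IdTCPClient;

    // Sketch assumptions: TIdTCPClient stands in for the Asta socket component,
    // and the server is assumed to answer with a 'BUSY' line when it wants the
    // client to back off. Adapt names and values to your own protocol.
    function ConnectWithBackoff(const AHost: string; APort: Word): TIdTCPClient;
    const
      MaxDelay = 60000;            // cap the backoff at one minute
    var
      Delay: Integer;
      Greeting: string;
    begin
      Result := TIdTCPClient.Create(nil);
      Result.Host := AHost;
      Result.Port := APort;
      Randomize;
      Sleep(Random(5000));         // random startup delay: spread out the initial rush
      Delay := 1000;               // first retry delay, 1 second
      while True do
      begin
        try
          Result.Connect;
          Greeting := Result.IOHandler.ReadLn;  // hypothetical greeting/BUSY line
          if Greeting <> 'BUSY' then
            Exit;                               // accepted; hand the connection back
          Result.Disconnect;                    // server asked us to come back later
        except
          // connection attempt failed; fall through to the backoff below
        end;
        Sleep(Delay + Random(Delay));           // randomized exponential backoff
        Delay := Delay * 2;
        if Delay > MaxDelay then
          Delay := MaxDelay;
      end;
    end;

The random jitter on top of the doubled delay is what keeps the reconnecting clients from bunching up into a second storm.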

The solution also depends on your preferences. Is it OK for you to drop connection requests or to send a busy message?
Another option is to start sending data to the clients in a round-robin manner. To this end you can have different threads responsible for sending data to different clients. The advantage of this approach is that none of the clients will be starved.
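One way to get that effect is sketched below, under the assumption of one sender thread and one outbound queue per connected client; the connection is typed as Indy's TIdTCPConnection purely for concreteness (the original question uses Asta). Because each thread drains only its own client's queue, a slow client delays nobody but itself.

    uses
      Windows, Classes, SysUtils, SyncObjs, Generics.Collections, IdTCPConnection;

    type
      // One sender thread per connected client, each with its own outbound queue.
      TClientSender = class(TThread)
      private
        FConnection: TIdTCPConnection;   // this client's connection (assumed Indy)
        FLock: TCriticalSection;
        FQueue: TQueue<string>;
      protected
        procedure Execute; override;
      public
        constructor Create(AConnection: TIdTCPConnection);
        destructor Destroy; override;
        procedure Enqueue(const AData: string);   // called by the producer side
      end;

    constructor TClientSender.Create(AConnection: TIdTCPConnection);
    begin
      inherited Create(False);
      FConnection := AConnection;
      FLock := TCriticalSection.Create;
      FQueue := TQueue<string>.Create;
    end;

    destructor TClientSender.Destroy;
    begin
      FQueue.Free;
      FLock.Free;
      inherited;
    end;

    procedure TClientSender.Enqueue(const AData: string);
    begin
      FLock.Acquire;
      try
        FQueue.Enqueue(AData);
      finally
        FLock.Release;
      end;
    end;

    procedure TClientSender.Execute;
    var
      Data: string;
      HasData: Boolean;
    begin
      while not Terminated do
      begin
        FLock.Acquire;
        try
          HasData := FQueue.Count > 0;
          if HasData then
            Data := FQueue.Dequeue;
        finally
          FLock.Release;
        end;
        if HasData then
          FConnection.IOHandler.WriteLn(Data)   // only this client waits on this write
        else
          Sleep(50);                            // nothing queued yet
      end;
    end;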

Related

Is there a restriction on opening IMAP connections from the same IP address?

Hi, I am implementing an email client application. My requirement is that I need to monitor all the mailboxes available on a specified IMAP server, so I created a separate TCP connection for each mailbox, but I am getting disconnected from the IMAP server. I am using Gmail/Yahoo for testing purposes. Is there any restriction on opening multiple connections from the same IP to a particular IMAP server, particularly with Gmail and Yahoo?
Or is there any way to monitor all the mailboxes over a single connection without using IMAP NOTIFY? It seems that it is not supported by either Gmail or Yahoo...
Please help me out...
This is something which I have answered on Stack Overflow before, but which is now only available via the Wayback Machine. The question was about how to "kill too many parallel IMAP connections". It is reprinted below; the core takeaway is that, for some reason, most server administrators prefer a smaller number of short-lived connections over more connections which are active over a longer period of time yet spend most of it silently idling in the background. What they do not get is that the IMAP protocol is designed with long-lived connections in mind, and trying to prevent that leads to wasted resources, because the clients will constantly resync mailboxes as they hop among them.
The original answer follows:
Nope, it's a very wrong idea. IMAP is designed so that monitoring a single mailbox takes one connection; in most IMAP server implementations, this means a single process. However, unless the client the user is using is terribly broken, all these connections enter the IDLE mode. In IDLE, the clients are passively notified about any updates to the mailbox state. If you disable these connections, the clients would have to actively poll for changes in many mailboxes. Now decide for yourself -- what is worse, having ten processes sitting idle, or one process doing heavy polling every two minutes? Which of these solutions would consume more energy, CPU time and IO operations? That's for the number of parallel connections.
The second question was about the long-lived connections. Again, this is a critical aspect of IMAP -- each connection carries a lot of associated state information which is rather expensive to obtain. Unless your server implements certain extensions and your clients use them (ESEARCH, CONDSTORE, QRESYNC are the crucial bits), opening a mailbox can require O(n) operations. I don't know how many messages your users have, but do you really want to transfer e.g. message flags for 250k messages when you decided to kill a connection because it has been active for "too long"?
Finally, any reasonable IMAP server vendor offers a way to configure a per-user session limit on the number of concurrent processes. Using that is much better than maintaining a script for ad-hoc killing of "unused" connections.
If you would like to learn more about the synchronization process, my thesis about using IMAP on clients with flaky network and limited resources describes what the clients have to do in order to show an updated view of mailboxes to their users.

Transfer files & Message Passing from One System to remote System using Internet

Which one (TCP or UDP) is best for sending files from a client to a remote server over the Internet? I.e., which one is fast and reliable for the following requirements?
I have two requirements basically:
1. Sending files from the client to the server (once daily).
2. The client system runs a piece of software that holds various product information: latest packet time, product status, etc. This information is updated every second.
My problem is knowing the client's status at the server.
I am not able to decide which design is best for my requirement. The options are:
A. Using TIdTCPClient & TIdTCPServer
B. Using TIdTCPClient & TIdCmdTCPServer
C. Using TIdCmdClient & TIdTCPServer
D. Using TIdCmdClient & TIdCMDTCPServer
Please guide me on which design is best and how to implement it, with an example.
TCP is slower, but it assures you that you don't lose any packets, without you having to implement that in your application.
UDP is faster, but you have no guarantee that a packet will arrive, and you must implement some sort of confirmation yourself.
In your situation I think TCP is best, and TIdTCPClient with TIdTCPServer will do the trick.
Post some code if you get stuck.
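In case it helps, here is a minimal sketch of option A (TIdTCPClient + TIdTCPServer), assuming a simple line-based protocol in which each client pushes one status line per second; the port number and the 'STATUS|...' format are made-up examples, not part of the original requirement.

    uses
      Windows, SysUtils, IdTCPServer, IdTCPClient, IdContext;

    type
      TStatusServer = class
      private
        FServer: TIdTCPServer;
        procedure HandleExecute(AContext: TIdContext);
      public
        constructor Create(APort: Word);
        destructor Destroy; override;
      end;

    constructor TStatusServer.Create(APort: Word);
    begin
      FServer := TIdTCPServer.Create(nil);
      FServer.DefaultPort := APort;
      FServer.OnExecute := HandleExecute;  // runs in a separate thread per client
      FServer.Active := True;
    end;

    destructor TStatusServer.Destroy;
    begin
      FServer.Active := False;
      FServer.Free;
      inherited;
    end;

    procedure TStatusServer.HandleExecute(AContext: TIdContext);
    var
      Line: string;
    begin
      // blocks until this client sends one line, then gets called again
      Line := AContext.Connection.IOHandler.ReadLn;
      // ... record/update this client's latest status here ...
    end;

    // Client side: push the current status once per second.
    procedure SendStatusLoop(const AHost: string; APort: Word);
    var
      Client: TIdTCPClient;
    begin
      Client := TIdTCPClient.Create(nil);
      try
        Client.Host := AHost;
        Client.Port := APort;
        Client.Connect;
        while True do
        begin
          Client.IOHandler.WriteLn('STATUS|' + DateTimeToStr(Now) + '|OK');
          Sleep(1000);
        end;
      finally
        Client.Free;
      end;
    end;

TIdTCPServer handles each client in its own thread, so a slow client does not block the others; the daily file transfer can be done over the same kind of connection by streaming the file contents.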

How to listen to thousands of imap idle inboxes via spring/amqp/rabbit

The use case is to have a server connect to thousands of users' email accounts and sniff incoming mail in Java, preferably with JavaMail and a Spring Integration/AMQP/RabbitMQ-type scalable infrastructure, using IMAP IDLE-type connections and adding server processing nodes as needed.
A single inbound channel is easy with the IMAP IDLE inbound adapter; you could configure a few in XML. But if you need a persistent queue of thousands of these listener/IMAP IDLE channel adapters and need to add new user connections dynamically for server processing, that becomes a challenge. Fault tolerance is also needed: if the Java listener dies or the server reboots, all these listeners and their configuration should come back up rather than being rebuilt from scratch, and connections that lose their IDLE receive capability should be recovered without rebuilding all of the user connections.
Any ideas are welcome; I have searched a lot but could not find anything. This seems to be a significant scalability issue around keeping so many email receive connections open.
If you want to use the IMAP IDLE command to listen for new messages using JavaMail, you'll need one thread per mailbox, which is likely to impact your scalability. Even keeping thousands of connections open might be an issue.
You don't say how quickly you need to react to new messages. Unless you have near real time requirements, it might be better to poll a subset of mailboxes every so often, eventually cycling through all the mailboxes.
You'll need to deal with the fault tolerance issues yourself, using checkpointing or transactions or whatever seems appropriate for your application.
The other option is perhaps to take a look at something like Akka, with actors performing the async I/O. You'll need to ditch the JavaMail package and parse the IMAP commands yourself, but there are plenty of packages out there to do that. I would love to hear if you find a better solution.

Two-way TCP communication in Indy 10?

I am using TIdCmdTCPClient and TIdCmdTCPServer. Suddenly I find that I might like to have bi-directional communication.
What would be best? Should I possibly use some other components? If so, which? Or should I kludge it and have the 'client' poll the 'server' to ask if it wishes to communicate anything?
This is a very small system. Two clients and ten servers, with a burst of one transaction every 30 to 60 seconds for a few minutes once a day, so the overhead of polling is inconsequential.
I'm just wondering if there is a 'correct' way.
Update: this really is an incredibly simple system. Very little traffic and all of it simple. All transmissions are an indication of an event type and an optional single parameter:
<event type> [<parameter>]   e.g. "HERE_IS_SOME_DATA 42"
This can be sent in both directions, however there is no "reply" as such. Just fire off a message (and hope that it got there)? Receive an ack with no data? Does the absence of an exception indicate that the message was successfully sent?
Would it be possible (would it be overkill) to use two TIdCmdTCPServers?
Both TIdCmdTCPClient and TIdCmdTCPServer continuously poll their socket endpoints for inbound data during the lifetime of the connection. You do not have to do anything extra for that. So, as soon as a TIdCmdTCPClient connects to the TIdCmdTCPServer, both components will initially be in a reading state until one of them sends a command to the other.
Now, there is a problem with doing that - as soon as either component sends that first command, the receiving component will interpret it as a command and send back a reply, which the other component will interpret as a command and send back a reply, which will be interpreted as a command and send back a reply, and so on, causing an endless cycle of replies back and forth. For that reason, it is not wise to use TIdCmdTCPClient and TIdCmdTCPServer together. You should either use TIdTCPClient with TIdCmdTCPServer, or use TIdCmdTCPClient with TIdTCPServer. Depending on what exactly your protocol looks like, you may have to forgo using TIdCmdTCPClient and TIdCmdTCPServer altogether and just use TIdTCPClient with TIdTCPServer so you have more control over reading and writing on both ends. It is hard to answer with actual code without first knowing what the communication protocol should look like.
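Given the '<event type> [<parameter>]' format from the update above, here is a minimal sketch of the TIdTCPClient + TIdCmdTCPServer pairing; the port, the '200' reply code and the way the parameter is stored are illustrative assumptions, not part of the original design.

    uses
      SysUtils, IdTCPClient, IdCmdTCPServer, IdCommandHandlers;

    type
      TEventServer = class
      private
        FServer: TIdCmdTCPServer;
        FLastValue: string;
        procedure HandleHereIsSomeData(ASender: TIdCommand);
      public
        constructor Create(APort: Word);
        destructor Destroy; override;
      end;

    constructor TEventServer.Create(APort: Word);
    var
      Handler: TIdCommandHandler;
    begin
      FServer := TIdCmdTCPServer.Create(nil);
      FServer.DefaultPort := APort;
      Handler := FServer.CommandHandlers.Add;
      Handler.Command := 'HERE_IS_SOME_DATA';
      Handler.OnCommand := HandleHereIsSomeData;
      Handler.NormalReply.Code := '200';   // sent back automatically as the ack
      FServer.Active := True;
    end;

    destructor TEventServer.Destroy;
    begin
      FServer.Active := False;
      FServer.Free;
      inherited;
    end;

    procedure TEventServer.HandleHereIsSomeData(ASender: TIdCommand);
    begin
      if ASender.Params.Count > 0 then
        FLastValue := ASender.Params[0];   // the optional single parameter, e.g. '42'
    end;

    // Client side: fire the event and read the one-line ack.
    procedure SendEvent(const AHost: string; APort: Word; const AValue: string);
    var
      Client: TIdTCPClient;
      Ack: string;
    begin
      Client := TIdTCPClient.Create(nil);
      try
        Client.Host := AHost;
        Client.Port := APort;
        Client.Connect;
        // if the server is configured with a Greeting, read that line here first
        Client.IOHandler.WriteLn('HERE_IS_SOME_DATA ' + AValue);
        Ack := Client.IOHandler.ReadLn;    // e.g. '200'
        Client.Disconnect;
      finally
        Client.Free;
      end;
    end;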
A single TCP socket connection can be used in two directions. The server can send data asynchronously to the client at any time. It is up to the client however to read the socket, for asynchronous processing this is done in a listener thread which reads from the socket and synchronizes incoming data operations with the main worker thread.
An example use case in the Indy components is the Telnet client component (TIdTelnet) which has a receive thread listening for server messages.
But you also asked about the 'correct' way - and then the answer depends on other factors such as network stability, guaranteed delivery and how to handle temporary server outages. In enterprise environments, one central messaging hub is preferred in many use cases, so that all parties connect only to this central server which is only responsible for reliable message delivery, and keeps messages until the recipient is available.
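For the opposite direction (the server pushing to a plain TIdTCPClient), here is a minimal sketch of the listener-thread pattern mentioned above, assuming a line-based protocol; TLineEvent and the way lines are handed to the main thread are illustrative choices.

    uses
      Classes, SysUtils, IdTCPClient;

    type
      TLineEvent = procedure(const ALine: string) of object;

      // Listener thread: blocks in ReadLn and hands each server-pushed line
      // to the main thread, similar to what TIdTelnet does internally.
      TReceiveThread = class(TThread)
      private
        FClient: TIdTCPClient;
        FOnLine: TLineEvent;
        FLine: string;
        procedure DoLine;
      protected
        procedure Execute; override;
      public
        constructor Create(AClient: TIdTCPClient; AOnLine: TLineEvent);
      end;

    constructor TReceiveThread.Create(AClient: TIdTCPClient; AOnLine: TLineEvent);
    begin
      inherited Create(False);
      FClient := AClient;
      FOnLine := AOnLine;
    end;

    procedure TReceiveThread.DoLine;
    begin
      if Assigned(FOnLine) then
        FOnLine(FLine);
    end;

    procedure TReceiveThread.Execute;
    begin
      while not Terminated do
      begin
        FLine := FClient.IOHandler.ReadLn;  // blocks until a full line arrives
        Synchronize(DoLine);                // marshal it to the main thread
      end;
    end;

If the connection drops, ReadLn raises an exception and the thread ends; in a real client you would catch that and trigger a reconnect.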
You can download the INDY 10 TCP server demo sample code here.

Datasnap : Is there a way to detect connection loss globally?

I'm looking to detect local connection loss. Is there a means to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using datasnap 2009. So I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect and handle connection loss (Internet connectivity) between the client and the server app appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another app server.
In a 2-tier setup I used to manage that with the ODAC components; TOraSession has some events to handle these issues.
Normally there is no event fired when a connection is broken, unless a statement is fired against the database. This is because there is no way of knowing a connection loss unless there is some sort of is-alive pinging going on.
Many frameworks check whether a connection is still valid by doing a very small query against the server (getting the time from the server, for example), especially in a connection-pooling environment.
You can implement a connection-checking function in your application in some of the database events (BeforeExecute?), or run a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends some RPC 'Ping' or 'Heartbeat' commands to the server.
If this fails, the client knows that something happened to the connection.
If the server does not hear from the client for some time period (for example, two times the heartbeat interval), it can conclude that the client has disconnected; however, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).
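A minimal sketch of the client-side heartbeat thread. Since the exact DataSnap 2009 call is not specified here, the ping is abstracted behind a TFunc<Boolean> that you would wire to whatever lightweight server method your client can invoke; the TProc callback for the lost-connection case is likewise an assumption.

    uses
      Windows, Classes, SysUtils;

    type
      // APing wraps the lightweight server call (for example a trivial server
      // method) and returns False or raises on failure; AOnLost is invoked in
      // the main thread when the connection is considered gone.
      THeartbeatThread = class(TThread)
      private
        FPing: TFunc<Boolean>;
        FOnLost: TProc;
        FIntervalMS: Integer;
        procedure NotifyLost;
      protected
        procedure Execute; override;
      public
        constructor Create(APing: TFunc<Boolean>; AOnLost: TProc; AIntervalMS: Integer);
      end;

    constructor THeartbeatThread.Create(APing: TFunc<Boolean>; AOnLost: TProc;
      AIntervalMS: Integer);
    begin
      inherited Create(False);
      FPing := APing;
      FOnLost := AOnLost;
      FIntervalMS := AIntervalMS;
    end;

    procedure THeartbeatThread.NotifyLost;
    begin
      FOnLost();   // e.g. show a message or switch to another app server
    end;

    procedure THeartbeatThread.Execute;
    var
      Alive: Boolean;
    begin
      while not Terminated do
      begin
        try
          Alive := FPing();      // e.g. call a no-op server method
        except
          Alive := False;        // any transport error counts as a lost connection
        end;
        if not Alive then
        begin
          Synchronize(NotifyLost);
          Break;                 // report once and stop; the caller decides what next
        end;
        Sleep(FIntervalMS);
      end;
    end;

The server-side detection described above would live in the server application instead, tracking the last time each client's ping was seen.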
