Best practice: Keep TCP/IP connection open or close it after each transfer? - delphi

My Server-App uses a TIdTCPServer, several Client apps use TIdTCPClients to connect to the server (all computers are in the same LAN).
Some of the clients only need to contact the server every couple of minutes, others once every second and one will do this about 20 times a second.
If I keep the connection between a Client and the Server open, I'll save the re-connect, but have to check if the connection is lost.
If I close the connection after each transfer, it has to re-connect every time, but there's no need to check if the connection is still there.
What is the best way to do this?
At which frequency of data transfers should I keep the connection open in general?
What are other advantages / disadvantages for both scenarios?

I would suggest a mix of the two. When a new connection is opened, start an idle timer for it. Whenever data is exchanged, reset the timer. If the timer elapses, close the connection (or send a command to the client asking if it wants the connection to remain open). If the connection has been closed when data needs to be sent, open a new connection and repeat. This way, less-often-used connections can be closed periodically, while more-often-used connections can stay open.
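To make the idea concrete, here is a minimal sketch in Python (the question is about Delphi/Indy, so treat this purely as an illustration; the timeout value is arbitrary, and the idle check is done lazily on each send rather than with a background timer, to keep the sketch short):

    import socket
    import time

    IDLE_TIMEOUT = 60.0  # seconds; tune per client (illustrative value)

    class IdleConnection:
        """Opens a connection on demand, drops it after a period of inactivity."""

        def __init__(self, host, port):
            self.host, self.port = host, port
            self.sock = None
            self.last_used = 0.0

        def send(self, data):
            # Close the connection if it has sat idle too long.
            if self.sock and time.monotonic() - self.last_used > IDLE_TIMEOUT:
                self.sock.close()
                self.sock = None
            # (Re)connect only when we actually have something to send.
            if self.sock is None:
                self.sock = socket.create_connection((self.host, self.port))
            self.sock.sendall(data)
            self.last_used = time.monotonic()  # any traffic resets the idle timer

A client that talks 20 times a second will effectively never reconnect, while a client that talks every few minutes pays one reconnect per burst.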

Two cents from experience...
My first TCP/IP client/server application used a new connection and a new thread for each request... years ago...
Then I discovered (using Process Explorer) that it consumed network resources, because closed connections are not destroyed immediately: they remain in a particular state (TIME_WAIT) for some time. And a lot of threads were created...
I even had connection problems under a lot of concurrent requests: I didn't have enough ports on my server!
So I rewrote it, following the HTTP/1.1 scheme and its KeepAlive feature. It's much more efficient, it uses a small number of threads, and Process Explorer likes my new server. And I never ran out of ports again. :)
If the connection does have to be shut down after each request, I'd at least use a thread pool so that a new thread isn't created per client...
In short: if you can, keep your client connections alive for some minutes.
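For comparison, here is a minimal sketch of the keep-alive scheme just described, in Python rather than Delphi: a bounded thread pool, with each pooled worker serving many requests over one connection until the client goes idle. The port, timeout, and response are placeholders.

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def serve_connection(conn):
        # Keep the connection alive and serve many requests on it,
        # instead of one connection (and one thread) per request.
        conn.settimeout(120)  # drop clients idle for more than 2 minutes
        try:
            while True:
                request = conn.recv(4096)
                if not request:          # client closed its end
                    break
                conn.sendall(b'OK\n')    # placeholder response
        except socket.timeout:
            pass                         # idle too long: close our end
        finally:
            conn.close()

    listener = socket.create_server(('0.0.0.0', 9000))
    pool = ThreadPoolExecutor(max_workers=32)  # bounded thread count
    while True:
        conn, addr = listener.accept()
        pool.submit(serve_connection, conn)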

While it may be fine to connect and disconnect for an application that is active once every few minutes, the application that is communicating several times a second will see a performance boost by leaving the connection open.
Additionally, your code will be much simpler if you aren't constantly trying to open, close, or diagnose the connection. With proper open and close logic, and exception handling (SEH) around your reads and writes, there's no reason to test whether the socket is still connected before using it; just use it. It will tell you when there is a problem.
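In Indy the analogue is a try..except around the Write/Read calls. Here is the same "just use it" pattern as a hedged Python sketch; conn_factory is an assumed helper that opens a fresh connection:

    def send_with_retry(conn_factory, sock, data):
        # Don't ask the socket whether it is connected; just write to it.
        # A dead connection announces itself as an exception on use.
        try:
            sock.sendall(data)
            return sock
        except OSError:          # covers reset, broken pipe, timeout, ...
            sock.close()
            sock = conn_factory()  # reconnect once and retry
            sock.sendall(data)
            return sock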
I'd lean towards keeping a single connection open in most enterprise applications. It generally leads to cleaner code that is easier to maintain.
/twocents

I guess it all depends on your goal and the number of requests made to the server in a given time, not to mention the available bandwidth and the hardware on the server.
You need to think about the future as well: is there any chance that you will need connections to be left open? If so, you've answered your own question.
I've implemented a chat system for a project in which ~50 people (a number that grows every couple of months) are always connected, and besides chatting it also includes data transfer, database manipulation using certain commands, etc. My implementation keeps the connection to the server open from application startup until the application is closed. No issues so far; if a connection is lost for some reason, it is automatically reestablished and everything continues flawlessly.
Overall, I suggest you try both (keeping the connection open, and closing it after each use) and see which fits your needs best.

Unless you are scaling to many hundreds of concurrent connections I would definitely keep it open - this is by far the better of the two options. Once you scale past hundreds into thousands of concurrent connections you may have to drop and reconnect. I have architected my entire framework around this (http://www.csinnovations.com/framework_overview.htm) since it allows me to "push" data to the client from the server whenever required. You need to write a fair bit of code to ensure that the connection is up and working (network drop-outs, timed pings, etc), but if you do this in your "framework" then your application code can be written in such a way that you can assume that the connection is always "up".
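As a minimal sketch of what "timed pings" can look like, assuming a made-up one-byte ping/echo protocol (not part of the linked framework): the client writes a ping on a timer, and a failed write or a missed reply marks the connection as dead so the framework layer can reconnect.

    import threading

    PING = b'\x00'          # hypothetical ping byte, not part of any real protocol
    PING_INTERVAL = 10.0    # seconds between keep-alive probes

    def start_heartbeat(sock, on_dead):
        def beat():
            try:
                sock.settimeout(5.0)        # the echo must arrive within 5 s
                sock.sendall(PING)
                if sock.recv(1) != PING:    # server echoes the ping back
                    raise ConnectionError('bad pong')
            except OSError:
                on_dead()                   # let the framework reconnect
                return
            threading.Timer(PING_INTERVAL, beat).start()
        threading.Timer(PING_INTERVAL, beat).start()

(A real framework would multiplex pings with application data on the socket; this sketch assumes the socket is otherwise idle.)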

One problem is the per-application thread limit, around 1400 threads, so at most roughly 1300 clients connected at the same time.

When closing connections as a client, the port you used stays unavailable for a while. So at high volume you're burning through a lot of ports. For anything repetitive, I'd keep it open.

Related

What is the best practice for IMAP connection?

I think that the concept of connection is not fully clear to me.
I am building a small read-only webmail for a project, and I am using Ruby's net/imap library.
Should I open a connection, authenticate, do the action and disconnect each time?
Or should I open one connection and pass it around in my application?
Can someone clarify the concept of an IMAP connection for me?
I have seen that a lot of clients open multiple connections at the same time. Why?
An IMAP connection is expensive enough that you'll want to keep it if you're going to use it again in the next few seconds (perhaps even minutes). It carries much more state and is much more expensive to set up than the HTTP connections with which you're probably familiar.
However, IMAP connections die randomly. Many NAT middleboxes are surprised when a TCP connection remains quiet for three minutes, as IMAP connections often do. So you'll probably want to accept that connections can die, and reopen them if necessary.
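In practice that means keeping one authenticated connection around and transparently reopening it when something kills it. The question uses Ruby's net/imap; as an illustration only, the same pattern with Python's imaplib might look like this (host, credentials, and the single-retry policy are placeholders):

    import imaplib

    def open_imap():
        conn = imaplib.IMAP4_SSL('imap.example.com')  # placeholder host
        conn.login('user', 'password')                # placeholder credentials
        conn.select('INBOX', readonly=True)           # read-only webmail use case
        return conn

    conn = open_imap()

    def search(criteria):
        global conn
        try:
            return conn.search(None, criteria)
        except (imaplib.IMAP4.abort, OSError):
            # The connection died (idle NAT timeout, server drop, ...):
            # reopen and retry once instead of failing the request.
            conn = open_imap()
            return conn.search(None, criteria)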

Can Websockets Work On Mobile Phones?

I'm currently thinking about creating a soft realtime mobile phone webapp, but when I started researching websockets, I found a load of scare stories about websocket connections dropping out on mobile phones:
WebSockets over a 3G connection
http://blog.hekkers.net/2012/12/09/websockets-and-mobile-network-operators/
Can this still be considered a problem?
Relatedly, I suspect a long polling client might be a good way to implement similar functionality, but wondered about the mobile specific issues I'm likely to encounter.
So far, I've read that long-polling requests may have a considerable impact on battery life. I also hear that iOS limits the number of connections made to a single server, which might be a problem.
Have any of you worked on a mobile application with a realtime component? And if you have, what challenges did you encounter, and how did you overcome them?
I have built several WebSocket webapps with real-time data, and they perform very well on the iPhone and on mobile in general. WebSockets use ping/pong frames to check whether the connection is still alive. Things that have caused disconnection:
If you close down the app, then the connection will be dropped (on iOS webapps).
If the network goes down (wifi/3g/4g), then the connection will be dropped, and nothing sent during that gap is recovered.
Considerations:
Write a simple reconnection routine into the onclose handler of your JavaScript that tries to reconnect after a certain number of seconds:
let websocket;

function connect() {
  websocket = new WebSocket("wss://myws:5020");
  websocket.onclose = function (event) {
    console.log(event);
    setTimeout(connect, 5000); // re-connect after 5 seconds
    //..and so on
  };
}

connect();

TCP/IP long-term connections

I have a server application which runs on a Linux machine. I can connect to this application from Windows/Linux machines and can send/receive data. After a few hours, something occurs and I get the following error on the client side:
On Windows: An existing connection was forcibly closed by the remote host
On Linux: Connection timed out
I searched the web and found some posts suggesting to increase/decrease the OS's keep-alive time. However, it didn't work for me.
Is there a solution to this problem, or should I simply try to reconnect to the server when the connection is forcibly closed?
EDIT: I have tracked down the situation. I sent data to the remote node, and then sent more data after waiting 5 hours. The sending side sent the first piece of data, but when it sent the second piece it got no response. The sender's TCP/IP stack retried this 5 times, increasing the interval between retries each time. Finally, the sender reset the connection. I can't be sure why this is happening (maybe because of a firewall or NAT - see Section 2.4), but I applied two different approaches to solve the problem:
Use TCP/IP keep-alive via setsockopt (Section 4.2)
Implement an application-level keep-alive. This is more reliable, since the first approach is OS-dependent.
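For reference, the first approach might look like this in Python; the TCP_KEEP* constants are Linux-specific, and the timing values are illustrative, chosen to probe well before typical firewall/NAT idle timeouts:

    import socket

    sock = socket.create_connection(('server.example.com', 5000))  # placeholder

    # TCP-level keep-alive (OS-dependent behaviour).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up

The second approach is just a periodic no-op message in your own protocol, which behaves the same on every OS.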
It depends on what your application is supposed to do. A little more information and perhaps the code you use for listening and handling connections could be of help.
Regardless, a longer keep-alive time should technically prevent the OS from cutting you off. So perhaps something else is causing the trouble.
Such a thing could be a router malfunction, or traffic causing your keep-alive packets to get lost.
If you aren't already testing it on a LAN (without heavy traffic), I suggest doing so.
It might also be due to how your socket is handled (which I can't determine from your question)
This article might help.
Non blocking socket with timeout
I'm not well versed in how connections are handled on Linux, but I expect the OS won't cut off a connection unnecessarily.
You can re-establish the connection as a recovery, but you need to take into account that not all disconnects are gentle, and therefore you could end up recovering a connection you actually want closed.
Since it is TCP, it will do its best to make a gentle disconnect, but you can send a custom message right before disconnecting, telling the server or client not to re-establish the connection. That way you can be absolutely sure, even though it should be unnecessary.
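As a sketch of that farewell message, assuming an invented line-based protocol:

    GOODBYE = b'BYE\n'   # hypothetical message meaning "intentional close"

    def close_gently(sock):
        # Tell the peer this close is deliberate, so its recovery
        # logic knows not to re-establish the connection.
        try:
            sock.sendall(GOODBYE)
        except OSError:
            pass             # already dead; nothing to announce
        sock.close()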

Asynchronously send file over TCP connection

so I'm making an iOS app, but this is more of a general networking question.
So what I have is one phone that acts as the server, and then a bunch of phones that connect to it as clients. Basically it's a game/music sharer.
It's kind of hard to really get into the semantics of it, but that isn't important.
What is important is that the server and client are repeatedly sending each other commands and positions rapidly over a TCP connection, and sometimes the client wants to send the server a music file (4MB usually) to play as the music.
The problem I initially encountered was that when sending the large file, it would hang the sending of commands from the client to the server.
My naive solution was to create another socket to connect to the server to send the file to the server, the server would check the IP of the new socket, and if it has the IP of an existing connection then it would just tie it to that connection, receive the file, and then disconnect the socket.
But the problem with this is that there's a 1-2 second delay for the socket to connect, and I'm aware that man-in-the-middle attacks can occur.
Is there a more elegant solution to this problem?
I would not call your solution naive; this is largely how FTP works, and separating data and control paths is a good design pattern in my view.
I wouldn't worry about the man-in-the-middle thing. If you wanted, you could add a command to the client that it responds to over the data connection with a secret the server supplies; this would let you associate the two connections without relying on IP addresses.
If the delay is a problem, then why not establish both connections at the start? The overhead of a few TCP connections is not usually significant for an operating system.
You could also use the two connections for both commands and data, alternating between them. Since both the server and client know when a connection is busy, they can choose to use the idle one. The advantage of this is that it keeps both connections busy, ensuring they are both known to be working.
You should probably also use a separate thread for each socket, but I suspect you are already doing this, since it won't work well without it.
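For the secret-based pairing suggested above, the server-side handshake could look something like this sketch (Python; the 16-byte token scheme is invented for illustration):

    import os

    # Server side: hand the client a one-time token over the control connection.
    tokens = {}   # token -> client session

    def on_control_connect(control_sock, session):
        token = os.urandom(16)
        tokens[token] = session
        control_sock.sendall(token)   # client echoes this on the data socket

    def on_data_connect(data_sock):
        # A real implementation would loop until all 16 bytes arrive.
        token = data_sock.recv(16)    # first 16 bytes identify the session
        session = tokens.pop(token, None)
        if session is None:
            data_sock.close()         # unknown token: reject the connection
        else:
            session.data_sock = data_sock  # pair data socket with its control peer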

What is the best algorithm/technique to control client connections to the server?

I have over 50 clients connected to one server (a low-end server running Windows 2003 Server). Every time there is a power failure or switch failure, the clients disconnect from the server; the server might remain on during these incidents (if a power backup is installed). When the clients come back, they automatically detect the server and initiate a connection procedure, at which point the server starts dishing out the relevant data to the clients. It's at this point you realize some clients start freezing, because the server is not quick enough to dish out the data, and so it blocks the rest of the clients.
I have implemented a crude method to control this client storm, but I was asking whether you guys out there have better algorithms to perform this kind of task.
NB: I'm using Asta socket components in a Delphi application, but I don't mind examples from different fields.
Similar to network collision-detection protocols, perhaps clients could wait a random period of time before initiating their connection at startup?
In addition to the random startup delay suggested by Bremen, implement some sort of "too busy; try again later" message in your protocol. Rejecting a client with a short message should not be a problem for 50, 100, or even 1000 clients. Have the clients respond by waiting a random delay and retrying, with exponential backoff.
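A sketch of that client-side behaviour, with a random startup delay plus full-jitter exponential backoff (base, cap, and timeouts are arbitrary):

    import random
    import socket
    import time

    def connect_with_backoff(host, port, base=1.0, cap=60.0):
        time.sleep(random.uniform(0, 5))   # random startup delay, spreads the storm
        delay = base
        while True:
            try:
                return socket.create_connection((host, port), timeout=10)
            except OSError:                          # refused, timed out, or "too busy"
                time.sleep(random.uniform(0, delay)) # full jitter
                delay = min(cap, delay * 2)          # exponential backoff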
The solution depends on your preferences as well. Is it OK for you to drop the connection request, or to send a busy message?
Another option is to start sending data to the clients in a round-robin manner. To this end you can have different threads responsible for sending data to different clients. The advantage in this case is that none of the clients will be starved.
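As a sketch of that: one dedicated sender thread and bounded queue per client, so a slow client only ever blocks itself (the queue size is arbitrary):

    import queue
    import threading

    def start_sender(client_sock):
        # One sender thread (and queue) per client: a slow client
        # only blocks its own thread, never the others.
        q = queue.Queue(maxsize=100)   # bounded, so a dead client can't eat memory
        def run():
            while True:
                chunk = q.get()
                if chunk is None:      # sentinel: shut this sender down
                    break
                try:
                    client_sock.sendall(chunk)
                except OSError:
                    break              # client is gone; stop sending
        threading.Thread(target=run, daemon=True).start()
        return q                       # producers put() data onto this queue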
