How do I keep Advantage Database connections from timing out? (Delphi)

I have a Windows Service that works with an Advantage database and occasionally makes some HTTP calls. On rare occasions these calls can run very long, long enough that my database connection times out. I'm not using a data module or anything; I'm just creating the connection manually.
My primary question is: what normally prevents the connection from timing out if I simply haven't used it in a while? Do the TAds components send a keep-alive message in the background somehow? Is that dependent on the VCL, so that I don't get it in my service? Somehow I feel that creating a thread to make my HTTP call, and checking in the main thread every few seconds for it to finish, would prevent the connection from dying. Is that ever true?

Yes, there is a keepalive mechanism as you expect. The client (for all communication types: TCP, UDP, shared memory) sends a "ping" to the server every so often to let the server know that the connection is still alive. The frequency of that keepalive ping is based on the server configuration parameter CLIENT_TIMEOUT. With the default settings, I believe the keepalive ping is sent every 30 seconds.
The keepalive logic runs in a separate thread that is started by the code that handles the communication. In other words, it does not depend on any of the VCL components; if you have a connection to the server, then that thread should be running.
One way to check if your connections are timing out is to look in the Advantage error log. There should be 7020 errors corresponding to timed out connections.
Some things that come to mind that might result in timed out connections include:
The client process being suspended for some reason so that the keepalive thread could not run. This seems unlikely.
The keepalive thread was killed for some reason. This also seems unlikely; you would have to go out of your way to make this happen.
A firewall may close the connection if there is no activity for a time. I would think, though, that a 30 second interval would be sufficient to prevent that.
A firewall may disallow the UDP keepalive packets. Firewalls, by nature, are "suspicious" of UDP packets. You might make sure you are using TCP/IP.
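If the log confirms that the connection is being dropped during those long HTTP calls and you cannot shorten them, a simple client-side fallback (not part of the answer above, just a hedged sketch) is to probe the connection once the call finishes and rebuild it before running real queries. The names TUpdateService, FConnection and FQuery below are made up and stand for your service class and a manually created TAdsConnection/TAdsQuery pair; system.iota is Advantage's built-in dummy table:

    procedure TUpdateService.EnsureAdsConnection;
    begin
      // Called after the long HTTP call returns, before touching the database.
      try
        if not FConnection.IsConnected then
          FConnection.Connect;
        // Touch the server with a trivial statement so a silently dropped
        // connection surfaces here instead of in the middle of real work.
        FQuery.SQL.Text := 'SELECT 1 FROM system.iota';
        FQuery.Open;
        FQuery.Close;
      except
        // Probe failed: assume the connection timed out and rebuild it.
        try
          FConnection.Disconnect;
        except
          // ignore errors while tearing down the dead connection
        end;
        FConnection.Connect;
      end;
    end;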

Related

IdTCPServer does not release clients who disconnect abruptly

I simulated a disconnect of multiple clients by cutting their internet connection. I found that TIdTCPServer did not release their threads; it did not detect their disconnects. By comparison, when I closed a client manually, the server detected the disconnection and released its thread.
Abnormal disconnects are not detected by the OS in a timely manner. It can take a considerable amount of time for a lost socket connection to timeout internally so the OS can invalidate it. Until the OS does that, Indy has no way of knowing that the client connection is gone.
To account for that, you should do one of the following (a sketch of both options follows this list):
implement a timeout in your application-layer data protocol. If you are expecting a client to send something to your server, and it does not do so for a certain amount of time, assume the client is gone and close the connection. During periods of idle activity, require clients to send a heartbeat command to your server at regular intervals to keep their connections alive. You can use the AContext.Connection.IOHandler.CheckForDataOnSource() method to wait for data to arrive, or you can use the AContext.Binding.SetSockOpt() method to specify an SO_RCVTIMEO timeout on blocking reads.
if you cannot change your data protocol, you can at least enable TCP-level keep-alives on the socket itself. In the server's OnConnect event, you can call the AContext.Binding.SetKeepAliveValues() method to enable keep-alives. The OS will then handle the keep-alives for you, and will invalidate the connection if the timeout elapses.
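A minimal sketch of both options with Indy 10, assuming a simple line-based protocol and an arbitrary 60-second idle limit (the keep-alive intervals are placeholders too):

    uses
      IdContext, IdTCPServer;

    procedure TMyServerForm.IdTCPServer1Connect(AContext: TIdContext);
    begin
      // TCP-level keep-alives: first probe after 30s of silence, then retry
      // every 5s; the OS invalidates the socket if the peer never answers.
      AContext.Binding.SetKeepAliveValues(True, 30000, 5000);
    end;

    procedure TMyServerForm.IdTCPServer1Execute(AContext: TIdContext);
    var
      Cmd: string;
    begin
      // Application-level idle timeout: wait up to 60s for the client to send
      // a command or a heartbeat, otherwise assume it is gone.
      if AContext.Connection.IOHandler.InputBufferIsEmpty then
      begin
        AContext.Connection.IOHandler.CheckForDataOnSource(60000);
        AContext.Connection.IOHandler.CheckForDisconnect;
        if AContext.Connection.IOHandler.InputBufferIsEmpty then
        begin
          AContext.Connection.Disconnect;
          Exit;
        end;
      end;
      Cmd := AContext.Connection.IOHandler.ReadLn;
      if Cmd = 'HEARTBEAT' then
        Exit; // the wait above is what keeps the connection considered alive
      // ... handle real commands here ...
    end;

TMyServerForm and the HEARTBEAT command are invented for the example; the real names come from your own project and protocol.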
With that said, also make sure that your server event handlers are not swallowing Indy exceptions (derived from EIdException). That can also cause the server to not terminate threads correctly, if a connection is lost and Indy raises an exception about it but you are not allowing the server to process it. If you need to catch exceptions (for logging, etc), make sure to re-raise any EIdException-derived exception and let the server handle it.
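For example, a logging try/except inside OnExecute should look roughly like this (LogError stands in for whatever logging you use):

    uses
      SysUtils, IdContext, IdException;

    procedure TMyServerForm.IdTCPServer1Execute(AContext: TIdContext);
    begin
      try
        // ... read and process commands ...
      except
        on E: Exception do
        begin
          LogError(E.Message); // placeholder for your own logging
          // Let Indy's own exceptions propagate so the server can close
          // the connection and terminate its thread properly.
          if E is EIdException then
            raise;
        end;
      end;
    end;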

TCP/IP long-term connections

I have a server application which runs on a Linux machine. I can connect to this application from Windows/Linux machines and can send/receive data. After a few hours, something occurs and I get the following error on the client side.
On Windows: An existing connection was forcibly closed by the remote host
On Linux: Connection timed out
I have searched the web and found some posts which suggest increasing/decreasing the OS's keep-alive time. However, it didn't work for me.
Is there a solution to this problem, or should I simply try to reconnect to the server when the connection is forcibly closed?
EDIT: I have tracked down the situation. I sent data to the remote node and then sent more data after waiting 5 hours. The sending side transmitted the first data, but when it sent the second data it got no response. The sender's TCP/IP stack retried this 5 times, increasing the time between retries, and finally reset the connection. I can't be sure why this is happening (maybe because of a firewall or NAT - see Section 2.4), but I applied two different approaches to solve this problem (a sketch of the first follows this list):
Use TCP/IP keep-alives via setsockopt (Section 4.2)
Implement an application-level keep-alive. This is more reliable, since the first approach is OS-dependent.
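For reference, the setsockopt approach looks roughly like this on the Windows client side (a sketch in Delphi to match the rest of this page; hSock is assumed to be an already connected socket handle, older Delphi versions expose the same call in the WinSock unit, and the Linux call with SOL_SOCKET/SO_KEEPALIVE is analogous):

    uses
      System.SysUtils, Winapi.Winsock2;

    procedure EnableTcpKeepAlive(hSock: TSocket);
    var
      OptVal: Integer;
    begin
      // Ask the TCP stack to probe the peer periodically while the connection
      // is idle; the probe interval and count come from OS-level settings.
      OptVal := 1;
      if setsockopt(hSock, SOL_SOCKET, SO_KEEPALIVE,
                    PAnsiChar(@OptVal), SizeOf(OptVal)) = SOCKET_ERROR then
        raise Exception.CreateFmt('setsockopt failed: %d', [WSAGetLastError]);
    end;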
It depends on what your application is supposed to do. A little more information and perhaps the code you use for listening and handling connections could be of help.
Regardless, technically a longer keep-alive time should prevent the OS from cutting you off, so perhaps something else is causing the trouble.
Such a thing could be a router malfunction, or traffic causing your keep-alive packets to get lost.
If you aren't already testing it on a LAN (without heavy traffic), I suggest doing so.
It might also be due to how your socket is handled (which I can't determine from your question).
This article might help: Non blocking socket with timeout.
I'm not used to how connections are handled on Linux, but I expect the OS won't cut off a connection unnecessarily.
You can re-establish the connection as a recovery measure, but you need to take into account that not all disconnects are gentle, so you could end up recovering a connection you actually wanted closed.
Since it is TCP, it will do its best to make a gentle disconnect, but you can send a custom message telling the server or client not to re-establish the connection right before disconnecting. That way you can be absolutely sure, even though it should be unnecessary to do so.
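With Indy, for instance, such an intentional shutdown could be as small as this sketch ('BYE' is an arbitrary token; the peer's read loop would have to treat it as "do not reconnect"):

    uses
      IdTCPClient;

    procedure GracefulClose(Client: TIdTCPClient);
    begin
      // Tell the peer this is a deliberate close so it does not try to
      // re-establish or recover the connection, then disconnect.
      if Client.Connected then
        Client.IOHandler.WriteLn('BYE');
      Client.Disconnect;
    end;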

Asynchronously send file over TCP connection

So I'm making an iOS app, but this is more of a general networking question.
What I have is one phone that acts as the server, and a bunch of phones that connect to it as clients. Basically it's a game/music sharer.
It's kind of hard to really get into the semantics of it, but that isn't important.
What is important is that the server and client are repeatedly sending each other commands and positions rapidly over a TCP connection, and sometimes the client wants to send the server a music file (4MB usually) to play as the music.
The problem I initially encountered was that when sending the large file, it would hang the sending of commands from the client to the server.
My naive solution was to create another socket to connect to the server just for sending the file: the server would check the IP of the new socket, and if it matched the IP of an existing connection it would tie the new socket to that connection, receive the file, and then disconnect the socket.
But the problem with this is that there is a 1-2 second delay for the socket to connect, and I'm aware that man-in-the-middle attacks can occur.
Is there a more elegant solution to this problem?
I would not call your solution naive; this is largely how FTP works, and separating the data and control paths is a good design pattern in my view.
I wouldn't worry about the man-in-the-middle thing. If you wanted, you could add a command that the client responds to over the data connection with a secret the server supplies; this would let you associate the connections without using the IP addresses.
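A rough client-side sketch of that handshake, written with Indy simply to match the rest of this page (the TOKEN/FILE commands and the data port are made up; the real protocol is whatever you define):

    uses
      System.SysUtils, System.Classes, IdTCPClient;

    procedure SendFileOnDataConnection(const Host, Token, FileName: string);
    var
      Data: TIdTCPClient;
      Stream: TFileStream;
    begin
      // The server handed us Token over the control connection; echoing it
      // back on the new connection lets the server pair the two sockets
      // without looking at IP addresses.
      Data := TIdTCPClient.Create(nil);
      try
        Data.Host := Host;
        Data.Port := 9001; // made-up data port
        Data.Connect;
        Data.IOHandler.WriteLn('TOKEN ' + Token);
        Stream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
        try
          Data.IOHandler.WriteLn('FILE ' + IntToStr(Stream.Size));
          Data.IOHandler.Write(Stream); // streams the whole file
        finally
          Stream.Free;
        end;
        Data.Disconnect;
      finally
        Data.Free;
      end;
    end;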
If the delay is a problem, then why not establish both connections at the start? The overhead of a few TCP connections is not usually significant for an operating system.
You could also use the two connections for both commands and data, alternating between them. Since both the server and client know when a connection is busy they can choose to use the idle one. The advantage of this is that it will keep both connections busy to ensure they are both known to be working.
You should probably also use a different thread for each socket, but I suspect you are already doing this, since it won't work too well without it.

IdFTP takes too long to give a connection result

I developed an application that uses Indy components to download updates from a remote server.
The problem is that if the FTP server is down or the IP address is not correct, the IdFTP.Connect() call takes too long to report the result (connection failure).
What is the best way to speed up the connection response? Or maybe I should check the IP address before connecting with IdFTP.
Thanks in advance.
You should set the ReadTimeout property; by default it is set to one minute.
By default, Indy clients wait as long as it takes for the OS to report whether the connection was successful or not. Yes, that can take a long time, if the OS has to look up the hostname with DNS, do network checks, deal with network latency, etc. If you do not want to wait that long, you can use the Timeout parameter of Connect() in Indy 9 and earlier, or the ConnectTimeout property in Indy 10, to reduce the amount of time waited on. HOWEVER, that only applies to the actual socket connect attempt once the server IP has been determined. If you set the Host property to a non-IP hostname, Indy asks the OS to perform a DNS lookup to get the hostname's IP, and there is no logic available in Connect() to control the time it takes to do that lookup. If you need that much control, then use TIdDNSResolver to get the IP manually and then assign it to the Host property before calling Connect().
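A minimal sketch of those settings with Indy 10 (the host, credentials, and timeout values are placeholders):

    uses
      SysUtils, IdFTP, IdException;

    procedure CheckForUpdates;
    var
      FTP: TIdFTP;
    begin
      FTP := TIdFTP.Create(nil);
      try
        FTP.Host := 'updates.example.com'; // placeholder
        FTP.Username := 'user';
        FTP.Password := 'secret';
        FTP.ConnectTimeout := 5000;  // give up on the TCP connect after 5s
        FTP.ReadTimeout := 10000;    // give up waiting on server replies after 10s
        try
          FTP.Connect;
          try
            // ... download the update files here ...
          finally
            FTP.Disconnect;
          end;
        except
          on E: EIdException do
            ; // server down or unreachable: report "update server not available"
        end;
      finally
        FTP.Free;
      end;
    end;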
Well, native connect() API timeouts are notoriously lengthy by design (to accommodate high-latency links like modems). Artificially shortening the timeout may result in premature failure notification (though as many developers have never seen a modem, it's not that much of a problem today :).
FTP is a reasonably complex transfer requiring two TCP connections and perhaps a DNS lookup - any of these could conceivably generate long connection delays. TidFTP has an inherited 'ReadTimeout' property and a connect() overload with a timeout parameter, but I'm not sure how effective they are.
Historically, I have always timed out such operations myself using a TTimer or similar: if the FTP thread does not respond in time with a suitable signal (e.g. TThread.Synchronize, or a user-defined Windows message SendMessage()'d to the GUI), the 'FTP failed' actions are taken and a flag is set in the FTP thread that tells it to ignore any replies and self-terminate. Don't use PostMessage - if you do, there is a small window of time in which a posted response may be queued up while the TTimer is firing - a race.
Oh - and if you are just plonking a TIdFTP onto the form (or creating one in TForm.FormCreate) and trying to run it from the main GUI thread (with or without TIdAntiFreeze), stop doing that and move the FTP onto its own thread.

Datasnap : Is there a way to detect connection loss globally?

I'm looking to detect local connection loss. Is there a means to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using datasnap 2009. So I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect and handle a loss of connectivity (internet connection) between the client and the server app appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another app server.
In 2-tier I used to manage that with the ODAC components; TOraSession has some events to handle these issues.
Normally there is no event fired when a connection is broken, unless a statement is fired against the database. This is because there is no way of detecting a connection loss unless some sort of is-alive pinging is going on.
Many frameworks check whether a connection is still valid by running a very small query against the server - for example, getting the time from the server - especially in a connection-pooling environment.
You can implement a connection-checking function in your application in some of the database events (BeforeExecute?), or use a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends an RPC 'Ping' or 'Heartbeat' command to the server (a sketch of such a thread follows below).
If this fails, the client knows that something happened to the connection.
If the server does not hear from the client for some period of time (for example, two heartbeat intervals), it can conclude that the client has disconnected. However, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).
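A client-side sketch of such a heartbeat thread. The ping itself is injected as a callback, because the actual call depends on your DataSnap proxy (a hypothetical 'Ping' server method, fetching the server time, or any other cheap round trip); the interval and the loss handling are also up to you:

    uses
      Windows, Classes, SysUtils;

    type
      THeartbeatThread = class(TThread)
      private
        FPing: TFunc<Boolean>;           // returns True if the server answered
        FIntervalMS: Integer;
        FOnConnectionLost: TNotifyEvent;
        procedure DoConnectionLost;
      protected
        procedure Execute; override;
      public
        constructor Create(const APing: TFunc<Boolean>; AIntervalMS: Integer;
          AOnConnectionLost: TNotifyEvent);
      end;

    constructor THeartbeatThread.Create(const APing: TFunc<Boolean>;
      AIntervalMS: Integer; AOnConnectionLost: TNotifyEvent);
    begin
      inherited Create(False);
      FPing := APing;
      FIntervalMS := AIntervalMS;
      FOnConnectionLost := AOnConnectionLost;
    end;

    procedure THeartbeatThread.DoConnectionLost;
    begin
      if Assigned(FOnConnectionLost) then
        FOnConnectionLost(Self);
    end;

    procedure THeartbeatThread.Execute;
    var
      Alive: Boolean;
    begin
      while not Terminated do
      begin
        try
          Alive := FPing();              // any exception counts as a lost connection
        except
          Alive := False;
        end;
        if not Alive then
        begin
          Synchronize(DoConnectionLost); // notify the UI thread once
          Break;
        end;
        Sleep(FIntervalMS);
      end;
    end;

The client would create it with something like THeartbeatThread.Create(PingServer, 10000, HandleConnectionLost), where PingServer wraps your own server call and HandleConnectionLost shows the error message or redirects to another app server.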
