DataSnap TSocketConnection hangs - Delphi

I have a DataSnap application (Delphi 7) which uses TSocketConnection to connect to the application server. If my application stays idle for a long time after opening a ClientDataSet, then most of the time when I want to refresh the ClientDataSet, the application freezes without raising any exception. It seems that the connection has been dropped and the TSocketConnection is not aware of that. I am experiencing this problem very often and I am not sure where to look for a solution. Could it be a bug in TSocketConnection?
Best Regards

Firewalls sometimes drop inactive TCP connections after some time to keep their cache usage low. In this case it helps to call some server method (maybe every five minutes).
If the "setup and teardown" code for the server side DataSnap session is not to resource-consuming, you can also disconnect and reconnect the DataSnap client between all actions. This will initiate a fresh TCP connection, execute, and close it.

Related

TCP/IP long-term connections

I have a server application which runs on a Linux machine. I can connect to this application from Windows/Linux machines and can send/receive data. After a few hours, something occurs and I get the following error on the client side.
On Windows: An existing connection was forcibly closed by the remote host
On Linux: Connection timed out
I have searched the web and found some posts which suggest increasing/decreasing the OS's keep-alive time. However, it didn't work for me.
Is there a solution to this problem, or should I simply try to reconnect to the server when the connection is forcibly closed?
EDIT: I have tracked down the situation. I sent data to the remote node, waited 5 hours, and then sent more data. The sending side sent the first data, but when it sent the second data there was no response. The sender's TCP/IP stack retried this 5 times, incrementing the time between retries, and finally reset the connection. I can't be sure why this is happening (maybe because of a firewall or NAT - see Section 2.4), but I applied two different approaches to solve this problem:
Use TCP/IP keep-alive using setsockopt (Section 4.2); a minimal sketch follows below.
Make an application-level keep-alive. This is more reliable, since the first approach is OS-dependent.
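For the first approach, a minimal Delphi sketch, assuming you already have a connected socket handle and reducing error handling to an exception:

    uses
      SysUtils, WinSock;

    // Enable TCP keep-alive on an already-connected socket. With plain
    // SO_KEEPALIVE the probe timing comes from the OS defaults (two
    // hours on stock Windows), which is one reason an application-level
    // heartbeat is the more reliable of the two approaches.
    procedure EnableTcpKeepAlive(Sock: TSocket);
    var
      OptVal: Integer;
    begin
      OptVal := 1; // non-zero switches the option on
      if setsockopt(Sock, SOL_SOCKET, SO_KEEPALIVE,
          PAnsiChar(@OptVal), SizeOf(OptVal)) = SOCKET_ERROR then
        raise Exception.CreateFmt('setsockopt failed: %d', [WSAGetLastError]);
    end;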
It depends on what your application is supposed to do. A little more information and perhaps the code you use for listening and handling connections could be of help.
Regardless, a longer keep-alive time should technically prevent the OS from cutting you off, so perhaps something else is causing the trouble.
Such a thing could be a router malfunction or traffic causing your keep-alive packet to get lost.
If you aren't already testing it on a LAN (without heavy traffic), I suggest doing so.
It might also be due to how your socket is handled (which I can't determine from your question)
This article might help.
Non blocking socket with timeout
I'm not used to how connections are handled on Linux, but I expect the OS won't cut off a connection unnecessarily.
You can re-establish the connection as a recovery, but you need to take into account that not all disconnects are gentle, and you could therefore end up recovering a connection you actually wish to be closed.
Since it is TCP, it will do its best to make a gentle disconnect, but you can send a custom message right before disconnecting that tells the server or client not to re-establish the connection. That way you can be absolutely sure, even though it should be unnecessary.
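A minimal sketch of such a "goodbye" message with Indy, where 'BYE' is an assumed token in your own application protocol:

    uses
      IdTCPClient;

    // Announce an intentional shutdown so the peer skips its
    // reconnect/recovery logic, then disconnect regardless.
    procedure GracefulDisconnect(Client: TIdTCPClient);
    begin
      try
        Client.WriteLn('BYE'); // assumed application-protocol token
      finally
        Client.Disconnect;
      end;
    end;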

Is there a way to make sure an Indy 9 socket connection is truly broken (Delphi 6 application)?

I have a Delphi 6 application that exchanges audio data over a socket with an external hardware device. The hardware device has an internal problem where sometimes its buffer processing slows down, especially during long periods of use, and nasty delays creep into the audio streams. This is a significant problem since the audio data frequently underlies a two-way, real-time conversation between people. However, breaking the connection and re-establishing it fixes the problem.
I know how to close/disconnect a socket with Indy, that is quite easy. My concern is that some connection caching mechanism in Indy or the Windows socket layer itself may defeat my disconnect efforts if I try to re-connect too quickly. Is there a way to make sure that the socket connection with the external hardware device is truly broken? Better asked, is there a way to make sure that my re-connection attempt forces the creation of a fresh new socket (handle?) rather than re-using the old socket connection?
The external hardware device only "resets" if a brand new connection is created, probably because it flushes its internal queues and starts fresh (speculation on my part since I don't have the source code for the device).
Indy will not prevent you from re-connecting immediately. The only time Windows will do so is if you are assigning the same local port for the client to bind to each time. In that case, you have to wait for Windows to release that port for re-use. You can manually set the socket's linger option to disable Windows' delay and release immediately. Or don't assign a local port, and a random port will be used on each (re)connect.
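A minimal sketch of the linger option, assuming a raw socket handle (with Indy 9 that would be something like IdTCPClient1.Binding.Handle):

    uses
      SysUtils, WinSock;

    // SO_LINGER with a zero timeout makes closesocket() tear the
    // connection down immediately (a hard close) instead of leaving
    // the local port in a wait state.
    procedure SetHardClose(Sock: TSocket);
    var
      LingerOpt: TLinger;
    begin
      LingerOpt.l_onoff := 1;  // enable lingering behaviour
      LingerOpt.l_linger := 0; // zero timeout => immediate release
      if setsockopt(Sock, SOL_SOCKET, SO_LINGER,
          PAnsiChar(@LingerOpt), SizeOf(LingerOpt)) = SOCKET_ERROR then
        raise Exception.CreateFmt('setsockopt failed: %d', [WSAGetLastError]);
    end;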

Best practice: Keep TCP/IP connection open or close it after each transfer?

My Server-App uses a TIdTCPServer, several Client apps use TIdTCPClients to connect to the server (all computers are in the same LAN).
Some of the clients only need to contact the server every couple of minutes, others once every second and one will do this about 20 times a second.
If I keep the connection between a Client and the Server open, I'll save the re-connect, but have to check if the connection is lost.
If I close the connection after each transfer, it has to re-connect every time, but there's no need to check if the connection is still there.
What is the best way to do this?
At what frequency of data transfers should I keep the connection open, in general?
What are other advantages / disadvantages for both scenarios?
I would suggest a mix of the two. When a new connection is opened, start an idle timer for it. Whenever data is exchanged, reset the timer. If the timer elapses, close the connection (or send a command to the client asking if it wants the connection to remain open). If the connection has been closed when data needs to be sent, open a new connection and repeat. This way, less-often-used connections can be closed periodically, while more-often-used connections can stay open.
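A minimal sketch of that mix, assuming Indy's TIdTCPClient and a TTimer (IdleTimer) on the same data module; the names and the WriteLn payload are placeholders:

    // Every transfer goes through SendBuffered: it reconnects on
    // demand and pushes the idle deadline forward.
    procedure TClientDM.SendBuffered(const Data: string);
    begin
      if not IdTCPClient1.Connected then
        IdTCPClient1.Connect;      // open on demand
      IdTCPClient1.WriteLn(Data);  // the actual transfer
      IdleTimer.Enabled := False;  // restart the idle countdown
      IdleTimer.Enabled := True;
    end;

    // If the timer fires, nobody used the connection for a whole
    // interval, so close it; the next SendBuffered reopens it.
    procedure TClientDM.IdleTimerTimer(Sender: TObject);
    begin
      IdleTimer.Enabled := False;
      if IdTCPClient1.Connected then
        IdTCPClient1.Disconnect;
    end;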
Two cents from experience...
My first TCP/IP client/server application used a new connection and a new thread for each request... years ago...
Then I discovered (using Process Explorer) that this consumed network resources, because closed connections are not destroyed immediately but remain in a particular state for some time, and a lot of threads were being created...
I even had connection problems under a lot of concurrent requests: I didn't have enough ports on my server!
So I rewrote it following the HTTP/1.1 scheme and its keep-alive feature. It's much more efficient, uses a small number of threads, and Process Explorer likes my new server. And I never run out of ports again. :)
If the connections do have to be short-lived, I'd at least use a thread pool to avoid creating a thread per client...
In short: if you can, keep your client connections alive for some minutes.
While it may be fine to connect and disconnect for an application that is active once every few minutes, the application that is communicating several times a second will see a performance boost by leaving the connection open.
Additionally, your code will be much simpler if you aren't constantly trying to open, close, or diagnose the connection. With proper open and close logic, and exception handling around your reads and writes, there's no reason to test whether the socket is still connected before using it: just use it. It will tell you when there is a problem.
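A minimal sketch of that "just use it" style with Indy, doing one reconnect-and-retry on failure; EIdException is Indy's base exception class, and the other names are placeholders:

    procedure TClientDM.SendCommand(const Cmd: string);
    begin
      try
        IdTCPClient1.WriteLn(Cmd);
      except
        on E: EIdException do
        begin
          IdTCPClient1.Disconnect; // make sure the dead socket is gone
          IdTCPClient1.Connect;    // one retry on a fresh connection
          IdTCPClient1.WriteLn(Cmd);
        end;
      end;
    end;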
I'd lean towards keeping a single connection open in most enterprise applications. It generally will lead to cleaner code, that is easier to maintain.
/twocents
I guess it all depends on your goal, the number of requests made to the server in a given time, and of course the available bandwidth and the server hardware.
You also need to think about the future: is there any chance that you will need connections to be left open? If so, you've answered your own question.
I've implemented a chat system for a project in which ~50 people (the number grows every two months) are always connected, and besides chatting it also includes data transfer, database manipulation using certain commands, etc. My implementation keeps the connection to the server open from application startup until the application is closed. There have been no issues so far; if a connection is lost for some reason, it is automatically re-established and everything continues flawlessly.
Overall I suggest you try both (keeping the connection open, and closing it after each use) and see which fits your needs best.
Unless you are scaling to many hundreds of concurrent connections I would definitely keep it open - this is by far the better of the two options. Once you scale past hundreds into thousands of concurrent connections you may have to drop and reconnect. I have architected my entire framework around this (http://www.csinnovations.com/framework_overview.htm) since it allows me to "push" data to the client from the server whenever required. You need to write a fair bit of code to ensure that the connection is up and working (network drop-outs, timed pings, etc), but if you do this in your "framework" then your application code can be written in such a way that you can assume that the connection is always "up".
One problem is the limit on threads per application, around 1400, so a maximum of roughly 1300 clients connected at the same time.
When closing connections as a client, the port you used will be unavailable for a while, so at high volume you end up using loads of different ports. For anything repetitive, I'd keep the connection open.

Which connection-management strategy should we use when developing an application?

Which connection-management approach is better when developing a Windows-based application that uses a database as its data store? What about web-based applications?
1. When the user loads the first form of the application, a global connection opens, and on closing the last form of the application the connection closes and is disposed.
2. Each form within the application has its own connection (form scope); when the user wants to perform an operation (insert, update, delete, search, ...), the application uses that connection, and when the form unloads the connection closes and is disposed.
3. Every operation within a form has its own connection (procedure scope); when the user wants to perform an operation (insert, update, delete, search, ...), the application uses the procedure's connection, and at the end of that procedure the connection closes and is disposed.
Go with #3
You should keep connections open only for as long as they are required.
Also have a look at:
Understanding Connection Pooling
SQL Server Connection Pooling (ADO.NET)
Connecting to a database server typically consists of several time-consuming steps. A physical channel such as a socket or a named pipe must be established, the initial handshake with the server must occur, the connection string information must be parsed, the connection must be authenticated by the server, checks must be run for enlisting in the current transaction, and so on.
In practice, most applications use only one or a few different configurations for connections. This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling.
Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
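Translated into the per-operation pattern of option #3, a minimal Delphi/ADO sketch; with a pooling provider the repeated Open/Close mostly hits the pool rather than the network. ConnStr and the table name are assumptions:

    uses
      SysUtils, ADODB;

    procedure DeleteCustomer(const ConnStr: string; CustomerID: Integer);
    var
      Conn: TADOConnection;
    begin
      Conn := TADOConnection.Create(nil); // one connection per operation
      try
        Conn.ConnectionString := ConnStr;
        Conn.LoginPrompt := False;
        Conn.Open;
        Conn.Execute(Format('DELETE FROM Customers WHERE ID = %d',
          [CustomerID]));
      finally
        Conn.Free; // closes; the provider can return the physical
                   // connection to its pool for the next Open
      end;
    end;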
This is quite a broad question, but usually, for any database server and application environment, opening and maintaining a new connection is an expensive operation. That's why you definitely don't want to open multiple connections from a single client, and should stick to process scope for connections.
In a desktop application using a database server, the strategy for handling its single connection depends a lot on the DB usage pattern. Say, if the app reads or writes a lot within 5 minutes and then does nothing with the DB for hours, it makes no sense to keep the connection open all the time (assuming there are many other clients). You may introduce some kind of time-out for closing the connection.
The web server situation depends a lot on the technology used. Say, in PHP every request is a "fresh start" with respect to the database connection: you open and close a connection for each mouse click. Popular Java application servers, on the other hand, have a DB connection pool, reusing the same connection instances across many HTTP request-handling threads.

Datasnap : Is there a way to detect connection loss globally?

I'm looking to detect local connection loss. Is there a means to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using DataSnap 2009, so I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect and handle a connection loss (internet connectivity) between the client and the server app appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another app server.
In 2-tier I used to manage that with ODAC components; the TOraSession has some events for handling these issues.
Normally no event is fired when a connection is broken, unless a statement is fired against the database. This is because there is no way of detecting a connection loss unless some sort of is-alive pinging is going on.
Many frameworks check whether a connection is still valid by running a very small query against the server (getting the time from the server, for example), especially in a connection-pooling environment.
You can implement a connection-checking function in your application in some of the database events (BeforeExecute?), or create a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends some RPC 'Ping' or 'Heartbeat' commands to the server.
If this fails, the client knows that something happened to the connection.
If the server does not hear from the client for some time (for example, two heartbeat intervals), it can conclude that the client disconnected. However, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).
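A minimal sketch of the client side, assuming a global data module ClientDM with a cheap remote call Ping and a parameterless OnConnectionLost method that notifies the UI (all three names are assumptions):

    uses
      Classes, Windows;

    type
      THeartbeatThread = class(TThread)
      private
        FIntervalMs: Integer;
      protected
        procedure Execute; override;
      public
        constructor Create(IntervalMs: Integer);
      end;

    constructor THeartbeatThread.Create(IntervalMs: Integer);
    begin
      inherited Create(False);   // start running immediately
      FIntervalMs := IntervalMs;
      FreeOnTerminate := True;
    end;

    procedure THeartbeatThread.Execute;
    begin
      while not Terminated do
      begin
        Sleep(FIntervalMs);
        try
          ClientDM.Ping; // assumed cheap RPC round trip
        except
          // Ping failed: report on the main thread and stop.
          Synchronize(ClientDM.OnConnectionLost);
          Exit;
        end;
      end;
    end;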
