Do I need to close expired connections in Apache HttpAsyncClient as is done with HttpClient?
I just double-checked the source code. Yes, one still has to explicitly evict expired connections from the pool. However, the underlying NIO channel and socket will get closed and released immediately. The problem with expired connections not being automatically evicted from the pool is a bug in HttpAsyncClient 4.0 beta3. Feel free to raise a JIRA for this defect.
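Until that defect is fixed, a common workaround is a small background task that periodically sweeps the pool. A minimal sketch, assuming a pooling async connection manager that exposes closeExpiredConnections()/closeIdleConnections() the way the later 4.x PoolingNHttpClientConnectionManager does (the interval values here are arbitrary):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager;

public class ConnectionEvictor {

    // Periodically evict expired (and optionally long-idle) connections from the pool.
    public static ScheduledExecutorService start(final PoolingNHttpClientConnectionManager cm) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                cm.closeExpiredConnections();                   // drop connections past their TTL / keep-alive
                cm.closeIdleConnections(30, TimeUnit.SECONDS);  // optionally drop connections idle longer than 30s
            }
        }, 5, 5, TimeUnit.SECONDS);
        return scheduler;
    }
}
```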
I don't understand the purpose of the expiryTimeout field in ActiveMQ's PooledConnectionFactory. The javadoc says "allow connections to expire, irrespective of load or idle time. This is useful with failover to force a reconnect from the pool, to reestablish load balancing or use of the master post recovery". Please give me an example, a real scenario in which the expiryTimeout field has an effect.
The expiry timeout option is a bit of a legacy feature of the pool that isn't all that useful in most applications these days. The way it works: if you configure an expiration time, then a Connection that has been loaned out and is later closed will be completely closed and dropped, provided there are no other active users of that Connection; otherwise it stays alive until all active instances are closed, at which point the underlying Connection object is closed.
This works slightly differently from the idle timeout, which applies to Connection instances that are sitting unused in the pool and are closed after some length of time to release resources on the broker side.
These days you are better off using a failover URI in the PooledConnectionFactory, with broker support for rebalancing cluster clients enabled, which dynamically redistributes the load across the broker cluster. The expiry timeout, by contrast, only closes down a Connection instance once everyone currently using it has released it by calling close.
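For illustration, a minimal sketch of a PooledConnectionFactory configured with a failover URI plus the expiry and idle timeouts discussed above (broker URLs, timeout values, and pool size are made up):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledFactorySetup {

    public static PooledConnectionFactory createPooledFactory() {
        // Failover URI so clients reconnect (and can be rebalanced) across the cluster
        ActiveMQConnectionFactory amqFactory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)");

        PooledConnectionFactory pooled = new PooledConnectionFactory(amqFactory);
        // Expiry timeout: a loaned-out Connection older than 60s is dropped once
        // all of its current users have called close() on it.
        pooled.setExpiryTimeout(60000L);
        // Idle timeout: a Connection sitting unused in the pool for 30s is closed
        // to release resources on the broker side.
        pooled.setIdleTimeout(30000);
        pooled.setMaxConnections(8);
        return pooled;
    }
}
```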
I have a server application which runs on a Linux machine. I can connect to this application from Windows/Linux machines and can send/receive data. After a few hours, something occurs and I get the following error on the client side.
On Windows: An existing connection was forcibly closed by the remote host
On Linux: Connection timed out
I have searched the web and found some posts which suggest increasing/decreasing the OS's keep-alive time. However, it didn't work for me.
Is there a solution to this problem, or should I simply try to reconnect to the server when the connection is forcibly closed?
EDIT: I have tracked down the situation. I sent data to the remote node and then sent more data after waiting 5 hours. The sending side transmitted the first piece of data, but when it sent the second piece it got no response. The sender's TCP/IP stack retried 5 times, increasing the interval between retries, and finally reset the connection. I can't be sure why this is happening (maybe because of a firewall or NAT - see Section 2.4), but I applied two different approaches to solve the problem:
Use TCP/IP keep-alive via setsockopt (Section 4.2)
Implement an application-level keep-alive. This is more reliable, since the first approach depends on the OS (a sketch of both follows below).
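The question doesn't say which language the client is written in; as a rough Java illustration, the first approach maps to Socket.setKeepAlive(true) and the second to a simple scheduled heartbeat (the message format and intervals here are made up):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepAliveClient {

    public static Socket connect(String host, int port) throws IOException {
        Socket socket = new Socket();
        // Approach 1: OS-level TCP keep-alive (probe timing is governed by OS settings,
        // e.g. tcp_keepalive_time on Linux).
        socket.setKeepAlive(true);
        socket.connect(new InetSocketAddress(host, port), 10000);
        return socket;
    }

    // Approach 2: application-level heartbeat; send a small "ping" message periodically
    // so intermediate firewalls/NAT devices never see the connection as idle.
    public static ScheduledExecutorService startHeartbeat(final Socket socket) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    OutputStream out = socket.getOutputStream();
                    out.write("PING\n".getBytes("US-ASCII")); // hypothetical heartbeat message
                    out.flush();
                } catch (IOException e) {
                    // Connection is broken; trigger the application's reconnect logic here.
                }
            }
        }, 60, 60, TimeUnit.SECONDS);
        return scheduler;
    }
}
```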
It depends on what your application is supposed to do. A little more information and perhaps the code you use for listening and handling connections could be of help.
Regardless, technically a longer keep-alive time should prevent the OS from cutting you off, so perhaps something else is causing the trouble.
Such a thing could be a router malfunction or traffic causing your keep-alive packets to get lost.
If you aren't already testing it on a LAN (without heavy traffic), I suggest doing so.
It might also be due to how your socket is handled (which I can't determine from your question).
This article might help.
Non blocking socket with timeout
I'm not familiar with how connections are handled on Linux, but I expect the OS won't cut off a connection unnecessarily.
You can re-establish the connection as a recovery step, but you need to take into account that not all disconnects are gentle, so you could end up recovering a connection you actually wanted closed.
Since it is TCP, it will do its best to make a gentle disconnect, but you can send a custom message right before disconnecting, telling the server or client not to re-establish the connection. That way you can be absolutely sure, even though it should be unnecessary.
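For example, a hypothetical "goodbye" message sent just before closing, so the peer's recovery logic knows not to attempt a reconnect (the message itself is made up; it only needs to be something both sides agree on):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class GracefulClose {

    // Tell the peer this is an intentional disconnect before closing the socket,
    // so its recovery logic can skip the reconnect.
    public static void closeGracefully(Socket socket) throws IOException {
        OutputStream out = socket.getOutputStream();
        out.write("BYE\n".getBytes("US-ASCII")); // hypothetical application-level message
        out.flush();
        socket.close();
    }
}
```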
If a connection is 'inactive', I would guess the WebLogic internal data source manager should recover the connection. Why should 'Inactive Connection Timeout' be a configurable parameter? Is there any use case which requires WebLogic to wait for a certain period before an inactive connection is recovered?
Thanks in advance.
The appropriate timeout varies depending on what downstream systems are doing, or when a system is sometimes under load and unable to respond in an adequate period of time.
A leaked connection is a connection that was not properly returned to the connection pool in the data source. To automatically recover leaked connections, you can specify a value for Inactive Connection Timeout on the JDBC Data Source (Configuration: Connection Pool page in the Administration Console). When you set a value for Inactive Connection Timeout, WebLogic Server forcibly returns a connection to the data source when there is no activity on a reserved connection for the number of seconds that you specify. When set to 0 (the default value), this feature is turned off.
After digging into this, specific to the question "Why should it be configurable?": because by default this feature is turned off, you can turn it on if you believe that the application for some reason is not returning connections back to the pool, i.e. leaking connections. Depending on the application's use cases and the amount of resource leakage, one can tune the duration. IOW, you don't want to time out connections too quickly, because that will force the app server to recreate them on next need or cause unnecessary errors. But if the application is leaking a lot of connections, then you can tighten the duration so it keeps cleaning up.
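As an illustration of the kind of leak this setting guards against, a sketch of the safe pattern in plain JDBC (class and query names are made up): closing the connection in a finally block guarantees it goes back to the pool even when an exception is thrown, whereas a connection closed only on the happy path is exactly what this timeout ends up cleaning up.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.sql.DataSource;

public class OrderDao {

    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOrders() throws Exception {
        Connection conn = dataSource.getConnection(); // reserved from the pool
        try {
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders");
                rs.next();
                return rs.getInt(1);
            } finally {
                stmt.close();
            }
        } finally {
            conn.close(); // always returns the connection to the pool, even on exceptions
        }
    }
}
```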
In a client/server environment where the server implements a COM interface, there are cases where the connection is lost for some reason (the client crashes), but the thread instance stays active on the server, consuming memory until the application is terminated.
Is there some way for the server to destroy the inactive instances?
The server uses TRemoteDataModule and the client uses TDComConnection.
DCOM garbage collection does this automatically: after three missed pings at 120-second intervals, the connection is cleaned up.
So you have nothing to do. You can sit back and let the system do the work.
We have our application deployed to a WAS 6 cluster, and recently it has been throwing the following exception.
javax.resource.ResourceException: The back-end resource is currently unavailable. Stuck connections have been detected.
......
Can somebody explain to me why a DB connection was not released by the app and returned to the free pool? How can I detect what is blocking the connection from being released? I am planning to take a thread dump every few seconds.
Everything was working fine, and all of a sudden we started getting this exception, which is causing an issue for new users trying to log in to the app.
Any input will be greatly appreciated. I have very little knowledge about WAS admin.
Thanks
Try using PMI within the WAS console under Monitoring and Tuning. This will allow you to trace both JDBC and thread pool usage in real time. I would definitely pay close attention to the WebContainer pool and see whether the size of that pool tracks with the JDBC connections.
If the pools themselves are becoming exhausted, you can increase their size to provide some headroom by upping the Maximum Connections setting for the JDBC connection under Resources -> Data Sources -> $NAME -> Connection Pool, and the other connection pool settings under Server -> $SERVERNAME -> Additional Properties -> Thread Pool.
Ensuring that the database you're connecting to also has sufficient free connections would be a good idea too! :)
If you are leaking pool connections, then it's likely the code is missing a connection close somewhere.