I'm using PoolingClientConnectionManager and I suspect that I'm leaking connections. I have a monitoring thread that prints out the PoolStats as below:
[leased: 126; pending: 0; available: 14; max: 140]
..
[leased: 140; pending: 20; available: 0; max: 140]
..
[leased: 140; pending: 10; available: 0; max: 140]
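For reference, a minimal sketch of such a monitoring thread, assuming HttpClient 4.2's PoolingClientConnectionManager, whose getTotalStats() accessor exposes these numbers (the class name and polling interval here are illustrative):

import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.apache.http.pool.PoolStats;

public class PoolMonitor implements Runnable {
    private final PoolingClientConnectionManager connMgr;

    public PoolMonitor(PoolingClientConnectionManager connMgr) {
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // PoolStats.toString() prints the familiar
            // [leased: ...; pending: ...; available: ...; max: ...] line
            PoolStats stats = connMgr.getTotalStats();
            System.out.println(stats);
            try {
                Thread.sleep(5000);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }
}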
I spawn a number of threads equal to the number of pool connections (140), so I was never expecting leased + pending > max. Is this assumption valid? Or is this a case of connections being kept alive by the manager? I'm not sure whether, in that case, the connections are attributed to "leased" or "available".
What I have noticed is that connection leaks might occur if the HttpClient connection is interrupted during DNS resolution. In this scenario, leased connections are not released back to the pool. Is there a suggested way of deallocating resources so that connections are properly released back to the pool?
Thanks in advance.
Yes, it does seem quite likely that there is a connection leak. It is unlikely that DNS lookup could be causing it, though. HttpClient is supposed to release the connection automatically in case of an I/O, protocol, or runtime exception.
As far as resource deallocation is concerned, the rule should be fairly simple: as long as there is an entity associated with the response, one must make sure its content gets fully consumed. HttpClient 4.2 and 4.3 also provide additional safeguards for resource deallocation in exceptional cases: HttpUriRequest#releaseConnection in 4.2 and CloseableHttpResponse#close in 4.3.
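A minimal sketch of that rule for 4.2, assuming a DefaultHttpClient built on the pooling manager (the method, class name and URL are illustrative):

import java.io.IOException;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class FetchExample {
    // Executes a GET and guarantees the pooled connection is released,
    // whether the call succeeds or fails.
    public static String fetch(DefaultHttpClient httpclient, String url) throws IOException {
        HttpGet httpget = new HttpGet(url);
        try {
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            // Fully consuming the entity content is what releases the
            // connection back to the pool.
            return entity != null ? EntityUtils.toString(entity) : null;
        } finally {
            // 4.2 safety net for exceptional cases: ensures the underlying
            // connection is released even if the entity was never consumed.
            httpget.releaseConnection();
        }
    }
}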
You could also try running your application with connection management context logging turned on, as described here, and see if that helps you track down requests that never release the underlying connection.
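For example, with the commons-logging SimpleLog backend, something along these lines should turn on connection management logging (the category names come from the HttpClient logging documentation; adapt to whatever logging framework you actually use):

public class EnableConnLogging {
    public static void main(String[] args) {
        // Route commons-logging to SimpleLog and enable DEBUG for
        // HttpClient's connection management category
        // (org.apache.http.impl.conn). Do this before any HttpClient use.
        System.setProperty("org.apache.commons.logging.Log",
                "org.apache.commons.logging.impl.SimpleLog");
        System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
        System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.impl.conn",
                "debug");
        // ... start the application here ...
    }
}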
Related
I have a TCP/IP DataSnap server running as a service [Session based LifeCycle] that continuously chews up memory and never comes back to the starting memory size even when there are no connections to it.
In order to eliminate my code as the culprit, I have mocked up a basic TCP/IP DataSnap server running as a VCL application [Session based LifeCycle] that serves a server method class [TDSServerModule] containing only basic mathematical functions using native data types [no objects to create or free].
When I connect to said DataSnap server with a very thin client I get the same results.
The memory usage grows continuously with each connection and sporadically while executing the server-side methods from the client. Once the connections are closed, the DataSnap server never reduces its memory usage [even when left running without connections for 8 hrs].
Any suggestions as to why this occurs or, more importantly, how to curtail it?
I am using RAD Studio XE2 Update 4 HotFix 1.
Let me quote a "must read" article about DataSnap. It is about XE3, but I hope the code here works for XE2 as well.
Memory consumption
One of the issues that I observed was related to memory consumption. Why does the DataSnap server consume so much memory if the method called does absolutely nothing?
Maybe I don't know how to explain it exactly, but I will try. Basically, DataSnap creates a session for each HTTP connection that it receives. This session is destroyed after 20 minutes; in other words, for the first 20 minutes of the test the memory consumption will only go up, and after that it tends to stabilize. I really have no idea why DataSnap does this. In a REST application I don't see much sense in these sessions as a default configuration. Of course, sessions can be helpful, but I can't understand why it's the default, and indeed DataSnap doesn't offer a configuration for it. It appears you just have to use this session control, without being able to choose otherwise (there is no documentation). The mORMot framework has session control too, but it's configurable and doesn't consume so much memory.
Anyway, there is a way around this problem. Daniele Teti wrote an article on his blog; take a look. The solution that I will show here was posted by him there. Thanks, Daniele.
uses System.StrUtils, DataSnap.DSSession, Data.DBXPlatform;

function TServerMethods1.HelloWorld: String;
begin
  Result := 'Hello World';
  // Close this invocation's session instead of keeping it alive for 20 minutes
  GetInvocationMetaData.CloseSession := True;
end;
After running this method the session will be closed and memory consumption will be lower. Of course, there is still overhead for creating and destroying the session.
So it seems the best course for you is to end every server method with explicit session cleanup, if that is possible in XE2. Then you'd better read these articles again and prepare for future scalability challenges:
http://www.danieleteti.it/2012/12/15/datasnap-concurrency-problems-and-update1/
http://robertocschneiders.wordpress.com/2013/01/09/datasnap-analysis-based-on-speed-stability-tests-part-2/
I added the method below and called it from the "TWebModule1::WebModuleBeforeDispatch" event. It eliminated the memory build-up and actually allowed the idle REST service to return to its no-session memory footprint. DataSnap definitely needs to work on this issue.
// ---------------------------------------------------------------------------
/// <summary> Memory Restoration. DataSnap opens a session for each call
/// even when the service is set for invocation.
/// Sessions are building up consuming memory and seem not to be freed.
/// See: https://stackoverflow.com/questions/17748300/how-to-release-datasnap-memory-once-connections-are-closed
/// </summary>
/// <remarks> Iterates the sessions in the session manager, closing and then
///           terminating any session that has been idle for over 10 seconds.
/// </remarks>
/// <returns> void
/// </returns>
// ---------------------------------------------------------------------------
void TWebModule1::CloseIdleSessions()
{
    TDSSessionManager* sessMgr = TDSSessionManager::Instance;
    int sessCount = sessMgr->GetSessionCount();
    WriteLogEntry(LogEntryTypeDebug, "TWebModule1::CloseIdleSessions", "Session Count: " + IntToStr(sessCount));

    // Collect the keys of all currently open sessions
    TStringList* sessKeys = new TStringList;
    sessMgr->GetOpenSessionKeys(sessKeys);
    WriteLogEntry(LogEntryTypeDebug, "TWebModule1::CloseIdleSessions", "Session Keys Count: " + IntToStr(sessKeys->Count));

    TDSSession* sess = NULL;
    for(int index = 0; index < sessKeys->Count; index++)
    {
        String sessKey = sessKeys->Strings[index];
        sess = sessMgr->Session[sessKey];
        // Close, then terminate, any session idle for more than 10 seconds
        unsigned elapsed = (unsigned)sess->ElapsedSinceLastActivity();
        if(elapsed > 10000)
        {
            WriteLogEntry(LogEntryTypeDebug, "TWebModule1::CloseIdleSessions", "CloseSession TerminateSession Key: " + sessKey);
            sessMgr->CloseSession(sessKey);
            sessMgr->TerminateSession(sessKey);
        }
        sess = NULL;
    }
    delete sessKeys;
    sessMgr = NULL;
}
You should check the LifeCycle property of the TDSServerClass component on your server container. It determines how the session is handled, and it defaults to Session. Setting it to Invocation will free the session after each call (invocation). This of course means you have no state, but that would be OK in a typical REST server.
If memory consumption still grows, put the following line in your .dpr unit:
ReportMemoryLeaksOnShutdown := True;
Your application will then show you the memory leaks it has when the DataSnap server closes.
I was reading more about erlang:is_port/1 so I decided to test it with several values.
I saw that with normal sockets it returns true if the socket is up and false otherwise (i.e., the socket is down).
Can is_port/1 also be used with SSL sockets? I tried, but it always returns false.
If by an SSL socket you mean the value returned from (for example) ssl:connect/2,3, then the answer is no. SSL sockets in the context of the ssl application are of the sslsocket() type which, according to the documentation, is opaque to the user and definitely not a port. Specifically, they are records:
%% Looks like it does for backwards compatibility reasons
-record(sslsocket, {fd = nil, pid = nil}).
I have been using the C3P0 connection pool until now, but I am not getting stable behavior. I have tested in various kinds of environments and tuned database options. Today I found that the Tomcat 7 JDBC connection pool has been released, and I got it. Has anyone used it and gotten better performance than C3P0?
(I have also tested the BoneCP connection pool.)
My application is under very high load. My problems are:
After about an hour, the connection pool throws a "Can't Open Connection" exception.
Sometimes I get the exception "Attempted to use a closed or broken resource" from the pool, and when I restart the connection pool (via its MBean) the problem is fixed.
My C3P0 parameters are:
initialPoolSize = 1
minPoolSize = 1
maxPoolSize = 50
maxIdleTime = 20000
debugUnreturnedConnectionStackTraces = true
propertyCycle = 60
acquireRetryDelay = 1000
maxConnectionAge = 0
checkoutTimeout = 5000
acquireIncrement = 1
numHelperThreads = 5
acquireRetryAttempts = 1
unreturnedConnectionTimeout = 90
breakAfterAcquireFailure = false
I have also tested these parameters with several values but don't see any perceptible changes.
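For reference, these settings map onto c3p0's ComboPooledDataSource roughly like this (the driver class and JDBC URL are placeholders):

import java.beans.PropertyVetoException;

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfig {
    public static ComboPooledDataSource create() throws PropertyVetoException {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");          // placeholder driver
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");   // placeholder URL

        ds.setInitialPoolSize(1);
        ds.setMinPoolSize(1);
        ds.setMaxPoolSize(50);
        ds.setMaxIdleTime(20000);
        ds.setDebugUnreturnedConnectionStackTraces(true);
        ds.setPropertyCycle(60);
        ds.setAcquireRetryDelay(1000);
        ds.setMaxConnectionAge(0);
        ds.setCheckoutTimeout(5000);
        ds.setAcquireIncrement(1);
        ds.setNumHelperThreads(5);
        ds.setAcquireRetryAttempts(1);
        ds.setUnreturnedConnectionTimeout(90);
        ds.setBreakAfterAcquireFailure(false);
        return ds;
    }
}

Note that unreturnedConnectionTimeout = 90 will forcibly reclaim any connection checked out for more than 90 seconds, and with debugUnreturnedConnectionStackTraces it logs the stack trace of the checkout, which is itself a useful leak-hunting tool.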
I haven't tried the Tomcat pool yet, but will look into it soon. What you can probably do is tweak your c3p0 pool for optimization. This will vary according to the actual load on your application, but compared to other pooling technologies, I've found c3p0 to be flexible.
It would be nice if you could elaborate on your problem here and mention the pooling parameters you are using.
I got an EAGAIN when trying to spawn a thread using pthread_create. However, from what I've checked, the threads seem to have been terminated properly.
What determines whether the OS gives EAGAIN when trying to create a thread using pthread_create? Could unclosed sockets/file handles play a part in causing this EAGAIN (i.e., do they share the same resource space)?
And lastly, is there any tool to check resource usage, or any functions that can be used to see how many pthread objects are active at the time?
Okay, found the answer. Even if pthread_exit or pthread_cancel is called, the parent process still needs to call pthread_join to release the pthread ID, which will then become recyclable.
Putting a pthread_join(tid, NULL) at the end did the trick.
Edit: it was not waitpid, but rather pthread_join.
As a practical matter EAGAIN is almost always related to running out of memory for the process. Often this has to do with the stack size allocated for the thread which you can adjust with pthread_attr_setstacksize(). But there are process limits to how many threads you can run. You can query the hard and soft limits with getrlimit() using RLIMIT_NPROC as the first parameter.
There are quite a few questions here dedicated to keeping track of threads, their number, whether they are dead or alive, etc. Simply put, the easiest way to keep track of them is to do it yourself through some mechanism you code, which can be as simple as incrementing and decrementing a global counter (protected by a mutex) or something more elaborate.
Open sockets or other file descriptors shouldn't cause pthread_create() to fail. If you had reached the maximum number of descriptors, you would have failed before creating the new thread; and since a thread has to be created successfully before it can open more descriptors, the creation itself could not have failed with EAGAIN.
In my observation, if the parent process never calls pthread_join() and the child threads try to release themselves by calling pthread_exit() or pthread_cancel(), the system is not able to release those threads properly. In that case, calling pthread_detach() immediately after a successful pthread_create() solves the problem. A snapshot is here:
err = pthread_create(&receiveThread, NULL, &receiver, temp);
if (err != 0)
{
    // Report why the thread could not be created
    MyPrintf("\nCan't create thread. Reason: %s\n",
             (err == EAGAIN) ? "EAGAIN" : (err == EINVAL) ? "EINVAL" : (err == EPERM) ? "EPERM" : "UNKNOWN");
    free(temp);
}
else
{
    threadnumber++;
    MyPrintf("Count: %d Thread ID: %lu\n", threadnumber, (unsigned long)receiveThread);
    // Detach so the thread's resources are reclaimed automatically on exit,
    // since this code never calls pthread_join()
    pthread_detach(receiveThread);
}
Another potential cause: I was getting this problem (EAGAIN on pthread_create) because I had forgotten to call pthread_attr_init on the pthread_attr_t I was trying to initialize my thread with.
I am connecting to MQ with the code below, and I am able to connect successfully. My case is that I put messages to MQ once every minute. After disconnecting the cable I get a ReasonCode error, but the IsConnected property still shows true. Is this the right way to check whether the connection is still connected? Or are there any best practices around that?
I would like to open the connection when the application starts and keep it open forever.
public static MQQueueManager ConnectMQ()
{
    // Reconnect if we never connected, the connection dropped,
    // or the last reason code was 2009 (connection broken)
    if ((queueManager == null) || (!queueManager.IsConnected) || (queueManager.ReasonCode == 2009))
    {
        queueManager = new MQQueueManager();
    }
    return queueManager;
}
The behavior of the WMQ client connection is that, when idle, it will appear to be connected until an API call fails or the connection times out. So IsConnected will likely report true until a get, put, or inquire call is attempted and fails, at which point the QMgr will be reported as disconnected.
The other thing to consider here is that 2009 is not the only code you might get. It happens to be the one you get when the connection is severed, but there are also codes for the QMgr shutting down, the channel shutting down, and a variety of resource and other errors.
Typically for a requirement to maintain a constant connection you would want to wrap the connect and message processing loop inside a try/catch block nested inside a while statement. When you catch an exception other than an intentional exit, close the objects and QMgr, sleep at least 5 seconds, then loop around to the top of the while. The sleep is crucial because if you get caught in a tight reconnect loop and throw hundreds of connection attempts at the QMgr, you can bring even a mainframe QMgr to its knees.
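A rough sketch of that loop, shown in Java with the IBM MQ classes for Java (the question's code is .NET, but the structure the answer describes is the same; the queue manager and queue names are placeholders):

import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class MqWorker {
    public static void main(String[] args) throws InterruptedException {
        // Outer loop keeps the connection up "forever"; on any failure
        // we close everything, sleep, and reconnect from scratch.
        while (true) {
            MQQueueManager qmgr = null;
            try {
                qmgr = new MQQueueManager("MY.QMGR");          // placeholder name
                MQQueue queue = qmgr.accessQueue("MY.QUEUE",   // placeholder name
                        CMQC.MQOO_OUTPUT);
                while (true) {
                    MQMessage msg = new MQMessage();
                    msg.writeString("heartbeat");
                    // The put is where a severed connection actually
                    // surfaces as an MQException (e.g. reason 2009).
                    queue.put(msg, new MQPutMessageOptions());
                    Thread.sleep(60 * 1000);                   // once a minute
                }
            } catch (MQException mqe) {
                // 2009 is only one case; QMgr/channel shutdown and
                // resource errors arrive here too.
                System.err.println("MQ failure, reason code " + mqe.reasonCode);
            } catch (java.io.IOException ioe) {
                System.err.println("Failed to build message: " + ioe);
            } finally {
                if (qmgr != null) {
                    try { qmgr.disconnect(); } catch (MQException ignore) {}
                }
            }
            // Crucial: back off before reconnecting so a failure does not
            // turn into a tight reconnect loop hammering the QMgr.
            Thread.sleep(5000);
        }
    }
}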
An alternative is to use a v7 WMQ client and QMgr. With this combination, automatic reconnection is configurable as a channel configuration.