How to track down connection leaks? - connection-pooling

We have a Struts web application deployed on Sun Application Server. Despite load tests and system integration tests in the development environments, we see no sign of a connection leak there.
In the production environment, however, there appears to be a connection leak: the number of connections in use keeps increasing.
Besides the application code, what other scenarios could cause a connection leak?

How do you measure this? Are you looking at the number of connections in the database? The size of the pool in the app server? Which database are you using?
I don't understand the claim that there is no connection leak scenario in testing. If you aren't properly closing all your ResultSets, Statements, and Connections (each in its own try/catch, inside a finally block at method scope, in reverse order of instantiation), you might experience leaks.
Unless there's another application that is using the same database, it's got to be your code or the app server. If you're in deep denial about your code, try switching app servers and see if that helps.
I'd suggest that your test scenarios aren't realistic. If you only observe this behavior in production, either your tests aren't triggering it or the test and prod deployments are not identical.

Connection leaks are one of the most common issues encountered.
The major cause is not closing the ResultSet, Statement, and Connection after use.
Connection conn = null;
Statement stmt = null;
ResultSet rs = null;
try {
    // perform JDBC operations
} catch (SQLException e) {
    // perform error handling
} finally {
    // close in reverse order of creation; swallow close failures so every close is attempted
    if (rs != null) try { rs.close(); } catch (SQLException ignore) { }
    if (stmt != null) try { stmt.close(); } catch (SQLException ignore) { }
    if (conn != null) try { conn.close(); } catch (SQLException ignore) { }
}
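On Java 7 and later, try-with-resources removes most of that boilerplate and makes this kind of leak much harder to introduce. A minimal sketch, assuming a DataSource named ds and a made-up table; it is an illustration rather than a drop-in replacement:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LeakFreeQuery {
    // The DataSource, table, and column are placeholders for illustration.
    public static int countUsers(DataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection();
             PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = stmt.executeQuery()) {
            // Each resource is closed automatically, in reverse order, even when an exception is thrown.
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}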
There are also ways to enable logging for connection leaks in the pool, but this will impact performance.

Related

Db connection not released from connection pool

I am migrating an existing site from Win2003/.Net 4.0 to Win2012 as a lift and shift (no change in code).
The problem is that connections are not released back to the connection pool, and the application starts erroring very quickly once the pool reaches 100 connections. When I check SQL Server (also 2012) using sp_who2, I can see the connections sleeping and not getting released.
The code closes connections properly, uses the Enterprise Library data access blocks, and works fine in the old environment. Any clues on where the issue is?
After much research, I found that the issue was in a section of the code where data readers were not being disposed of correctly in certain scenarios.
What is still not clear is why it works without leaking connections on the Win2003/IIS 6 combination. Any external factors that could contribute to such behavior in the 2012 Server environment were ruled out.

Will the Java JRE close connections if an application crashes?

Will the Java JRE close database connections if an application crashes or exits without closing its active connections? If not, is it the responsibility of the database to time out these connections?
I understand that if Java crashes then the database will need to time out all open connections as there is nothing else to do the job.
EDIT
Additional thought: if a WAR deployed in Tomcat crashes, will the Tomcat server clean up the open connections?
That really depends.
If your JRE itself crashes, it might not be able to close all the connections. If just your application crashes, it might be able to close connections when it frees the resources. This seems to be the case most of the time, in my experience, as long as the JRE itself does not die. The best defence is of course proper error handling and making sure you do not have more connections open than required.
In my experience, it's better to set up a data source in Tomcat; that way, even if your application crashes, open resources are not left dangling. I'm a rather big fan of making my application server handle as much resource management as possible, so I'm protected a bit more from my sometimes adventurous code.
Most of the time your application should close connections in its error handlers. I prefer to set up the data source in Tomcat, as Ewald mentioned. There you can define a maximum number of connections and a timeout in your context.xml, and when your application piles up too many connections the container will release them.
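For illustration, a minimal sketch of using such a container-managed data source from application code; the JNDI name jdbc/MyDS and the query are placeholders, and the pool size and timeout themselves live on the matching Resource entry in context.xml rather than in Java:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class TomcatDataSourceExample {
    public static void pingDatabase() throws NamingException, SQLException {
        // Look up the pool that Tomcat created from the Resource entry in context.xml.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDS");

        // try-with-resources hands the connection back to Tomcat's pool even on failure.
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
        }
    }
}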

How to work around a CachedConnectionManager error on JBoss with JRuby?

I am having an issue deploying a JRuby Rails application to JBoss, using JNDI to manage database connections.
After the first request I have this error:
[CachedConnectionManager] Closing a connection for you. Please close them yourself
I think this is because JBoss uses a connection pool and expects Rails (JRuby) to release the connection after each use, which does not happen because Rails (ActiveRecord) has its own connection pool.
I've tried to call
ActiveRecord::Base.clear_active_connections!
after each request, in an after_filter, but this hasn't worked.
Does somebody have some idea?
Thanks in advance.
I am also having connection pool problems trying to run a Sinatra/ActiveRecord app in multithreaded mode in a Glassfishv3 container. The app includes a "before" block with ActiveRecord::Base.connection to force acquisition of a thread-owned connection, and an "after" block with ActiveRecord::Base.clear_active_connections! to release that connection.
Multi-threaded mode
I have tried a great many variations of ActiveRecord 3.0.12 and 3.2.3, JNDI with a Glassfish connection pool, simply using ActiveRecord's connection pool, or even monkey patching the ActiveRecord connection pool to bypass it completely, using the Glassfish pool directly.
When testing with a simple multi-threaded HTTP fetcher, all variations I have tried have resulted in errors, with a greater percentage of failing requests as I increase the number of worker threads in the HTTP fetcher.
With AR 3.0.12, the typical error is the Glassfish pool throwing a timeout exception (for what it's worth, my AR connection pool is larger than my Glassfish pool; I understand AR will pool the connection adapter object, and arjdbc will acquire and release actual connections behind the scenes).
With AR 3.2.3, I get a more ominous error. The error below came from a test not using JNDI but simply using the ActiveRecord connection pool. This is one of the better configurations in that about 95% of requests complete OK. The failing requests produce this exception:
org.jruby.exceptions.RaiseException: (ConcurrencyError) Detected invalid hash contents due to unsynchronized modifications with concurrent users
at org.jruby.RubyHash.keys(org/jruby/RubyHash.java:1356)
at ActiveRecord::ConnectionAdapters::ConnectionPool.release(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:294)
at ActiveRecord::ConnectionAdapters::ConnectionPool.checkin(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:282)
at MonitorMixin.mon_synchronize(classpath:/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:201)
at MonitorMixin.mon_synchronize(classpath:/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:200)
at ActiveRecord::ConnectionAdapters::ConnectionPool.checkin(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:276)
at ActiveRecord::ConnectionAdapters::ConnectionPool.release_connection(/Users/pat/apps/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstrac/connection_pool.rb:110)
at ActiveRecord::ConnectionAdapters::ConnectionHandler.clear_active_connections!(/Users/pat/apps/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:375)
...
Single-threaded mode
Losing confidence in the general thread safety of ActiveRecord (or arjdbc?), I gave up on using ActiveRecord multithreaded and configured Warbler and JRuby-Rack to do JRuby runtime pooling, emulating multiple single-threaded Ruby processes much like Unicorn, Thin, and other typical Ruby servers.
In config/warble.rb:
config.webxml.jruby.min.runtimes = 10
config.webxml.jruby.max.runtimes = 10
I reran my test, using JNDI this time, and all requests completed with no errors!
My Glassfish connection pool is size 5. Note the number of jruby runtimes is greater than the connection pool size. There is probably little point in making the JRuby runtime pool larger than the database connection pool since in this application, each request consumes a database connection, but I just wanted to make sure that even with contention for database connections I did not get the time-out errors I had seen in multithreaded mode.
These concurrency problems are disappointing to say the least. Is anyone successfully using ActiveRecord under moderate concurrency? I know that in a Rails app, one must call config.threadsafe! to avoid the global lock. Looking at the code, it does not seem to modify any ActiveRecord setting; is there some configuration of ActiveRecord I'm not doing?
The Rails connection pool does hold connections open, even after they are returned to the pool, so the underlying JDBC connection does not get closed. Are you using a JNDI connection with activerecord-jdbc-adapter (adapter: jndi or jndi: location in database.yml)?
If so, this should happen automatically, and would be a bug.
If not, you can use the code in ar-jdbc and apply the appropriate connection pool callbacks yourself.
https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/jdbc/callbacks.rb
class ActiveRecord::ConnectionAdapters::JdbcAdapter
# This should force every connection to be closed when it gets checked back
# into the connection pool
include ActiveRecord::ConnectionAdapters::JndiConnectionPoolCallbacks
end

Application Pool recycling results in very long response times

I've read somewhere that application pool recycling shouldn't be very noticeable to the end user when overlapping is enabled, but in my case it results in responses at least 10 times longer than usual (depending on load, response time grows from the regular 100 ms up to 5000 ms). And that is not for a single request but for several requests right after the pool recycles (I was using ~10 concurrent connections when testing this).
So questions would be:
In my opinion I don't do anything that would take a long time on application start; in general it's only IoC container and routing initialization. And even if I did, isn't that exactly what overlapping is supposed to take care of?
Is the SQL connection pool destroyed during pool recycling, and could that be a reason for the long response times?
What would be the best way to profile what is taking so long? Also, maybe there are ideas about what could take so long on the IIS/.NET side and how to avoid it.
Overlapping only means that the old worker process is kept running while the new one is started. As soon as the new one has started, it begins handling all requests. "Started" does not mean that initialization (which might be contained in Application_Start, in any static constructors in your application, or in any one-time, contentious tasks like proxy building) has been completed. This means that new requests will have to wait while these tasks complete, even though the "old" worker process might still be available for a short time. Also, if your application uses any kind of caching, your new caches will be "cold", meaning there will be some additional processing time required until the caches are warmed up.
Yes, your new worker process will have a new SQL connection pool.
In my experience, in a production environment, with well tested code and an application that requires consistent, high performance, I choose to disable application pool recycling altogether. Application Pool recycling is a "feature" introduced to combat the perception that IIS was unstable, when in fact what was usually really unstable was the applications that it was hosting. In my opinion, it is a crutch that allows people to deploy less than stable code. If it is causing you problems, turn it off and make sure your application doesn't have any memory leaks, etc. that might lead to long term application instability.

Connection Timeout and Connection Lifetime

What are the advantages and disadvantages of Connection Timeout=0?
And what is the use of Connection Lifetime=0?
e.g.
(Database=TestDB;
port=3306;
Uid=usernameID;
Pwd=myPassword;
Server=192.168.10.1;
Pooling=false;
Connection Lifetime=0;
Connection Timeout=0)
and what is the use of Connection Pooling?
Timeout is how long you wait for a response to a request before you give up. Timeout=0 means you will keep waiting for the connection forever. Good, I guess, if you are connecting to a really slow server for which taking 12 hours to respond is normal :-). Generally a bad thing. You want some kind of reasonable timeout on a request so that you can realize your target is down and move on with your life.
Connection Lifetime is how long a connection lives before it is killed and recreated. A lifetime of 0 means never kill and recreate. Normally that is not a bad thing, because killing and recreating a connection is slow. Through various bugs your connections may get stuck in an unstable state (like when dealing with weird three-way transactions), but 99% of the time it is fine to keep the connection lifetime infinite.
Connection pooling is a way to deal with the fact that creating a connection is very slow. So rather than make a new connection for every request, you keep a pool of, say, 10 premade connections. When you need one, you borrow it, use it, and return it. You can adjust the size of the pool to change how your app behaves: a bigger pool means more connections and more threads doing work at a time, but it could also overwhelm the database you are connecting to.
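To make the borrow/return idea concrete, here is a deliberately minimal sketch in Java (the class name, JDBC parameters, and pool size are made up for illustration; a real pool, whether ADO.NET's built-in one or a Java pool like DBCP or HikariCP, also validates connections, enforces lifetimes, and handles many concurrency edge cases):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TinyPool {
    private final BlockingQueue<Connection> idle;

    // url, user, and pass are placeholders for a real connection string.
    public TinyPool(String url, String user, String pass, int size) throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url, user, pass)); // premade connections
        }
    }

    // Borrow a connection, waiting up to the timeout instead of forever; null means we timed out.
    public Connection borrow(long timeoutSeconds) throws InterruptedException {
        return idle.poll(timeoutSeconds, TimeUnit.SECONDS);
    }

    // Return the connection to the pool instead of closing it.
    public void giveBack(Connection conn) {
        idle.offer(conn);
    }
}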
In summary:
ConnectionTimeout=0 is bad, make it something reasonable like 30 seconds.
ConnectionLifetime=0 is okay
ConnectionPooling=disabled is bad, you will likely want to use it.
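Applied to the example above, a more typical string would keep pooling on, use a reasonable timeout, and leave the lifetime at its default (same placeholder host and credentials as before):
(Database=TestDB;
port=3306;
Uid=usernameID;
Pwd=myPassword;
Server=192.168.10.1;
Pooling=true;
Connection Lifetime=0;
Connection Timeout=30)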
I know this is an old thread but I think it is important to point out an instance in which you may want to disable Connection Pooling or use Connection Lifetime.
In some environments (especially when using Oracle, or at least in my experience) the web application is designed to connect to the database using the user's credentials rather than a fixed connection string in the server's configuration file. In that case, enabling connection pooling will cause a separate connection pool to be created for each user accessing the website (see Pool Fragmentation). Depending on the scenario this could be either good or bad.
However, connection pooling becomes a problem when the database server is configured to kill connections that exceed a maximum idle time, because the server may kill connections that still reside in the pool. In this scenario Connection Lifetime comes in handy for throwing away those connections, since they have already been closed by the server anyway.
