HikariCP - Continuously Validate Idle Connections

I observed that Hikari validates the connection before giving it to the requester. At that point, if the connection is found to be dead, a new connection is created.
Is there a way to have idle connections validated regularly (even when no one is asking for a new connection)?
Something similar is present in C3P0 as 'idle_test_period'.

You can set keepaliveTime to validate idle connections.
A "keepalive" will only occur on an idle connection. When the time arrives for a "keepalive" against a given connection, that connection will be removed from the pool, "pinged", and then returned to the pool. The "ping" is one of either: invocation of the JDBC4 isValid() method, or execution of the connectionTestQuery.
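A minimal sketch of that configuration using HikariCP's HikariConfig API (the JDBC URL, pool size, and timing values here are placeholder choices, not recommendations):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class KeepaliveConfig {
    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/testdb"); // placeholder URL
        config.setMaximumPoolSize(10);
        // Ping each idle connection every 5 minutes; must be shorter than maxLifetime.
        config.setKeepaliveTime(300_000);
        config.setMaxLifetime(1_800_000);      // 30 minutes (HikariCP's default)
        // Only needed if the driver lacks JDBC4 isValid():
        // config.setConnectionTestQuery("SELECT 1");
        return new HikariDataSource(config);
    }
}
```

Note that keepaliveTime was added in HikariCP 4.0; on older versions only maxLifetime and idleTimeout are available.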

Related

Wait for all connections to become idle in a connection pool (r2dbc pool)

I am currently working on integrating IAM DB authentication with my database.
Details are as follows:
Database: AWS RDS Postgres
Database Mapping: Jooq
Interface: R2DBC SPI
We maintain a connection pool of 20 connections in our Java application. The token provided by AWS STS for IAM DB authentication lasts for 15 minutes. The ideal way of handling the rotated password would be to update the connection pool configuration's password, but R2DBC pool doesn't provide an API to update the password. The workaround that I implemented is to wrap the connection pool object in another class, and schedule a thread that closes the current connection pool and replaces the field with a new connection pool every 15 minutes.
The problem with this is that I am noticing a lot of active connections to my DB instance (many more than 20). I suspect that the connections in the previous connection pools aren't being closed, and that the default keep-alive time is way too high.
The code looks something like this:
this.connectionPool.close().doOnSuccess(pool -> {
    // some logging and metric collection here
    this.connectionPool = new ConnectionPool(newUpdatedConfiguration);
}).doOnError(err -> {
    // some logging and metric collection here
    this.connectionPool = new ConnectionPool(newUpdatedConfiguration);
}).subscribe();
My initial guess was that the close() call doesn't actually close the connection pool entirely. I am a little confused about the difference between the close() call and the dispose() call. Please let me know if my confusion is pointing in the right direction.
Other than that, my next thought was that abruptly closing the connection pool might not be the most ideal way of doing things. Ideally I should wait for all the connections in the pool to become idle first and then close it. Is that the correct thing to do, or will it add a lot of latency to replacing the connection pool? And is there a way to do that, i.e. wait for all the connections to become idle?
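One possible shape for the rotation, assuming r2dbc-pool's ConnectionPool.disposeLater() (a non-blocking variant of dispose() that returns a Mono completing when shutdown finishes) — this is a sketch of one approach, not a verified fix:

```java
import io.r2dbc.pool.ConnectionPool;
import io.r2dbc.pool.ConnectionPoolConfiguration;

// Sketch of the wrapper: swap the pool reference first, then dispose of the
// old pool asynchronously instead of racing close() against the replacement.
class RotatingPool {
    private volatile ConnectionPool connectionPool;

    void rotate(ConnectionPoolConfiguration newUpdatedConfiguration) {
        ConnectionPool oldPool = this.connectionPool;
        this.connectionPool = new ConnectionPool(newUpdatedConfiguration);
        oldPool.disposeLater()      // non-blocking; the Mono completes on shutdown
               .subscribe();
    }
}
```

Swapping the field before disposing means new borrowers immediately hit the fresh pool, while the old pool winds down in the background.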

org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object

We have a connection pool for an embedded Derby database. We are setting:
max wait time: 5 secs
max connections in pool: 100
We are getting org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object very frequently. When this exception occurs, the number of connections owned by the application is always 1; this is evident from the logs.
The above exception states that the pool manager cannot produce a viable connection to a waiting requester and the maxWait has passed therefore triggering a timeout.
Ref: Cannot get a connection, pool error Timeout waiting for idle object in PutSQL?
There is 1 application using the Derby database, and 2 other applications.
As per my understanding, the following are the main reasons for not getting a connection:
There is a network issue
Connection pool has been exhausted, because of connection leak
Connection pool getting exhausted, because of long-running queries
In our case, it's an embedded Derby database that is local to the application, so a network issue is ruled out. There are no long-running queries.
I am not able to figure out what is causing the wait timeout. Could it be related to the OS, the filesystem, server utilization going high, etc.?
Any help is appreciated.
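The leak scenario above can be reproduced with a toy pool (plain java.util.concurrent, not DBCP — the names here are illustrative): a borrower that never returns its object eventually makes every later borrow time out, which is the "Timeout waiting for idle object" pattern in miniature.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy pool: borrowing without returning ("leaking") exhausts the pool,
// so the next borrower times out -- the DBCP failure mode in miniature.
public class LeakDemo {
    static final BlockingQueue<String> pool = new ArrayBlockingQueue<>(2);
    static { pool.add("conn-1"); pool.add("conn-2"); }

    // Borrow with a maximum wait, like DBCP's maxWait setting.
    static String borrow(long maxWaitMillis) throws InterruptedException {
        return pool.poll(maxWaitMillis, TimeUnit.MILLISECONDS); // null == timed out
    }

    public static void main(String[] args) throws InterruptedException {
        String a = borrow(100);   // taken, never returned (a leak)
        String b = borrow(100);   // taken, never returned (a leak)
        String c = borrow(100);   // pool exhausted -> times out -> null
        System.out.println(c == null ? "Timeout waiting for idle object" : c);
    }
}
```

With a real pool the fix is the same as here: make sure every code path returns (closes) the borrowed connection, typically via try-with-resources.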

Why and when should an idle database connection be retired from a connection pool?

I understand from reading HikariCP's documentation (see below) that idle connections should be retired from a connection pool.
My question is: why and when should an idle database connection be retired from the connection pool?
This is the part of HikariCP documentation that sparked my question:
idleTimeout:
This property controls the maximum amount of time (in milliseconds)
that a connection is allowed to sit idle in the pool. Whether a
connection is retired as idle or not is subject to a maximum variation
of +30 seconds, and average variation of +15 seconds. A connection
will never be retired as idle before this timeout. A value of 0 means
that idle connections are never removed from the pool. Default: 600000
(10 minutes)
Two main reasons:
a) they take up resources on the server (not terribly much, since the connection is idle)
b) sometimes connections time out themselves after periods of inactivity. You want to either close them before that happens, or run a periodic "ping" SQL statement to make sure they are still alive. Otherwise you'd get an error on the next SQL statement you try to execute.
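The retirement logic itself is simple. A toy version (plain Java, not HikariCP's actual implementation) that sweeps a pool and drops entries idle longer than a cutoff:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Toy illustration of idleTimeout: each pooled entry remembers when it was
// last returned; a periodic sweep removes anything idle past the cutoff.
public class IdleEviction {
    static class PooledConn {
        final String name;
        final long lastReturnedMillis;
        PooledConn(String name, long lastReturnedMillis) {
            this.name = name;
            this.lastReturnedMillis = lastReturnedMillis;
        }
    }

    // Remove entries idle longer than idleTimeoutMillis; returns the count.
    static int evictIdle(Deque<PooledConn> pool, long nowMillis, long idleTimeoutMillis) {
        int evicted = 0;
        for (Iterator<PooledConn> it = pool.iterator(); it.hasNext(); ) {
            if (nowMillis - it.next().lastReturnedMillis > idleTimeoutMillis) {
                it.remove();     // a real pool would also close the connection here
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        Deque<PooledConn> pool = new ArrayDeque<>();
        pool.add(new PooledConn("fresh", 1_000_000));
        pool.add(new PooledConn("stale", 0));
        int evicted = evictIdle(pool, 1_000_000, 600_000); // 10-minute cutoff
        System.out.println(evicted + " evicted, " + pool.size() + " remain");
    }
}
```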

Connection Timeout and Connection Lifetime

What is the advantage and disadvantage of connection timeout=0?
And what is the use of Connection Lifetime=0?
e.g
(Database=TestDB;
port=3306;
Uid=usernameID;
Pwd=myPassword;
Server=192.168.10.1;
Pooling=false;
Connection Lifetime=0;
Connection Timeout=0)
and what is the use of Connection Pooling?
Timeout is how long you wait for a response to a request before you give up. Timeout=0 means you will keep waiting for the connection to occur forever. Good, I guess, if you are connecting to a really slow server for which it is normal to take 12 hours to respond :-). Generally a bad thing. You want to put some kind of reasonable timeout on a request, so that you can realize your target is down and move on with your life.
Connection Lifetime = how long a connection lives before it is killed and recreated. A lifetime of 0 means never kill and recreate. Normally not a bad thing, because killing and recreating a connection is slow. Through various bugs your connections may get stuck in an unstable state (like when dealing with weird three-way transactions), but 99% of the time it is good to keep the connection lifetime infinite.
Connection pooling is a way to deal with the fact that creating a connection is very slow. So rather than making a new connection for every request, you keep a pool of, say, 10 premade connections. When you need one, you borrow it, use it, and return it. You can adjust the size of the pool to change how your app behaves: a bigger pool means more connections and more threads doing work at a time, but it can also overwhelm whatever you are connecting to.
In summary:
ConnectionTimeout=0 is bad, make it something reasonable like 30 seconds.
ConnectionLifetime=0 is okay
ConnectionPooling=disabled is bad, you will likely want to use it.
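The borrow/use/return cycle described above can be sketched with a blocking queue standing in for real connections (the class and method names here are illustrative, not any particular library's API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal borrow/use/return cycle: the pool is just a queue of premade
// "connections"; borrowing takes one out, returning puts it back for reuse.
public class TinyPool {
    private final BlockingQueue<String> idle;

    public TinyPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add("conn-" + i);          // premade connections
        }
    }

    public String borrow() throws InterruptedException {
        return idle.take();                 // blocks until one is free
    }

    public void giveBack(String conn) {
        idle.offer(conn);                   // back into the pool for the next caller
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) throws InterruptedException {
        TinyPool pool = new TinyPool(10);
        String conn = pool.borrow();        // one fewer connection available
        // ... run queries with conn ...
        pool.giveBack(conn);                // pool back at full capacity
        System.out.println(pool.available());
    }
}
```

The pool size is the knob from the answer above: a bigger queue allows more concurrent borrowers before anyone blocks in borrow().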
I know this is an old thread but I think it is important to point out an instance in which you may want to disable Connection Pooling or use Connection Lifetime.
In some environments (especially when using Oracle, or at least in my experience) the web application is designed so that it connects to the database using the user's credentials vs a fixed connection string located in the server's configuration file. In this case enabling connection pooling will cause the server to create a connection pool for each user accessing the website (See Pool Fragmentation). Depending on the scenario this could either be good or bad.
However, connection pooling becomes a problem when the database server is configured to kill connections that exceed a maximum idle time, because the server may kill connections that still reside in the connection pool. In this scenario Connection Lifetime comes in handy, to throw away these connections, since they have been closed by the server anyway.

Websphere Application Server 6.1 Connection Pool question - what happens when AS fails to get connection

I have studied the Websphere document "Connection Life Cycle" for Websphere Application Server Express v6.1 and have searched the web for an answer to the following.
Connection Pool State
Pretest existing pooled connection is selected - retry interval is zero seconds
Pretest new connections is selected - # of retries is zero and retry interval is 0
Pretest SQL String is "Select 'Hello' from dual"
What happens if the pretest fails and:
there are no connections in the inFreePool or InUse state?
there are connections in the inFreePool state?
I'm referring to the settings in "Data sources > data_source > Websphere Application Server data source"
application calls getConnection
If there is a connection inFreePool then it is tested using pretest SQL string and
handed to the application if it passes the test.
If it fails the test the pool is purged according to the Purge policy.
If the purge policy is EntirePool then the entire free pool is purged and a new connection is acquired and tested.
If purge policy is failingConnectionOnly then the failing connection is discarded and another connection is obtained from the pool and tested.
If there are no connections in the free pool then a new connection is created, tested, and handed to the application if it passes the test. If the new connection fails the test then an exception is thrown.
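The flow above, as a rough sketch (the names, Conn interface, and pretest() method are illustrative stand-ins, not WebSphere's API; the enum mirrors the EntirePool/FailingConnectionOnly purge policies):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Rough sketch of the pretest-on-getConnection flow described above.
public class PretestFlow {
    enum PurgePolicy { ENTIRE_POOL, FAILING_CONNECTION_ONLY }

    interface Conn { boolean pretest(); }   // e.g. runs the pretest SQL string

    static Conn getConnection(Deque<Conn> freePool, PurgePolicy policy) {
        while (!freePool.isEmpty()) {
            Conn c = freePool.poll();
            if (c.pretest()) {
                return c;                   // passed pretest: hand to application
            }
            if (policy == PurgePolicy.ENTIRE_POOL) {
                freePool.clear();           // discard the whole free pool
            }                               // FAILING_CONNECTION_ONLY: drop only c
        }
        // Free pool exhausted (or empty to begin with): create and test a new one.
        Conn fresh = createConnection();
        if (!fresh.pretest()) {
            throw new IllegalStateException("new connection failed pretest");
        }
        return fresh;
    }

    static Conn createConnection() {
        return () -> true;                  // stand-in: assume new connections pass
    }
}
```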
