Connection Timeout and Connection Lifetime

What is the advantage and disadvantage of connection timeout=0?
And what is the use of Connection Lifetime=0?
e.g.
(Database=TestDB;
port=3306;
Uid=usernameID;
Pwd=myPassword;
Server=192.168.10.1;
Pooling=false;
Connection Lifetime=0;
Connection Timeout=0)
and what is the use of Connection Pooling?

Timeout is how long you wait for a response to a request before giving up. Connection Timeout=0 means you will wait forever for the connection to be established. That might be fine if you are connecting to a server so slow that a 12-hour response is normal :-), but generally it is a bad idea. Put a reasonable timeout on the request so that you can recognize your target is down and move on with your life.
Connection Lifetime is how long a connection lives before it is killed and recreated. A lifetime of 0 means never kill and recreate. That is normally not a bad thing, because killing and recreating a connection is slow. Through various bugs your connections may get stuck in an unstable state (for example, when dealing with weird three-way transactions), but 99% of the time it is good to keep the connection lifetime infinite.
Connection pooling is a way to deal with the fact that creating a connection is very slow. Rather than making a new connection for every request, keep a pool of, say, 10 premade connections. When you need one, you borrow it, use it, and return it. You can adjust the size of the pool to change how your app behaves: a bigger pool means more connections and more threads doing work at a time, but it can also overwhelm whatever you are connecting to.
In summary:
Connection Timeout=0 is bad; make it something reasonable, like 30 seconds.
Connection Lifetime=0 is okay.
Pooling=false is bad; you will almost certainly want pooling enabled.
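To make the borrow/use/return idea concrete, here is a minimal sketch of a fixed-size pool built on a `BlockingQueue`. The `Conn` class is a hypothetical stand-in for a real driver connection; a production pool would also handle validation, timeouts, and lifetime expiry.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical stand-in for a real driver connection.
class Conn {
    private final int id;
    Conn(int id) { this.id = id; }
    int id() { return id; }
}

public class SimplePool {
    private final BlockingQueue<Conn> idle;

    SimplePool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new Conn(i)); // pre-create connections up front
        }
    }

    // Borrow: blocks until a connection is free.
    Conn borrow() throws InterruptedException {
        return idle.take();
    }

    // Return the connection for reuse instead of closing it.
    void release(Conn c) {
        idle.add(c);
    }

    public static void main(String[] args) throws InterruptedException {
        SimplePool pool = new SimplePool(2);
        Conn a = pool.borrow();
        Conn b = pool.borrow();     // pool is now empty; a third borrow would block
        pool.release(a);            // returning one makes it available again
        Conn c = pool.borrow();
        System.out.println(a == c); // the same object is reused: prints true
        pool.release(b);
        pool.release(c);
    }
}
```

The blocking `take()` is what gives the "bigger pool = more concurrent work" trade-off: once all connections are checked out, additional borrowers simply wait.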

I know this is an old thread but I think it is important to point out an instance in which you may want to disable Connection Pooling or use Connection Lifetime.
In some environments (especially when using Oracle, or at least in my experience) the web application is designed to connect to the database using the user's credentials rather than a fixed connection string in the server's configuration file. In this case enabling connection pooling will cause the server to create a connection pool for each user accessing the website (see Pool Fragmentation). Depending on the scenario this could be either good or bad.
However, connection pooling becomes a problem when the database server is configured to kill connections that exceed a maximum idle time, because the server may kill connections that still reside in the pool. In that scenario Connection Lifetime comes in handy for throwing those connections away, since the server has already closed them anyway.
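As a sketch of that workaround (the 300-second value is an assumption, chosen to be safely below a hypothetical server-side idle-kill threshold), the connection string from the question could cap the lifetime so pooled connections are recycled before the server kills them:

```
Database=TestDB;Server=192.168.10.1;Port=3306;Uid=usernameID;Pwd=myPassword;Pooling=true;Connection Lifetime=300;
```

The key point is that Connection Lifetime should be shorter than the server's idle timeout, so the pool discards a connection on its own schedule instead of handing out one the server has already closed.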

Related

Wait for all connections to become idle in a connection pool (r2dbc pool)

I am currently working on integrating IAM DB authentication with my database.
Details are as follows:
Database: AWS RDS Postgres
Database Mapping: Jooq
Interface: R2DBC SPI
We maintain a connection pool of 20 connections in our Java application. The token provided by AWS STS for IAM DB authentication lasts 15 minutes. The ideal way of handling the rotated password would be to update the connection pool configuration's password, but the R2DBC pool doesn't provide an API for that. The workaround I implemented is to wrap the connection pool object in another class and schedule a thread that, every 15 minutes, closes the current connection pool and replaces the field with a new one.
The problem is that I am noticing many more active connections to my DB instance than 20. I suspect that the connections in the previous pools aren't actually being closed, are left dangling, and that the default keep-alive time is far too high.
The code looks something like this:
this.connectionPool.close()
    .doOnSuccess(pool -> {
        // some logging and metric collection here
        this.connectionPool = new ConnectionPool(newUpdatedConfiguration);
    })
    .doOnError(err -> {
        // some logging and metric collection here
        this.connectionPool = new ConnectionPool(newUpdatedConfiguration);
    })
    .subscribe();
My initial guess was that the close() call doesn't actually close the connection pool entirely. I am also a little confused about the difference between close() and dispose(). Please let me know whether my suspicion is pointing in the right direction.
Other than that, my next thought was that abruptly closing the connection pool might not be the ideal approach. Ideally I should wait for all the connections in the pool to become idle and only then close it. Is that the correct thing to do, or will it add a lot of latency to replacing the pool? And is there a way to do it, i.e., to wait for all the connections to become idle?
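One way to avoid dangling pools is to publish the fresh pool first and only then dispose the old one, so no caller can grab a reference to a pool that is about to close. The sketch below is not the R2DBC API; `Pool` is a hypothetical stand-in used only to illustrate the swap-then-dispose ordering via an `AtomicReference` (with r2dbc-pool you would subscribe to the disposal publisher and ideally await its completion before dropping the reference).

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for a connection pool, just enough to show the ordering.
class Pool {
    private volatile boolean closed = false;
    boolean isClosed() { return closed; }
    void dispose() { closed = true; } // a real pool would release every pooled connection here
}

public class PoolSwapper {
    private final AtomicReference<Pool> current = new AtomicReference<>(new Pool());

    Pool current() { return current.get(); }

    // Swap in a fresh pool first, then dispose the old one, so callers
    // never observe a closed pool through current().
    void rotate() {
        Pool fresh = new Pool();
        Pool old = current.getAndSet(fresh);
        old.dispose(); // asynchronously in R2DBC: wait for this to complete before discarding `old`
    }

    public static void main(String[] args) {
        PoolSwapper swapper = new PoolSwapper();
        Pool before = swapper.current();
        swapper.rotate();
        System.out.println(before.isClosed());            // prints true
        System.out.println(swapper.current().isClosed()); // prints false
    }
}
```

Note the contrast with the snippet in the question, which replaces the field only inside the doOnSuccess/doOnError callbacks: if the disposal publisher is never subscribed, or errors before completing, the old pool's connections can linger, which matches the "more than 20 connections" symptom.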

Are there reasons to keep Active Record pool size low when you use pgbouncer?

If I use pgbouncer, what are the reasons I should not just set Active Record's pool size to 99999, effectively disabling it, and leaving pgbouncer in charge of all pooling?
In my case, this is with Rails 5.2. pgbouncer uses transaction pooling.
I can think of some possible reasons:
If a runaway process somehow tries to open a high number of threads/connections, the AR pool would set a ceiling, preventing it from exhausting all connections.
Similarly, if AR doesn't close connections to pgbouncer correctly (e.g. if some code opens connections in threads without closing them), and AR's reaper does not run or does not run often enough, that code could exhaust all connections.
If Active Record itself has costly overhead per connection (does it?), perhaps it's preferable to wait and reuse connections instead of opening a higher number of connections, in situations where the same process tries to open a lot of connections.
Are those valid reasons? Are they the only reasons?
(I've seen Disabling Connection Pooling in Rails to use PgBouncer and think this is related but not quite the same question.)
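For context, a hedged sketch of what a capped Active Record pool in front of pgbouncer might look like (the values and host are assumptions, not from the question; the point is sizing `pool` to the server's thread count rather than 99999):

```yaml
# config/database.yml -- a sketch, values are illustrative
production:
  adapter: postgresql
  host: 127.0.0.1             # pgbouncer listens here instead of Postgres directly
  prepared_statements: false  # required with pgbouncer transaction pooling
  pool: 5                     # roughly match the app server's thread count
```

Keeping `pool` close to the number of threads preserves exactly the safety ceiling the question describes: a runaway thread can exhaust the AR pool (and raise a timeout) without exhausting pgbouncer's client connections for every other process.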

Elixir: DBConnection queue_target and queue_interval explained

I am reading the DBConnection documentation, and I don't quite understand the following quote:
Our goal is to wait at most :queue_target for a connection. If all connections checked out during a :queue_interval take more than :queue_target, then we double the :queue_target. If checking out connections takes longer than the new target, then we start dropping messages.
Could you please explain this with examples?
In my app I have a very heavy operation that is executed by a periodic worker. I would like its timeout to be 1 minute, or no timeout at all. Which queue_target and queue_interval should I set to avoid errors like:
Elixir.DBConnection.ConnectionError: "tcp recv: closed (the connection was closed by the pool, possibly due to a timeout or because the pool has been terminated)"
In the regular case I would like my queue timeout to be 5 seconds. How can I achieve this with queue_target and queue_interval?
The timeouts you're referring to are set with the :timeout option in the execution functions (e.g. execute/4). :queue_target and :queue_interval only affect the pool's ability to begin new requests (i.e. requests to check out connections from the pool), not requests that have already checked out connections and are being processed.
Keep in mind that all attempts to check out connections during a :queue_interval must take longer than :queue_target for these values to have any effect. Normally you'd test different values and monitor your database's ability to keep up in order to find the optimal values for your environment.
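Putting the two knobs side by side, a configuration sketch (assuming Ecto's repo config on top of DBConnection; `MyApp.Repo` and the specific values are placeholders, not from the thread):

```elixir
# A sketch: queue_* govern waiting for a checkout; :timeout governs the query itself.
config :my_app, MyApp.Repo,
  queue_target: 5_000,     # aim to check out a connection within 5s
  queue_interval: 10_000,  # only if checkouts exceed the target throughout a
                           # whole 10s window does the pool start shedding load
  timeout: 60_000          # per-query timeout, applied after checkout; the
                           # heavy periodic worker needs this, not queue_target
```

This separation is why raising queue_target alone won't save a 1-minute operation: once the worker holds a connection, only :timeout (passed per call or set repo-wide) applies.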

Connection pools explained in a practical context

Rails' database.yml file has a setting, pool: 5. I understand what a database connection pool is but I'm being tripped by a few subtleties:
A connection is used then returned to its pool. The next request can then use a connection from the pool rather than creating a new connection.
How is it determined which request gets which connection?
Suppose I have a concurrent connections limit of 5 and one of my web pages needs to make 10 queries to the database:
Is each query a separate connection or all 10 queries are considered one connection?
In terms of queries, connections, or speed, what can be an example of a situation that would overwhelm that 5 concurrent connections limit?
And suppose that, in a different database, I set the database connection pool size to 5.
How are pool size and concurrent connections related, if at all?
In terms of queries, connections, or speed, what can be an example of a situation that would overwhelm this pool size?
1) ActiveRecord::Base loads a connection lazily when required (on a request, or when its current connection is closed/disconnected).
2) No; the same connection will be used to make multiple queries.
3) No way to answer that without the diagnostic utilities your db vendor supplies with your db.
4) That is db vendor/adapter specific.
5) Same answer as 3.
If you are experiencing a slowdown, the only way to solve it is to use diagnostic tools to find out where your bottleneck is occurring. 90% of the time it's not your db or the connections to it (it's usually indexing, N+1 queries, etc.).
If you are NOT experiencing any slowdown, keep the defaults and move on. Premature optimization will lead to an over-engineered solution.

Is there any reason to use a database connection pool with ActiveRecord?

What are the benefits to using an external connection pool?
I've heard that most other applications will open up a connection for each unit of work. In Rails, for example, I'd take that to mean that each request could open a new connection. I'm assuming a connection pool would make that possible.
The only benefit I can think of is that it allows you to have 1,000 frontend processes without having 1,000 postgres processes running.
Are there any other benefits?
Rails has connection pooling built in:
Simply use ActiveRecord::Base.connection as with Active Record 2.1 and earlier (pre-connection-pooling). Eventually, when you’re done with the connection(s) and wish it to be returned to the pool, you call ActiveRecord::Base.clear_active_connections!. This will be the default behavior for Active Record when used in conjunction with Action Pack’s request handling cycle.
Manually check out a connection from the pool with ActiveRecord::Base.connection_pool.checkout. You are responsible for returning this connection to the pool when finished by calling ActiveRecord::Base.connection_pool.checkin(connection).
Use ActiveRecord::Base.connection_pool.with_connection(&block), which obtains a connection, yields it as the sole argument to the block, and returns it to the pool after the block completes.
This has been available since version 2.2. You'll see a pool parameter in your database.yml for controlling it:
pool: number indicating size of connection pool (default 5)
I don't think there would be much point in layering another pooling system underneath it, and doing so could even confuse AR's own pooling.
