Prevent keeping an unused DB connection - Grails

Problem description:
Let's say we have a service method which is called from a controller:
class PaymentService {

    static transactional = false

    public void pay(long id) {
        Member member = Member.get(id)
        // long running task executing HTTP request
        requestPayment(member)
    }
}
The problem is that if 8 users hit the same service at the same time and the requestPayment(member) method takes 30 seconds to execute, the whole application gets stuck for 30 seconds (8 being the default size of the DBCP connection pool).
The problem is even bigger than it seems, because as long as the HTTP request performs well, nobody notices any trouble. The serious problem is that the availability of our web service depends on the availability of our external partner/component (in our use case a payment gateway). So when your partner starts to have performance issues, you will have them as well, and even worse, they will affect all parts of your app.
Evaluation:
The cause of the problem is that Member.get(id) reserves a DB connection from the pool and keeps it for further use, even though the requestPayment(member) method never needs to access the DB. When the next (9th) request hits any other part of the application that requires a DB connection (a transactional service, a DB select, ...), it keeps waiting (or times out if maxWait is set to a lower duration) until the pool has an available connection, which can take up to 30 seconds in our use case.
The stacktrace for the waiting thread is:
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:485)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1115)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
Or for timeout:
JDBC begin failed
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1167)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
... 7 more
Obviously the same issue happens with a transactional service; however, there it makes much more sense, since the connection is reserved for the transaction.
As a temporary solution it's possible to increase the pool size with the maxActive property on the datasource; however, that doesn't solve the real problem of holding an unused connection.
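For orientation, a minimal plain-Java sketch of the two pool knobs involved, using the Commons DBCP 1.x BasicDataSource that appears in the stack traces above (the class name, driver, URL, credentials and values are illustrative only; in a Grails app the same properties would normally be set in the datasource configuration rather than in code):

import org.apache.commons.dbcp.BasicDataSource;

// Sketch only: where maxActive and maxWait live on the DBCP 1.x pool.
public class PoolConfigSketch {

    public static BasicDataSource buildDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.h2.Driver");   // placeholder driver and URL
        ds.setUrl("jdbc:h2:mem:test");
        ds.setUsername("sa");
        ds.setPassword("");
        ds.setMaxActive(50);     // default is 8; more headroom, but connections are still held
        ds.setMaxWait(10000);    // give up after 10 s instead of waiting indefinitely
        return ds;
    }
}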
As a permanent solution it's possible to enclose all DB operations in transactional behavior (withTransaction{..}, @Transactional), which returns the connection to the pool after the commit (to my surprise, withNewSession{..} works as well). But we need to be sure that the whole call chain from the controller down to the requestPayment(member) method doesn't leak the connection.
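For comparison, a minimal sketch of that "do the DB work in a short transaction, then run the slow HTTP call with no connection held" shape in plain Spring terms (TransactionTemplate is roughly what GORM's withTransaction builds on; PaymentFlowSketch, loadMember and the nested Member class are hypothetical placeholders for the Grails names used above):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

// Sketch only: the DB access runs inside a short transaction, so the connection
// is returned to the pool before the long-running HTTP request starts.
public class PaymentFlowSketch {

    private final TransactionTemplate txTemplate;

    public PaymentFlowSketch(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void pay(long id) {
        // connection borrowed here and returned on commit
        Member member = txTemplate.execute(status -> loadMember(id));
        // no pooled connection is held during the slow external call
        requestPayment(member);
    }

    private Member loadMember(long id) { return null; }    // placeholder for Member.get(id)
    private void requestPayment(Member member) { }         // placeholder for the gateway call

    static class Member { }                                 // placeholder domain class
}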
I'd like to be able to throw an exception in the requestPayment(member) method if the connection is "leaked" (similar to the Propagation.NEVER transactional behavior), so I can reveal the issue early, during the test phase.

After digging in the source code I've found the solution:
class PaymentService {

    static transactional = false

    def sessionFactory

    public void pay(long id) {
        Member member = Member.get(id)
        sessionFactory.currentSession.disconnect()
        // long running task executing HTTP request
        requestPayment(member)
    }
}
The above statement releases the connection back to the pool.
If it is executed from a transactional context, an exception is thrown (org.hibernate.HibernateException: connnection proxy not usable after transaction completion), since we can't release such a connection (which is exactly what I needed).
Javadoc:
Disconnect the Session from the current JDBC connection. If the
connection was obtained by Hibernate close it and return it to the
connection pool; otherwise, return it to the application.
This is used by applications which supply JDBC connections to
Hibernate and which require long-sessions (or long-conversations)

Related

Blazor-server scoped services, closed connections, garbage collection

If I have a scoped service:
services.AddScoped<MyScopedService>();
And in that service, an HTTP request is made:
HttpClient client = _clientFactory.CreateClient();
StringContent formData = ...;
HttpResponseMessage response = await client.PostAsync(uri, formData);
string data = await response.Content.ReadAsStringAsync();
I read here that for an AddScoped service, the service scope is the SignalR connection.
If the user closes the browser tab before the response is returned, the MyScopedService code still completes.
Could someone explain what happens to that MyScopedService instance? When is it considered out of scope? After the code completes? Is the time until it's garbage collected predictable?
I have a Blazor-server project using scoped dependency injection (Fluxor, and a CircuitHandler), and I'm noticing that the total app memory increases with each new connection (obviously), but it takes a while (minutes) for the memory to come down after the browser tabs are closed.
Just wondering if this is expected, or if I could be doing something to let the memory usage recover more quickly. Or maybe I'm doing something wrong with my scoped services.
Implement IAsyncDisposable on your service, then in your service:
public async ValueTask DisposeAsync() => await hubConnection.DisposeAsync();
This was copied from one of my own libraries; I was facing the same issue. The GC will not do its job while there are still references to other objects...

Does calling ActiveMQConnection cleanup and changeUserInfo impact a DefaultMessageListenerContainer with CACHE_CONSUMER?

Here's some context: I've set up an ActiveMQ broker (version 5.13.1) with a BrokerPlugin that takes a requestId (just a UUID for tracking requests across different servers) and an OAuth token (more about the OAuth token below) in the 'username' and 'password' fields, respectively, of the org.apache.activemq.command.ConnectionInfo ... it's based on this great post. On the client/consumer side, I'm wrapping the ActiveMQConnection inside a Spring DefaultMessageListenerContainer (version 4.2.4-RELEASE) with cacheLevel=CACHE_CONSUMER (so it also caches the Session and Consumer along with the Connection).
The snag is that the client's OAuth token expires every 20 minutes, so I've set up a ScheduledExecutorService on the client side to refresh the OAuth token every 18 minutes.
My question is: if my scheduled task on the client side calls ActiveMQConnection#cleanup() followed by ActiveMQConnection#changeUserInfo(newRequestId, newAuthToken) ... will that negatively impact Spring's DefaultMessageListenerContainer that is holding onto that same ActiveMQConnection? Or, to ask it another way, is there a "right" way for my client code to set the new "username" and "password" fields on the ActiveMQConnection without messing anything up in the DefaultMessageListenerContainer? I'm especially concerned about multi-threading issues, since the DefaultMessageListenerContainer has several consumer threads ... and my ScheduledExecutorService is, of course, running its own thread to update the OAuth token on the ActiveMQConnection.
Would it be enough to extend DefaultMessageListenerContainer and wrap my update of the OAuth token inside a synchronized block on the 'sharedConnectionMonitor', e.g. would something like the following be necessary and/or sufficient:
public class OAuthMessageListenerContainer extends DefaultMessageListenerContainer {

    public void updateOAuthToken(String requestId, String authToken) throws JMSException {
        synchronized (sharedConnectionMonitor) {
            ActiveMQConnection amqConn = (ActiveMQConnection) getSharedConnection();
            amqConn.doCleanup(true);
            amqConn.changeUserInfo(requestId, authToken);
        }
    }
}
FWIW here's how I solved the issue:
1) I first had to upgrade ActiveMQ from 5.13.0 to 5.13.3, because 5.13.0 had a thread-deadlock issue in the FailoverTransport when trying to reconnect to the broker after the transport was interrupted, e.g. a network glitch, a server restart, or ZooKeeper electing a new "master" broker (we're using replicated LevelDB on the server with 3 brokers).
2) After working past the thread deadlock in ActiveMQ, I realized that changing the username and password on the ActiveMQConnection basically resets the whole connection anyway ... which means the DMLC is reset too ... so I made a subclass of DMLC that I could restart from a TransportListener on the connection when the transport is interrupted ... but then I ran into another thread deadlock in the DMLC ... and found this answer that helped me past that.
The subclass of DMLC just needs this method:
public void restart() {
    try {
        shutdown();
        initialize();
        start();
    }
    catch (Exception e) {
        LOG.error("Could not restart DMLC", e);
    }
}
and the transport listener needs to do this:
@Override
public void transportInterrupted() {
    dmlc.restart();
}

OWIN asynchronous startup (using Hangfire)

I am using Hangfire with SQL Storage on a remote SQL server and running it alongside my existing MVC site. My startup class is very simple:
public void Configuration(IAppBuilder app)
{
    app.UseHangfire(config =>
    {
        config.UseSqlServerStorage("MY_CONNECTION_STRING");
        config.UseServer();
    });
}
The problem is that any delay in connecting to the remote server delays my MVC site from spinning up. Is there a way to start OWIN asynchronously so that the project is able to respond to requests regardless of what happens during the OWIN startup, including fatal errors?
Hangfire's initialization logic is performed in a dedicated thread to decrease the start-up time of your application. So the UseServer method only creates a new thread, without any additional logic.
The UseSqlServerStorage method connects to your database to check the current schema and run automatic migrations if necessary (one simple query against the Hangfire.Schema table). This is the default behavior; however, you are able to disable it:
var options = new SqlServerStorageOptions
{
    PrepareSchemaIfNecessary = false
};
var storage = new SqlServerStorage("<name or connection string>", options);
After performing this step, Hangfire will not connect to your database at startup (and no other class will do so). But keep an eye on the release notes; they will contain information about database storage changes.

Which is a better way to handle connection pooling?

I'm trying to implement connection pooling for a JSF 2.1 application which has an H2 database and a Jetty 9 web server embedded in it. I have two options for implementing connection pooling for the H2 database: either let Jetty implement connection pooling for me, or define an application-scoped managed bean which creates the connection pool. I would like to know which would be the better approach to handling connection pooling.
Connection pooling using an application-scoped managed bean:
JdbcConnectionPool cp = JdbcConnectionPool.create(
        "jdbc:h2:~/test", "sa", "sa");
for (String sql : args) {
    Connection conn = cp.getConnection();
    conn.createStatement().execute(sql);
    conn.close();
}
cp.dispose();
Either approach to connection pooling is fine. There are many connection pool implementations (each with its advantages and disadvantages); use whichever you feel like using.
If you have a list of statements to execute, then I wouldn't open a new connection for each statement. Instead, execute all statements with the same connection (and statement):
JdbcConnectionPool cp = JdbcConnectionPool.create(
        "jdbc:h2:~/test", "sa", "sa");
...
Connection conn = cp.getConnection();
Statement stat = conn.createStatement();
for (String sql : args) {
    stat.execute(sql);
}
conn.close();
...
cp.dispose();
The connection pool can be started / stopped in two places:
1) Outside the web application, as a resource (that's a bit more complicated in my view), for example as described in the article "Database Connection Pooling with Tomcat". You will find similar documentation for Jetty.
2) Using a ServletContextListener (also described in the H2 documentation); a sketch of this approach follows below. In my view, this is a bit simpler. The disadvantage is that the connection pool cannot be used by multiple web applications.
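For illustration, a minimal sketch of the ServletContextListener approach, assuming the H2 JdbcConnectionPool from the snippets above and the javax.servlet API (the class name and the "connectionPool" attribute name are arbitrary):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.h2.jdbcx.JdbcConnectionPool;

public class ConnectionPoolListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        // create the pool once when the web application starts
        JdbcConnectionPool cp = JdbcConnectionPool.create("jdbc:h2:~/test", "sa", "sa");
        event.getServletContext().setAttribute("connectionPool", cp);
    }

    public void contextDestroyed(ServletContextEvent event) {
        // dispose of the pool when the web application stops
        JdbcConnectionPool cp =
                (JdbcConnectionPool) event.getServletContext().getAttribute("connectionPool");
        if (cp != null) {
            cp.dispose();
        }
    }
}

The listener still has to be registered (in web.xml or via @WebListener), and the rest of the application can then look the pool up from the servlet context.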

Closing a BlackBerry HttpConnection on timeout

I have a timer task that closes a connection when it's triggered; the problem is that sometimes it is triggered before the connection actually opens, like this:
try {
    HttpConnection conn = getMyConnection(); // Assume this returns a valid connection object
    // ... At this moment the timer triggers the worker which closes the connection:
    conn.close(); // This is done by the timer task before conn.getResponseCode()
    int mCode = conn.getResponseCode(); // BOOOMMMM!!!! EXPLOSION!!!!
    // ... Rest of my code here.
} catch (Throwable e) {
    System.out.println("ups..."); // This never gets called... Why?
}
When I try conn.getResponseCode(), an exception is thrown but isn't caught. Why?
I get this error: ClientProtocol(HttpProtocolBase).transitionToState(int) line: 484, and a "source not found" :S.
The connection lives in a different thread, and has its own lifecycle. You are trying to access it from the timer thread in a synchronous way.
To begin with, a connection is a state machine. It starts in the "setup" state, then changes to the "connected" state when certain methods are called on it (any method that requires contacting the server), and finally it changes to the "closed" state when the connection has been terminated by either the server or the client. The getResponseCode method is one of those that can cause the connection to transition from the so-called "setup" state to the "connected" state, if it wasn't already connected. You are trying to get the response code immediately, without even knowing whether the connection was established or not. You are not even giving the connection time to connect or close itself properly. And even if you could, have a look at what the javadocs say about the close method:
When a connection has been closed, access to any of its methods except this close() will cause an IOException to be thrown.
If you really need to do something after it has been closed, pass a "listener" object to the connection code so that it can call back when the connection has been closed, and pass back the response code (if the connection with the server was ever established).
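For example, a rough sketch of that listener idea (ConnectionListener and ConnectionWorker are hypothetical names, not a BlackBerry API; the point is that only the worker thread touches the connection and everything it learns is reported through the callback, and whether close() actually interrupts an in-flight request still depends on the transport):

import java.io.IOException;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Hypothetical callback interface: the worker reports back instead of other
// threads reading from the connection directly.
interface ConnectionListener {
    void onResponse(int responseCode);
    void onFailure(Throwable cause);   // includes "closed before/while connecting"
}

class ConnectionWorker implements Runnable {

    private final String url;
    private final ConnectionListener listener;
    private volatile HttpConnection conn;

    ConnectionWorker(String url, ConnectionListener listener) {
        this.url = url;
        this.listener = listener;
    }

    // Called by the timeout task; if close() interrupts an in-flight
    // getResponseCode(), the worker catches the resulting IOException below.
    void abort() {
        HttpConnection c = conn;
        if (c != null) {
            try {
                c.close();
            } catch (IOException ignored) {
            }
        }
    }

    public void run() {
        try {
            conn = (HttpConnection) Connector.open(url);
            int code = conn.getResponseCode();   // this is where the connection actually connects
            listener.onResponse(code);
        } catch (Throwable t) {
            // also reached when the timeout task closed the connection first
            listener.onFailure(t);
        } finally {
            abort();   // make sure the connection ends up closed in every case
        }
    }
}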
