Timeout expired: all pooled connections were in use and max pool size was reached

I am working on .NET Core 5.0 and I am facing this error message:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I set these parameters in my connection string:
"DefaultConnection": "Data Source=.;Initial Catalog=TestDB;User Id=sa;password=admin123;Pooling=true;Max Pool Size=100;MultipleActiveResultSets=true"
This is how I deal with the DB:
using (CD_DataToolContext objCDContext = new CD_DataToolContext())
{
    List<CdStandardFile> standardFiles = new List<CdStandardFile>();
    foreach (var item in data)
    {
        CdStandardFile cdStandardFile = new CdStandardFile();
        // ... map properties from item onto cdStandardFile (elided) ...
        standardFiles.Add(cdStandardFile);
    }
    objCDContext.AddRange(standardFiles);
    objCDContext.SaveChanges();
}

Related

Connection Acquiring Timing out despite idle connections

We are using r2dbc-pool for our application, along with jOOQ.
The ConnectionFactory is built as follows:
ConnectionFactoryOptions.Builder connectionFactoryBuilder = ConnectionFactoryOptions.builder();
connectionFactoryBuilder.option(ConnectionFactoryOptions.HOST, ....)
.option(ConnectionFactoryOptions.DRIVER, "pool")
.option(ConnectionFactoryOptions.PROTOCOL, "postgres")
.option(ConnectionFactoryOptions.DATABASE, ....)
.option(ConnectionFactoryOptions.USER, username)
.option(ConnectionFactoryOptions.PASSWORD, password);
return ConnectionFactories.get(connectionFactoryBuilder.build());
The ConnectionPoolConfiguration looks something like this:
ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(<connection-factory>)
.initialSize(10)
.maxCreateConnectionTime(Duration.ofSeconds(30))
.maxAcquireTime(Duration.ofSeconds(30))
.acquireRetry(3)
.maxSize(20)
.build();
We were constantly getting "Connection acquisition timed out after 30000ms". We suspected the load might be too much for the pool, so we decided to log the PoolMetrics exposed by ConnectionPool.getMetrics().
The code we use to get a connection looks something like this:
public Single<Connection> getConnection(ConnectionPool connectionPool) {
    connectionPool.getMetrics().ifPresent(
        poolMetrics -> log.info("Connection Pool before acquiring connection: {}", poolMetrics)
    );
    Single<Connection> connectionSingle = Single.fromPublisher(connectionPool.create());
    connectionPool.getMetrics().ifPresent(
        poolMetrics -> log.info("Connection Pool after acquiring connection: {}", poolMetrics)
    );
    return connectionSingle;
}
When we started hitting the timeout, the logs looked something like this:
Connection Pool before acquiring connection: Acquire Size: 1, Allocated Size: 20, Idle Size: 9, Pending Acquire Size: 0
Connection Pool after acquiring connection: Acquire Size: 1, Allocated Size: 20, Idle Size: 9, Pending Acquire Size: 0
I have two questions:
Shouldn't acquire size + idle size be equal to allocated size?
Any idea why the connection isn't being acquired despite the idle connections?
Version details are as follows
r2dbc-pool: 0.9.1.RELEASE
r2dbc-postgresql: 0.9.0.RELEASE

gRPC streaming call which takes longer than 2 minutes is killed by hardware (routers, etc.) in between client and server

Grpc.Net client:
a gRPC client sends a large amount of data to a gRPC server
after the gRPC server receives the data from the client, the HTTP/2 channel becomes idle (but stays open) until the server returns the response to the client
the gRPC server receives the data and starts processing it. If the processing takes longer than 2 minutes (the default idle timeout for HTTP calls), the response never reaches the client: the channel has actually been disconnected by intermediate hardware due to the long idle time, but the client does not know it.
Solution:
when the channel is created on the gRPC client side, it must have an HttpClient set on it
the HttpClient must be instantiated from a SocketsHttpHandler with the following properties set: PooledConnectionIdleTimeout, PooledConnectionLifetime, KeepAlivePingPolicy, KeepAlivePingTimeout, KeepAlivePingDelay
Code snippet:
SocketsHttpHandler socketsHttpHandler = new SocketsHttpHandler()
{
PooledConnectionIdleTimeout = TimeSpan.FromMinutes(180),
PooledConnectionLifetime = TimeSpan.FromMinutes(180),
KeepAlivePingPolicy = HttpKeepAlivePingPolicy.Always,
KeepAlivePingTimeout = TimeSpan.FromSeconds(90),
KeepAlivePingDelay = TimeSpan.FromSeconds(90)
};
// WARNING: this callback accepts any server certificate; do not use in production.
socketsHttpHandler.SslOptions.RemoteCertificateValidationCallback = (sender, cert, chain, sslPolicyErrors) => true;
HttpClient httpClient = new HttpClient(socketsHttpHandler);
httpClient.Timeout = TimeSpan.FromMinutes(180);
var channel = GrpcChannel.ForAddress(_agentServerURL, new GrpcChannelOptions
{
Credentials = ChannelCredentials.Create(new SslCredentials(), credentials),
MaxReceiveMessageSize = null,
MaxSendMessageSize = null,
MaxRetryAttempts = null,
MaxRetryBufferPerCallSize = null,
MaxRetryBufferSize = null,
HttpClient = httpClient
});
A workaround is to package your message in a oneof and then send a KeepAlive from a separate thread every x seconds, for the duration of the calculations.
For example:
message YourData {
  …
}

message KeepAlive {}

message DataStreamPacket {
  oneof data {
    YourData data = 1;
    KeepAlive ka = 2;
  }
}
Then in your code (pseudocode; a C# sketch of the same pattern follows below):
stream <-
StartThread() {
    each 5 seconds:
        Send KeepAlive
}
doCalculations()
StopThread()
SendData()
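A minimal C# sketch of that pattern, assuming the DataStreamPacket/KeepAlive/YourData types generated from the proto above, an open outbound stream writer, and a hypothetical DoCalculations() helper; gRPC allows only one in-flight write per stream, hence the semaphore:

// Requires: using System; using System.Threading; using System.Threading.Tasks; using Grpc.Core;
async Task RunWithKeepAliveAsync(IAsyncStreamWriter<DataStreamPacket> stream)
{
    var writeLock = new SemaphoreSlim(1, 1);
    using var cts = new CancellationTokenSource();

    // Separate task: write a KeepAlive packet every 5 seconds until cancelled.
    var keepAlive = Task.Run(async () =>
    {
        while (!cts.Token.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(5), cts.Token);
            await writeLock.WaitAsync();
            try { await stream.WriteAsync(new DataStreamPacket { Ka = new KeepAlive() }); }
            finally { writeLock.Release(); }
        }
    });

    YourData result = DoCalculations(); // the long-running work (hypothetical helper)

    cts.Cancel(); // stop the keep-alive loop
    try { await keepAlive; } catch (OperationCanceledException) { }

    // Now send the real payload on the same stream.
    await writeLock.WaitAsync();
    try { await stream.WriteAsync(new DataStreamPacket { Data = result }); }
    finally { writeLock.Release(); }
}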
This is what I needed. I had this problem for months, and my only workaround was to decrease the volume of data.

Micronaut ReadTimeoutException

I have a Grails 4 application providing a REST API. One of the endpoints sometimes fails with the following exception:
io.micronaut.http.client.exceptions.ReadTimeoutException: Read Timeout
at io.micronaut.http.client.exceptions.ReadTimeoutException.<clinit>(ReadTimeoutException.java:26)
at io.micronaut.http.client.DefaultHttpClient$10.exceptionCaught(DefaultHttpClient.java:1917)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireExceptionCaught(CombinedChannelDuplexHandler.java:426)
at io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:92)
at io.netty.channel.CombinedChannelDuplexHandler$1.fireExceptionCaught(CombinedChannelDuplexHandler.java:147)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:143)
at io.netty.channel.CombinedChannelDuplexHandler.exceptionCaught(CombinedChannelDuplexHandler.java:233)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
at io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:98)
at io.netty.handler.timeout.ReadTimeoutHandler.channelIdle(ReadTimeoutHandler.java:90)
at io.netty.handler.timeout.IdleStateHandler$ReaderIdleTimeoutTask.run(IdleStateHandler.java:505)
at io.netty.handler.timeout.IdleStateHandler$AbstractIdleTask.run(IdleStateHandler.java:477)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
The endpoint uses the Micronaut HTTP client to call other systems. The remote system takes a very long time to respond, causing the ReadTimeoutException.
Here is the code calling the remote service:
class RemoteTaskService implements GrailsConfigurationAware {
    String taskStepperUrl

    // initializes fields from configuration
    void setConfiguration(Config config) {
        taskStepperUrl = config.getProperty('services.stepper')
    }

    private BlockingHttpClient getTaskClient() {
        HttpClient.create(taskStepperUrl.toURL()).toBlocking()
    }

    List<Map> loadTasksByProject(long projectId) {
        try {
            retrieveRemoteList("/api/tasks?projectId=${projectId}")
        } catch (HttpClientResponseException e) {
            log.error("Loading tasks of project failed with status ${e.status.code}: ${e.message}")
            throw new NotFoundException("No tasks found for project ${projectId}")
        }
    }

    private List<Map> retrieveRemoteList(String path) {
        HttpRequest request = HttpRequest.GET(path)
        HttpResponse<List> response = taskClient.exchange(request, List) as HttpResponse<List>
        response.body()
    }
}
I've tried resolving it using the following configuration in my application.yml:
micronaut:
  server:
    read-timeout: 30
and
micronaut.http.client.read-timeout: 30
...with no success. Despite my configuration, the timeout still occurs around 10s after calling the endpoint.
How can I change the read timeout duration for the HTTP REST client?
micronaut.http.client.read-timeout takes a duration, so you should add a measuring unit to the value, such as 30s, 30m, or 30h.
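For example, in application.yml (the same property with an explicit unit):
micronaut:
  http:
    client:
      read-timeout: 30s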
It seems that the configuration values are not injected into manually created HTTP clients.
A solution is to configure the HttpClient at creation time, setting the readTimeout duration:
private BlockingHttpClient getTaskClient() {
    HttpClientConfiguration configuration = new DefaultHttpClientConfiguration()
    configuration.readTimeout = Duration.ofSeconds(30)
    new DefaultHttpClient(taskStepperUrl.toURL(), configuration).toBlocking()
}
In my case I was streaming a file from a client as
@Get(value = "${service-path}", processes = APPLICATION_OCTET_STREAM)
Flowable<byte[]> fullImportStream();
so when I got this, my first impulse was to increase the read-timeout value. However, for streaming scenarios the property that applies is read-idle-timeout, as stated in the docs: https://docs.micronaut.io/latest/guide/configurationreference.html#io.micronaut.http.client.DefaultHttpClientConfiguration
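For example (the 5-minute value here is only illustrative):
micronaut:
  http:
    client:
      read-idle-timeout: 5m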

Connecting to Neo4j Aura with .NET Core 2.2 web api

I am trying to connect to a Neo4j Aura instance from a .NET Core 2.2 Web API. I understand I need the Neo4j .NET Driver v4.0.0-alpha01, but I do not seem to be able to connect. There aren't very many examples out there, as this driver is new and so is Aura.
I keep getting:
Failed after retried for 6 times in 30000 ms. Make sure that your database is online and retry again.
I configure the driver as follows:
public void ConfigureServices(IServiceCollection services)
{
    string uri = "neo4j://1234567.databases.neo4j.io:7687"; // not actual subdomain
    string username = "neo4j";
    string password = "seeeeeeecret"; // not actual password

    services.AddCors();
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    services.AddSingleton(GraphDatabase.Driver(uri, AuthTokens.Basic(username, password)));
}
and in my test controller I run this:
private async Task<string> Neo4JTestAsync()
{
    string db = "MyDb";
    string message = "TESTMESSAGE";
    IAsyncSession session = _driver.AsyncSession(o => o.WithDatabase(db));
    try
    {
        var greeting = session.WriteTransactionAsync(async tx =>
        {
            var result = tx.RunAsync(
                "CREATE (a:Greeting) " +
                "SET a.message = $message " +
                "RETURN a.message + ', from node ' + id(a)",
                new { message });
            var res = await result;
            return "return something eventually";
        });
        return await greeting;
    }
    catch (Exception e)
    {
        // throws "Failed after retried for 6 times in 30000 ms. Make sure that your database is online and retry again"
        return e.Message;
    }
    finally
    {
        await session.CloseAsync();
    }
}
I can't get the exact error message you do, but I'm pretty sure this is due to encryption: one of the big differences between the 1.x and 4.x drivers is the default position on encryption, which is now off by default.
So you'll want to change your initialisation to:
services.AddSingleton(GraphDatabase.Driver(
    uri,
    AuthTokens.Basic(username, password),
    config => config.WithEncryptionLevel(EncryptionLevel.Encrypted)));
That should get you going. Also, make sure you stick with the neo4j:// protocol, as that will route you properly.
Have you tried bolt:// in the connection string?
string uri = "bolt://1234567.databases.neo4j.io:7687";//not actual subdomain

Glassfish Connection Pool - java.sql.SQLException: Connection closed

I'm using JSF 2 for a web project, with Oracle GlassFish Server Open Source Edition 4.0 and Oracle Database 11g (version 11.2.0-1.0).
The server and database are running on the same Windows machine.
A connection pool manages the connections to the database.
Does anybody know why I sometimes get the following exception:
java.sql.SQLException: Connection closed
at com.sun.gjc.spi.base.ConnectionHolder.checkValidity(ConnectionHolder.java:766)
at com.sun.gjc.spi.base.ConnectionHolder.commit(ConnectionHolder.java:243)
at de.mydomain.myproject.Hl7MessageHandler.run(Hl7MessageHandler.java:123)
...
Or sometimes this one:
java.sql.SQLRecoverableException: Closed connection
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:5675)
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:5735)
at com.sun.gjc.spi.base.ConnectionHolder.commit(ConnectionHolder.java:244)
at de.mydomain.myproject.Hl7MessageHandler.run(Hl7MessageHandler.java:123)
...
The Database Class:
public static Connection getConnection() throws NamingException, SQLException {
    Context initContext = new InitialContext();
    DataSource dataSource = (DataSource) initContext.lookup("jdbc/Oracle");
    Connection connection = dataSource.getConnection();
    return connection;
}
Request handling in the servlet:
public IResponseSendable<String> run(String hl7MsgString, boolean publishErrorToDB) {
    // ... do something
    try {
        con = Database.getConnection(); // 'con' is an instance field, also used by process()
    } catch (NamingException | SQLException conExc) {
        return generateAck(true, conExc.getMessage(), hl7MsgString);
    }
    try {
        con.setAutoCommit(false);
        process();
        con.commit();
    } catch (HL7Exception | SQLException pe) {
        logger.error(...);
        // Exception handling...
        try {
            con.rollback();
        } catch (SQLException rollbackExc) {
            logger.error(...);
        }
        return generateAck(true, pe.getMessage(), hl7MsgString, _log);
    } finally {
        try {
            con.setAutoCommit(true);
            con.close();
        } catch (SQLException e) {
            logger.error(...);
        }
    }
    return generateAck(false, "", hl7MsgString);
}
The process method:
private void process() throws HL7Exception, SQLException {
    // Do something...
    String sql = "BEGIN save_patient_data(?,?,?,?,?,?,?); END;";
    CallableStatement stmt = con.prepareCall(sql);
    stmt.setString(1, ...);
    // ...
    stmt.registerOutParameter(6, java.sql.Types.VARCHAR);
    stmt.registerOutParameter(7, java.sql.Types.NUMERIC);
    stmt.execute();
    // ...
    stmt.close();
    // More database stored procedures may be called ...
}
Connection Pool Settings:
Initial and Minimum Pool Size: 10 Connections
Maximum Pool Size: 60 Connections
Pool Resize Quantity: 2 Connections
Idle Timeout: 600 Seconds
Max Wait Time: 0 Milliseconds
Validate At Most Once: 0 Seconds
Connection Leak Timeout: 10 Seconds
Connection Leak Reclaim: enabled
Statement Leak Timeout: 6 Seconds
Statement Leak Reclaim: enabled
Creation Retry Attempts: 0
Retry Interval: 10 Seconds
Connection Validation: Required
Validation Method: meta-data
The database IDLE-Timeout setting is "UNLIMITED".
Notice:
The exception occurs either when calling con.prepareCall(sql) (not necessarily the first time), or when I try to commit the connection, or later when trying to turn autocommit back on.
Does anybody know the reason, or what is the best way to debug the application to find it out?
Thank you.
Edit:
Maybe it's important:
I can find many warnings about connection leaks in the server log:
2014-07-28T14:49:17.961+0200|Warnung: A potential connection leak detected for connection pool OraclePool. The stack trace of the thread is provided below :
com.sun.enterprise.resource.pool.ConnectionPool.setResourceStateToBusy(ConnectionPool.java:324)
com.sun.enterprise.resource.pool.ConnectionPool.getResourceFromPool(ConnectionPool.java:758)
com.sun.enterprise.resource.pool.ConnectionPool.getUnenlistedResource(ConnectionPool.java:632)
com.sun.enterprise.resource.pool.AssocWithThreadResourcePool.getUnenlistedResource(AssocWithThreadResourcePool.java:200)
com.sun.enterprise.resource.pool.ConnectionPool.internalGetResource(ConnectionPool.java:526)
com.sun.enterprise.resource.pool.ConnectionPool.getResource(ConnectionPool.java:381)
com.sun.enterprise.resource.pool.PoolManagerImpl.getResourceFromPool(PoolManagerImpl.java:245)
com.sun.enterprise.resource.pool.PoolManagerImpl.getResource(PoolManagerImpl.java:170)
com.sun.enterprise.connectors.ConnectionManagerImpl.getResource(ConnectionManagerImpl.java:360)
com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:307)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:196)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:171)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:166)
com.sun.gjc.spi.base.AbstractDataSource.getConnection(AbstractDataSource.java:114)
de.mydomain.myproject.utilities.Database.getConnection(Database.java:17)
...
You have connection leak reclaim enabled and the connection leak timeout is 10 seconds. This means that if you hold onto a logical connection for longer than 10 seconds, it is forcibly revoked and closed by the connection pool manager (and the physical connection is returned to the connection pool). Subsequent attempts to use the logical connection will result in a SQLException as the connection is closed.
Find out which operation takes longer than 10 seconds and try to reduce the time it takes or configure a longer connection leak timeout (10 seconds is IMHO a bit short for connection leak detection). The same BTW applies to your statement leak detection (6 seconds is also pretty short).
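For example, the leak timeouts can be raised with asadmin (a sketch: the attribute names match the pool settings above as they appear in domain.xml, the pool name is taken from the log above, and the values are only illustrative):
asadmin set resources.jdbc-connection-pool.OraclePool.connection-leak-timeout-in-seconds=60
asadmin set resources.jdbc-connection-pool.OraclePool.statement-leak-timeout-in-seconds=30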
