I have used the C3P0 connection pool until now, but I get unstable behavior. I have tested it in various environments and tuned the database options. Today I found that the Tomcat 7 JDBC connection pool has been released, and I downloaded it. Does anyone use it and get better performance than C3P0?
(I have also tested the BoneCP connection pool.)
My application is under very high load. My problems are:
after about an hour, the connection pool throws a "Can't Open Connection" exception.
sometimes I get an "Attempted to use a closed or broken resource pool" exception, and when I restart my connection pool (via its MBean) the problem is fixed.
My C3P0 parameters are:
initialPoolSize = 1
minPoolSize = 1
maxPoolSize = 50
maxIdleTime = 20000
debugUnreturnedConnectionStackTraces = true
propertyCycle = 60
acquireRetryDelay = 1000
maxConnectionAge = 0
checkoutTimeout = 5000
acquireIncrement = 1
numHelperThreads = 5
acquireRetryAttempts = 1
unreturnedConnectionTimeout = 90
breakAfterAcquireFailure = false
I have also tested these parameters with several different values but don't see any perceptible change.
I haven't tried the Tomcat pool yet, but will look into it soon. What you can probably do is tweak your c3p0 pool for optimization. The right settings will vary according to the actual load on your application, but compared to other pooling technologies, I've found c3p0 to be flexible.
It would be nice if you could elaborate on your problem here and mention the pooling parameters you are using.
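For illustration, here is a minimal sketch of what such tuning can look like programmatically, using c3p0's ComboPooledDataSource. The driver, URL, and the idle-testing values are my assumptions, not taken from the question; the idle-testing setters are a common tweak for pools that throw "broken resource" errors after sitting idle:
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfigSketch {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");   // hypothetical driver
        ds.setJdbcUrl("jdbc:mysql://localhost/test"); // hypothetical URL
        // Parameters from the question:
        ds.setInitialPoolSize(1);
        ds.setMinPoolSize(1);
        ds.setMaxPoolSize(50);
        ds.setMaxIdleTime(20000);
        ds.setCheckoutTimeout(5000);
        ds.setNumHelperThreads(5);
        // Candidate tweaks (assumptions, not from the question): test idle
        // connections so stale ones are evicted before a checkout fails.
        ds.setIdleConnectionTestPeriod(300);
        ds.setTestConnectionOnCheckin(true);
        ds.setPreferredTestQuery("SELECT 1"); // assumes the DB accepts this query
    }
}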
I'm using PoolingClientConnectionManager and I suspect that I'm leaking connections. I have a monitoring thread that prints out the PoolStats as below:
[leased: 126; pending: 0; available: 14; max: 140]
..
[leased: 140; pending: 20; available: 0; max: 140]
..
[leased: 140; pending: 10; available: 0; max: 140]
I spawn a number of threads equal to the number of pool connections (140), so I never expected leased + pending > max. Is this assumption valid? Or is this a case of connections being kept alive by the manager? I'm not sure whether in this case the connections are attributed to "leased" or "available".
What I have noticed is that connection leaks might occur if the HttpClient connection is interrupted during DNS resolution. In this scenario, leased connections are not released back to the pool. Is there a suggested way of deallocating resources so that connections are properly released back to the pool?
Thanks in advance.
Yes, it does seem quite likely that there is a connection leak. It is unlikely that the DNS lookup is causing it, though. HttpClient is supposed to release the connection automatically in case of an I/O, protocol, or runtime exception.
As far as resource deallocation is concerned, the rule is fairly simple: as long as there is an entity associated with the response, you must make sure its content gets fully consumed. HttpClient 4.2 and 4.3 also provide additional safeguards for resource deallocation in exceptional cases: HttpUriRequest#releaseConnection in 4.2 and CloseableHttpResponse#close in 4.3.
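For illustration, a minimal sketch of that release pattern with the 4.3-style API (class names are from Apache HttpClient 4.3; the pool size and URL are hypothetical):
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class PoolReleaseSketch {
    public static void main(String[] args) throws Exception {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(140);
        CloseableHttpClient client = HttpClients.custom().setConnectionManager(cm).build();
        CloseableHttpResponse response = client.execute(new HttpGet("http://example.com/"));
        try {
            // Fully consuming the entity is what hands the connection back to the pool.
            EntityUtils.consume(response.getEntity());
        } finally {
            // 4.3 safeguard: close() releases the connection even if an exception
            // prevented the entity from being consumed.
            response.close();
        }
    }
}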
You could also try running your application with connection management context logging turned on, as described here, and see if that helps you track down requests that never release the underlying connection.
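If log4j is your backend, the category for connection management context logging is, to the best of my knowledge, the following (an assumption about your logging setup, per the HttpClient 4.x logging guide):
# log4j.properties fragment: connection management context logging
log4j.logger.org.apache.http.impl.conn=DEBUG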
We're currently preparing our Azure role (standard Web Role) for an expected massive load, and we need to know how much memory the current setup consumes. To accomplish this, we're using load tests while measuring the consumed memory with GC.GetTotalMemory.
The page http://technet.microsoft.com/en-us/cloud/gg663909.aspx lists the Compute Instance Guaranteed Memory for each instance size (for example, 0.768 GB for the Extra-Small Instance and 3.5 GB for the Medium Instance).
Are the values of GC.GetTotalMemory comparable to the values in this list? In other words, if GC.GetTotalMemory stays significantly below the listed limit, can we be sure that there won't be any sudden performance loss due to memory swapping?
If we hit the limit, is our assumption correct that there will be some memory swapping (writing memory content to the virtual hard disk), or will there be more severe implications like repeated App Pool recycling?
(The last question comes up because most shared hosters will recycle your App Pool if you hit some memory limit, but frankly we don't expect anything like this from Windows Azure.)
GC.GetTotalMemory will only give you the number of bytes currently allocated by your process (see the System.GC.GetTotalMemory documentation). The 0.768 GB includes the memory available to the operating system, and there can be virtual memory as well.
To get the total system memory you can use the following; add a reference to System.Management first.
private static void DisplayTotalRam()
{
    // Win32_PhysicalMemoryArray.MaxCapacity is the maximum installable memory, in kilobytes.
    string query = "SELECT MaxCapacity FROM Win32_PhysicalMemoryArray";
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);
    foreach (ManagementObject wmiPart in searcher.Get())
    {
        UInt32 sizeInKB = Convert.ToUInt32(wmiPart.Properties["MaxCapacity"].Value);
        UInt32 sizeInMB = sizeInKB / 1024;
        UInt32 sizeInGB = sizeInMB / 1024;
        Console.WriteLine("Size in KB: {0}, Size in MB: {1}, Size in GB: {2}", sizeInKB, sizeInMB, sizeInGB);
    }
}
Source for code
To answer your last question, Windows Azure will stay out of the way, and paging will happen like on any Windows server.
Whether IIS recycles your app pool probably depends on your IIS settings, but those are under your control. (You can, for example, run appcmd in a startup task if you want to change a default.)
My Delphi XE application is based on a single EXE using a local server DLL created by RemObjects, and it uses a lot of memory for a specific operation until it generates an exception saying there is not enough memory. To understand why and where this happens, I placed various steps throughout my code where I report on memory usage. The problem is that I'm getting very different information depending on the method used to get the memory usage information:
If I use the method explained here which asks FastMM directly for both the Client EXE and Server DLL, here is what I get:
STEP 1: [client] = 36664572 - [server] = 3274976
STEP 2: [client] = 62641230 - [server] = 44430224
STEP 3: [client] = 66665630 - [server] = 44430224
Now if I use the method explained here which uses GetProcessMemoryInfo, I get far more memory usage:
STEP 1: [process] = 133722112
STEP 2: [process] = 1072115712
STEP 3: [process] = 1075818496
It looks like the second method is the right one, based on my memory problems, but how can the FastMM figures be so "low"? And what can explain the difference?
GetProcessMemoryInfo also reports memory that is not managed by FastMM, like memory allocated by the various non-Delphi DLLs you might call (like the Windows API).
Also, FastMM can allocate more memory from Windows than your application actually uses, for internal structures, fragmentation and pooling.
And lastly, with GetProcessMemoryInfo you are measuring the working set size. That is the part of the application's memory currently in RAM instead of in the page file. It includes more than just data structures and is definitely not comparable to the total memory the application has allocated. PagefileUsage would be more comparable. The working set size is almost never what you are looking for. See here for a better explanation.
So they give different results because they measure different things.
I am connecting to MQ with the code below, and I am able to connect successfully. My scenario: I put messages to MQ once every minute. After disconnecting the cable I get a reason code error, but the IsConnected property still shows true. Is this the right way to check whether the connection is still connected? Or are there any best practices around that?
I would like to open the connection when the application starts and keep it open forever.
public static MQQueueManager ConnectMQ()
{
    // Reconnect if we never connected, the queue manager reports disconnected,
    // or the last reason code was MQRC_CONNECTION_BROKEN (2009).
    if ((queueManager == null) || (!queueManager.IsConnected) || (queueManager.ReasonCode == 2009))
    {
        queueManager = new MQQueueManager();
    }
    return queueManager;
}
The behavior of the WMQ client connection is that when idle it will appear to be connected until an API call fails or the connection times out. So IsConnected will likely report true until a get, put or inquire call is attempted and fails, at which point the QMgr will be reported as disconnected.
The other thing to consider here is that 2009 is not the only code you might get. It happens to be the one you get when the connection is severed but there are connection codes for QMgr shutting down, channel shutting down, and a variety of resource and other errors.
Typically for a requirement to maintain a constant connection you would want to wrap the connect and message processing loop inside a try/catch block nested inside a while statement. When you catch an exception other than an intentional exit, close the objects and QMgr, sleep at least 5 seconds, then loop around to the top of the while. The sleep is crucial because if you get caught in a tight reconnect loop and throw hundreds of connection attempts at the QMgr, you can bring even a mainframe QMgr to its knees.
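A rough Java sketch of that pattern follows (the question's code is .NET, but IBM's WMQ classes-for-Java API has the same shape; the queue manager name and sleep interval here are hypothetical):
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class ReconnectLoop {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            MQQueueManager qmgr = null;
            try {
                qmgr = new MQQueueManager("QM1"); // hypothetical QMgr name
                // ... message processing loop goes here; any MQException
                // (2009 or otherwise) drops us into the catch block below.
            } catch (MQException e) {
                System.err.println("MQ error, reason code " + e.reasonCode);
            } finally {
                if (qmgr != null) {
                    try { qmgr.disconnect(); } catch (MQException ignored) { }
                }
            }
            // Sleep before reconnecting so a failure doesn't become a tight
            // loop hammering the queue manager.
            Thread.sleep(5000);
        }
    }
}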
An alternative is to use a v7 WMQ client and QMgr. With this combination, automatic reconnection is configurable as a channel configuration.
I changed the JTA transaction timeout from the admin console and set it to 300. Even after the change, it fails saying JTA transaction unexpectedly rolled back (maybe due to a timeout) with a:
weblogic.transaction.RollbackException: Transaction timed out after 181 seconds
To make sure my change (timeout value 300) was reflected for that domain, I checked the domain's config.xml, and it does show 300.
My question is: is there any other place I need to update the transaction timeout value, and do I need to restart the server?
The full stack trace from the server is below:
Caused by: org.springframework.transaction.UnexpectedRollbackException: JTA transaction unexpectedly rolled back (maybe due to a timeout); nested exception is weblogic.transaction.RollbackException: Transaction
timed out after 180 seconds
BEA1-160A800A149091F72E5E
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1031)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:709)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:678)
at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:359)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy103.saveRegistryData(Unknown Source)
at gov.cms.pqri.arch.submission.registry.bean.RegDataAccessManager.persistRegistry(RegDataAccessManager.java:54)
... 14 more
Caused by: weblogic.transaction.RollbackException: Transaction timed out after 180 seconds
BEA1-160A800A149091F72E5E
at weblogic.transaction.internal.TransactionImpl.throwRollbackException(TransactionImpl.java:1818)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:333)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:227)
at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:281)
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1028)
... 22 more
After changing the Stuck Thread Max Time to 300 under Servers -> Configuration -> Tuning (tab) in the admin console, the value is picked up and everything works fine.
I have also come across this issue and resolved it. Since this is related to a JTA transaction, we need to increase the JTA timeout as well as the Stuck Thread Max Time. Click on JTA from the WebLogic console home and increase the JTA timeout from the default of 30 to 300.
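If only one operation needs the longer window, Spring can also cap the timeout per transaction instead of domain-wide; a minimal sketch, assuming annotation-driven Spring transactions (the class and method names are hypothetical):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RegistryService {
    // Timeout in seconds; overrides the platform default for this method only.
    @Transactional(timeout = 300)
    public void saveRegistryData() {
        // long-running persistence work
    }
}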
We met the same issue on WebLogic 12.1.2 (JTA transaction unexpectedly rolled back (maybe due to a timeout)); after investigation we found the root cause of the problem. In my opinion it occurs when a huge dataset is processed transactionally and an exception is thrown near the end of the process: JTA rolls the data back as expected, but it does not give the details of the error. In our case it was mostly caused by a database integrity problem (e.g. we tried to insert data into a column smaller than the data).
In summary, it is better to investigate the database logs instead of increasing the Stuck Thread Max Time. Increasing the thread max time can be a workaround, but not a proper solution for real enterprise systems.
This issue is also discussed in another Stack Overflow thread and a Hibernate JIRA issue, and the solution suggested there is:
This is a default behaviour of the WebLogic JTA implementation. To obtain the root exception you should set the system property weblogic.transaction.allowOverrideSetRollbackReason to true.
One solution is to add this line to /bin/setDomainEnv.cmd:
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dweblogic.transaction.allowOverrideSetRollbackReason=true
I got my JTA timeouts increased by adding a jta.properties file into the config folder of my app with these lines:
com.atomikos.icatch.default_jta_timeout=600000
com.atomikos.icatch.max_timeout=600000