Infinispan synchronization timeout exception

I'm afraid I'm a little bit of a noob at this, but here goes.
I have an issue with Infinispan timing out during synchronous replication.
I'm running two instances of JBoss AS7 with Infinispan 5.2.7.Final, using TCP for the sync transport.
Sometimes when deleting an entry from a cache on one of the nodes, the synchronization seems to fail, but checking the logs on both nodes tells me the entry is gone and all the required cleanup on the application side (tearing down Camel routes, etc.) has been done. The initiating node claims to time out when notifying its peer:
09:53:33,182 INFO [ConfigurationService] (http-/10.217.xx.yy:8080-3) ConfigurationService: deleteSmsAccountConfigurationData: got smsAccountID of SmsAccountIdentifier{smsAccountId=tel:46xxxxxx}
09:53:33,183 INFO [com.ex.control.throttling.controller.ThrottlingService] (http-/10.217.xx.yy:8080-3) ThrottlingService: deleteThrottleForResource: deleting throttle from resource tel:46xxxxxx
09:53:45,232 INFO [com.ex.control.routing.startup.DynamicRouteBootStrap] (OOB-21,smapexs01-54191) DynamnicRouteBootStrap: stopRoute: Attempting to stop sendSmppRoute: sendsmpproutetel:46xxxxxx recieveSmppRoute: recievesmpproutetel:46xxxxxx
09:53:45,233 INFO [org.apache.camel.impl.DefaultShutdownStrategy] (OOB-21,smapexs01-54191) Starting to graceful shutdown 1 routes (timeout 300 seconds)
09:53:45,235 INFO [org.apache.camel.impl.DefaultShutdownStrategy] (Camel (camel-2) thread #1 - ShutdownTask) Route: sendsmpproutetel:46xxxxxx shutdown complete, was consuming from: Endpoint[direct://tel:46xxxxxx]
09:53:45,235 INFO [org.apache.camel.impl.DefaultShutdownStrategy] (OOB-21,smapexs01-54191) Graceful shutdown of 1 routes completed in 0 seconds
09:53:48,190 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (http-/10.217.xx.yy:8080-3) ISPN000136: Execution error: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to smapexs03-38104
On this node the teardown takes less than a second (I guess); on the other node it takes 6 seconds. Is there a timeout of, say, 5 seconds somewhere that I have overlooked?
I've searched this forum and others, but have not been able to sort it out.
Like I said, I'm a bit of a noob, I hope this makes sense to you. Anything that points me in the right direction is very welcome!
Thanks,
Peter
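
One thing worth checking (a hedged observation, not a confirmed diagnosis): the delete is logged at 09:53:33,183 and the error at 09:53:48,190, a gap of almost exactly 15 seconds, which matches Infinispan's default synchronous replication timeout (replTimeout, 15000 ms) rather than a 5- or 6-second limit. Assuming the Infinispan 5.2 programmatic API, raising it would look roughly like the sketch below (the cache mode and value are illustrative):

import java.util.concurrent.TimeUnit;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class ReplTimeoutSketch {
    // replTimeout bounds how long a synchronous replication call waits for
    // the peer's acknowledgement before JGroups throws a TimeoutException.
    public static Configuration withLongerReplTimeout() {
        return new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.REPL_SYNC)
                .sync().replTimeout(30, TimeUnit.SECONDS) // default is 15 s
                .build();
    }
}

If the cache is defined in the AS7 Infinispan subsystem rather than in code, the equivalent knob lives on the cache element in standalone.xml; the attribute name varies by version, so check the subsystem schema.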

Related

Transactions not expiring after the timeout expires

We're using Neo4j (3.1.5-enterprise) for one of our services (over HTTP).
We set dbms.transaction.timeout=150s in our Neo4j config file.
We have a scenario that may take more than 150 seconds, but we would like the transaction to expire after 150 seconds anyway.
For some reason that's not happening: the transaction continues until it has fully executed and is not stopped after 150 seconds. Any guess why?
In our application logs I can see the following exception (more stacktrace details below):
neo.db.NeoHttpDriver - Errors in response:
[NeoResponseError{
code='Neo.DatabaseError.Statement.ExecutionFailed',
message='Transaction timeout. (Overtime: 23793 ms).',
stackTrace='org.neo4j.kernel.guard.GuardTimeoutException: Transaction timeout. (Overtime: 23793 ms).
...
Also, our service's steps in the specific scenario that may take a long time are, in general: open a transaction, lock some common entity, and proceed. Since the transaction is not expired and released after 150 seconds (and the common entity therefore remains locked), other threads may also be blocked for a long time.
Thanks!
Orel
Exception stacktrace:
15:00:59.627 [DefaultThreadPool-7] DEBUG c.e.e.m.neo.db.NeoHttpDriver - Errors in response: [NeoResponseError{code='Neo.DatabaseError.Statement.ExecutionFailed', message='Transaction timeout. (Overtime: 23793 ms).', stackTrace='org.neo4j.kernel.guard.GuardTimeoutException: Transaction timeout. (Overtime: 23793 ms).
at org.neo4j.kernel.guard.TimeoutGuard.check(TimeoutGuard.java:71)
at org.neo4j.kernel.guard.TimeoutGuard.check(TimeoutGuard.java:57)
at org.neo4j.kernel.guard.TimeoutGuard.check(TimeoutGuard.java:49)
at org.neo4j.kernel.impl.api.GuardingStatementOperations.nodeCursorById(GuardingStatementOperations.java:300)
at org.neo4j.kernel.impl.api.OperationsFacade.nodeHasProperty(OperationsFacade.java:343)
at org.neo4j.cypher.internal.spi.v3_1.TransactionBoundQueryContext$NodeOperations.hasProperty(TransactionBoundQueryContext.scala:319)
at org.neo4j.cypher.internal.compatibility.ExceptionTranslatingQueryContextFor3_1$ExceptionTranslatingOperations$$anonfun$hasProperty$1.apply$mcZ$sp(ExceptionTranslatingQueryContextFor3_1.scala:245)
at org.neo4j.cypher.internal.compatibility.ExceptionTranslatingQueryContextFor3_1$ExceptionTranslatingOperations$$anonfun$hasProperty$1.apply(ExceptionTranslatingQueryContextFor3_1.scala:245)
at org.neo4j.cypher.internal.compatibility.ExceptionTranslatingQueryContextFor3_1$ExceptionTranslatingOperations$$anonfun$hasProperty$1.apply(ExceptionTranslatingQueryContextFor3_1.scala:245)
at org.neo4j.cypher.internal.spi.v3_1.ExceptionTranslationSupport$class.translateException(ExceptionTranslationSupport.scala:32)
at org.neo4j.cypher.internal.compatibility.ExceptionTranslatingQueryContextFor3_1.translateException(ExceptionTranslatingQueryContextFor3_1.scala:34)
at org.neo4j.cypher.internal.compatibility.ExceptionTranslatingQueryContextFor3_1$ExceptionTranslatingOperations.hasProperty(ExceptionTranslatingQueryContextFor3_1.scala:245)
at org.neo4j.cypher.internal.compiler.v3_1.spi.DelegatingOperations.hasProperty(DelegatingQueryContext.scala:221)
at org.neo4j.cypher.internal.compiler.v3_1.pipes.AbstractSetPropertyOperation.setProperty(SetOperation.scala:98)
at org.neo4j.cypher.internal.compiler.v3_1.pipes.SetEntityPropertyOperation.set(SetOperation.scala:117)
at org.neo4j.cypher.internal.compiler.v3_1.pipes.SetPipe$$anonfun$internalCreateResults$1.apply(SetPipe.scala:31)
at org.neo4j.cypher.internal.compiler.v3_1.pipes.SetPipe$$anonfun$internalCreateResults$1.apply(SetPipe.scala:30)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator$$anonfun$next$1.apply(ResultIterator.scala:71)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator$$anonfun$next$1.apply(ResultIterator.scala:68)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator$$anonfun$failIfThrows$1.apply(ResultIterator.scala:94)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.decoratedCypherException(ResultIterator.scala:103)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.failIfThrows(ResultIterator.scala:92)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.next(ResultIterator.scala:68)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.next(ResultIterator.scala:49)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.foreach(ResultIterator.scala:49)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:183)
at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:45)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.to(ResultIterator.scala:49)
at scala.collection.TraversableOnce$class.toList(TraversableOnce.scala:294)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.toList(ResultIterator.scala:49)
at org.neo4j.cypher.internal.compiler.v3_1.EagerResultIterator.<init>(ResultIterator.scala:35)
at org.neo4j.cypher.internal.compiler.v3_1.ClosingIterator.toEager(ResultIterator.scala:53)
at org.neo4j.cypher.internal.compiler.v3_1.executionplan.DefaultExecutionResultBuilderFactory$ExecutionWorkflowBuilder.buildResultIterator(DefaultExecutionResultBuilderFactory.scala:109)
at org.neo4j.cypher.internal.compiler.v3_1.executionplan.DefaultExecutionResultBuilderFactory$ExecutionWorkflowBuilder.createResults(DefaultExecutionResultBuilderFactory.scala:99)
at org.neo4j.cypher.internal.compiler.v3_1.executionplan.DefaultExecutionResultBuilderFactory$ExecutionWorkflowBuilder.build(DefaultExecutionResultBuilderFactory.scala:68)
at org.neo4j.cypher.internal.compiler.v3_1.executionplan.InterpretedExecutionPlanBuilder$$anonfun$getExecutionPlanFunction$1.apply(ExecutionPlanBuilder.scala:164)
at org.neo4j.cypher.internal.compiler.v3_1.executionplan.InterpretedExecutionPlanBuilder$$anonfun$getExecutionPlanFunction$1.apply(ExecutionPlanBuilder.scala:148)
at org.neo4j.cypher.internal.compiler.v3_1.executionplan.InterpretedExecutionPlanBuilder$$anon$1.run(ExecutionPlanBuilder.scala:123)
at org.neo4j.cypher.internal.compatibility.CompatibilityFor3_1$ExecutionPlanWrapper$$anonfun$run$1.apply(CompatibilityFor3_1.scala:275)
at org.neo4j.cypher.internal.compatibility.CompatibilityFor3_1$ExecutionPlanWrapper$$anonfun$run$1.apply(CompatibilityFor3_1.scala:273)
at org.neo4j.cypher.internal.compatibility.exceptionHandlerFor3_1$runSafely$.apply(CompatibilityFor3_1.scala:190)
at org.neo4j.cypher.internal.compatibility.CompatibilityFor3_1$ExecutionPlanWrapper.run(CompatibilityFor3_1.scala:273)
at org.neo4j.cypher.internal.PreparedPlanExecution.execute(PreparedPlanExecution.scala:26)
at org.neo4j.cypher.internal.ExecutionEngine.execute(ExecutionEngine.scala:107)
at org.neo4j.cypher.internal.javacompat.ExecutionEngine.executeQuery(ExecutionEngine.java:59)
at org.neo4j.server.rest.transactional.TransactionHandle.safelyExecute(TransactionHandle.java:371)
at org.neo4j.server.rest.transactional.TransactionHandle.executeStatements(TransactionHandle.java:323)
at org.neo4j.server.rest.transactional.TransactionHandle.execute(TransactionHandle.java:230)
at org.neo4j.server.rest.transactional.TransactionHandle.execute(TransactionHandle.java:119)
at org.neo4j.server.rest.web.TransactionalService.lambda$executeStatements$0(TransactionalService.java:203)
Most likely the problem is that the tx is waiting on a lock. Prior to Neo4j 3.2, dbms.transaction.timeout cannot cover the case of terminating a transaction that's waiting on a lock (or rather, it will mark it for termination, but the actual termination won't happen until the lock is acquired).
In Neo4j 3.2, dbms.lock.acquisition.timeout was introduced, which interrupts waiting on a lock and allows the thread to check if the tx has been terminated and take appropriate action.
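If upgrading to 3.2+ is an option, both settings can then live in neo4j.conf; a minimal sketch, with illustrative values:

# neo4j.conf (Neo4j 3.2+)
dbms.transaction.timeout=150s
dbms.lock.acquisition.timeout=120s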
The following is based on an answer provided by Neo4j Support:
dbms.lock.acquisition.timeout
As a starting point, dbms.lock.acquisition.timeout was only added in Neo4j 3.2; it does not exist in 3.1. Without a lock acquisition timeout, wait times on locks can overrun the set limit, and things like GC pauses can extend the time further. So, as you're currently on 3.1.5, dbms.lock.acquisition.timeout would not yet be enforced.
dbms.transaction.timeout
dbms.transaction.timeout marks a transaction for termination, but the logic that checks the mark and performs the termination runs on the transaction's own thread while it is executing; it does not force a thread that is waiting on a lock to wake up and check.
Presumably the termination logic works like this: some other thread periodically checks each transaction's execution time and, if it has exceeded the transaction timeout, sets a boolean flag on the transaction marking it for termination. The actual termination likely happens in the transaction's execution loop, where the thread checks that flag, then terminates and rolls back. A thread that attempts to acquire a lock already held by another thread enters a waiting state; while it waits, its execution loop is not running, so it never reaches the point where it could see that it has been marked for termination and act on it.
Bottom line:
dbms.transaction.timeout does not cause a hard timeout; it only marks the transaction as timed out, which causes it to roll back once the flag is checked.
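
To make the mechanism concrete, here is a minimal Java sketch of the pattern described above. This is not Neo4j's actual code; the class and all names are illustrative.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TransactionSketch {
    private volatile boolean terminated = false;      // set by a watchdog on timeout

    void markForTermination() { terminated = true; }  // all the timeout does

    void runStatement(ReentrantLock entityLock) {
        entityLock.lock();                            // may block indefinitely; the
        try {                                         // check below is unreachable
            if (terminated) {                         // until the lock is acquired
                throw new IllegalStateException("transaction terminated");
            }
            // ... execute the statement ...
        } finally {
            entityLock.unlock();
        }
    }

    // What a lock acquisition timeout (3.2+) effectively adds: the wait is
    // bounded, so the thread regains control and can observe the flag.
    void runStatementWithBoundedWait(ReentrantLock entityLock) throws InterruptedException {
        while (!entityLock.tryLock(1, TimeUnit.SECONDS)) {
            if (terminated) {
                throw new IllegalStateException("terminated while waiting for lock");
            }
        }
        try {
            // ... execute the statement ...
        } finally {
            entityLock.unlock();
        }
    }
}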

What causes dask job failure with CancelledError exception

I have been seeing the error message below for quite some time now, but I could not figure out what leads to the failure.
Error:
concurrent.futures._base.CancelledError: ('sort_index-f23b0553686b95f2d91d4a3fda85f229', 7)
After a restart of the Dask cluster it runs successfully.
If running a dask-cloudprovider ECSCluster or FargateCluster the concurrent.futures._base.CancelledError can result from a long-running step in computation where there is no output (logging or otherwise) to the Client. In these cases, due to the lack of interaction with the client, the scheduler regards itself as "idle" and times out after the configured cloudprovider.ecs.scheduler_timeout period, which defaults to 5 minutes. The CancelledError error message is misleading, but if you look in the logs for the scheduler task itself it will record the idle timeout.
The solution is to set scheduler_timeout to a higher value, either via config or by passing directly to the ECSCluster/FargateCluster constructor.
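For example (a hedged sketch: the import path has moved between dask-cloudprovider releases, and the timeout value is illustrative):

from dask.distributed import Client
from dask_cloudprovider.aws import FargateCluster

# Raise the scheduler's idle timeout from the default 5 minutes so a long,
# silent computation does not get the cluster reaped mid-run.
cluster = FargateCluster(scheduler_timeout="2 hours")
client = Client(cluster)

The same can be done via Dask's configuration, using the cloudprovider.ecs.scheduler_timeout key mentioned above.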

How to protect against Redis::TimeoutError: Connection timed out on Heroku

This might seem like a silly question, but I figured someone here on StackOverflow might have some ideas around this. What the hell, right?
I'm using Heroku workers with 1X Dynos to run Resque. Sometimes I get this error: Redis::TimeoutError: Connection timed out. It happens in the redis gem; here's the stacktrace:
Redis::TimeoutError Connection timed out
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/connection/ruby.rb:55 rescue in _read_from_socket
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/connection/ruby.rb:48 _read_from_socket
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/connection/ruby.rb:41 gets
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/connection/ruby.rb:273 read
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:245 block in read
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:233 io
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:244 read
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:175 block in call_pipelined
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:214 block (2 levels) in process
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:340 ensure_connected
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:204 block in process
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:286 logging
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:203 process
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:174 call_pipelined
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:146 block in call_pipeline
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:273 with_reconnect
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:144 call_pipeline
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis.rb:2101 block in pipelined
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis.rb:37 block in synchronize
vendor/ruby-2.1.5/lib/ruby/2.1.0/monitor.rb:211 mon_synchronize
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis.rb:37 synchronize
vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis.rb:2097 pipelined
vendor/bundle/ruby/2.1.0/gems/redis-namespace-1.5.1/lib/redis/namespace.rb:413 namespaced_block
vendor/bundle/ruby/2.1.0/gems/redis-namespace-1.5.1/lib/redis/namespace.rb:265 pipelined
vendor/bundle/ruby/2.1.0/gems/resque-1.25.2/lib/resque.rb:214 push
vendor/bundle/ruby/2.1.0/gems/resque-1.25.2/lib/resque/job.rb:153 create
vendor/bundle/ruby/2.1.0/gems/resque_solo-0.1.0/lib/resque_ext/job.rb:7 create_solo
vendor/bundle/ruby/2.1.0/gems/resque-1.25.2/lib/resque.rb:317 enqueue_to
vendor/bundle/ruby/2.1.0/gems/resque-1.25.2/lib/resque.rb:298 enqueue
We have resque-retry set up, but I guess it doesn't matter if the enqueue call can't even connect to redis.
My current solution is to wrap every Resque.enqueue call with begin/rescue so that we can retry the enqueue call (as per https://github.com/resque/resque/issues/840); a sketch of that wrapper is below. But isn't there a better way?
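The wrapper, roughly (a hedged sketch; the method name, retried exception classes, retry count, and backoff are my own choices):

require 'resque'

# Retry a failed enqueue a few times with a short backoff before giving up.
def enqueue_with_retry(klass, *args, attempts: 3)
  tries = 0
  begin
    Resque.enqueue(klass, *args)
  rescue Redis::TimeoutError, Redis::CannotConnectError
    tries += 1
    raise if tries >= attempts
    sleep(0.5 * tries)  # simple linear backoff
    retry
  end
end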
Heroku Redis allows you to change your instance's timeout and maxmemory-policy settings. These settings will be kept across upgrades and HA failover.
From documentation:
The timeout setting sets the number of seconds Redis waits before killing idle connections. A value of zero means that connections will not be closed. The default value is 300 seconds (5 minutes). You can change this value using the CLI:
$ heroku redis:timeout maturing-deeply-2628 --seconds 60
Timeout for maturing-deeply-2628 (REDIS_URL) set to 60 seconds.
Connections to the redis instance will be stopped after idling for 60 seconds.
It looks like, in the case of Resque, 0 (do not time out idle connections) would be the best choice (--seconds 0).
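Concretely, reusing the instance name from the example above (yours will differ):
$ heroku redis:timeout maturing-deeply-2628 --seconds 0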

Debugging Erlang heart timeouts

I use the heart program to restart an Erlang node when it becomes unresponsive. However, I am finding it hard to understand why the node freezes. SASL logs don't show any errors, and my own logs don't seem to show anything remarkable happening at those times. Can anybody give advice on debugging this sort of thing?
By default the heart program issues a SIGKILL to kill off the unresponsive VM so it can quickly start a new one. This makes getting any useful information about the VM pretty much impossible. Something I've tried in the past is to patch the heart program to avoid the hard kill and instead get the VM to create a crash dump and a coredump. I used a patch like this (this one is for Erlang/OTP R14B02):
--- erts/etc/common/heart.c.orig 2011-04-17 12:11:24.000000000 -0400
+++ erts/etc/common/heart.c 2011-04-17 12:12:36.000000000 -0400
@@ -559,10 +559,11 @@
int res;
if(heart_beat_kill_pid != 0){
pid = (pid_t) heart_beat_kill_pid;
- res = kill(pid,SIGKILL);
+ res = kill(pid,SIGUSR1);
+ sleep(4);
for(i=0; i < 5 && res == 0; ++i){
sleep(1);
- res = kill(pid,SIGKILL);
+ res = kill(pid,i < 2 ? SIGQUIT : SIGKILL);
}
if(errno != ESRCH){
print_error("Unable to kill old process, "
As you can see, with this patch heart will first issue a SIGUSR1 to try to get the VM to create a crash dump. Since this can take a while, heart then sleeps for 4 seconds. You might have to increase this sleep time if you're not getting full crash dumps. After that, heart tries twice to issue a SIGQUIT in the hope of getting a coredump, and if that fails, issues a SIGKILL.
Note that this patch will slow down heart's VM restart due to the time required to wait for the crash dumps and coredumps. If you use it in production, be aware of this limitation.
You could try to call erlang:halt/1 from your HEART_COMMAND, thus creating a crash dump from the unresponsive node.
You can try using the erl_call tool, e.g. with -a 'erlang halt [123]'.
If the Erlang node can't respond to this, that is also interesting information.
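A hedged sketch of wiring that up (node name, cookie, and paths are illustrative; erlang:halt/1 is given a string here so the VM writes a crash dump with that slogan):

export HEART_COMMAND="/usr/local/bin/heart_restart.sh"

where heart_restart.sh is something like:

#!/bin/sh
# Ask the stuck node to write erl_crash.dump before replacing it.
erl_call -c mycookie -n mynode@myhost -a 'erlang halt ["heart timeout"]'
exec /usr/local/bin/start_node.sh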
Did you try increasing HEART_BEAT_TIMEOUT? Maybe the node is just bogged down a bit and misses the timeout, but doesn't actually freeze.
If you have any idea of why it is freezing you could try to trace the module using dbg.
http://www.erlang.org/doc/man/dbg.html
In short try
dbg:tracer(), dbg:p(all,c), dbg:tpl(Module, Function, x).
If you want to stop this tracing, issue
dbg:ctpl()
See documentation for more info.
Note: Change Module and Function to whatever you want to trace, leave x as it is. You can also skip Function and only give Module, x.
Warning: Running this on a live system can be dangerous as the amount of information that is going to be printed to the shell can be enormous.

JTA transaction timeout exception - weblogic 10.X

I changed the JTA transaction timeout from the admin console and set it to 300. Even after the change, it fails saying the JTA transaction unexpectedly rolled back (maybe due to a timeout) with a:
weblogic.transaction.RollbackException: Transaction timed out after 181 seconds
To make sure my change (timeout value 300) was reflected for that domain, I checked the domain's config.xml; it does show 300.
My question is: is there any other place where I also need to update the transaction timeout value, and do I need to restart the server?
Full stack trace after the exception from server below:
Caused by: org.springframework.transaction.UnexpectedRollbackException: JTA transaction unexpectedly rolled back (maybe due to a timeout); nested exception is weblogic.transaction.RollbackException: Transaction
timed out after 180 seconds
BEA1-160A800A149091F72E5E
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1031)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:709)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:678)
at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:359)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy103.saveRegistryData(Unknown Source)
at gov.cms.pqri.arch.submission.registry.bean.RegDataAccessManager.persistRegistry(RegDataAccessManager.java:54)
... 14 more
Caused by: weblogic.transaction.RollbackException: Transaction timed out after 180 seconds
BEA1-160A800A149091F72E5E
at weblogic.transaction.internal.TransactionImpl.throwRollbackException(TransactionImpl.java:1818)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:333)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:227)
at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:281)
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1028)
... 22 more
After changing the Stuck Thread Max Time to 300 under Servers -> Configuration -> Tuning (tab) in the admin console, it is getting updated and working fine.
I have also come across this issue and resolved it. Since this is related to a JTA transaction, we need to increase the JTA timeout as well, along with the Stuck Thread Max Time. Click on JTA from the WebLogic console home page and increase the JTA timeout from 30 (the default) to 300.
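If you prefer scripting the change, a hedged WLST sketch (URL, credentials, and domain name are illustrative):

connect('weblogic', 'password', 't3://localhost:7001')
edit()
startEdit()
cd('/JTA/mydomain')           # the JTA MBean sits under the domain name
cmo.setTimeoutSeconds(300)    # same value as the console's JTA Timeout Seconds
save()
activate()
disconnect()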
We met the same issue on WebLogic 12.1.2 (JTA transaction unexpectedly rolled back (maybe due to a timeout)). After investigating, we found the root cause of the problem. In my opinion it occurs when a huge dataset is processed transactionally and an exception is thrown near the end of the process: JTA then rolls the data back as expected, but it does not give the details of the error. In our case it was mostly caused by database integrity constraints (e.g. trying to insert data into a column smaller than the data).
In summary, the best way forward is to investigate the DB logs instead of increasing the Stuck Thread Max Time. Raising the thread max time can be a workaround, but it is not a proper solution for real enterprise systems.
This issue is also discussed in another Stack Overflow question and a Hibernate JIRA issue.
The solution suggested there:
This is the default behaviour of the WebLogic JTA implementation. To obtain the root exception you should set the system property weblogic.transaction.allowOverrideSetRollbackReason to true.
One of the solutions is to add this line to /bin/setDomainEnv.cmd:
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dweblogic.transaction.allowOverrideSetRollbackReason=true
I got my JTA timeouts increased by adding a jta.properties file into the config folder of my app (these are Atomikos transaction manager settings) with the lines:
com.atomikos.icatch.default_jta_timeout=600000
com.atomikos.icatch.max_timeout=600000
