QuickFIX/J - Connection getting forcibly closed by remote host when receiving multiple SecurityDefinitions - quickfixj

I am processing security definitions coming from the ICE exchange using QuickFIX/J. For markets that have fewer than about 1000-1200 securities, I receive them via multiple messages (100 securities per SecurityDefinition message (35=d)) without any issues.
However, when a market has more securities than that, my session gets disconnected after 14 messages with the error below:
'Socket exception: java.io.IOException: An existing connection was forcibly closed by the remote host'
I can still see message 15 in the logs, but it never reaches the fromApp method.
I understand that this kind of issue can happen if too much processing is done in the fromApp method. I checked my fromApp method: I was only putting the message onto a queue that gets processed by another thread, nothing else.
To troubleshoot this, I removed all logic from fromApp and for now just print the number of messages received so far. Even with this, my session gets disconnected after 14 messages (whereas more than 50 messages should have arrived).
@Override
public void fromApp(quickfix.Message message, SessionID sessionID)
        throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType {
    try {
        counter++; // instance field counting messages received so far
        System.out.println("Inside from App, message : " + counter);
    } catch (Exception e) {
        System.out.println("Failed to process the underlying. Error message: " + e.getMessage());
    }
}
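For reference, the queue-based hand-off described above might look roughly like the following minimal sketch; the field, method, and thread names are illustrative assumptions, not the actual code:

// Minimal sketch of the hand-off pattern described above (names are assumptions).
private final java.util.concurrent.BlockingQueue<quickfix.Message> inbound =
        new java.util.concurrent.LinkedBlockingQueue<>();

@Override
public void fromApp(quickfix.Message message, SessionID sessionID) {
    // Do no heavy work here: enqueue and return immediately so the
    // QFJ receiver thread can keep draining the socket.
    inbound.offer(message);
}

// A separate worker thread takes messages off the queue and does the real work.
private void startWorker() {
    Thread worker = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                quickfix.Message message = inbound.take();
                // ... process the SecurityDefinition (35=d) here ...
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }, "security-definition-worker");
    worker.setDaemon(true);
    worker.start();
}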
I have checked the configuration file as well and played with it to see if any of the validation settings is causing this, but no luck.
Here is the config file I am using:
[default]
FileStorePath=C:\Test
BeginString=FIX.4.4
ConnectionType=initiator
[session]
SenderCompID=XXXX
SenderSubID=X
SenderLocationID=X
TargetCompID=ICE
SocketConnectHost=XX.XXX.XXX.XXX
SocketConnectPort=80
StartTime=00:00:00
EndTime=23:59:59
HeartBtInt=60
ReconnectInterval=5
UseDataDictionary=Y
ResetOnLogon=Y
ForceResync=Y
CheckLatency=N
I am expecting to receive all 50 messages instead of the 14 messages followed by the session-disconnect error I am getting right now. Any help or ideas on why this issue is happening and how I can fix it would be really appreciated.
Thanks.
Logs as mentioned in the comments (total number of securities: 19293; expected message count (35=d): 193 for one security definition request; received: 13, with a session disconnect after that):
Inside Logon
2019-09-08 12:43:46 INFO DefaultSessionSchedule:87 - [FIX.4.4:xxxx/x/x->ICE] daily, 00:00:00-UTC - 23:59:59-UTC
<20190908-10:43:46, FIX.4.4:xxxx/x/x->ICE, event> (Session FIX.4.4:xxxx/x/x->ICE schedule is daily, 00:00:00-UTC - 23:59:59-UTC)
<20190908-10:43:46, FIX.4.4:xxxx/x/x->ICE, event> (Created session: FIX.4.4:xxxx/x/x->ICE)
inside oncreate
<20190908-10:43:46, FIX.4.4:xxxx/x/x->ICE, event> (Configured socket addresses for session: [/63.247.113.249:80])
2019-09-08 12:43:46 INFO ThreadedSocketInitiator:321 - SessionTimer started
<20190908-10:43:46, FIX.4.4:xxxx/x/x->ICE, event> (MINA session created: local=/10.72.59.226:54361, class org.apache.mina.transport.socket.nio.NioSocketSession, remote=/63.247.113.249:80)
<20190908-10:43:47, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=108 35=A 34=1 49=xxxx 50=1 52=20190908-10:43:47.860 56=ICE 142=0 553=xxxxxxxx 554=xxxxxxx 98=0 108=30 141=Y 10=098 )
<20190908-10:43:47, FIX.4.4:xxxx/x/x->ICE, event> (Initiated logon request)
2019-09-08 12:43:48 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:43:48, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=73 35=A 49=ICE 34=1 52=20190908-10:43:47.519 56=xxxx 57=1 98=0 108=30 141=Y 10=189 )
<20190908-10:43:48, FIX.4.4:xxxx/x/x->ICE, event> (Logon contains ResetSeqNumFlag=Y, resetting sequence numbers to 1)
<20190908-10:43:48, FIX.4.4:xxxx/x/x->ICE, event> (Received logon)
2019-09-08 12:44:18 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:18, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=55 35=0 49=ICE 34=2 52=20190908-10:44:17.563 56=xxxx 57=1 10=100 )
<20190908-10:44:18, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=3 49=xxxx 50=1 52=20190908-10:44:18.844 56=ICE 142=0 10=099 )
2019-09-08 12:44:18 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:18, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=64 35=2 49=ICE 34=3 52=20190908-10:44:18.459 56=xxxx 57=1 7=2 16=2 10=234 )
<20190908-10:44:18, FIX.4.4:xxxx/x/x->ICE, event> (Received ResendRequest FROM: 2 TO: 2)
inside toApp
<20190908-10:44:18, FIX.4.4:xxxx/x/x->ICE, event> (Resending message: 2)
<20190908-10:44:18, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=134 35=c 34=2 43=Y 49=xxxx 50=1 52=20190908-10:44:18.963 56=ICE 122=20190908-10:43:48.598 142=0 48=2 320=1567939428584_0 321=3 461=FXXXXX 10=178 )
2019-09-08 12:44:19 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:19, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=61 35=1 49=ICE 34=4 52=20190908-10:44:18.564 56=xxxx 57=1 112=1 10=105 )
<20190908-10:44:19, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=67 35=0 34=4 49=xxxx 50=1 52=20190908-10:44:19.064 56=ICE 142=0 112=1 10=104 )
2019-09-08 12:44:19 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:19, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=73 35=1 49=ICE 34=5 52=20190908-10:44:18.578 56=xxxx 57=1 112=1567939458579 10=255 )
<20190908-10:44:19, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=79 35=0 34=5 49=xxxx 50=1 52=20190908-10:44:19.078 56=ICE 142=0 112=1567939458579 10=254 )
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:20, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=67593 35=d 49=ICE 34=6 52=20190908-10:44:19.844 56=xxxx 57=1 322=7250 323=4 320=1567939428584_0 393=19238 82=193 67=1 9064=0 711=100 311=5832793 ..
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 1:
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:20, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=60808 35=d 49=ICE 34=7 52=20190908-10:44:19.844 56=xxxx 57=1 322=7251 323=4 320=1567939428584_0 393=19238 82=193 67=2 9064=0 711=100 311=5832754 ...
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 2:
<20190908-10:44:20, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=56951 35=d 49=ICE 34=8 52=20190908-10:44:19.844 56=xxxx 57=1 322=7252 323=4 320=1567939428584_0 393=19238 82=193 67=3 9064=0 711=100 311=5964169 ...
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 3:
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
2019-09-08 12:44:20 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
<20190908-10:44:20, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=60133 35=d 49=ICE 34=9 52=20190908-10:44:20.070 56=xxxx 57=1 322=7253 323=4 320=1567939428584_0 393=19238 82=193 67=4 9064=0 711=100 311=5931984 ...
2019-09-08 12:44:25 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 4:
<20190908-10:44:25, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=59412 35=d 49=ICE 34=10 52=20190908-10:44:20.184 56=xxxx 57=1 322=7254 323=4 320=1567939428584_0 393=19238 82=193 67=5 9064=0 711=100 311=5932131 ...
2019-09-08 12:44:25 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 5:
<20190908-10:44:25, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=58564 35=d 49=ICE 34=11 52=20190908-10:44:20.184 56=xxxx 57=1 322=7255 323=4 320=1567939428584_0 393=19238 82=193 67=6 9064=0 711=100 311=6063274 ..
2019-09-08 12:44:25 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 6:
<20190908-10:44:25, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=58085 35=d 49=ICE 34=12 52=20190908-10:44:20.296 56=xxxx 57=1 322=7256 323=4 320=1567939428584_0 393=19238 82=193 67=7 9064=0 711=100 311=6063858 ...
2019-09-08 12:44:43 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 7:
<20190908-10:44:43, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=58654 35=d 49=ICE 34=13 52=20190908-10:44:20.296 56=xxxx 57=1 322=7257 323=4 320=1567939428584_0 393=19238 82=193 67=8 9064=0 711=100 311=5867472 ....
2019-09-08 12:44:43 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 8:
<20190908-10:44:43, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=59786 35=d 49=ICE 34=14 52=20190908-10:44:20.296 56=xxxx 57=1 322=7258 323=4 320=1567939428584_0 393=19238 82=193 67=9 9064=0 711=100 311=5867638 ..
2019-09-08 12:44:43 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 9:
<20190908-10:44:43, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=57341 35=d 49=ICE 34=15 52=20190908-10:44:20.296 56=xxxx 57=1 322=7259 323=4 320=1567939428584_0 393=19238 82=193 67=10 9064=0 711=100 311=5867886 3..
<20190908-10:45:17, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=6 49=xxxx 50=1 52=20190908-10:44:49.846 56=ICE 142=0 10=108 )
<20190908-10:45:17, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=57999 35=d 49=ICE 34=16 52=20190908-10:44:20.357 56=xxxx 57=1 322=7260 323=4 320=1567939428584_0 393=19238 82=193 67=11 9064=0 711=100 311=6097650 ..
security message no : 10:
2019-09-08 12:45:17 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 11:
<20190908-10:45:17, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=63224 35=d 49=ICE 34=17 52=20190908-10:44:20.357 56=xxxx 57=1 322=7261 323=4 320=1567939428584_0 393=19238 82=193 67=12 9064=0 711=100 311=5770217 ...
2019-09-08 12:45:17 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 1
security message no : 12:
<20190908-10:45:17, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=54954 35=d 49=ICE 34=18 52=20190908-10:44:20.357 56=xxxx 57=1 322=7262 323=4 320=1567939428584_0 393=19238 82=193 67=13 9064=0 711=100 311=5868773..
<20190908-10:45:53, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=7 49=xxxx 50=1 52=20190908-10:45:19.846 56=ICE 142=0 10=107 )
<20190908-10:45:53, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=8 49=xxxx 50=1 52=20190908-10:45:53.875 56=ICE 142=0 10=108 )
security message no : 13:
<20190908-10:45:53, FIX.4.4:xxxx/x/x->ICE, event> (Disconnecting: Encountered END_OF_STREAM)
inside onLogOut
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(Unknown Source)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.write(Unknown Source)
at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
at org.apache.mina.transport.socket.nio.NioProcessor.write(NioProcessor.java:384)
at org.apache.mina.transport.socket.nio.NioProcessor.write(NioProcessor.java:47)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.writeBuffer(AbstractPollingIoProcessor.java:1107)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.flushNow(AbstractPollingIoProcessor.java:994)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.flush(AbstractPollingIoProcessor.java:921)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:688)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
2019-09-08 12:45:54 DEBUG AbstractIoService:338 - awaitTermination on (nio socket connector: managedSessionCount: 0) called by thread=[QFJ Timer]
2019-09-08 12:45:54 DEBUG AbstractIoService:340 - awaitTermination on (nio socket connector: managedSessionCount: 0) finished
<20190908-10:45:54, FIX.4.4:xxxx/x/x->ICE, event> (MINA session created: local=/10.72.59.226:54391, class org.apache.mina.transport.socket.nio.NioSocketSession, remote=/63.247.113.249:80)
inside add credetials
<20190908-10:45:55, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=108 35=A 34=1 49=xxxx 50=1 52=20190908-10:45:55.850 56=ICE 142=0 553=xxxxxxxx 554=xxxxxxx 98=0 108=30 141=Y 10=098 )
<20190908-10:45:55, FIX.4.4:xxxx/x/x->ICE, event> (Initiated logon request)
2019-09-08 12:45:55 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 2
<20190908-10:45:55, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=73 35=A 49=ICE 34=1 52=20190908-10:45:55.495 56=xxxx 57=1 98=0 108=30 141=Y 10=193 )
<20190908-10:45:55, FIX.4.4:xxxx/x/x->ICE, event> (Logon contains ResetSeqNumFlag=Y, resetting sequence numbers to 1)
<20190908-10:45:55, FIX.4.4:xxxx/x/x->ICE, event> (Received logon)
<20190908-10:46:25, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=2 49=xxxx 50=1 52=20190908-10:46:25.848 56=ICE 142=0 10=102 )
2019-09-08 12:46:26 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 2
<20190908-10:46:26, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=55 35=0 49=ICE 34=2 52=20190908-10:46:25.684 56=xxxx 57=1 10=105 )
<20190908-10:46:55, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=3 49=xxxx 50=1 52=20190908-10:46:55.848 56=ICE 142=0 10=106 )
2019-09-08 12:46:56 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 2
<20190908-10:46:56, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=55 35=0 49=ICE 34=3 52=20190908-10:46:55.689 56=xxxx 57=1 10=114 )
<20190908-10:47:25, FIX.4.4:xxxx/x/x->ICE, outgoing> (8=FIX.4.4 9=61 35=0 34=4 49=xxxx 50=1 52=20190908-10:47:25.848 56=ICE 142=0 10=105 )
2019-09-08 12:47:26 DEBUG ProtocolCodecFilter:233 - Processing a MESSAGE_RECEIVED for session 2
<20190908-10:47:26, FIX.4.4:xxxx/x/x->ICE, incoming> (8=FIX.4.4 9=55 35=0 49=ICE 34=4 52=20190908-10:47:25.692 56=xxxx 57=1 10=107 )

My guess is that the connection is dropped because you do not process your messages in time. ICE is probably waiting for heartbeats that you send out way too late (they would probably tell you that if you asked) and drops the connection because of that. Unfortunately I cannot tell from the log alone why it is so slow.
But the slowness starts after the incoming message with seqnum 9. You can see that up to that point the log timestamp and the 52/SendingTime of the message nearly match.
Incoming message 10 is already delayed by 5 seconds.
The heartbeat that you send out with seqnum 6 is delayed by 28 seconds.
Incoming message 16 is almost a minute late.
At first I suspected the "client" through which you connect, but that would not slow down your outgoing heartbeats.
It could be that your application has too little heap space configured and is hence doing a lot of garbage collection. Logging and processing FIX messages of 60 kB is probably not that common. Or, for some reason, QFJ really needs a long time to process such big messages. A series of stack traces or a profiler would help to diagnose this further.
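For example, a few thread dumps taken while the slow stretch is happening would show where the time goes, and jstat gives a quick view of GC activity (standard JDK tools; <pid> stands for your application's process id):

# take several thread dumps a few seconds apart while messages are arriving
jstack <pid> > dump1.txt
jstack <pid> > dump2.txt
# sample GC utilisation every second, 10 times
jstat -gcutil <pid> 1000 10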
I would try two things separately:
completely disable logging, i.e. set the level to WARN or ERROR so that the long messages are no longer logged, and re-test whether the problem persists;
increase the heap memory of your application and re-test whether the problem persists.
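As a rough sketch of both experiments (the property file name, appender name, and heap values are assumptions; if the <...> lines come from QFJ's ScreenLogFactory, they are controlled via session settings such as ScreenLogShowIncoming=N rather than through log4j):

# log4j.properties: raise the root threshold so the 60 kB messages are not logged
log4j.rootLogger=WARN, stdout

# and, separately, restart the application with a larger heap, e.g.
java -Xms2g -Xmx4g -jar your-ice-client.jar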

Related

How to use the Keycloak jboss/keycloak:15.0.2 docker image?

I am currently using Keycloak v11 running in a docker container. I would like to migrate to v15, but I want to test it before migrating. I pulled the latest docker image jboss/keycloak:15.0.2 and simply ran:
docker run -d -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8080:8080 --name kc-v15 jboss/keycloak:15.0.2
When I take a look at the logs, I see multiple warnings and errors. The full stack trace is below.
Any help will be appreciated.
21:14:50,555 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-8,ejb,04cbb180fd8b) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
... 31 more
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,569 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (ServerService Thread Pool -- 61) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1348)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,573 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (ServerService Thread Pool -- 58) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,590 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-7,ejb,04cbb180fd8b) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
... 31 more
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,590 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-8,ejb,04cbb180fd8b) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:21)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
... 31 more
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:18:51,056 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 62) MSC000001: Failed to start service org.wildfly.clustering.infinispan.cache-container.keycloak: org.jboss.msc.service.StartException in service org.wildfly.clustering.infinispan.cache-container.keycloak: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:66)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1348)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.infinispan#11.0.9.Final//org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:751)
at org.infinispan#11.0.9.Final//org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:717)
at org.jboss.as.clustering.infinispan#23.0.2.Final//org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.get(CacheContainerServiceConfigurator.java:123)
at org.jboss.as.clustering.infinispan#23.0.2.Final//org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.get(CacheContainerServiceConfigurator.java:76)
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:63)
... 7 more
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:560)
at org.infinispan#11.0.9.Final//org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:341)
at org.infinispan#11.0.9.Final//org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237)
at org.infinispan#11.0.9.Final//org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:746)
... 11 more
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:592)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:583)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:552)
... 40 more
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:18:51,123 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 59) MSC000001: Failed to start service org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector: org.jboss.msc.service.StartException in service org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:66)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:560)
at org.infinispan#11.0.9.Final//org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237)
at org.wildfly.clustering.infinispan.spi#23.0.2.Final//org.wildfly.clustering.infinispan.spi.service.CacheServiceConfigurator.get(CacheServiceConfigurator.java:55)
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:63)
... 7 more
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:82)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:592)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:583)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:552)
... 22 more
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:18:51,249 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,255 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "ejb3"),
("service" => "remote")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector" => "org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,258 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "async-operations")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,261 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "blocking")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,261 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "expiration")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,261 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "listener")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,262 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "non-blocking")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,263 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "persistence")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,264 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "remote-command")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,266 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "state-transfer")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,266 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "transport")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
The reason for this error was that I had another Keycloak v11 container running on my dev machine; stopping that container solved the issue. One of my questions is:
The goal of containers is isolation, so why does one container impact another?
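A plausible explanation (an assumption based on these logs, not verified here): both containers sat on Docker's default bridge network, and the jboss/keycloak image appears to run WildFly in clustered (HA) mode here, as the Infinispan cluster messages show, so the v15 node discovered the v11 node over JGroups and tried to join its cluster. Notice that the coordinator named in the errors, 8eb22ce71ea3, is the other container's id, while this node is 04cbb180fd8b; the Infinispan version mismatch then surfaces as the "Unknown command id" marshalling errors. Running the test instance on its own Docker network keeps the two apart, e.g. (port 8081 chosen to avoid clashing with the running v11 container):

docker network create kc-test
docker run -d --network kc-test \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  -p 8081:8080 --name kc-v15 jboss/keycloak:15.0.2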

Spring amqp stop endless retrying loop, if rabbitmq service is not working

I'm having trouble stopping the endless retry loop when the RabbitMQ server is down. I tried using this code snippet:
@Bean(name = "rabbitListenerContainerFactory")
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    // retry recovery once after 5000 ms, then stop the container
    BackOff recoveryBackOff = new FixedBackOff(5000, 1);
    factory.setRecoveryBackOff(recoveryBackOff);
    return factory;
}
But I am still getting an endless retry loop:
17:49:47,417 DEBUG o.s.a.r.l.BlockingQueueConsumer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#13-1] Starting consumer Consumer#4f35699: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
2020-08-31 17:49:49,431 WARN o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#34-2] stopping container - restart recovery attempts exhausted
2020-08-31 17:49:49,431 DEBUG o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#34-2] Shutting down Rabbit listener container
2020-08-31 17:49:49,431 INFO o.s.a.r.c.CachingConnectionFactory [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#13-1] Attempting to connect to: [localhost:5672]
2020-08-31 17:49:49,431 INFO o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#34-2] Waiting for workers to finish.
2020-08-31 17:49:49,431 INFO o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#34-2] Successfully waited for workers to finish.
2020-08-31 17:49:49,431 DEBUG o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#34-2] Cancelling Consumer#e56de36: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
2020-08-31 17:49:49,431 DEBUG o.s.a.r.l.BlockingQueueConsumer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#34-2] Closing Rabbit Channel: null
2020-08-31 17:49:51,434 DEBUG o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#13-1] Recovering consumer in 5000 ms.
2020-08-31 17:49:51,434 DEBUG o.s.a.r.l.SimpleMessageListenerContainer [main] Starting Rabbit listener container.
2020-08-31 17:49:51,434 INFO o.s.a.r.c.CachingConnectionFactory [main] Attempting to connect to: [localhost:5672]
2020-08-31 17:49:53,485 INFO o.s.a.r.l.SimpleMessageListenerContainer [main] Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused: connect
2020-08-31 17:49:53,485 INFO o.s.a.r.c.CachingConnectionFactory [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#35-1] Attempting to connect to: [localhost:5672]
2020-08-31 17:49:55,518 ERROR o.s.a.r.l.SimpleMessageListenerContainer [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#35-1] Failed to check/redeclare auto-delete queue(s).
org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:62) ~[spring-rabbit-2.1.7.RELEASE.jar:2.1.7.RELEASE]
What I am trying to achieve is that, after one attempt to connect, it stops trying.
Edit 1
I have my whole consumer-side config here; I can't seem to find where another configurable container factory could be. The thing is, if I set the fixed back-off to, for example, 3000 ms, the message changes to "Recovering consumer in 3000 ms."
@Configuration
@EnableRabbit
@AllArgsConstructor
public class RabbitConfig {

    @Bean
    public MessageConverter jsonMessageConverter() {
        ObjectMapper jsonObjectMapper = new ObjectMapper();
        jsonObjectMapper.registerModule(new JavaTimeModule());
        return new Jackson2JsonMessageConverter(jsonObjectMapper);
    }

    @Bean
    public RabbitErrorHandler rabbitExceptionHandler() {
        return new RabbitErrorHandler();
    }

    @Bean(name = "rabbitListenerContainerFactory")
    public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(
            SimpleRabbitListenerContainerFactoryConfigurer configurer,
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        BackOff recoveryBackOff = new FixedBackOff(5000, 1);
        factory.setRecoveryBackOff(recoveryBackOff);
        return factory;
    }
}
I'm using version 2.1.7
I just copied your code and it worked as expected:
2020-08-31 11:58:08.997 INFO 94891 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-08-31 11:58:09.002 INFO 94891 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused (Connection refused)
2020-08-31 11:58:09.006 INFO 94891 --- [ntContainer#0-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-08-31 11:58:09.014 INFO 94891 --- [ main] com.example.demo.So63673274Application : Started So63673274Application in 0.929 seconds (JVM running for 1.39)
2020-08-31 11:58:14.091 WARN 94891 --- [ntContainer#0-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused (Connection refused)
2020-08-31 11:58:14.093 INFO 94891 --- [ntContainer#0-1] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer#6f2cb653: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
2020-08-31 11:58:14.094 INFO 94891 --- [ntContainer#0-2] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-08-31 11:58:14.095 WARN 94891 --- [ntContainer#0-2] o.s.a.r.l.SimpleMessageListenerContainer : stopping container - restart recovery attempts exhausted
2020-08-31 11:58:14.095 INFO 94891 --- [ntContainer#0-2] o.s.a.r.l.SimpleMessageListenerContainer : Waiting for workers to finish.
2020-08-31 11:58:14.096 INFO 94891 --- [ntContainer#0-2] o.s.a.r.l.SimpleMessageListenerContainer : Successfully waited for workers to finish.
I see you have
stopping container - restart recovery attempts exhausted
too.
Perhaps the listener that keeps trying, the one logging "Recovering consumer in 5000 ms.", is from a different container factory?
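One way to check that suspicion (a hypothetical listener; the queue name is an assumption): pin each @RabbitListener to the factory configured above via its containerFactory attribute. Listeners that do not name a factory use the bean called rabbitListenerContainerFactory, while a second factory defined elsewhere under another name would keep the default behaviour of recovering every 5000 ms forever.

@RabbitListener(queues = "someQueue", containerFactory = "rabbitListenerContainerFactory")
public void onMessage(String payload) {
    // handle the message
}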

Kylin start fail java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata#hbase

I installed Kylin using https://github.com/cas-packone/ambari-kylin-service/ and it fails to start with the following log:
2020-01-13 01:52:16,673 INFO [main] utils.Compatibility:41 : Running in ZooKeeper 3.4.x compatibility mode
2020-01-13 01:52:16,710 INFO [main] imps.CuratorFrameworkImpl:284 : Starting
2020-01-13 01:52:16,715 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=node1:2181 sessionTimeout=120000 watcher=org.apache.curator.ConnectionState#6c000e0c
2020-01-13 01:52:16,717 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn:1013 : Opening socket connection to server node1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-01-13 01:52:16,719 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn:856 : Socket connection established, initiating session, client: /127.0.0.1:49768, server: node1/127.0.0.1:2181
2020-01-13 01:52:16,721 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn:1273 : Session establishment complete on server node1/127.0.0.1:2181, sessionid = 0x16f798087cc03af, negotiated timeout = 60000
2020-01-13 01:52:16,722 INFO [main] imps.CuratorFrameworkImpl:326 : Default schema
2020-01-13 01:52:16,725 DEBUG [main] util.ZookeeperDistributedLock:142 : 6521#node1 trying to lock /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2020-01-13 01:52:16,730 INFO [main-EventThread] state.ConnectionStateManager:237 : State change: CONNECTED
Exception in thread "main" java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata#hbase
at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:89)
at org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:101)
at org.apache.kylin.rest.service.AclTableMigrationTool.checkIfNeedMigrate(AclTableMigrationTool.java:94)
at org.apache.kylin.tool.AclTableMigrationCLI.main(AclTableMigrationCLI.java:41)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:83)
... 3 more
Caused by: java.lang.NoSuchMethodError: org.apache.curator.framework.api.CreateBuilder.creatingParentsIfNeeded()Lorg/apache/curator/framework/api/ProtectACLCreateModePathAndBytesable;
at org.apache.kylin.storage.hbase.util.ZookeeperDistributedLock.lock(ZookeeperDistributedLock.java:145)
at org.apache.kylin.storage.hbase.util.ZookeeperDistributedLock.lock(ZookeeperDistributedLock.java:166)
at org.apache.kylin.storage.hbase.HBaseConnection.createHTableIfNeeded(HBaseConnection.java:305)
at org.apache.kylin.storage.hbase.HBaseResourceStore.createHTableIfNeeded(HBaseResourceStore.java:110)
at org.apache.kylin.storage.hbase.HBaseResourceStore.<init>(HBaseResourceStore.java:91)
... 8 more
2020-01-13 01:52:16,772 INFO [Curator-Framework-0] imps.CuratorFrameworkImpl:924 : backgroundOperationsLoop exiting
2020-01-13 01:52:16,773 INFO [Thread-1] zookeeper.ReadOnlyZKClient:344 : Close zookeeper connection 0x7b94089b to node1:2181
2020-01-13 01:52:16,775 INFO [ReadOnlyZKClient-node1:2181#0x7b94089b] zookeeper.ZooKeeper:692 : Session: 0x16f798087cc03ae closed
2020-01-13 01:52:16,775 INFO [ReadOnlyZKClient-node1:2181#0x7b94089b-EventThread] zookeeper.ClientCnxn:517 : EventThread shut down
2020-01-13 01:52:16,777 INFO [Thread-4] zookeeper.ZooKeeper:692 : Session: 0x16f798087cc03af closed
2020-01-13 01:52:16,777 INFO [main-EventThread] zookeeper.ClientCnxn:517 : EventThread shut down
ERROR: Unknown error. Please check full log.

Dataload Issue: Neo4j Down due to "UnderlyingStorageException: Error performing check point" error

While loading data into Neo4j, I get the error below and Neo4j stops working properly. My memory configuration is:
dbms.memory.pagecache.size=10G
dbms.memory.heap.initial_size=10G
dbms.memory.heap.max_size=30G
and ulimit -n is 50000. Please help me fix this issue.
Background: We have developed Talend ETL jobs that fetch data from Postgres tables and load it into a Neo4j graph database. While these jobs run, Neo4j throws the error below and stops working properly; it also asks for the Neo4j application to be restarted.
Has anyone faced this issue and can help me out?
2019-03-13 03:55:34.312+0000 ERROR Failed to start transaction. The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.
org.neo4j.graphdb.TransactionFailureException: The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.
at org.neo4j.kernel.impl.factory.ClassicCoreSPI.beginTransaction(ClassicCoreSPI.java:198)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacade.beginTransactionInternal(GraphDatabaseFacade.java:610)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacade.beginTransaction(GraphDatabaseFacade.java:405)
at org.neo4j.server.rest.transactional.TransitionalTxManagementKernelTransaction.startTransaction(TransitionalTxManagementKernelTransaction.java:123)
at org.neo4j.server.rest.transactional.TransitionalTxManagementKernelTransaction.<init>(TransitionalTxManagementKernelTransaction.java:51)
at org.neo4j.server.rest.transactional.TransitionalPeriodTransactionMessContainer.newTransaction(TransitionalPeriodTransactionMessContainer.java:55)
at org.neo4j.server.rest.transactional.TransactionHandle.ensureActiveTransaction(TransactionHandle.java:213)
at org.neo4j.server.rest.transactional.TransactionHandle.commit(TransactionHandle.java:156)
at org.neo4j.server.rest.web.TransactionalService.lambda$executeStatementsAndCommit$1(TransactionalService.java:218)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:302)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1510)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.neo4j.server.rest.dbms.AuthorizationEnabledFilter.doFilter(AuthorizationEnabledFilter.java:123)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.neo4j.server.rest.web.CollectUserAgentFilter.doFilter(CollectUserAgentFilter.java:69)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:258)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.neo4j.kernel.api.exceptions.TransactionFailureException: The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.neo4j.kernel.internal.DatabaseHealth.assertHealthy(DatabaseHealth.java:63)
at org.neo4j.kernel.impl.api.Kernel.newTransaction(Kernel.java:94)
at org.neo4j.kernel.impl.factory.ClassicCoreSPI.beginTransaction(ClassicCoreSPI.java:190)
... 40 more
Caused by: org.neo4j.kernel.impl.store.UnderlyingStorageException: Error performing check point
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.constructCombinedFailure(CheckPointScheduler.java:100)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at org.neo4j.helpers.NamedThreadFactory$2.run(NamedThreadFactory.java:109)
Suppressed: org.neo4j.kernel.impl.store.kvstore.RotationTimeoutException: Failed to rotate logs. Expected version: 29701778, actual version: 29665951, wait timeout (ms): 600059
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:79)
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:52)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:304)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:281)
at org.neo4j.kernel.impl.store.counts.CountsTracker.rotate(CountsTracker.java:155)
at org.neo4j.kernel.impl.store.NeoStores.flush(NeoStores.java:242)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.flushAndForce(RecordStorageEngine.java:465)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.doCheckPoint(CheckPointerImpl.java:160)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.checkPointIfNeeded(CheckPointerImpl.java:134)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:64)
... 8 more
Suppressed: org.neo4j.kernel.impl.store.kvstore.RotationTimeoutException: Failed to rotate logs. Expected version: 29710329, actual version: 29665951, wait timeout (ms): 600046
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:79)
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:52)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:304)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:281)
at org.neo4j.kernel.impl.store.counts.CountsTracker.rotate(CountsTracker.java:155)
at org.neo4j.kernel.impl.store.NeoStores.flush(NeoStores.java:242)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.flushAndForce(RecordStorageEngine.java:465)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.doCheckPoint(CheckPointerImpl.java:160)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.checkPointIfNeeded(CheckPointerImpl.java:134)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:64)
... 8 more
Suppressed: org.neo4j.kernel.impl.store.kvstore.RotationTimeoutException: Failed to rotate logs. Expected version: 29725997, actual version: 29665951, wait timeout (ms): 600083
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:79)
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:52)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:304)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:281)
at org.neo4j.kernel.impl.store.counts.CountsTracker.rotate(CountsTracker.java:155)
at org.neo4j.kernel.impl.store.NeoStores.flush(NeoStores.java:242)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.flushAndForce(RecordStorageEngine.java:465)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.doCheckPoint(CheckPointerImpl.java:160)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.checkPointIfNeeded(CheckPointerImpl.java:134)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:64)
... 8 more
Suppressed: org.neo4j.kernel.impl.store.kvstore.RotationTimeoutException: Failed to rotate logs. Expected version: 29727712, actual version: 29665951, wait timeout (ms): 600045
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:79)
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:52)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:304)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:281)
at org.neo4j.kernel.impl.store.counts.CountsTracker.rotate(CountsTracker.java:155)
at org.neo4j.kernel.impl.store.NeoStores.flush(NeoStores.java:242)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.flushAndForce(RecordStorageEngine.java:465)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.doCheckPoint(CheckPointerImpl.java:160)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.checkPointIfNeeded(CheckPointerImpl.java:134)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:64)
... 8 more
Suppressed: org.neo4j.kernel.impl.store.kvstore.RotationTimeoutException: Failed to rotate logs. Expected version: 29729405, actual version: 29665951, wait timeout (ms): 600052
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:79)
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:52)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:304)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:281)
at org.neo4j.kernel.impl.store.counts.CountsTracker.rotate(CountsTracker.java:155)
at org.neo4j.kernel.impl.store.NeoStores.flush(NeoStores.java:242)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.flushAndForce(RecordStorageEngine.java:465)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.doCheckPoint(CheckPointerImpl.java:160)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.checkPointIfNeeded(CheckPointerImpl.java:134)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:64)
... 8 more
Suppressed: org.neo4j.kernel.impl.store.kvstore.RotationTimeoutException: Failed to rotate logs. Expected version: 29730976, actual version: 29665951, wait timeout (ms): 600072
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:79)
at org.neo4j.kernel.impl.store.kvstore.RotationState$Rotation.rotate(RotationState.java:52)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:304)
at org.neo4j.kernel.impl.store.kvstore.AbstractKeyValueStore$RotationTask.rotate(AbstractKeyValueStore.java:281)
at org.neo4j.kernel.impl.store.counts.CountsTracker.rotate(CountsTracker.java:155)
at org.neo4j.kernel.impl.store.NeoStores.flush(NeoStores.java:242)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.flushAndForce(RecordStorageEngine.java:465)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.doCheckPoint(CheckPointerImpl.java:160)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointerImpl.checkPointIfNeeded(CheckPointerImpl.java:134)
at org.neo4j.kernel.impl.transaction.log.checkpoint.CheckPointScheduler$1.run(CheckPointScheduler.java:64)
... 8 more
Try deleting the neostore.counts.db.a/.b files. Neo4j will rebuild them if they are missing.
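For example, a minimal Java sketch of that cleanup, assuming a Neo4j 3.x store layout with the default graph.db directory (the path is an assumption; adjust it for your installation, and only run this while Neo4j is stopped):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteCountsStore {
    public static void main(String[] args) throws IOException {
        // Assumed default store location for Neo4j 3.x; verify against your neo4j.conf.
        Path storeDir = Paths.get("data", "databases", "graph.db");
        for (String file : new String[] { "neostore.counts.db.a", "neostore.counts.db.b" }) {
            boolean deleted = Files.deleteIfExists(storeDir.resolve(file));
            System.out.println(file + (deleted ? " deleted" : " not found"));
        }
    }
}
Neo4j rebuilds the counts store on the next start, which may take some time on a large store.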
Can you please also do the following (based on engineering feedback):
1. upgrade to a recent version (such as 3.5.3)
2. check and possibly reduce the transaction batch sizes of your import jobs (see the sketch after this list)
3. check the log for other errors that occur before these ones, and share those as well
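On point 2, a minimal sketch of what bounded transaction batches look like, using the embedded Neo4j 3.x Java API for illustration (your Talend jobs presumably go through the HTTP endpoint, but the principle of committing in bounded batches is the same; the batch size, label, and property below are made up):
import java.util.List;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Transaction;

public class BatchedLoader {
    // Commit every batchSize creates instead of one huge transaction, so the
    // checkpointer is not starved by a single long-running write.
    static void loadInBatches(GraphDatabaseService db, List<String> names) {
        final int batchSize = 10_000; // tune downwards if checkpoints still time out
        for (int start = 0; start < names.size(); start += batchSize) {
            int end = Math.min(start + batchSize, names.size());
            try (Transaction tx = db.beginTx()) {
                for (String name : names.subList(start, end)) {
                    db.createNode(Label.label("Person")).setProperty("name", name);
                }
                tx.success(); // mark this batch for commit; close() commits it
            }
        }
    }
}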

spring boot app with amqp stuck in blocked mode

My Spring Boot app with AMQP/RabbitMQ hangs without errors in one of the environments. The same config works fine in another.
The thread dump shows:
"main" - Thread t#32
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Native Method)
- waiting on <e62a7fa> (a java.lang.Object)
at java.lang.Object.wait(Object.java:502)
at org.springframework.util.ConcurrencyThrottleSupport.beforeAccess(ConcurrencyThrottleSupport.java:124)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottleAdapter.beforeAccess(SimpleAsyncTaskExecutor.java:243)
at org.springframework.core.task.SimpleAsyncTaskExecutor.execute(SimpleAsyncTaskExecutor.java:184)
at org.springframework.core.task.SimpleAsyncTaskExecutor.execute(SimpleAsyncTaskExecutor.java:167)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doStart(SimpleMessageListenerContainer.java:815)
- locked <5d300728> (a java.lang.Object)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:550)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:874)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:144)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:544)
- locked <1c4259b2> (a java.lang.Object)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:761)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:371)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
All of the AMQP consumer threads are then in the BLOCKED state:
"AMQPConsumerThread_20" - Thread t#100
java.lang.Thread.State: BLOCKED
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.isActive(SimpleMessageListenerContainer.java:870)
- waiting to lock <5d300728> (a java.lang.Object) owned by "main" t#32
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1000(SimpleMessageListenerContainer.java:95)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1310)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:268)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- None
The Spring Boot app then waits, with this as the last line on the console:
2017-02-08T17:48:26.91-0500 [APP/0] OUT 2017-02-08 17:48:26 [AMQPConsumerThread_1] INFO o.s.a.r.c.CachingConnectionFactory - Created new connection: SimpleConnection#64f05682
[delegate=amqp://b489be6b-b6e5-41b5-8f31-3be3acac4518#x.y.z.w:5672/daad99cf-fa24-40fe-b0d1-f9312fb583be, localPort= 57509]
EDIT
When I drop the consumer count down to 1, the BLOCKED state goes away and the main thread state is as follows:
"main" - Thread t#32
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Native Method)
- waiting on <79908042> (a java.lang.Object)
at java.lang.Object.wait(Object.java:502)
at org.springframework.util.ConcurrencyThrottleSupport.beforeAccess(ConcurrencyThrottleSupport.java:124)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottleAdapter.beforeAccess(SimpleAsyncTaskExecutor.java:243)
at org.springframework.core.task.SimpleAsyncTaskExecutor.execute(SimpleAsyncTaskExecutor.java:184)
at org.springframework.core.task.SimpleAsyncTaskExecutor.execute(SimpleAsyncTaskExecutor.java:167)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doStart(SimpleMessageListenerContainer.java:815)
- locked <4f3dcc9> (a java.lang.Object)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:550)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:874)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:144)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:544)
- locked <1a60fd96> (a java.lang.Object)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:761)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:371)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
But the app still does not start.
The RabbitMQ web console shows the consumer connected, with this as the last line in the logs:
2017-02-08T18:36:09.99-0500 [APP/0] OUT 2017-02-08 18:36:09 [AMQPConsumerThread_1] INFO o.s.a.r.c.CachingConnectionFactory - Created new connection: SimpleConnection#39271079
[delegate=amqp://b489be6b-b6e5-41b5-8f31-3be3acac4518#10.146.54.74:5672/daad99cf-fa24-40fe-b0d1-f9312fb583be, localPort= 32880]
What am I missing?
Thanks.
This...
"main" - Thread t#32
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Native Method)
- waiting on <79908042> (a java.lang.Object)
at java.lang.Object.wait(Object.java:502)
at org.springframework.util.ConcurrencyThrottleSupport.beforeAccess(ConcurrencyThrottleSupport.java:124)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottleAdapter.beforeAccess(SimpleAsyncTaskExecutor.java:243)
at org.springframework.core.task.SimpleAsyncTaskExecutor.execute(SimpleAsyncTaskExecutor.java:184)
...indicates that the SimpleAsyncTaskExecutor is at its concurrency limit (set via setConcurrencyLimit); that check is only performed when a limit has been set explicitly.
The main thread will not proceed until some task submitted to that executor terminates, and since the consumer threads are long-lived, that will likely never happen.
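As a possible fix, a minimal sketch (the queue name, consumer count, thread-name prefix, and listener body below are illustrative assumptions, not taken from your post): either remove the executor's concurrency limit or raise it to at least the number of concurrent consumers, since each consumer holds one executor thread for the life of the container.
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class AmqpListenerConfig {

    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("my.queue");                  // hypothetical queue name
        container.setConcurrentConsumers(20);                 // 20 consumers, as in the thread dump
        container.setMessageListener((MessageListener) message ->
                System.out.println("received: " + message)); // your real listener goes here

        // Each consumer occupies one executor thread for the life of the container,
        // so a throttled executor deadlocks startup if its limit < concurrentConsumers.
        // UNBOUNDED_CONCURRENCY (-1) disables the throttle entirely.
        SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor("AMQPConsumerThread_");
        executor.setConcurrencyLimit(SimpleAsyncTaskExecutor.UNBOUNDED_CONCURRENCY);
        container.setTaskExecutor(executor);
        return container;
    }
}
If you are not setting an executor yourself, check which executor bean your config injects into the container in the failing environment; the container's own default executor is unthrottled.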

Resources