My VPS was migrated to a different type of SSD last night, but I did not take a backup of the database before the shutdown and restart.
Since then I have been unable to start the database. The error log shows a failure when attempting to open a specific index, as shown below.
I have tried to bypass the index by renaming the directory it is stored in, but that had no effect. Is there any other way to remove the offending index so that I can start the database in the browser and re-add the index later?
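Re-adding the index later should be straightforward once the database is up again. A hedged sketch, assuming the index from the log below (index 13 on (:Role {roleId})) and an illustrative index name "index_13" — the real name is whatever SHOW INDEXES reports:

```cypher
// Once the database starts, list indexes and note the failing one's name
SHOW INDEXES;

// Drop it by the name reported above ("index_13" here is illustrative)
DROP INDEX index_13;

// Re-create it once the database is healthy again
CREATE INDEX FOR (r:Role) ON (r.roleId);
```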
[neo4j/21dafb04] [ Store versions ]
[neo4j/21dafb04] --------------------------------------------------------------------------------
[neo4j/21dafb04] ArrayPropertyStore[neostore.nodestore.db.labels] AF4.3.0
[neo4j/21dafb04] NodeStore[neostore.nodestore.db] AF4.3.0
[neo4j/21dafb04] StringPropertyStore[neostore.propertystore.db.index.keys] AF4.3.0
[neo4j/21dafb04] PropertyIndexStore[neostore.propertystore.db.index] AF4.3.0
[neo4j/21dafb04] StringPropertyStore[neostore.propertystore.db.strings] AF4.3.0
[neo4j/21dafb04] ArrayPropertyStore[neostore.propertystore.db.arrays] AF4.3.0
[neo4j/21dafb04] PropertyStore[neostore.propertystore.db] AF4.3.0
[neo4j/21dafb04] RelationshipStore[neostore.relationshipstore.db] AF4.3.0
[neo4j/21dafb04] StringPropertyStore[neostore.relationshiptypestore.db.names] AF4.3.0
[neo4j/21dafb04] RelationshipTypeStore[neostore.relationshiptypestore.db] AF4.3.0
[neo4j/21dafb04] StringPropertyStore[neostore.labeltokenstore.db.names] AF4.3.0
[neo4j/21dafb04] LabelTokenStore[neostore.labeltokenstore.db] AF4.3.0
[neo4j/21dafb04] SchemaStore[neostore.schemastore.db] AF4.3.0
[neo4j/21dafb04] RelationshipGroupStore[neostore.relationshipgroupstore.db] AF4.3.0
[neo4j/21dafb04] NeoStore[neostore] AF4.3.0
[neo4j/21dafb04]
2022-01-24 14:34:34.901+0000 WARN [o.n.k.i.i.s.GenericNativeIndexProvider] [neo4j/21dafb04] Failed to open index:13. Requesting re-population. Cause: /var/lib/neo4j/data/databases/neo4j/schema/index/native-btree-1.0/13/index-13: /var/lib/neo4j/data/databases/neo4j/schema/index/native-btree-1.0/13/index-13 | GBPTree[file:/var/lib/neo4j/data/databases/neo4j/schema/index/native-btree-1.0/13/index-13]
2022-01-24 14:34:34.903+0000 INFO [o.n.k.i.a.i.IndexingService] [neo4j/21dafb04] IndexingService.init: index 13 on (:Role {roleId}) is POPULATING
2022-01-24 14:34:34.903+0000 INFO [o.n.k.i.a.i.IndexingService] [neo4j/21dafb04] IndexingService.init: indexes not specifically mentioned above are ONLINE
2022-01-24 14:34:34.904+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] [neo4j/21dafb04] Requirement `Database unavailable` makes database neo4j unavailable.
2022-01-24 14:34:34.905+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] [neo4j/21dafb04] DatabaseId{21dafb04[neo4j]} is unavailable.
2022-01-24 14:34:35.045+0000 WARN [o.n.k.d.Database] [neo4j/21dafb04] Exception occurred while starting the database. Trying to stop already started components.
org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile#74aaf34e' was successfully initialized, but failed to start. Please see the attached cause exception "/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.7".
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:463) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFiles.start(TransactionLogFiles.java:66) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.database.Database.start(Database.java:514) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.AbstractDatabaseManager.startDatabase(AbstractDatabaseManager.java:197) ~[neo4j-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.DefaultDatabaseManager.startDatabase(DefaultDatabaseManager.java:153) ~[neo4j-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.DefaultDatabaseManager.initialiseDefaultDatabase(DefaultDatabaseManager.java:64) ~[neo4j-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.DefaultDatabaseInitializer.start0(DefaultDatabaseInitializer.java:39) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.SafeLifecycle.transition(SafeLifecycle.java:124) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.SafeLifecycle.start(SafeLifecycle.java:138) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:219) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.build(DatabaseManagementServiceFactory.java:181) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.CommunityBootstrapper.createNeo(CommunityBootstrapper.java:36) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:142) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:95) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:34) [neo4j-4.4.0.jar:4.4.0]
Caused by: java.nio.file.AccessDeniedException: /var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.7
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
at org.neo4j.io.fs.DefaultFileSystemAbstraction.open(DefaultFileSystemAbstraction.java:76) ~[neo4j-io-4.4.0.jar:4.4.0]
at org.neo4j.io.fs.DefaultFileSystemAbstraction.write(DefaultFileSystemAbstraction.java:107) ~[neo4j-io-4.4.0.jar:4.4.0]
at org.neo4j.io.fs.DefaultFileSystemAbstraction.write(DefaultFileSystemAbstraction.java:58) ~[neo4j-io-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogChannelAllocator.allocateFile(TransactionLogChannelAllocator.java:151) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogChannelAllocator.createLogChannel(TransactionLogChannelAllocator.java:64) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile.createLogChannelForVersion(TransactionLogFile.java:186) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile.start(TransactionLogFile.java:140) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) ~[neo4j-common-4.4.0.jar:4.4.0]
... 19 more
2022-01-24 14:34:35.051+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] [neo4j/21dafb04] Checkpoint triggered by "Database shutdown" # txId: 719312 checkpoint started...
2022-01-24 14:34:35.085+0000 WARN [o.n.k.i.t.l.c.DetachedCheckpointAppender] [neo4j/21dafb04] Checkpoint was attempted while appender is not started. No checkpoint record will be appended.
2022-01-24 14:34:35.087+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] [neo4j/21dafb04] Checkpoint triggered by "Database shutdown" # txId: 719312 checkpoint completed in 33ms
2022-01-24 14:34:35.093+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] [neo4j/21dafb04] No log version pruned. The strategy used was '1 days'.
2022-01-24 14:34:35.115+0000 ERROR [o.n.d.d.DefaultDatabaseManager] Failed to start DatabaseId{21dafb04[neo4j]}
org.neo4j.dbms.api.DatabaseManagementException: An error occurred! Unable to start `DatabaseId{21dafb04[neo4j]}`.
at org.neo4j.dbms.database.AbstractDatabaseManager.startDatabase(AbstractDatabaseManager.java:201) ~[neo4j-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.DefaultDatabaseManager.startDatabase(DefaultDatabaseManager.java:153) ~[neo4j-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.DefaultDatabaseManager.initialiseDefaultDatabase(DefaultDatabaseManager.java:64) ~[neo4j-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.DefaultDatabaseInitializer.start0(DefaultDatabaseInitializer.java:39) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.SafeLifecycle.transition(SafeLifecycle.java:124) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.SafeLifecycle.start(SafeLifecycle.java:138) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) [neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:219) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.build(DatabaseManagementServiceFactory.java:181) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.CommunityBootstrapper.createNeo(CommunityBootstrapper.java:36) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:142) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:95) [neo4j-4.4.0.jar:4.4.0]
at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:34) [neo4j-4.4.0.jar:4.4.0]
Caused by: java.lang.RuntimeException: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile#74aaf34e' was successfully initialized, but failed to start. Please see the attached cause exception "/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.7".
at org.neo4j.kernel.database.Database.handleStartupFailure(Database.java:638) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.database.Database.start(Database.java:532) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.AbstractDatabaseManager.startDatabase(AbstractDatabaseManager.java:197) ~[neo4j-4.4.0.jar:4.4.0]
... 13 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile#74aaf34e' was successfully initialized, but failed to start. Please see the attached cause exception "/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.7".
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:463) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFiles.start(TransactionLogFiles.java:66) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.database.Database.start(Database.java:514) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.AbstractDatabaseManager.startDatabase(AbstractDatabaseManager.java:197) ~[neo4j-4.4.0.jar:4.4.0]
... 13 more
Caused by: java.nio.file.AccessDeniedException: /var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.7
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
at org.neo4j.io.fs.DefaultFileSystemAbstraction.open(DefaultFileSystemAbstraction.java:76) ~[neo4j-io-4.4.0.jar:4.4.0]
at org.neo4j.io.fs.DefaultFileSystemAbstraction.write(DefaultFileSystemAbstraction.java:107) ~[neo4j-io-4.4.0.jar:4.4.0]
at org.neo4j.io.fs.DefaultFileSystemAbstraction.write(DefaultFileSystemAbstraction.java:58) ~[neo4j-io-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogChannelAllocator.allocateFile(TransactionLogChannelAllocator.java:151) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogChannelAllocator.createLogChannel(TransactionLogChannelAllocator.java:64) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile.createLogChannelForVersion(TransactionLogFile.java:186) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFile.start(TransactionLogFile.java:140) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.impl.transaction.log.files.TransactionLogFiles.start(TransactionLogFiles.java:66) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.0.jar:4.4.0]
at org.neo4j.kernel.database.Database.start(Database.java:514) ~[neo4j-kernel-4.4.0.jar:4.4.0]
at org.neo4j.dbms.database.AbstractDatabaseManager.startDatabase(AbstractDatabaseManager.java:197) ~[neo4j-4.4.0.jar:4.4.0]
... 13 more
2022-01-24 14:34:35.186+0000 INFO [o.n.b.BoltServer] Bolt enabled on [0:0:0:0:0:0:0:0%0]:7687.
2022-01-24 14:34:35.187+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Starting web server
2022-01-24 14:34:36.091+0000 INFO [o.n.s.CommunityNeoWebServer] Remote interface available at http://localhost:7474/
2022-01-24 14:34:36.091+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Web server started.
2022-01-24 14:34:36.095+0000 INFO [o.n.g.f.DatabaseManagementServiceFactory] id: E5F568A8DED63307B74CB58935A9D6A6E1F7EADD2CF22433E0040FED4ECBEC2E
2022-01-24 14:34:36.095+0000 INFO [o.n.g.f.DatabaseManagementServiceFactory] name: system
2022-01-24 14:34:36.095+0000 INFO [o.n.g.f.DatabaseManagementServiceFactory] creationDate: 2021-03-12T13:08:07.119Z
I have run the following consistency check:
sudo neo4j-admin check-consistency --database=neo4j --check-indexes=true
and the indexes appear to be fine (if that is indeed what is being checked?):
Selecting JVM - Version:11.0.13+8-Ubuntu-0ubuntu1.20.04, Name:OpenJDK 64-Bit Server VM, Vendor:Ubuntu
2022-01-24 22:17:20.020+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected RecordFormat:PageAlignedV4_3[AF4.3.0] record format from store /var/lib/neo4j/data/databases/neo4j
2022-01-24 22:17:20.022+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Format not configured for store /var/lib/neo4j/data/databases/neo4j. Selected format from the store files: RecordFormat:PageAlignedV4_3[AF4.3.0]
Index structure consistency check
.................... 10%
.................... 20%
.................... 30%
.................... 40%
.................... 50%
.................... 60%
.................... 70%
.................... 80%
.................... 90%
.................... 100%
Consistency check
.................... 10%
.................... 20%
.................... 30%
.................... 40%
.................... 50%
.................... 60%
.................... 70%
.................... 80%
.................... 90%
.................... 100%
Answer found here:
https://community.neo4j.com/t/database-offline-and-will-not-restart/27914/5
The index issue disappeared as part of this fix. For some reason the neo4j system account no longer had write access to the transaction log files, or to anything else in the data folder. This was resolved with:
sudo chown -R neo4j /var/lib/neo4j/data/
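For anyone hitting the same AccessDeniedException, the general pattern is: check who owns the files the service is trying to open, then hand ownership back to the service account. A minimal sketch against a scratch directory — on a real server the paths would be /var/lib/neo4j/data and the neo4j user; $(id -un) stands in here only because a sandbox cannot chown to neo4j:

```shell
# Scratch directory standing in for /var/lib/neo4j/data
DATA_DIR=$(mktemp -d)
touch "$DATA_DIR/neostore.transaction.db.7"

# Diagnose: list the owner of each file. On the broken server these
# were no longer owned by the neo4j service account.
stat -c '%U %n' "$DATA_DIR"/*

# Fix: recursively return ownership to the service user.
# On the real server: sudo chown -R neo4j /var/lib/neo4j/data/
chown -R "$(id -un)" "$DATA_DIR"

# Verify the owner now matches the user the service runs as.
[ "$(stat -c '%U' "$DATA_DIR/neostore.transaction.db.7")" = "$(id -un)" ] && echo "ownership ok"

rm -rf "$DATA_DIR"
```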
I am currently running Keycloak v11 in a Docker container. I would like to migrate to v15, but I want to test it before migrating. I pulled the latest Docker image, jboss/keycloak:15.0.2, and simply ran:
docker run -d -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8080:8080 --name kc-v15 jboss/keycloak:15.0.2
When I look at the logs I see multiple warnings and errors. The full stack trace is below.
Any help would be appreciated.
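One hypothesis worth ruling out: the coordinator id in the warnings (8eb22ce71ea3) looks like another container's id, which would mean the v15 container auto-discovered and joined the running v11 container's Infinispan cluster, and the mixed versions then fail with "Unknown command id". A hedged isolation test, assuming the v11 container is still running and already holds host port 8080 (hence 8081 here; the network name is arbitrary):

```shell
# Put the v15 test container on its own bridge network so JGroups
# discovery cannot see the v11 container.
docker network create kc-v15-test

docker run -d \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  -p 8081:8080 \
  --network kc-v15-test \
  --name kc-v15 jboss/keycloak:15.0.2
```

If the warnings disappear on the isolated network, the problem is cross-version clustering rather than the v15 image itself.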
21:14:50,555 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-8,ejb,04cbb180fd8b) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
... 31 more
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,569 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (ServerService Thread Pool -- 61) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1348)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,573 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (ServerService Thread Pool -- 58) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,590 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-7,ejb,04cbb180fd8b) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
... 31 more
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:14:50,590 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-8,ejb,04cbb180fd8b) ISPN000329: Unable to read rebalancing status from coordinator 8eb22ce71ea3: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:21)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
... 31 more
Caused by: org.infinispan.commons.CacheException: Unknown command id 90!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:18:51,056 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 62) MSC000001: Failed to start service org.wildfly.clustering.infinispan.cache-container.keycloak: org.jboss.msc.service.StartException in service org.wildfly.clustering.infinispan.cache-container.keycloak: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:66)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1348)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.infinispan#11.0.9.Final//org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:751)
at org.infinispan#11.0.9.Final//org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:717)
at org.jboss.as.clustering.infinispan#23.0.2.Final//org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.get(CacheContainerServiceConfigurator.java:123)
at org.jboss.as.clustering.infinispan#23.0.2.Final//org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.get(CacheContainerServiceConfigurator.java:76)
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:63)
... 7 more
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:560)
at org.infinispan#11.0.9.Final//org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:341)
at org.infinispan#11.0.9.Final//org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237)
at org.infinispan#11.0.9.Final//org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:746)
... 11 more
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:592)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:583)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:552)
... 40 more
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:18:51,123 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 59) MSC000001: Failed to start service org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector: org.jboss.msc.service.StartException in service org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:66)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:560)
at org.infinispan#11.0.9.Final//org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237)
at org.wildfly.clustering.infinispan.spi#23.0.2.Final//org.wildfly.clustering.infinispan.spi.service.CacheServiceConfigurator.get(CacheServiceConfigurator.java:55)
at org.wildfly.clustering.service#23.0.2.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:63)
... 7 more
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:82)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:592)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:583)
at org.infinispan#11.0.9.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:552)
... 22 more
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.jboss.as.clustering.common#23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan#11.0.9.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan#11.0.9.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 29 more
21:18:51,249 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,255 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "ejb3"),
("service" => "remote")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector" => "org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,258 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "async-operations")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,261 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "blocking")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,261 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "expiration")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,261 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "listener")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,262 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "non-blocking")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,263 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "persistence")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,264 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "remote-command")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,266 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "state-transfer")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.commons.CacheException: Unknown command id 85!"}}
21:18:51,266 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "infinispan"),
("cache-container" => "keycloak"),
("thread-pool" => "transport")
]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
Caused by: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 8eb22ce71ea3, see cause for remote stack trace
The cause of this error was that I had another Keycloak 11 (kc11) container running on my dev machine; stopping that container solved the issue. One question remains:
The goal of containers is isolation, so why does one container impact another?
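A likely explanation (an assumption based on the default setup, not something these logs prove): Keycloak's embedded Infinispan discovers cluster peers via JGroups UDP multicast. Containers attached to the same Docker bridge network share a layer-2 segment, so multicast from one container reaches the other; the two Keycloak instances therefore formed a single cluster, and the version mismatch produced the "Unknown command id 85!" marshalling error. Container isolation covers filesystems and process namespaces, but networking is only as isolated as you configure it. A sketch of one way to keep discovery traffic apart (container and network names here are hypothetical, and the image tags are examples only):

```shell
# Separate user-defined bridge networks prevent JGroups multicast
# discovery from crossing between the two Keycloak instances.
docker network create kc-a
docker network create kc-b
docker run -d --name keycloak-a --network kc-a jboss/keycloak:11.0.2
docker run -d --name keycloak-b --network kc-b jboss/keycloak:12.0.4
```

Alternatively, giving each instance a distinct cluster/multicast configuration achieves the same separation while keeping them on one network.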
I downloaded Neo4j today, and when I try to start the server it fails with an error.
I am using neo4j-community-4.2.3 and JDK 11.0.10.
neo4j install-service completed successfully, and I ran neo4j start (no error occurred).
I then tried to connect to the server with a web browser, but it does not respond.
Running neo4j console produces this error:
java.io.FileNotFoundException: C:\Program Files\neo4j-community-4.2.3\logs\debug.log (Access is denied)
at java.base/java.io.FileOutputStream.open0(Native Method)
at java.base/java.io.FileOutputStream.open(FileOutputStream.java:298)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:237)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:158)
at org.neo4j.logging.shaded.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:678)
at org.neo4j.logging.shaded.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:648)
at org.neo4j.logging.shaded.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:113)
at org.neo4j.logging.shaded.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:100)
at org.neo4j.logging.shaded.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:205)
at org.neo4j.logging.shaded.log4j.core.appender.RollingFileAppender$Builder.build(RollingFileAppender.java:146)
at org.neo4j.logging.log4j.LogConfig.createRollingFileAppender(LogConfig.java:183)
at org.neo4j.logging.log4j.LogConfig.getAppender(LogConfig.java:152)
at org.neo4j.logging.log4j.LogConfig.configureLogging(LogConfig.java:105)
at org.neo4j.logging.log4j.LogConfig$Builder.build(LogConfig.java:290)
at org.neo4j.graphdb.factory.module.GlobalModule.createLogService(GlobalModule.java:337)
at org.neo4j.graphdb.factory.module.GlobalModule.<init>(GlobalModule.java:174)
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.createGlobalModule(DatabaseManagementServiceFactory.java:252)
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.build(DatabaseManagementServiceFactory.java:126)
at org.neo4j.server.CommunityBootstrapper.createNeo(CommunityBootstrapper.java:36)
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:134)
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:90)
at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:35)
2021-02-26 21:16:37.044+0000 ERROR Failed to start Neo4j on dbms.connector.http.listen_address, a socket address. If missing port or hostname it is acquired from dbms.default_listen_address.
java.lang.IllegalStateException: ManagerFactory [org.neo4j.logging.shaded.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory#375b5b7f] unable to create manager for [C:\Program Files\neo4j-community-4.2.3\logs\debug.log] with data [org.neo4j.logging.shaded.log4j.core.appender.rolling.RollingFileManager$FactoryData#1813f3e9[pattern=C:\Program Files\neo4j-community-4.2.3\logs\debug.log.%i, append=true, bufferedIO=true, bufferSize=8192, policy=SizeBasedTriggeringPolicy(size=20971520), strategy=DefaultRolloverStrategy(min=1, max=7, useMax=false), advertiseURI=null, layout=org.neo4j.logging.log4j.Neo4jLogLayout#28cb9120, filePermissions=null, fileOwner=null]]
at org.neo4j.logging.shaded.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:115) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.shaded.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:100) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.shaded.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:205) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.shaded.log4j.core.appender.RollingFileAppender$Builder.build(RollingFileAppender.java:146) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.log4j.LogConfig.createRollingFileAppender(LogConfig.java:183) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.log4j.LogConfig.getAppender(LogConfig.java:152) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.log4j.LogConfig.configureLogging(LogConfig.java:105) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.logging.log4j.LogConfig$Builder.build(LogConfig.java:290) ~[neo4j-logging-4.2.3.jar:4.2.3]
at org.neo4j.graphdb.factory.module.GlobalModule.createLogService(GlobalModule.java:337) ~[neo4j-4.2.3.jar:4.2.3]
at org.neo4j.graphdb.factory.module.GlobalModule.<init>(GlobalModule.java:174) ~[neo4j-4.2.3.jar:4.2.3]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.createGlobalModule(DatabaseManagementServiceFactory.java:252) ~[neo4j-4.2.3.jar:4.2.3]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.build(DatabaseManagementServiceFactory.java:126) ~[neo4j-4.2.3.jar:4.2.3]
at org.neo4j.server.CommunityBootstrapper.createNeo(CommunityBootstrapper.java:36) ~[neo4j-4.2.3.jar:4.2.3]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:134) [neo4j-4.2.3.jar:4.2.3]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:90) [neo4j-4.2.3.jar:4.2.3]
at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:35) [neo4j-4.2.3.jar:4.2.3]
2021-02-26 21:16:37.075+0000 INFO Neo4j Server shutdown initiated by request
2021-02-26 21:16:37.075+0000 INFO Stopped.
The problem was that I didn't have administrator permissions.
If you encounter this problem, run cmd as an administrator and then run the command again.
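If you would rather not start Neo4j from an elevated prompt every time, a one-time alternative (a sketch, assuming the default install path shown in the error) is to grant your own user modify rights on the directories Neo4j writes to:

```bat
:: Run once from an elevated cmd prompt.
:: (OI)(CI)M = Modify rights, inherited by files and subfolders.
icacls "C:\Program Files\neo4j-community-4.2.3\logs" /grant "%USERNAME%:(OI)(CI)M"
icacls "C:\Program Files\neo4j-community-4.2.3\data" /grant "%USERNAME%:(OI)(CI)M"
```

After that, neo4j console should be able to create debug.log from a normal, non-elevated prompt.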
I upgraded my Neo4j cluster installation to 3.0.3, and it seems I cannot load the database that comes with the installation. This is the log file:
<code>2016-06-28 14:11:20.879+0000 INFO Starting...
2016-06-28 14:11:21.620+0000 INFO Write transactions to database disabled
2016-06-28 14:11:22.483+0000 INFO Bolt enabled on localhost:7687.
2016-06-28 14:11:22.504+0000 INFO Initiating metrics...
2016-06-28 14:11:24.344+0000 INFO Attempting to join cluster of [192.168.1.91:5001, 192.168.1.92:5001, 192.168.1.93:5001]
2016-06-28 14:11:54.762+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingData$
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#1f7853af' wa$
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:68)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:217)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:87)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:66)
at org.neo4j.server.enterprise.EnterpriseEntryPoint.main(EnterpriseEntryPoint.java:32)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase#1f7853af' was succ$
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:444)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:189)
... 3 more
Caused by: java.lang.RuntimeException: Error starting org.neo4j.kernel.ha.factory.HighlyAvailableFacadeFactory, /opt/neo4j/neo4j-enterprise$
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:144)
at org.neo4j.kernel.ha.factory.HighlyAvailableFacadeFactory.newFacade(HighlyAvailableFacadeFactory.java:42)
at org.neo4j.kernel.ha.HighlyAvailableGraphDatabase.<init>(HighlyAvailableGraphDatabase.java:41)
at org.neo4j.server.enterprise.EnterpriseNeoServer.lambda$static$0(EnterpriseNeoServer.java:80)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:89)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:434)
... 5 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.cluster.client.ClusterJoin#4b6942a0' was successfully initia$
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:444)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:140)
... 10 more
Caused by: java.util.concurrent.TimeoutException
at org.neo4j.cluster.statemachine.StateMachineProxyFactory$ResponseFuture.get(StateMachineProxyFactory.java:300)
</code>
Is there a fresh database I could try to load? Thanks.
Is that on a single instance or a cluster?
It's best to try it in standalone mode first.
How did you upgrade? Using the admin tool, e.g.
neo4j-admin import --mode=database --database=graph.db --from=<path-to-2.x-db> ?
See: https://neo4j.com/guides/upgrade/#neo4j-3-0
and: http://neo4j.com/docs/operations-manual/3.0/#upgrade-instructions-2x
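For the store-format migration itself (separate from the cluster-join timeout in the log above), Neo4j 3.0 refuses to open an older store unless migration is explicitly allowed. A minimal neo4j.conf fragment, to be set on a standalone instance before re-enabling HA (sketch; verify the setting name against your exact 3.0.x manual):

```
# neo4j.conf -- allow automatic upgrade of an older store format on startup
dbms.allow_format_migration=true
```

Once the store has been migrated and the instance starts cleanly in standalone mode, the HA/cluster settings can be restored.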
I have two servers. A log file is appended on server A, and server B runs HBase.
I installed Flume NG on both: on server A with a tail exec source and an Avro sink, and on server B with an Avro source and an HBase sink.
While running the agent on server A, I get the following exception:
2013-10-04 12:47:33,778 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to send events
at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:382)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.flume.FlumeException: NettyAvroRpcClient { host: sun, port: 41414 }: RPC connection error
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:161)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:115)
at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:590)
at org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:88)
at org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:127)
at org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:209)
at org.apache.flume.sink.AbstractRpcSink.verifyConnection(AbstractRpcSink.java:269)
at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:339)
... 3 more
Caused by: java.io.IOException: Error connecting to sun/10.xx.xx.xx:41414
at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:147)
... 10 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:396)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:358)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
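The root cause on server A is "Connection refused": nothing was listening on sun:41414 when the Avro sink tried to connect. A quick way to check reachability from server A independently of Flume (the hostname and port here are taken from the error above; substitute your own):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

# If this prints False, the Avro source on the remote end is not up
# or not reachable from this machine.
print(port_open("sun", 41414))
```

If this returns False from server A, fix the source on server B first; the sink cannot succeed until something is listening on that address.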
While running the Flume agent on server B, the following exception occurred:
2013-10-04 12:27:56,006 (lifecycleSupervisor-1-4) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start EventDrivenSourceRunner: { source:Avro source avroSource: { bindAddress: stratos, port: 41414 } } - Exception follows.
org.jboss.netty.channel.ChannelException: Failed to bind to: stratos/10.xx.xx.xx:41414
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:298)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
at org.apache.flume.source.AvroSource.start(AvroSource.java:200)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:137)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind(NioServerSocketPipelineSink.java:131)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket(NioServerSocketPipelineSink.java:83)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:57)
at org.jboss.netty.channel.Channels.bind(Channels.java:569)
at org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:186)
at org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen(ServerBootstrap.java:343)
at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:170)
at org.jboss.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:80)
at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:156)
at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:86)
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:277)
... 12 more
2013-10-04 12:27:59,007 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.AvroSource.start(AvroSource.java:192)] Starting Avro source avroSource: { bindAddress: stratos, port: 41414 }...
2013-10-04 12:27:59,008 (lifecycleSupervisor-1-3) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start EventDrivenSourceRunner: { source:Avro source avroSource: { bindAddress: stratos, port: 41414 } } - Exception follows.
org.jboss.netty.channel.ChannelException: Failed to bind to stratos/10.xx.xx.xx:41414
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:298)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
at org.apache.flume.source.AvroSource.start(AvroSource.java:200)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:137)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind(NioServerSocketPipelineSink.java:131)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket(NioServerSocketPipelineSink.java:83)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:57)
at org.jboss.netty.channel.Channels.bind(Channels.java:569)
at org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:186)
at org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen(ServerBootstrap.java:343)
at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:170)
at org.jboss.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:80)
at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:156)
at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:86)
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:277)
... 12 more
The Flume conf on server A is:
agent1.sinks.avroSink.channel = memoryChannel
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.hostname = sun
agent1.sinks.avroSink.port = 41414
The Flume conf on server B is:
agent1.sources.avroSource.type = avro
agent1.sources.avroSource.channels = memoryChannel
agent1.sources.avroSource.bind = 10.xx.yy.zz
agent1.sources.avroSource.port = 41414
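"Cannot assign requested address" on server B means the configured bind address does not belong to any local network interface on that host (for example, stratos resolving to an address that is not actually configured there, or the 10.xx.yy.zz in the conf not matching the machine's own IP). A common fix (a sketch; check the host's interfaces with ip addr or ifconfig first) is to bind the source to the wildcard address:

```
# Server B: bind the Avro source to all local interfaces
agent1.sources.avroSource.bind = 0.0.0.0
agent1.sources.avroSource.port = 41414
```

Binding to 0.0.0.0 sidesteps hostname-resolution mismatches; the sink on server A still connects using server B's real hostname or IP.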
Server A's sink hostname must be server B's hostname (or IP).
Server B's source must bind to its own hostname or a local interface address.
Refer to this link:
http://pic.dhe.ibm.com/infocenter/bigins/v2r1/index.jsp?topic=%2Fcom.ibm.swg.im.infosphere.biginsights.admin.doc%2Fdoc%2FUserScenarioFlume.html