How do I solve a ksqlDB internal topic issue (default_ksql_processing_log topic does not exist)?

I am getting an error saying that the default_ksql_processing_log topic does not exist, even though the ksqlDB cluster is set up on three servers and the ksql.service.id value is set in each server's configuration.
Which part should I check?
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_2, processor=KSTREAM-SOURCE-0000000000, topic=source.topic, partition=2, offset=3264427, stacktrace=java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic default_ksql_processing_log not present in metadata after 60000 ms.
at org.apache.kafka.log4jappender.KafkaLog4jAppender.append(KafkaLog4jAppender.java:355)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:518)
at io.confluent.common.logging.StructuredLoggerImpl.error(StructuredLoggerImpl.java:44)
at io.confluent.common.logging.StructuredLoggerImpl.error(StructuredLoggerImpl.java:40)
at io.confluent.ksql.logging.processing.ProcessingLoggerImpl.error(ProcessingLoggerImpl.java:35)
at io.confluent.ksql.execution.streams.GroupByParamsFactory.processColumn(GroupByParamsFactory.java:93)
at io.confluent.ksql.execution.streams.GroupByParamsFactory.access$100(GroupByParamsFactory.java:38)
at io.confluent.ksql.execution.streams.GroupByParamsFactory$ExpressionGrouper.apply(GroupByParamsFactory.java:141)
...
...
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic default_ksql_processing_log not present in metadata after 60000 ms.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1320)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:989)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:889)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:775)
at org.apache.kafka.log4jappender.KafkaLog4jAppender.append(KafkaLog4jAppender.java:348)
... 46 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Topic default_ksql_processing_log not present in metadata after 60000 ms.
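One thing worth checking is the processing-log configuration on each ksqlDB server: the exception comes from the log4j appender that writes to the processing log topic, so either ksqlDB has to be allowed to auto-create that topic or the topic has to already exist on the brokers. The property names below are the standard ksqlDB processing-log settings (double-check them against your ksqlDB version's documentation); the replication factor, partition count, and broker address are illustrative assumptions, not values from the question.
# ksql-server.properties (same on all three servers) -- let ksqlDB create the
# processing log topic and stream itself:
ksql.logging.processing.topic.auto.create=true
ksql.logging.processing.stream.auto.create=true
ksql.logging.processing.topic.name=default_ksql_processing_log
ksql.logging.processing.topic.replication.factor=3
# Or create the topic manually (broker address and partition count are placeholders):
kafka-topics --bootstrap-server broker1:9092 --create \
  --topic default_ksql_processing_log --partitions 1 --replication-factor 3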

Related

Mule 4: SFTP connection is not re-establishing after connection failure

We are reading new or updated files from an SFTP server every minute using the SFTP listener:
<sftp:listener doc:name="On New or Updated File" config-ref="SFTP_Config" directory="abc/in" timeBetweenSizeCheck="2000" autoDelete="true" timeBetweenSizeCheckUnit="MILLISECONDS">
    <reconnect-forever frequency="60000" />
    <scheduling-strategy>
        <fixed-frequency frequency="60000"/>
    </scheduling-strategy>
</sftp:listener>
Sometimes when the SFTP server is down the listener tries to reconnect, and because we configured reconnect-forever it keeps trying. But even after the SFTP server is back online, it is not able to read files.
I also tried setting the reconnection attempts to 2; it tried twice and then threw a "Pipe closed" error. When the SFTP server came back online, it did not pick up new files even though they were available.
Has anyone faced this issue with the Mule 4 SFTP connector (v1.3.10)? Please help me out.
Below is the error message:
[2020-12-04 10:04:34.662] ERROR org.mule.extension.sftp.internal.source.SftpDirectoryListener [_pollingSource_sd-sftp-svc-flow/executor.01]: Found exception trying to poll directory '/ABC/xyz/In/'. Will try again on the next poll.
org.mule.runtime.api.exception.MuleRuntimeException: Found exception trying to obtain path /ABC/xyz/In/
at org.mule.extension.file.common.api.command.AbstractFileCommand.exception(AbstractFileCommand.java:209)
at org.mule.extension.sftp.internal.command.SftpCommand.getFile(SftpCommand.java:92)
at org.mule.extension.sftp.internal.command.SftpCommand.getExistingFile(SftpCommand.java:71)
at org.mule.extension.sftp.internal.command.SftpListCommand.list(SftpListCommand.java:77)
at org.mule.extension.file.common.api.AbstractFileSystem.list(AbstractFileSystem.java:112)
at org.mule.extension.sftp.internal.source.SftpDirectoryListener.poll(SftpDirectoryListener.java:184)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.lambda$poll$3(PollingSourceWrapper.java:193)
at org.mule.runtime.core.api.util.func.CheckedRunnable.run(CheckedRunnable.java:22)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.withWatermarkLock(PollingSourceWrapper.java:492)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.poll(PollingSourceWrapper.java:190)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.lambda$onStart$1(PollingSourceWrapper.java:143)
at org.mule.runtime.module.extension.internal.runtime.source.poll.DelegateRunnable.run(DelegateRunnable.java:41)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableRepeatableFutureDecorator.run(RunnableRepeatableFutureDecorator.java:83)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.mule.runtime.api.exception.MuleRuntimeException: Could not obtain attributes for path /ABC/xyz/In/
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:415)
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:409)
at org.mule.extension.sftp.internal.connection.SftpClient.getAttributes(SftpClient.java:142)
at org.mule.extension.sftp.internal.command.SftpCommand.getFile(SftpCommand.java:88)
... 17 more
Caused by: org.mule.runtime.api.connection.ConnectionException:
... 20 more
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.stat(ChannelSftp.java:2204)
at org.mule.extension.sftp.internal.connection.SftpClient.getAttributes(SftpClient.java:137)
... 18 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:307)
at com.jcraft.jsch.Channel$MyPipedInputStream.updateReadSide(Channel.java:362)
at com.jcraft.jsch.ChannelSftp.stat(ChannelSftp.java:2194)
... 19 more
[2020-12-04 10:04:34.769] WARN org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource [_pollingSource_sd-sftp-svc-flow/executor.01]: Message source 'listener' on flow 'sd-sftp-svc-flow' threw exception. Attempting to reconnect...
org.mule.runtime.api.connection.ConnectionException:
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:409)
at org.mule.extension.sftp.internal.connection.SftpClient.getAttributes(SftpClient.java:142)
at org.mule.extension.sftp.internal.command.SftpCommand.getFile(SftpCommand.java:88)
at org.mule.extension.sftp.internal.command.SftpCommand.getExistingFile(SftpCommand.java:71)
at org.mule.extension.sftp.internal.command.SftpListCommand.list(SftpListCommand.java:77)
at org.mule.extension.file.common.api.AbstractFileSystem.list(AbstractFileSystem.java:112)
at org.mule.extension.sftp.internal.source.SftpDirectoryListener.poll(SftpDirectoryListener.java:184)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.lambda$poll$3(PollingSourceWrapper.java:193)
at org.mule.runtime.core.api.util.func.CheckedRunnable.run(CheckedRunnable.java:22)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.withWatermarkLock(PollingSourceWrapper.java:492)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.poll(PollingSourceWrapper.java:190)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.lambda$onStart$1(PollingSourceWrapper.java:143)
at org.mule.runtime.module.extension.internal.runtime.source.poll.DelegateRunnable.run(DelegateRunnable.java:41)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableRepeatableFutureDecorator.run(RunnableRepeatableFutureDecorator.java:83)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.stat(ChannelSftp.java:2204)
at org.mule.extension.sftp.internal.connection.SftpClient.getAttributes(SftpClient.java:137)
... 18 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:307)
at com.jcraft.jsch.Channel$MyPipedInputStream.updateReadSide(Channel.java:362)
at com.jcraft.jsch.ChannelSftp.stat(ChannelSftp.java:2194)
... 19 more
[2020-12-04 10:04:34.832] ERROR org.mule.runtime.core.api.retry.policy.ConnectNotifier [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Failed to connect/reconnect: Message Source Reconnection. Root Exception was: Pipe closed. Type: class java.io.IOException
[2020-12-04 10:04:34.833] INFO org.mule.runtime.core.internal.retry.policies.SimpleRetryPolicy [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Waiting for 2000ms before reconnecting. Failed attempt 1 of 2
[2020-12-04 10:04:36.846] ERROR org.mule.runtime.core.api.retry.policy.ConnectNotifier [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Failed to connect/reconnect: Message Source Reconnection. Root Exception was: Pipe closed. Type: class java.io.IOException
[2020-12-04 10:04:36.846] INFO org.mule.runtime.core.internal.retry.policies.SimpleRetryPolicy [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Waiting for 2000ms before reconnecting. Failed attempt 2 of 2
[2020-12-04 10:04:38.858] ERROR org.mule.runtime.core.api.retry.policy.ConnectNotifier [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Failed to connect/reconnect: Message Source Reconnection. Root Exception was: Pipe closed. Type: class java.io.IOException
[2020-12-04 10:04:38.860] ERROR org.mule.runtime.core.internal.retry.async.RetryWorker [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Error retrying work
org.mule.runtime.core.api.retry.policy.RetryPolicyExhaustedException:
at org.mule.runtime.core.api.retry.policy.AbstractPolicyTemplate.execute(AbstractPolicyTemplate.java:78)
at org.mule.runtime.core.internal.retry.async.RetryWorker.run(RetryWorker.java:56)
at org.mule.runtime.core.internal.util.rx.ImmediateScheduler.execute(ImmediateScheduler.java:150)
at org.mule.runtime.core.api.retry.async.AsynchronousRetryTemplate.execute(AsynchronousRetryTemplate.java:66)
at org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.startSource(ExtensionMessageSource.java:209)
at org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.restart(ExtensionMessageSource.java:350)
at org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$5(ExtensionMessageSource.java:316)
at reactor.core.publisher.MonoCreate.subscribe(MonoCreate.java:57)
at reactor.core.publisher.MonoPeekTerminal.subscribe(MonoPeekTerminal.java:57)
at reactor.core.publisher.MonoPeekTerminal.subscribe(MonoPeekTerminal.java:61)
at reactor.core.publisher.MonoPeekFuseable.subscribe(MonoPeekFuseable.java:74)
at reactor.core.publisher.Mono.subscribe(Mono.java:3858)
at reactor.core.publisher.Mono.subscribeWith(Mono.java:3964)
at reactor.core.publisher.Mono.subscribe(Mono.java:3743)
at org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$onException$9(ExtensionMessageSource.java:327)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.mule.runtime.api.connection.ConnectionException:
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:409)
at org.mule.extension.sftp.internal.connection.SftpClient.getAttributes(SftpClient.java:142)
at org.mule.extension.sftp.internal.command.SftpCommand.getFile(SftpCommand.java:88)
at org.mule.extension.sftp.internal.command.SftpCommand.getFile(SftpCommand.java:81)
at org.mule.extension.sftp.internal.command.SftpCommand.exists(SftpCommand.java:111)
at org.mule.extension.sftp.internal.command.SftpCommand.exists(SftpCommand.java:41)
at org.mule.extension.file.common.api.command.AbstractFileCommand.resolveExistingPath(AbstractFileCommand.java:135)
at org.mule.extension.sftp.internal.source.OnNewFileCommand.resolveRootPath(OnNewFileCommand.java:32)
at org.mule.extension.sftp.internal.source.SftpDirectoryListener.resolveRootPath(SftpDirectoryListener.java:307)
at org.mule.extension.sftp.internal.source.SftpDirectoryListener.doStart(SftpDirectoryListener.java:144)
at org.mule.runtime.extension.api.runtime.source.PollingSource.onStart(PollingSource.java:44)
at org.mule.runtime.module.extension.internal.runtime.source.poll.PollingSourceWrapper.onStart(PollingSourceWrapper.java:120)
at org.mule.runtime.module.extension.internal.runtime.source.SourceAdapter.start(SourceAdapter.java:412)
at org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource$StartSourceCallback.doWork(ExtensionMessageSource.java:548)
at org.mule.runtime.core.api.retry.policy.AbstractPolicyTemplate.execute(AbstractPolicyTemplate.java:52)
... 21 more
Caused by: 4:
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2235)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2242)
at com.jcraft.jsch.ChannelSftp.stat(ChannelSftp.java:2199)
at org.mule.extension.sftp.internal.connection.SftpClient.getAttributes(SftpClient.java:137)
... 34 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:307)
at java.io.PipedInputStream.read(PipedInputStream.java:377)
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2909)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2935)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2216)
... 37 more
[2020-12-04 10:04:38.867] WARN org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource [[MuleRuntime].uber.11: [phm-sd-prchs-ord-app-dev].uber#org.mule.runtime.module.extension.internal.runtime.source.ExtensionMessageSource.lambda$null$10:370 #314deb18]: Message source 'listener' on flow 'sd-sftp-svc-flow' successfully reconnected
We had the same issue with version 1.3.10 of the Mule 4 SFTP connector. We raised the issue with MuleSoft, and after a lot of deliberation they fixed it in version 1.4.0 of the SFTP connector. Here are the official release notes and the comment from MuleSoft:
"The engineering team has fixed the issue in the SFTP connector, and the official fix is included in SFTP connector v1.4.0, which has been released."
https://docs.mulesoft.com/release-notes/connector/connector-sftp#1-4-0
This error occurs when the SFTP server has been unavailable, or when your flow has at some point become disconnected from the SFTP server and is not reconnecting again. Configure a reconnection strategy as shown in the "SFTP Reconnection setup" screenshot that was originally attached here; a minimal XML sketch follows below.
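Since the screenshot is not reproduced here, this is roughly what that reconnection setup looks like in Mule 4 XML. It is configured on the connection element of the SFTP config rather than on the listener; the host, credentials, and retry values are placeholders and not taken from the original flow, so treat this as a sketch only.
<sftp:config name="SFTP_Config">
    <!-- Placeholder host/credentials; the <reconnection> element is the relevant part -->
    <sftp:connection host="sftp.example.com" port="22" username="user" password="secret">
        <reconnection failsDeployment="false">
            <!-- Retry every 60 s, up to 10 times, whenever the connection is found to be invalid -->
            <reconnect frequency="60000" count="10"/>
        </reconnection>
    </sftp:connection>
</sftp:config>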
I had the same issue, but with version 1.4.0. Sandeep Karnati said that 1.4.0 would fix the problem, but I found in the MuleSoft docs that there is a newer version, 1.4.1, which according to MuleSoft fixes this problem.
https://docs.mulesoft.com/release-notes/connector/connector-sftp#1-4-0
I am curious whether it will work for me.

Neo4j: ERROR [o.n.k.i.DatabaseHealth] Database panic: The database has encountered a critical error, and needs to be restarted

The Neo4j service is failing with the following error log; please help.
We are trying to configure Neo4j on one of our dev servers to run some POCs, but after installation the service fails with the following error.
2018-03-21 10:34:17.935+0000 ERROR [o.n.k.i.DatabaseHealth] Database panic: The database has encountered a critical error, and needs to be restarted. Please see database logs for more details. Failed to apply transaction: null
org.neo4j.kernel.api.exceptions.TransactionApplyKernelException: Failed to apply transaction: null
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.apply(RecordStorageEngine.java:334)
at org.neo4j.kernel.recovery.DefaultRecoverySPI$RecoveryVisitor.visit(DefaultRecoverySPI.java:137)
at org.neo4j.kernel.recovery.DefaultRecoverySPI$RecoveryVisitor.visit(DefaultRecoverySPI.java:118)
at org.neo4j.kernel.recovery.Recovery.init(Recovery.java:128)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:406)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:62)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:98)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:521)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:445)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:100)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:445)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.initFacade(GraphDatabaseFacadeFactory.java:207)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:126)
at org.neo4j.server.CommunityNeoServer.lambda$static$0(CommunityNeoServer.java:58)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:88)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:445)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:211)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:111)
at org.neo4j.server.BlockingBootstrapper.start(BlockingBootstrapper.java:41)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:79)
at org.neo4j.server.CommunityEntryPoint.start(CommunityEntryPoint.java:42)
Caused by: java.io.IOException: Failed to flush label updates
at org.neo4j.kernel.impl.transaction.command.IndexBatchTransactionApplier.applyPendingLabelAndIndexUpdates(IndexBatchTransactionApplier.java:116)
at org.neo4j.kernel.impl.transaction.command.IndexBatchTransactionApplier.close(IndexBatchTransactionApplier.java:124)
at org.neo4j.kernel.impl.api.BatchTransactionApplierFacade.close(BatchTransactionApplierFacade.java:70)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.apply(RecordStorageEngine.java:331)
... 23 more
Caused by: java.util.concurrent.ExecutionException: org.neo4j.kernel.impl.store.UnderlyingStorageException: org.neo4j.index.internal.gbptree.TreeInconsistencyException: GSPP WRITE failure
Pointer state A: CRASH
Pointer state B: CRASH
Generations: A < B | GB+Tree[file:D:\NEO_HOME\data\databases\graph.db\neostore.labelscanstore.db, layout:LabelScanLayout[version:0.1, identifier:21483684112629824, keySize:10, valueSize:8], generation:7/9]
at org.neo4j.concurrent.WorkSync.checkFailure(WorkSync.java:182)
at org.neo4j.concurrent.WorkSync.access$100(WorkSync.java:49)
at org.neo4j.concurrent.WorkSync$1.await(WorkSync.java:132)
at org.neo4j.kernel.impl.transaction.command.IndexBatchTransactionApplier.applyPendingLabelAndIndexUpdates(IndexBatchTransactionApplier.java:112)
... 26 more
Thanks in advance.
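No answer was recorded for this question. For what it is worth, the TreeInconsistencyException points at the label scan store file named in the message (neostore.labelscanstore.db), which Neo4j 3.x can rebuild from the node store at startup; one hedged first step, assuming you stop the service and copy the whole graph.db directory somewhere safe beforehand, looks roughly like this (Windows commands matching the path in the log, with the Neo4j bin directory assumed to be on PATH).
rem Stop Neo4j and back up the entire graph.db directory before touching anything.
neo4j stop
rem Move the (possibly corrupted) label scan store aside; the file name is taken from the
rem exception above, and Neo4j rebuilds this file on the next start.
move D:\NEO_HOME\data\databases\graph.db\neostore.labelscanstore.db D:\NEO_HOME\neostore.labelscanstore.db.bak
neo4j start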

Can't load database on Neo4j

So I upgraded my Neo4j cluster installation to 3.0.3 and it seems I cannot load the database that comes with the installation. This is the log file:
2016-06-28 14:11:20.879+0000 INFO Starting...
2016-06-28 14:11:21.620+0000 INFO Write transactions to database disabled
2016-06-28 14:11:22.483+0000 INFO Bolt enabled on localhost:7687.
2016-06-28 14:11:22.504+0000 INFO Initiating metrics...
2016-06-28 14:11:24.344+0000 INFO Attempting to join cluster of [192.168.1.91:5001, 192.168.1.92:5001, 192.168.1.93:5001]
2016-06-28 14:11:54.762+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingData$
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#1f7853af' wa$
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:68)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:217)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:87)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:66)
at org.neo4j.server.enterprise.EnterpriseEntryPoint.main(EnterpriseEntryPoint.java:32)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase#1f7853af' was succ$
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:444)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:189)
... 3 more
Caused by: java.lang.RuntimeException: Error starting org.neo4j.kernel.ha.factory.HighlyAvailableFacadeFactory, /opt/neo4j/neo4j-enterprise$
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:144)
at org.neo4j.kernel.ha.factory.HighlyAvailableFacadeFactory.newFacade(HighlyAvailableFacadeFactory.java:42)
at org.neo4j.kernel.ha.HighlyAvailableGraphDatabase.<init>(HighlyAvailableGraphDatabase.java:41)
at org.neo4j.server.enterprise.EnterpriseNeoServer.lambda$static$0(EnterpriseNeoServer.java:80)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:89)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:434)
... 5 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.cluster.client.ClusterJoin#4b6942a0' was successfully initia$
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:444)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:140)
... 10 more
Caused by: java.util.concurrent.TimeoutException
at org.neo4j.cluster.statemachine.StateMachineProxyFactory$ResponseFuture.get(StateMachineProxyFactory.java:300)
Is there any fresh db I could try to load? Thanks.
Is that on a single instance or cluster?
Best try it in standalone mode first.
How did you upgrade? Using the admin tool, with something like
neo4j-admin import --mode=database --from=<neo4j-2.x-db-dir> --database=<neo4j-3.x-db-name> ?
see: https://neo4j.com/guides/upgrade/#neo4j-3-0
and: http://neo4j.com/docs/operations-manual/3.0/#upgrade-instructions-2x
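If the old store was simply copied into the new installation rather than imported, the 3.0 manual also requires explicitly allowing the store format migration in neo4j.conf before the first start. As far as I recall the 3.0.x setting is the one below; double-check it against the linked manual, and back up the database directory first.
# neo4j.conf -- allow the 3.0.x server to migrate an older store format in place
dbms.allow_format_migration=true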

CopyFieldMutation.java ERROR cannot get comparator 1

I'm running DSE 4.5.1 on a 3-node cluster in AWS with RF=3. One of the nodes gets this error in the system.log. See (long) stack trace below. I would like to understand what the error means or implies. Is this a cause for concern? Lastly, how do I resolve the error?
[cqlsh 4.1.1 | Cassandra 2.0.8.39 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
ERROR [Native-Transport-Requests:34142] 2015-05-12 14:15:33,029 CopyFieldMutation.java (line 166) Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
java.lang.RuntimeException: Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:100)
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.buildKeyValueIterator(CopyFieldMutation.java:275)
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.addPrimaryKeyFieldsToMutation(CopyFieldMutation.java:260)
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.createMutation(CopyFieldMutation.java:110)
at com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger.augment(SolrAugmentationTrigger.java:76)
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:190)
at org.apache.cassandra.triggers.TriggerExecutor.execute(TriggerExecutor.java:94)
at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:532)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:546)
at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:530)
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
at com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
... 23 more
WARN [Native-Transport-Requests:34142] 2015-05-12 14:15:33,029 SolrAugmentationTrigger.java (line 107) Error generating additional mutations for Solr copy/dynamic fields. Update will be applied without them
org.apache.cassandra.exceptions.InvalidRequestException: Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.createMutation(CopyFieldMutation.java:167)
at com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger.augment(SolrAugmentationTrigger.java:76)
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:190)
at org.apache.cassandra.triggers.TriggerExecutor.execute(TriggerExecutor.java:94)
at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:532)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:546)
at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:530)
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
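No answer was recorded for this question. Since the trigger in the trace is DSE Search's SolrAugmentationTrigger handling Solr copy/dynamic fields, one thing that is sometimes suggested for a Solr-schema/CQL-schema mismatch is reloading (and optionally reindexing) the affected Solr core so that its schema is regenerated against the current CQL table. The command below is a hedged sketch with a placeholder keyspace and table; check the dsetool documentation for your DSE 4.5 release for the exact options before running it.
# Reload the Solr core for the affected table (my_keyspace.my_table is a placeholder);
# reindex=true rebuilds the index, which can be expensive on a large table.
dsetool reload_core my_keyspace.my_table reindex=true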

Restarting a failed/stalled stream during bootstrap of new node

We are trying to add a new Solr node to our cluster:
DC Cassandra
    Cassandra node 1
DC Solr
    Solr node 1 <-- new node (actually, a replacement for an old node)
    Solr node 2
    Solr node 3
    Solr node 4
    Solr node 5
During the bootstrap process:
The stream from node 3 to node 1 failed with an exception:
ERROR [STREAM-OUT-/IP_OF_NODE1] 2014-04-01 01:14:40,887 CassandraDaemon.java (line 196) Exception in thread Thread[STREAM-OUT-/IP_OF_NODE1,5,main]
java.lang.NullPointerException
at org.apache.cassandra.streaming.ConnectionHandler$MessageHandler.signalCloseDone(ConnectionHandler.java:249)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:375)
at java.lang.Thread.run(Thread.java:744)
The stream from node 4 to node 1 never started. The last relevant line in node 4's system.log is:
Received streaming plan for Bootstrap.
It should have been followed by:
Prepare completed. Receiving 0 files(0 bytes), sending x files(y bytes)
It seems that the bootstrap process is now stalled because the data file sizes are not changing anymore. How can I force those streams to be retried?
EDIT:
I restarted all nodes today in an attempt to force the new node to retry the bootstrap process. Unfortunately, it ran into stream failures again. This time, the exception on node 1 is as follows:
WARN [STREAM-IN-/IP_OF_NODE3] 2014-04-06 20:48:17,963 StreamSession.java (line 532) [Stream #c84effb0-bda9-11e3-a07d-89325af2f6bf] Retrying for following error
java.lang.RuntimeException: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-tmp-jb-1209-Data.db (Too many open files)
at org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:75)
at org.apache.cassandra.io.compress.CompressedSequentialWriter.<init>(CompressedSequentialWriter.java:71)
at org.apache.cassandra.io.compress.CompressedSequentialWriter.open(CompressedSequentialWriter.java:42)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:107)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:60)
at org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:111)
at org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:65)
at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:47)
at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:37)
at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:283)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-tmp-jb-1209-Data.db (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:71)
ERROR [STREAM-IN-/78.46.63.218] 2014-04-06 20:48:17,964 StreamSession.java (line 418) [Stream #c84effb0-bda9-11e3-a07d-89325af2f6bf] Streaming error occurred
java.lang.IllegalArgumentException: Unknown type 0
at org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:89)
at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:54)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:283)
at java.lang.Thread.run(Thread.java:724)
There are tons of similar errors in the log, e.g.:
ERROR [CompactionExecutor:129] 2014-04-06 20:50:06,401 CassandraDaemon.java (line 196) Exception in thread Thread[CompactionExecutor:129,1,main]
java.lang.RuntimeException: java.lang.RuntimeException: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-51-Data.db (Too many open files)
at org.apache.cassandra.service.pager.QueryPagers$1.next(QueryPagers.java:154)
at org.apache.cassandra.service.pager.QueryPagers$1.next(QueryPagers.java:137)
at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:400)
at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:833)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-51-Data.db (Too many open files)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:47)
at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
at org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1550)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at org.apache.cassandra.service.pager.SliceQueryPager.queryNextPage(SliceQueryPager.java:77)
at org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:84)
at org.apache.cassandra.service.pager.SliceQueryPager.fetchPage(SliceQueryPager.java:33)
at org.apache.cassandra.service.pager.QueryPagers$1.next(QueryPagers.java:148)
... 10 more
Caused by: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-51-Data.db (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
... 28 more
This appears to be very similar to a Cassandra bug/issue:
https://issues.apache.org/jira/browse/CASSANDRA-6965
I'll follow up on that.
Meanwhile, you could run rebuild/repair on that new node (see the command sketch after this answer).
EDIT: Another Cassandra issue that appears to be related:
CASSANDRA-6984 - "NullPointerException in Streaming During Repair"
https://issues.apache.org/jira/browse/CASSANDRA-6984
That issue is labeled as a Blocker, so it should get some prompt attention. I've inquired as to whether there is a workaround.
Stay tuned.
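For completeness, here is a command sketch for the rebuild/repair suggestion above. The source data-center name is taken from the cluster layout in the question and is only an assumption; verify the actual DC name with nodetool status first.
# Stream the new node's token ranges again from an existing data center
# ("Cassandra" is assumed to be the source DC name).
nodetool rebuild -- Cassandra
# Or repair the node's ranges against its replicas once it is up.
nodetool repair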
(Too many open files)
Looks like you need to increase your ulimit.
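A sketch of the usual way to raise that limit on Linux; the 100000 value follows the commonly recommended Cassandra setting, and the user name is a placeholder for whichever account runs the Cassandra/DSE process.
# Check the current open-files limit for the user that runs Cassandra:
ulimit -n
# /etc/security/limits.conf -- raise the limit for that user, then restart the
# DSE/Cassandra process (and start a fresh session) so the new limit takes effect:
cassandra - nofile 100000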
