Cassandra Digest Mismatch Error - datastax-enterprise

I frequently see the following message in Cassandra's debug.log, sometimes shortly before losing nodes in the cluster. Any ideas on what the message means and how to fix the underlying issue?
DEBUG [ReadRepairStage:9346] 2017-11-06 22:29:46,135 ReadCallback.java:242 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(-8713145541289520569, 00114c65616465722f6d61737465722f352e3100000364633100) (408c7e13eea38efc9429366038cbe4a3 vs 8ce8acece0966903ac590d3229099398)
at org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) ~[cassandra-all-3.11.0.1900.jar:3.11.0.1900]
at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233) ~[cassandra-all-3.11.0.1900.jar:3.11.0.1900]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [cassandra-all-3.11.0.1900.jar:3.11.0.1900]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_151]
Here are the details of the Cassandra cluster:
4 node cluster
Each is an AWS instance of type m4.2xlarge
Each has an io1 volume with 20000 IOPS
All on same VPC, with 10.0.0.x private IP addresses
DataStax Enterprise Server 5.1.5

I think these are harmless messages from read repair noticing different data on different nodes, and probably not the cause of your node going down. See a more detailed answer to this question from last year: Datastax Mismatch for Key Issue
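Frequent mismatches usually just mean the replicas have drifted, for example after dropped mutations under load or a node being down longer than the hint window. A hedged suggestion, not a diagnosis of your outage: running regular anti-entropy repair keeps replicas converged so read repair has less to reconcile (the keyspace name below is a placeholder):
nodetool repair -pr my_keyspace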

Related

Can't find Avro schema in Divolte and Kafka Docker

I have three Docker containers, Kafka, Divolte, and StreamSets (https://github.com/divolte/docker-divolte), started with docker-compose up. I want to convert the topic messages to Avro files. I created the pipeline in StreamSets and pasted the Avro schema, but got an error:
com.streamsets.pipeline.api.base.OnRecordErrorException: KAFKA_37 - Cannot parse record from message 'divolte::3::0': java.io.IOException: Invalid int encoding
at com.streamsets.pipeline.stage.origin.multikafka.MultiKafkaSource$MultiTopicCallable.createRecord(MultiKafkaSource.java:192)
at com.streamsets.pipeline.stage.origin.multikafka.MultiKafkaSource$MultiTopicCallable.sendBatch(MultiKafkaSource.java:158)
at com.streamsets.pipeline.stage.origin.multikafka.MultiKafkaSource$MultiTopicCallable.call(MultiKafkaSource.java:135)
at com.streamsets.pipeline.stage.origin.multikafka.MultiKafkaSource$MultiTopicCallable.call(MultiKafkaSource.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Invalid int encoding
I read that the problem is an incorrect Avro schema. Could you tell me where I can find the correct Avro schema for this? I can't find it in the Docker images or on GitHub.
Looks like it might be in the Divolte GitHub repo, at https://github.com/divolte/divolte-schema/blob/master/src/main/resources/DefaultEventRecord.avsc
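Once you have the .avsc file, a quick sanity check is to parse it with the standard Avro Java API before pasting it into StreamSets; a minimal sketch (the file path is a placeholder):
import org.apache.avro.Schema;
import java.io.File;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Throws SchemaParseException if the schema file is malformed.
        Schema schema = new Schema.Parser().parse(new File("DefaultEventRecord.avsc"));
        System.out.println(schema.getFullName());
    }
}
Note that a schema that parses cleanly can still produce Invalid int encoding if the bytes on the topic were not written with a matching writer schema, so make sure the reader and writer formats line up.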

Dataflow Batch Job fails with "Failed to close some writers"

I am running a batch pipeline with the Apache Beam 2.2 SDK via the Cloud Dataflow service. There are 751 text files that I parse using the TextIO.readAll() transform, deserialize, and write to a date-partitioned table in BigQuery.
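For reference, the rough shape of the pipeline; ParseFn and the table spec below are placeholders, not the actual code:
Pipeline p = Pipeline.create(options);
p.apply(Create.of(filePatterns))        // the 751 input files
 .apply(TextIO.readAll())               // expand the patterns and read every line
 .apply(ParDo.of(new ParseFn()))        // deserialize each line into a TableRow
 .apply(BigQueryIO.writeTableRows()
     .to("my-project:dataset.events$20171106")  // date-partition decorator
     .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
p.run();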
The first thing I noticed is that autoscaling was not really kicking in, leaving the pipeline at 15 workers, even though I was able to push throughput a lot higher by, for example, manually setting the number of workers to 250.
My pipeline fails with the following stack trace:
(abed94a6f5139e21): java.io.IOException: Failed to close some writers
at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:248)
Suppressed: java.io.IOException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
Service Unavailable
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(AbstractGoogleAsyncWriteChannel.java:431)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.close(AbstractGoogleAsyncWriteChannel.java:289)
at org.apache.beam.sdk.io.gcp.bigquery.TableRowWriter.close(TableRowWriter.java:81)
at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:242)
at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles$DoFnInvoker.invokeFinishBundle(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.finishBundle(SimpleDoFnRunner.java:187)
at com.google.cloud.dataflow.worker.SimpleParDoFn.finishBundle(SimpleParDoFn.java:407)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.finish(ParDoOperation.java:60)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
at com.google.cloud.dataflow.worker.DataflowWorker.executeWork(DataflowWorker.java:330)
at com.google.cloud.dataflow.worker.DataflowWorker.doWork(DataflowWorker.java:302)
at com.google.cloud.dataflow.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:251)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
Service Unavailable
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:357)
... 4 more
Should I try with even more workers or split the work across several pipelines?
Thanks to the comment by jkff, it worked flawlessly after setting --maxNumWorkers=250 (15 seems to be the standard maximum).
The error itself was transient; Dataflow retried it several times, and in the end the pipeline ran successfully.
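For later readers: --maxNumWorkers can be passed on the command line or set programmatically; a minimal sketch using the standard Dataflow options interface:
DataflowPipelineOptions options = PipelineOptionsFactory
    .fromArgs(args)
    .withValidation()
    .as(DataflowPipelineOptions.class);
options.setMaxNumWorkers(250);  // raises the autoscaling ceiling
Pipeline p = Pipeline.create(options);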

Akka 2.5 Distributed Data on Docker + Alpine Linux

After upgrading a service that uses Akka + Akka cluster sharding to the newly released Akka 2.5.0, we started encountering issues starting the system in Docker + Alpine Linux. From what I can infer, Akka cluster sharding is configured to use Akka Distributed Data (which is no longer experimental as of 2.5.0), which in turn uses LMDB (which requires GCC + glibc, neither of which is available in Alpine Linux).
My questions are as follows:
1) Is there any standard alternative supported by Akka instead of LMDB?
2) Is there any way to get LMDB to work in Alpine Linux?
Stack Trace:
[ERROR] [04/20/2017 13:42:19.014] [lotus-akka.actor.default-dispatcher-5] [akka://lotus/system/sharding/replicator/durableStore] Error relocating /tmp/lmdbjava-native-library-5972006786989102785.so: __fprintf_chk: symbol not found
akka.actor.ActorInitializationException: akka://lotus/system/sharding/replicator/durableStore: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:191)
at akka.actor.ActorCell.create(ActorCell.scala:600)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:454)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:476)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
at akka.dispatch.Mailbox.run(Mailbox.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at akka.util.Reflect$.instantiate(Reflect.scala:65)
at akka.actor.ArgsReflectConstructor.produce(IndirectActorProducer.scala:96)
at akka.actor.Props.newActor(Props.scala:213)
at akka.actor.ActorCell.newActor(ActorCell.scala:555)
at akka.actor.ActorCell.create(ActorCell.scala:581)
... 7 more
Caused by: java.lang.UnsatisfiedLinkError: Error relocating /tmp/lmdbjava-native-library-5972006786989102785.so: __fprintf_chk: symbol not found
at jnr.ffi.provider.jffi.NativeLibrary.loadNativeLibraries(NativeLibrary.java:87)
at jnr.ffi.provider.jffi.NativeLibrary.getNativeLibraries(NativeLibrary.java:70)
at jnr.ffi.provider.jffi.NativeLibrary.getSymbolAddress(NativeLibrary.java:49)
at jnr.ffi.provider.jffi.NativeLibrary.findSymbolAddress(NativeLibrary.java:59)
at jnr.ffi.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:158)
at jnr.ffi.provider.jffi.AsmLibraryLoader.loadLibrary(AsmLibraryLoader.java:89)
at jnr.ffi.provider.jffi.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:43)
at jnr.ffi.LibraryLoader.load(LibraryLoader.java:325)
at jnr.ffi.LibraryLoader.load(LibraryLoader.java:304)
at org.lmdbjava.Library.<clinit>(Library.java:95)
at org.lmdbjava.Env$Builder.open(Env.java:406)
at org.lmdbjava.Env$Builder.open(Env.java:430)
at akka.cluster.ddata.LmdbDurableStore.<init>(DurableStore.scala:131)
... 16 more
I finally managed to solve this problem. Cluster sharding attempts to use durable storage by default, and the default durable store is LMDB. For cluster sharding without remember-entities, durable storage is not required.
Hence, the solution was to disable durable storage for cluster sharding by adding the following configuration:
akka.cluster.sharding.distributed-data.durable.keys = []
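If you construct the ActorSystem yourself, the same override can be applied programmatically with Typesafe Config; a minimal sketch (the system name matches the akka://lotus path in the log above):
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

Config config = ConfigFactory
    .parseString("akka.cluster.sharding.distributed-data.durable.keys = []")
    .withFallback(ConfigFactory.load());  // application.conf still applies
ActorSystem system = ActorSystem.create("lotus", config);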

Kafka cannot resolve ZooKeeper's DNS name

I have a Kafka 0.10.1.0 cluster (2 nodes) and a ZooKeeper 3.4.6 ensemble (3 nodes).
The clusters are hosted on Kubernetes following this tutorial.
Relevant entries from Kafka's server.properties:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka.internal.<companyname>.com:9092
zookeeper.connect=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
Upon server startup, each Kafka broker fails quickly with the following. To me, it looks like it cannot resolve the DNS name zookeeper-1. I also attempted removing the ports from zookeeper.connect, although from my reading of the relevant code, I don't believe that makes a difference.
Naturally, I confirmed that zookeeper-1 can be resolved from within the cluster. Other containers from within the cluster can resolve the name.
I also attempted a series of other aliases, including the services' DNS names and ZooKeeper's load balancer(s), all of which I independently confirmed were working. In each case, Kafka alone reported Name or service not known.
[2016-11-22 19:55:45,506] INFO Initiating client connection, connectString=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@7722c3c3 (org.apache.zookeeper.ZooKeeper)
[2016-11-22 19:56:05,571] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2016-11-22 19:56:05,572] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:71)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1227)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:156)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:130)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:76)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:58)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:327)
at kafka.server.KafkaServer.startup(KafkaServer.scala:200)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zookeeper-1: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:446)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:69)
... 10 more
[2016-11-22 19:56:05,575] INFO shutting down (kafka.server.KafkaServer)
[2016-11-22 19:56:05,616] INFO shut down completed (kafka.server.KafkaServer)
Other info related to the Kafka image: it is based on wurstmeister/kafka-docker but updated to inherit from openjdk:8-jre.
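Since the trace dies inside InetAddress.getAllByName, one way to reproduce the broker's exact lookup from inside the Kafka container is a tiny check class; a minimal sketch, using the hostname from the config above:
import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Same JVM resolution path the ZooKeeper client takes.
        for (InetAddress addr : InetAddress.getAllByName("zookeeper-1")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
If this fails inside the container while nslookup succeeds, the problem is JVM-level name resolution rather than the cluster DNS itself.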
It turned out that this was an issue with Kubernetes itself.
After an unrelated upgrade to v1.4.6, with no other changes, the names resolved normally.

Docker container with Neo4j 2.3 enterprise edition exits without error message when mounting graph.db volume

I have a snapshot of a graph.db that is over 40 gb in size. That snapshot came from a server running Neo4j 2.2.8. Now I'm trying to run this database locally to explore the existing graph without wasting resources and potentially crashing the server.
To do so, I'm using Neo4j via Docker and mounting that snapshot. At least that's what I'm trying to do via:
docker run -p 7474:7474 --ulimit=nofile=40000:40000 --env=NEO4J_CACHE_MEMORY=8G --env=NEO4J_ALLOW_STORE_UPGRADE=true --env=NEO4J_AUTH=none --volume=$HOME/graph.db:/data/graph.db neo4j:enterprise
This Docker image uses the enterprise edition of Neo4j 2.3.2, which is why allow_store_upgrade needs to be set to true.
The only output I'm getting is this:
Starting Neo4j Server console-mode...
2016-03-02 22:43:44.277+0000 INFO No SSL certificate found, generating a self-signed certificate..
2016-03-02 22:43:45.718+0000 INFO Initiating metrics..
Then the container stops, and control returns to the command line.
docker ps -l shows that the container exited with status code 137.
My question here is: How can I troubleshoot this (non-)error and run this dataset on my local machine in a safe environment?
Now when I start Neo4j 2.3.2 community edition on localhost with $HOME/graph.db as the database path, the store gets upgraded, and after a short while Neo4j is accessible.
With the upgraded store in place, I stopped Neo4j on localhost and tried to re-run the Docker container on the upgraded database.
This is the (logged) output:
Starting Neo4j Server console-mode...
2016-03-07 17:14:17.835+0000 INFO No SSL certificate found, generating a self-signed certificate..
2016-03-07 17:14:19.214+0000 INFO Initiating metrics..
2016-03-07 17:15:05.068+0000 INFO Successfully shutdown Neo4j Server
2016-03-07 17:15:05.070+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@13ae26d2' was successfully initialized, but failed to start. Please see attached cause exception. Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@13ae26d2' was successfully initialized, but failed to start. Please see attached cause exception.
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@13ae26d2' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:67)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:234)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:97)
at org.neo4j.server.CommunityBootstrapper.start(CommunityBootstrapper.java:48)
at org.neo4j.server.enterprise.EnterpriseBootstrapper.main(EnterpriseBootstrapper.java:32)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase@13ae26d2' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:462)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:194)
... 3 more
Caused by: java.lang.RuntimeException: Error starting org.neo4j.kernel.impl.enterprise.EnterpriseFacadeFactory, /var/lib/neo4j/data/graph.db
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:143)
at org.neo4j.kernel.impl.enterprise.EnterpriseFacadeFactory.newFacade(EnterpriseFacadeFactory.java:40)
at org.neo4j.graphdb.EnterpriseGraphDatabase.<init>(EnterpriseGraphDatabase.java:57)
at org.neo4j.server.enterprise.EnterpriseNeoServer$2.newGraphDatabase(EnterpriseNeoServer.java:67)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:95)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.api.index.IndexingService@7d237704' failed to initialize. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:66)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:102)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:600)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:112)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:139)
... 10 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122)
at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:184)
at org.apache.lucene.index.TermInfosReaderIndex.<init>(TermInfosReaderIndex.java:77)
at org.apache.lucene.index.TermInfosReader.<init>(TermInfosReader.java:116)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:83)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:94)
at org.apache.lucene.index.DirectoryReader.<init>(DirectoryReader.java:105)
at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(ReadOnlyDirectoryReader.java:27)
at org.apache.lucene.index.DirectoryReader$1.doBody(DirectoryReader.java:78)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:709)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:72)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:256)
at org.neo4j.kernel.api.impl.index.LuceneIndexWriter.isOnline(LuceneIndexWriter.java:74)
at org.neo4j.kernel.api.impl.index.LuceneSchemaIndexProvider.getInitialState(LuceneSchemaIndexProvider.java:119)
at org.neo4j.kernel.impl.api.index.IndexingService.init(IndexingService.java:225)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:424)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:66)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:102)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:600)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:112)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:139)
at org.neo4j.kernel.impl.enterprise.EnterpriseFacadeFactory.newFacade(EnterpriseFacadeFactory.java:40)
at org.neo4j.graphdb.EnterpriseGraphDatabase.<init>(EnterpriseGraphDatabase.java:57)
at org.neo4j.server.enterprise.EnterpriseNeoServer$2.newGraphDatabase(EnterpriseNeoServer.java:67)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:95)
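The final Caused by is java.lang.OutOfMemoryError: Java heap space while Lucene opens the store's indexes, which also fits the earlier silent exit: status 137 is 128 + 9 (SIGKILL), typically the kernel or Docker killing a container that exceeded its memory. A hedged next step for a 40 GB store is to give the container and the JVM substantially more memory, for example (NEO4J_HEAP_MEMORY is an assumption about the 2.3-era image's environment variables; check the image documentation):
docker run -p 7474:7474 --memory=16g --ulimit=nofile=40000:40000 --env=NEO4J_CACHE_MEMORY=8G --env=NEO4J_HEAP_MEMORY=8192 --env=NEO4J_AUTH=none --volume=$HOME/graph.db:/data/graph.db neo4j:enterprise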
