thread.QueuedThreadPool in neo4j

I'm using neo4j-community-2.2.3.
Neo4j has 2,220,700 nodes and 4,334,748 relationships.
Suddenly, Neo4j stopped working.
Here is the error log from console.log.
16:43:04.225 [qtp814955981-22324] WARN o.e.j.util.thread.QueuedThreadPool -
java.lang.IllegalStateException: org.eclipse.jetty.util.SharedBlockingCallback$BlockerTimeoutException
at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.failed(SharedBlockingCallback.java:184) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpChannel$CommitCallback.failed(HttpChannel.java:852) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.io.AbstractConnection$1.run(AbstractConnection.java:103) ~[jetty-io-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:620) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:540) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: org.eclipse.jetty.util.SharedBlockingCallback$BlockerTimeoutException: null
at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:216) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:133) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:163) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.Response.closeOutput(Response.java:1017) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:421) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) ~[jetty-io-9.2.4.v20141103.jar:9.2.4.v20141103]
... 3 common frames omitted
16:43:04.226 [qtp814955981-22324] WARN o.e.j.util.thread.QueuedThreadPool - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$3#74989b47 in qtp814955981{STARTED,12<=28<=28,i=0,q=1466}
16:43:04.227 [qtp814955981-22351] WARN o.e.j.util.thread.QueuedThreadPool -
java.lang.IllegalStateException: org.eclipse.jetty.util.SharedBlockingCallback$BlockerTimeoutException
at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.failed(SharedBlockingCallback.java:184) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpChannel$CommitCallback.failed(HttpChannel.java:852) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.io.AbstractConnection$1.run(AbstractConnection.java:103) ~[jetty-io-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:620) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:540) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: org.eclipse.jetty.util.SharedBlockingCallback$BlockerTimeoutException: null
at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:216) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:133) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:163) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.Response.closeOutput(Response.java:1017) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:421) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
Please advise! Many thanks in advance!

It sounds like you have exceeded the maximum number of threads available to the web server for handling requests. Each query you submit occupies a web server thread until it returns or times out, so if you have a lot of long-running queries you can run out of threads. The number of threads available to the HTTP server is configurable; see the server configuration documentation:
Specify the number of threads used by the Neo4j Web server to control
the level of concurrent HTTP requests that the server will service.
org.neo4j.server.webserver.maxthreads=200
However, if this happens frequently, you probably need to look more closely at why your queries take so long to execute, or at your overall server load.
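For reference, here is a minimal conf/neo4j-server.properties sketch for a 2.2.x server. The execution-time guard is an assumption based on the legacy server settings, so verify both property names against your version's documentation:
# Raise the web server request thread pool.
org.neo4j.server.webserver.maxthreads=200
# Optionally abort HTTP requests that run longer than this many milliseconds,
# so slow queries cannot pin web server threads indefinitely.
org.neo4j.server.webserver.limit.executiontime=60000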

Related

UserCodeException: java.lang.OutOfMemoryError: Java heap space when streaming autoscaling

I have two streaming pipelines running in production that have had no trouble so far (both with n1-standard-4). However, when I decided to try autoscaling, it gives me said error. I've tried both with normal autoscaling and the Streaming Engine (with n2-highmem1, n1-highmem1), but nothing works.
Here's the structure of my pipeline. It's a Pub/Sub to BigQuery pipeline.
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.OutOfMemoryError: Java heap space
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:194)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:165)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:50)
org.apache.beam.runners.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:87)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.create(IntrinsicMapTaskExecutorFactory.java:125)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1203)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:149)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1024)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.lang.OutOfMemoryError: Java heap space
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
drivemode.com.dataflow.reader.PubsubToAnalyticsTableRowFN$DoFnInvoker.invokeSetup(Unknown Source)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:80)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:62)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:95)
org.apache.beam.runners.dataflow.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:75)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.createParDoOperation(IntrinsicMapTaskExecutorFactory.java:264)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.access$000(IntrinsicMapTaskExecutorFactory.java:86)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:183)
... 11 more
Caused by: java.lang.OutOfMemoryError: Java heap space
org.tukaani.xz.lz.LZDecoder.<init>(Unknown Source)
org.tukaani.xz.LZMA2InputStream.<init>(Unknown Source)
org.tukaani.xz.LZMA2InputStream.<init>(Unknown Source)
org.apache.commons.compress.archivers.sevenz.LZMA2Decoder.decode(LZMA2Decoder.java:39)
org.apache.commons.compress.archivers.sevenz.Coders.addDecoder(Coders.java:76)
org.apache.commons.compress.archivers.sevenz.SevenZFile.buildDecoderStack(SevenZFile.java:933)
org.apache.commons.compress.archivers.sevenz.SevenZFile.buildDecodingStream(SevenZFile.java:909)
org.apache.commons.compress.archivers.sevenz.SevenZFile.getNextEntry(SevenZFile.java:222)
net.iakovlev.timeshape.TimeZoneEngine.lambda$initialize$0(TimeZoneEngine.java:111)
net.iakovlev.timeshape.TimeZoneEngine$$Lambda$76/1740730188.apply(Unknown Source)
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
net.iakovlev.timeshape.Index.build(Index.java:73)
net.iakovlev.timeshape.TimeZoneEngine.initialize(TimeZoneEngine.java:126)
net.iakovlev.timeshape.TimeZoneEngine.initialize(TimeZoneEngine.java:94)
drivemode.com.dataflow.reader.AnalyticsTableRowFN.setUp(AnalyticsTableRowFN.java:60)
drivemode.com.dataflow.reader.PubsubToAnalyticsTableRowFN$DoFnInvoker.invokeSetup(Unknown Source)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:80)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:62)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:95)
org.apache.beam.runners.dataflow.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:75)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.createParDoOperation(IntrinsicMapTaskExecutorFactory.java:264)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.access$000(IntrinsicMapTaskExecutorFactory.java:86)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:183)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:165)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
Or sometimes I get:
org.apache.beam.vendor.guava.v20_0.com.google.common.util.concurrent.ExecutionError: java.lang.NoClassDefFoundError: Could not initialize class drivemode.com.dataflow.reader.AnalyticsTableRowFN
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2212)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache.get(LocalCache.java:4053)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4899)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:91)
org.apache.beam.runners.dataflow.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:75)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.createParDoOperation(IntrinsicMapTaskExecutorFactory.java:264)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.access$000(IntrinsicMapTaskExecutorFactory.java:86)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:183)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:165)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:50)
org.apache.beam.runners.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:87)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.create(IntrinsicMapTaskExecutorFactory.java:125)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1203)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:149)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1024)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class drivemode.com.dataflow.reader.AnalyticsTableRowFN
java.io.ObjectStreamClass.hasStaticInitializer(Native Method)
java.io.ObjectStreamClass.computeDefaultSUID(ObjectStreamClass.java:1787)
java.io.ObjectStreamClass.access$100(ObjectStreamClass.java:72)
java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:253)
java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:251)
java.security.AccessController.doPrivileged(Native Method)
java.io.ObjectStreamClass.getSerialVersionUID(ObjectStreamClass.java:250)
java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:611)
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
org.apache.beam.sdk.util.SerializableUtils.deserializeFromByteArray(SerializableUtils.java:71)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory$UserDoFnExtractor.getDoFnInfo(UserParDoFnFactory.java:62)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory.lambda$create$0(UserParDoFnFactory.java:93)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4904)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3628)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2336)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2295)
org.apache.beam.vendor.guava.v20_0.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2208)
... 18 more
Some of your ParDo functions or MapElements transforms might be consuming more memory than they should; per the logs pasted, review PubsubToAnalyticsTableRowFN.
This Beam doc link might help you tune the pipeline.
It turns out the issue was in setup, where I tried to initialize a time-zone lookup library that returns a tz_string from (lat, lng). The memory consumption was too high, and I solved the issue by loading that class at construction time.
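For anyone hitting the same thing, here is a minimal sketch of that kind of fix, assuming Beam's DoFn lifecycle and the timeshape TimeZoneEngine API; the class name and lookup values are illustrative, not the poster's actual code:
import java.time.ZoneId;
import java.util.Optional;
import org.apache.beam.sdk.transforms.DoFn;
import net.iakovlev.timeshape.TimeZoneEngine;

public class AnalyticsTableRowFn extends DoFn<String, String> {
    // Built once per worker JVM when the class loads, instead of in @Setup,
    // which runs for every deserialized DoFn copy and can trigger many
    // concurrent initializations of the large in-memory index.
    private static final TimeZoneEngine ENGINE = TimeZoneEngine.initialize();

    @ProcessElement
    public void processElement(ProcessContext c) {
        // Illustrative lookup; the real transform would build a TableRow.
        Optional<ZoneId> zone = ENGINE.query(35.68, 139.69);
        c.output(zone.map(ZoneId::getId).orElse("unknown"));
    }
}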

Error when binding two instances Janusgraph on same graph

So I'm trying to deploy 2 instances of JanusGraph (v0.3.1) to insert data into the same keyspace of a ScyllaDB backend. To do that I deploy 2 JanusGraph containers with Docker. The first one starts without errors and creates the keyspace in my ScyllaDB, but the second one displays some errors.
My JanusGraph containers run on one cluster while my backend runs on another cluster. Actually, I tried with only one container, restarting it when my Scylla keyspace was already created, and I get the same problem.
I also found a script, clean.groovy, that is supposed to force-shutdown the open graph instances. But nothing works...
5316 [main] INFO org.janusgraph.diskstorage.Backend - Initiated backend operations thread pool of size 8
5648 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [/janusgraph-config/janusgraph.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:82)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:70)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:104)
at org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager.lambda$new$0(DefaultGraphManager.java:57)
at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
at org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager.<init>(DefaultGraphManager.java:55)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:80)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:120)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:84)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:343)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:78)
... 13 more
Caused by: org.janusgraph.core.JanusGraphException: A JanusGraph graph with the same instance id [0a0a01657-node11] is already open. Might required forced shutdown.
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:165)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)
... 18 more
5651 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
5800 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and preparing GremlinScriptEngines instances.
7837 [gremlin-server-exec-1] ERROR org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager - Could not create GremlinScriptEngine for gremlin-groovy
java.lang.IllegalStateException: javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.lambda$createGremlinScriptEngine$16(DefaultGremlinScriptEngineManager.java:464)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.createGremlinScriptEngine(DefaultGremlinScriptEngineManager.java:450)
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.getEngineByName(DefaultGremlinScriptEngineManager.java:219)
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.lambda$getEngineByName$0(CachedGremlinScriptEngineManager.java:57)
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.getEngineByName(CachedGremlinScriptEngineManager.java:57)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:263)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:397)
at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:264)
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.lambda$createGremlinScriptEngine$16(DefaultGremlinScriptEngineManager.java:460)
... 24 more
Caused by: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:713)
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:395)
... 26 more
Caused by: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:66)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:310)
at Script1.run(Script1.groovy:16)
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:690)
... 27 more
7841 [main] WARN org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Could not initialize gremlin-groovy GremlinScriptEngine as init script could not be evaluated
java.util.concurrent.CompletionException: java.lang.IllegalArgumentException: gremlin-groovy is not an available GremlinScriptEngine
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.lambda$new$4(ServerGremlinExecutor.java:141)
at java.util.LinkedHashMap$LinkedKeySet.forEach(LinkedHashMap.java:559)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:136)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:120)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:84)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:343)
Caused by: java.lang.IllegalArgumentException: gremlin-groovy is not an available GremlinScriptEngine
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.registerLookUpInfo(CachedGremlinScriptEngineManager.java:95)
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.getEngineByName(CachedGremlinScriptEngineManager.java:58)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:263)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I think all those errors are caused by this:
Caused by: org.janusgraph.core.JanusGraphException: A JanusGraph graph with the same instance id [0a0a01657-node11] is already open. Might required forced shutdown.
And this is the clean.groovy script:
graph = JanusGraphFactory.open('janusgraph.properties')
mgmt = graph.openManagement()
instances = mgmt.getOpenInstances()
instances.iterator().findAll {
    !it.contains('current')
}.each {
    mgmt.forceCloseInstance(it)
}
mgmt.commit()
So, has anybody solved this problem with the graph?
Thanks in advance.
I finally got the answer! You can use ConfiguredGraphFactory, but it failed for me. So I started the first container normally, and for the second one I added another setting to the janusgraph.properties file:
graph.replace-instance-if-exists=true
And it works perfectly!
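For context, here is a sketch of what the second container's janusgraph.properties could look like with that setting, assuming the CQL backend; hostnames and the keyspace name are placeholders:
# Placeholder connection settings for the ScyllaDB-backed keyspace.
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.hostname=scylla-node1
storage.cql.keyspace=janusgraph
# Reuse a stale registration with the same instance id instead of failing
# with "A JanusGraph graph with the same instance id ... is already open".
graph.replace-instance-if-exists=true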

Google Dataflow batch job reading from Pub/Sub

I'm having trouble running a Dataflow pipeline in batch mode using PubsubIO.Read as the source. I have tried using both .maxNumRecords and .maxReadTime to make it a bounded collection. However, my job hangs on the read step and I get SocketTimeoutExceptions. Only when I publish a message to the topic while the job is running does it process queued messages. I was under the impression that if I specified a subscription for the pipeline, it would process any messages it had missed on startup. I'm using the Dataflow SDK 1.9 for Java. Below is a snippet of my code.
AuditPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(AuditPipelineOptions.class);
options.setRunner(TemplatingDataflowPipelineRunner.class);
Pipeline p = Pipeline.create(options);
p.apply(PubsubIO.Read.subscription(options.getPubSubSubscription())
        .withCoder(StringUtf8Coder.of())
        .maxReadTime(new Duration(1000 * 60)))
 .apply(ParDo.of(new CreateEntitiesFn(options.getNamespace(), options.getIgnoreKeys())))
 .apply(DatastoreIO.v1().write().withProjectId(options.getProject()));
p.run();
Here is the full exception:
exception thrown while executing request
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:37)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.dataflow.sdk.util.PubsubJsonClient.pull(PubsubJsonClient.java:171)
at com.google.cloud.dataflow.sdk.io.PubsubIO$Read$Bound$PubsubReader.processElement(PubsubIO.java:886)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:188)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:55)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:221)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:182)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:69)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:285)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:221)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:192)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:172)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:159)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Sample Job ID: 2017-05-17_12_07_37-2442852182886580802
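For reference, a sketch of what a read bounded by both limits would look like in SDK 1.9; the numbers are illustrative, and reading should stop when whichever bound is hit first:
p.apply(PubsubIO.Read
        .subscription(options.getPubSubSubscription())
        .withCoder(StringUtf8Coder.of())
        .maxNumRecords(10000)
        .maxReadTime(Duration.standardMinutes(1)))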

Neo4J HA with Neo4J Spatial

So I have just set up an HA environment where I have a master server and instances of an application using Neo4j embedded talking to that cluster. Everything seems to work if the state of both databases is the same.
However, if I delete all data from my slave instance and have it join the cluster, I expect the data from the cluster to propagate into the slave instance. Instead I get errors from what appears to be Neo4j Spatial. I have Neo4j Spatial in my application, and the server plugin installed on the master server side.
An example of the stack trace I get:
2015-10-19 15:10:27.096+0000 ERROR [org.neo4j]: Exception when stopping org.neo4j.kernel.lifecycle.Lifecycle$Delegate#ae93556 org.neo4j.gis.spatial.indexprovider.SpatialIndexImplementation.stop()V
java.lang.AbstractMethodError: org.neo4j.gis.spatial.indexprovider.SpatialIndexImplementation.stop()V
at org.neo4j.kernel.lifecycle.Lifecycles$1.stop(Lifecycles.java:55)
at org.neo4j.kernel.lifecycle.Lifecycle$Delegate.stop(Lifecycle.java:75)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
at org.neo4j.kernel.NeoStoreDataSource.stop(NeoStoreDataSource.java:1160)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.stop(DataSourceManager.java:137)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.stopServicesAndHandleBranchedStore(SwitchToSlave.java:521)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.checkDataConsistency(SwitchToSlave.java:357)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.executeConsistencyChecks(SwitchToSlave.java:316)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.switchToSlave(SwitchToSlave.java:219)
at org.neo4j.kernel.ha.cluster.HighAvailabilityModeSwitcher$2.run(HighAvailabilityModeSwitcher.java:328)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
at org.neo4j.helpers.NamedThreadFactory$2.run(NamedThreadFactory.java:99)
2015-10-19 15:10:27.102+0000 ERROR [org.neo4j]: Lifecycle exception Failed to transition component 'org.neo4j.kernel.lifecycle.Lifecycle$Delegate#ae93556' from STOPPED to SHUTTING_DOWN. Please see attached cause exception
org.neo4j.kernel.lifecycle.LifecycleException: Failed to transition component 'org.neo4j.kernel.lifecycle.Lifecycle$Delegate#ae93556' from STOPPED to SHUTTING_DOWN. Please see attached cause exception
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:559)
at org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:200)
at org.neo4j.kernel.NeoStoreDataSource.stop(NeoStoreDataSource.java:1160)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.stop(DataSourceManager.java:137)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.stopServicesAndHandleBranchedStore(SwitchToSlave.java:521)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.checkDataConsistency(SwitchToSlave.java:357)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.executeConsistencyChecks(SwitchToSlave.java:316)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.switchToSlave(SwitchToSlave.java:219)
at org.neo4j.kernel.ha.cluster.HighAvailabilityModeSwitcher$2.run(HighAvailabilityModeSwitcher.java:328)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
at org.neo4j.helpers.NamedThreadFactory$2.run(NamedThreadFactory.java:99)
Caused by: java.lang.AbstractMethodError: org.neo4j.gis.spatial.indexprovider.SpatialIndexImplementation.shutdown()V
at org.neo4j.kernel.lifecycle.Lifecycles$1.shutdown(Lifecycles.java:64)
at org.neo4j.kernel.lifecycle.Lifecycle$Delegate.shutdown(Lifecycle.java:81)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:555)
... 18 more
2015-10-19 15:10:27.103+0000 ERROR [org.neo4j]: Chained lifecycle exception Component 'org.neo4j.kernel.lifecycle.Lifecycle$Delegate#ae93556' failed to stop. Please see attached cause exception.
org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.lifecycle.Lifecycle$Delegate#ae93556' failed to stop. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:532)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
at org.neo4j.kernel.NeoStoreDataSource.stop(NeoStoreDataSource.java:1160)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.stop(DataSourceManager.java:137)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.stopServicesAndHandleBranchedStore(SwitchToSlave.java:521)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.checkDataConsistency(SwitchToSlave.java:357)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.executeConsistencyChecks(SwitchToSlave.java:316)
at org.neo4j.kernel.ha.cluster.SwitchToSlave.switchToSlave(SwitchToSlave.java:219)
at org.neo4j.kernel.ha.cluster.HighAvailabilityModeSwitcher$2.run(HighAvailabilityModeSwitcher.java:328)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
at org.neo4j.helpers.NamedThreadFactory$2.run(NamedThreadFactory.java:99)
Caused by: java.lang.AbstractMethodError: org.neo4j.gis.spatial.indexprovider.SpatialIndexImplementation.stop()V
at org.neo4j.kernel.lifecycle.Lifecycles$1.stop(Lifecycles.java:55)
at org.neo4j.kernel.lifecycle.Lifecycle$Delegate.stop(Lifecycle.java:75)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
... 19 more
Does Neo4j Spatial support replication across instances? Or more specifically restoring the spatial index to a new empty instance that joins the cluster for the first time?
Updating Neo4j Spatial to version 0.15-neo4j-2.2.6 fixes this issue, so you need to be using Neo4j 2.2.6 in order to have Spatial indexes replicate properly.
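If you pull Spatial into your embedded application via Maven, the matching dependency would look roughly like this; the coordinates are from memory and the artifact lives in the Neo4j Maven repository, so verify them:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-spatial</artifactId>
    <version>0.15-neo4j-2.2.6</version>
</dependency>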

CopyFieldMutation.java ERROR cannot get comparator 1

I'm running DSE 4.5.1 on a 3-node cluster in AWS with RF=3. One of the nodes gets this error in the system.log; see the (long) stack trace below. I would like to understand what the error means or implies. Is this a cause for concern? Lastly, how do I resolve the error?
[cqlsh 4.1.1 | Cassandra 2.0.8.39 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
ERROR [Native-Transport-Requests:34142] 2015-05-12 14:15:33,029 CopyFieldMutation.java (line 166) Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
java.lang.RuntimeException: Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:100)
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.buildKeyValueIterator(CopyFieldMutation.java:275)
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.addPrimaryKeyFieldsToMutation(CopyFieldMutation.java:260)
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.createMutation(CopyFieldMutation.java:110)
at com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger.augment(SolrAugmentationTrigger.java:76)
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:190)
at org.apache.cassandra.triggers.TriggerExecutor.execute(TriggerExecutor.java:94)
at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:532)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:546)
at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:530)
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
at com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
... 23 more
WARN [Native-Transport-Requests:34142] 2015-05-12 14:15:33,029 SolrAugmentationTrigger.java (line 107) Error generating additional mutations for Solr copy/dynamic fields. Update will be applied without them
org.apache.cassandra.exceptions.InvalidRequestException: Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
at com.datastax.bdp.cassandra.cql3.CopyFieldMutation.createMutation(CopyFieldMutation.java:167)
at com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger.augment(SolrAugmentationTrigger.java:76)
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:190)
at org.apache.cassandra.triggers.TriggerExecutor.execute(TriggerExecutor.java:94)
at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:532)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:546)
at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:530)
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
