Neo4j Batch Inserter Unknown Error - neo4j

I am attempting to use the java batch-inserter for a neo4j database, and I get the following error message:
>java -server -Xmx4G -jar target/batch-import-jar-with-dependencies.jar target/db nodes6.csv, rels5.csv
Using Existing Configuration File
Nodes file nodes6.csv, does not exist
Total import time: 0 seconds
Exception in thread "main" org.neo4j.graphdb.NotFoundException: id=4621
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.getNodeRecord(BatchInserterImpl.java:915)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.createRelationship(BatchInserterImpl.java:468)
at org.neo4j.batchimport.Importer.importRelationships(Importer.java:108)
at org.neo4j.batchimport.Importer.main(Importer.java:63)
Nodes6.csv most certainly exists, so this is... confusing.

There are two errors here. The first is that the importer cannot locate your nodes6.csv file; it looks like you entered the file name with a trailing comma.
What you have:
java -server -Xmx4G -jar target/batch-import-jar-with-dependencies.jar target/db nodes6.csv, rels5.csv
It should be:
java -server -Xmx4G -jar target/batch-import-jar-with-dependencies.jar target/db nodes6.csv rels5.csv
The second error, the NotFoundException for id=4621, follows from the first: since no nodes were imported, the relationships in rels5.csv reference node ids that do not exist yet.

Related

Jenkins is spawning a lot of daemon processes and server crashes

I've recently installed Jenkins on a cheap VM on Azure. The specs are very low, since I use this server for testing the setup: 1 vCPU & 1 GB RAM. There will usually be only 1 build at a time, with a maximum of 3 on very rare occasions.
During a Jenkins build my server quite frequently crashes completely and stays unresponsive for roughly 10-15 minutes before it can be used again.
I checked the processes on the server and found a large number of identical Jenkins daemon processes. The full command line for each of them is:
/etc/alternatives/java -Dcom.sun.akuma.Daemon=daemonized -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --daemon --httpPort=8080 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
It is the same for every single one of those daemons, not a single parameter is different.
Is this normal behavior, and is this the reason why my server is crashing? Or are my specs simply too low to run Jenkins?
Thanks in advance!
EDIT:
My jenkins.log file looks pretty normal except for one NullPointerException that keeps coming back up:
2020-01-08 12:43:17.702+0000 [id=148] WARNING h.ExpressionFactory2$JexlExpression#evaluate: Caught exception evaluating: h.filterDescriptors(it,attrs.descriptors) in /configure. Reason: java.lang.NullPointerException: Descriptor list is null for context 'class hudson.model.Hudson' in thread 'Handling GET /configure from 85.154.65.124 : qtp2085857771-148 Jenkins/configure.jelly GlobalLibraries/config.jelly LibraryConfiguration/config.jelly SCMRetriever/DescriptorImpl/config.jelly MultiSCM/DescriptorImpl/config.jelly'
java.lang.NullPointerException: Descriptor list is null for context 'class hudson.model.Hudson' in thread 'Handling GET /configure from 85.154.65.124 : qtp2085857771-148 Jenkins/configure.jelly GlobalLibraries/config.jelly LibraryConfiguration/config.jelly SCMRetriever/DescriptorImpl/config.jelly MultiSCM/DescriptorImpl/config.jelly'
at hudson.model.DescriptorVisibilityFilter.apply(DescriptorVisibilityFilter.java:73)
...

vertx clustered mode hazelcast log config on linux

Using Eclipse on Windows, a vertx Verticle with a misconfigured cluster.xml shows the following error in the Eclipse console:
11:46:18.536 [hz._hzInstance_1_dev.generic-operation.thread-0] ERROR com.hazelcast.cluster - [192.168.25.8]:5701 [dev] [3.5.2] Node could not join cluster. A Configuration mismatch was detected: Incompatible joiners! expected: multicast, found: tcp-ip Node is going to shutdown now!
11:46:22.529 [vert.x-worker-thread-0] ERROR com.hazelcast.cluster.impl.TcpIpJoiner - [192.168.25.8]:5701 [dev] [3.5.2] com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
This is fine; I know to reconfigure the cluster for multicast. The problem is that when I deploy the same code and configuration to Linux and run it as a fat jar, the log shows neither the hz thread nor the vertx worker thread entries. Instead it only shows the verticle logs:
2015-11-05 12:03:09,329 Starting clustered Vertx
2015-11-05 12:03:13,549 ERROR: VerticleService failed to start: java.lang.NullPointerException
So when I run on Linux, the log line that would tell me there is a misconfiguration is not showing. I am missing something in the vertx / Maven log config, but I don't know what. The Maven properties are as follows:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<exec.mainClass>main.java.eiger.isct.service.Verticle</exec.mainClass>
<log4j.configurationFile>log4j2.xml</log4j.configurationFile>
<hazelcast.logging.type>log4j2</hazelcast.logging.type>
</properties>
and I start the fat jar using:
java -Dlog4j.configuration=log4j2.xml -jar Verticle-0.5-SNAPSHOT-fat.jar
How can I get the hz thread and vertx thread to log on Linux?
I've tried adding the vertx-default-jul-logging.properties file below to the Maven resources directory, but no luck:
com.hazelcast.level=ALL
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
THANKS for your comment.
Vertx started logging after I added
-Djava.util.logging.config.file=../logging.properties
to the java start command, together with a logging.properties like the following (and this is a nice config for lower-level stuff):
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS:%1$tL %4$s %2$s %5$s%6$s%n
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.pattern=../logs/vertx.log
.level=ALL
io.vertx.level=ALL
com.hazelcast.level=ALL
io.netty.util.internal.PlatformDependent.level=ALL
and vertx is now logging to ../logs/vertx.log on Linux.
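For anyone sanity-checking which JUL configuration the fat jar actually picked up, a small throwaway sketch like the one below can help; the class name and test messages are purely illustrative, and it assumes the program is started with the same -Djava.util.logging.config.file flag as the verticle:
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulConfigCheck {
    public static void main(String[] args) {
        // java.util.logging reads this system property at JVM startup; when it is
        // unset, the JDK's default logging.properties applies and FINE/ALL output
        // from io.vertx and com.hazelcast is silently dropped.
        System.out.println("java.util.logging.config.file = "
                + System.getProperty("java.util.logging.config.file"));

        // These messages only reach the console/file handlers when the levels in
        // logging.properties are ALL (or at least FINE), as in the config above.
        Logger.getLogger("io.vertx").log(Level.FINE, "vertx fine-level test message");
        Logger.getLogger("com.hazelcast").log(Level.FINE, "hazelcast fine-level test message");
    }
}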

Error when using export-graphml in Neo4j 2.2

I am trying to use the export-graphml function in Neo4j 2.2. I have downloaded the neo4j shell tools and extracted them into the lib directory. I am able to export the entire database as a graphml file. However, if I try to export a subset using a query, I receive the following error:
Error occurred in server thread; nested exception is:
java.lang.NoSuchMethodError: org.neo4j.cypher.export.CypherResultSubGraph.from(Lorg/neo4j/cypher/javacompat/ExecutionResult;Lorg/neo4j/graphdb/GraphDatabaseService;Z)Lorg/neo4j/cypher/export/SubGraph;
The statement I used is:
export-graphml -o /path/to/file/out.graphml match (n:Person)-[r:RELATIONSHIP]-() WHERE n.id = 12345 return n, r
I have tried different variations with the different options (-r, -t) and none of them work.

What does this Neo4j batch loader error number mean

I've been using the Neo4j batch loader for a while now and tonight started running into issues building my graph from a fresh database export. Running it yields the following:
> java -server -Xmx4G -jar ~/Dev/github.com/jexp/batch-import/target/batch-import-jar-with-dependencies.jar ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Usage: Importer data/dir nodes.csv relationships.csv [node_index node-index-name fulltext|exact nodes_index.csv rel_index rel-index-name fulltext|exact rels_index.csv ....]
Using: Importer ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Using Existing Configuration File
........................
Importing 2412268 Nodes took 4 seconds
.....................
Total import time: 9 seconds
Exception in thread "main" org.neo4j.graphdb.NotFoundException: id=2412269
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.getNodeRecord(BatchInserterImpl.java:917)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.createRelationship(BatchInserterImpl.java:471)
at org.neo4j.batchimport.Importer.importRelationships(Importer.java:136)
at org.neo4j.batchimport.Importer.doImport(Importer.java:214)
at org.neo4j.batchimport.Importer.main(Importer.java:78)
I was able to successfully run the batch loader for the nodes.csv and rels.csv that are included in its own repository, so I'm thinking that the issue is somewhere in my rels.csv file. However, it's a pretty big file and I would like to know what id=2412269 means, as it seems like the best starting point for diagnosing the failure.
Any ideas?
_howard
This means that in the rels.csv file you are trying to create a relationship for a node referenced by id=2412269, but no such node has been created from your nodes.csv file.
After working through this with the author of the importer, it turned out that I had single, unescaped quotes in my nodes.csv file. As a result, the rels.csv record was pointing to a node that could never be created from nodes.csv. Unfortunately, the error reported on the console was not exactly the error causing the problem.
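For context, this NotFoundException comes straight out of the BatchInserter API that the importer wraps: creating a relationship whose start or end node id was never created fails in exactly this way. Below is a minimal sketch, assuming the Neo4j 1.9/2.x batch-insert API used by this importer; the store path, ids, and relationship type are just examples:
import java.util.Collections;
import java.util.Map;

import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class MissingNodeExample {
    public static void main(String[] args) {
        BatchInserter inserter = BatchInserters.inserter("target/example.db");
        try {
            Map<String, Object> noProps = Collections.emptyMap();
            long a = inserter.createNode(noProps); // id 0
            long b = inserter.createNode(noProps); // id 1

            // Works: both endpoints exist.
            inserter.createRelationship(a, b,
                    DynamicRelationshipType.withName("CONNECTS"), noProps);

            // Throws org.neo4j.graphdb.NotFoundException (id=2412269), because no
            // node with that id was ever created -- the same failure mode as a
            // rels.csv row pointing at a node that was skipped (e.g. due to
            // unescaped quotes) in nodes.csv.
            inserter.createRelationship(a, 2412269L,
                    DynamicRelationshipType.withName("CONNECTS"), noProps);
        } finally {
            inserter.shutdown();
        }
    }
}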

launch cassandra-cli error

I get the following errors when I try to run cassandra-cli.
manuzhang#manuzhang-U24E:~/git/cassandra-trunk$ bin/cassandra-cli -h localhost -p 9160
Column Family assumptions read from /home/manuzhang/.cassandra-cli/assumptions.json
Connected to: "Test Cluster" on localhost/9160
Welcome to Cassandra CLI version Unknown
Exception in thread "main" java.lang.AssertionError
at org.apache.cassandra.cli.CliClient.loadHelp(CliClient.java:178)
at org.apache.cassandra.cli.CliClient.getHelp(CliClient.java:171)
at org.apache.cassandra.cli.CliClient.printBanner(CliClient.java:197)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:312)
That line is:
final InputStream is = CliClient.class.getClassLoader().getResourceAsStream("org/apache/cassandra/cli/CliHelp.yaml");
assert is != null;
The file is actually located in $CASSANDRA_HOME/src/resources/org/apache/cassandra/cli.
I have run it successfully several times before.
Well, solved by running ant build in the terminal.
I think it's because I'm building from source and I modify some code from time to time.
But just adding a few lines of comments does not reproduce the problem.
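Since the assert is just guarding a classpath lookup, a quick way to see whether a given build actually packaged CliHelp.yaml is to repeat that lookup by hand. A throwaway sketch, run with the same classpath the CLI script uses (the class name is hypothetical):
import java.io.InputStream;

public class ResourceCheck {
    public static void main(String[] args) {
        // Same lookup CliClient.loadHelp() performs; null means the resource was
        // not copied into the build output, which triggers the AssertionError.
        InputStream is = ResourceCheck.class.getClassLoader()
                .getResourceAsStream("org/apache/cassandra/cli/CliHelp.yaml");
        System.out.println(is == null
                ? "CliHelp.yaml NOT found on the classpath"
                : "CliHelp.yaml found on the classpath");
    }
}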
