Neo4j failed to start on my machine

I installed Neo4j with brew install neo4j. When I try to start it from the terminal with neo4j start, it keeps loading forever, as shown below.
$ neo4j start
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow -Dneo4j.ext.udc.source=homebrew -Djava.awt.headless=true
Starting Neo4j Server...WARNING: not changing user
process [9320]... waiting for server to be ready ........................................................................... ........................................................................... ........................................................................... ......................................................................
Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
I already checked some logs from Neo4j, but they show no errors.
Log messages from /usr/local/Cellar/neo4j/2.1.6/libexec/data/graph.db/messages.log:
2015-05-21 06:46:44.248+0000 INFO [o.n.k.i.n.s.StoreFactory]: [/usr/local/Cellar/neo4j/2.1.6/libexec/data/graph.db/neostore.schemastore.db] brickCount=0 brickSize=0b mappedMem=0b (storeSize=64b)
2015-05-21 06:46:44.248+0000 INFO [o.n.k.i.n.s.StoreFactory]: [/usr/local/Cellar/neo4j/2.1.6/libexec/data/graph.db/neostore.relationshipgroupstore.db] brickCount=0 brickSize=0b mappedMem=0b (storeSize=25b)
2015-05-21 06:46:44.249+0000 INFO [o.n.k.i.n.s.StoreFactory]: [/usr/local/Cellar/neo4j/2.1.6/libexec/data/graph.db/neostore] brickCount=0 brickSize=0b mappedMem=0b (storeSize=81b)
2015-05-21 06:46:44.310+0000 INFO [o.n.k.a.i.i.LuceneLabelScanStore]: No lucene scan store index found, this might just be first use. Preparing to rebuild.
2015-05-21 06:46:44.333+0000 INFO [o.n.k.a.i.i.LuceneLabelScanStore]: No lucene scan store index found, this might just be first use. Preparing to rebuild.
2015-05-21 06:46:44.414+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: Opened logical log [/usr/local/Cellar/neo4j/2.1.6/libexec/data/graph.db/nioneo_logical.log.1] version=0, lastTxId=1 (clean)
2015-05-21 06:46:44.417+0000 INFO [o.n.k.a.i.i.LuceneLabelScanStore]: Rebuilding lucene scan store, this may take a while
2015-05-21 06:46:44.418+0000 INFO [o.n.k.a.i.i.LuceneLabelScanStore]: Lucene scan store rebuilt (roughly -1 nodes)
2015-05-21 06:46:44.421+0000 INFO [o.n.k.i.t.TxManager]: TM new log: tm_tx_log.1
2015-05-21 06:46:44.425+0000 INFO [o.n.k.i.t.KernelHealth]: Kernel health set to OK
Log messages from /usr/local/Cellar/neo4j/2.1.6/libexec/data/log/console.log:
2015-05-21 06:46:43.878+0000 INFO [API] Setting startup timeout to: 120000ms based on -1
2015-05-21 06:57:03.825+0000 INFO [API] Successfully shutdown Neo4j Server.
I tried reinstalling Neo4j with brew, but it doesn't help.
I'm using a Mac.
Java version: /Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home/bin/java

Can you try running bin/neo4j console to let it finish the startup? It is probably creating or updating an index or an internal structure, which takes longer than the 120-second timeout.
See:
2015-05-21 06:46:44.417+0000 INFO [o.n.k.a.i.i.LuceneLabelScanStore]: Rebuilding lucene scan store, this may take a while
2015-05-21 06:46:44.418+0000 INFO [o.n.k.a.i.i.LuceneLabelScanStore]: Lucene scan store rebuilt (roughly -1 nodes)
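If that rebuild really is what blows past the timeout, here is a minimal sketch of the two usual workarounds, assuming the Homebrew 2.1.6 layout from the paths above (org.neo4j.server.startup_timeout is the standard 2.x server setting; setting it to 0 makes the wrapper wait indefinitely):

cd /usr/local/Cellar/neo4j/2.1.6/libexec
bin/neo4j console    # run in the foreground: no wrapper timeout, and any error prints straight to the terminal

# or, in conf/neo4j-server.properties, raise/disable the startup timeout used by neo4j start
org.neo4j.server.startup_timeout=0

Running in console mode at least once is the quicker diagnostic, since it either finishes the rebuild or shows the real failure.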

Related

Where can I find the default docker ulimit settings?

I have been trying to understand an issue I've had when running the roribio16/alpine-sqs Docker image on one of my machines. Whenever I try to run the image without specifying any other settings, I get the following output:
[xxxx@yyyy ~]$ docker run roribio16/alpine-sqs
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/elasticmq.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/insight.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/sqs-init.conf" during parsing
2021-05-29 15:48:41,216 INFO Set uid to user 0 succeeded
2021-05-29 15:48:41,222 INFO RPC interface 'supervisor' initialized
2021-05-29 15:48:41,222 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2021-05-29 15:48:41,222 INFO supervisord started with pid 1
2021-05-29 15:48:42,225 INFO spawned: 'sqs-init' with pid 9
2021-05-29 15:48:42,229 INFO spawned: 'elasticmq' with pid 10
2021-05-29 15:48:42,230 INFO spawned: 'insight' with pid 11
cp: can't stat '/opt/custom/*.conf': No such file or directory
> sqs-insight@0.3.0 start /opt/sqs-insight
> node index.js
15:48:42.605 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
Loading config file from "/opt/sqs-insight/lib/../config/config_local.json"
15:48:42.929 [elasticmq-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
Unable to load queues for undefined
Config contains 0 queues.
library initialization failed - unable to allocate file descriptor table - out of memorylistening on port 9325
2021-05-29 15:48:43,233 INFO success: sqs-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,233 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO success: insight entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO exited: sqs-init (exit status 0; expected)
2021-05-29 15:48:44,318 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:45,322 INFO spawned: 'elasticmq' with pid 67
15:48:45.743 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:46.044 [elasticmq-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory2021-05-29 15:48:47,223 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:47,389 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:48,393 INFO spawned: 'elasticmq' with pid 89
15:48:48.766 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:49.066 [elasticmq-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory^C2021-05-29 15:48:49,559 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:49,559 WARN received SIGINT indicating exit request
2021-05-29 15:48:49,559 INFO waiting for insight, elasticmq to die
2021-05-29 15:48:49,566 INFO stopped: insight (terminated by SIGTERM)
2021-05-29 15:48:50,431 INFO stopped: elasticmq (terminated by SIGABRT (core dumped))
With a bit of googling I found a post where somebody had the same issue when running some other image; they got it running by setting ulimits on the container, which also worked for me (docker run --ulimit nofile=122880:122880 roribio16/alpine-sqs).
I checked the ulimits set inside the container when I didn't use this configuration:
docker exec -it ca bash
$ ulimit -a
and found that the nofile setting was ridiculously high, which I assume is what causes the container to run out of memory if too many files are opened simultaneously. I don't have a particularly good understanding of how this works though, so I would appreciate any clarification on that topic as well.
Anyway, the point of that ramble is that I want to find out where the default Docker container ulimits are set, as I don't understand why they are so high on the machine I am using. I have another machine that does not have this problem.
I can find lots of ways to change the default limits, but there does not seem to be much information about where these limits get set in the first place. According to the Docker documentation, if custom values are not set, the ulimits should be inherited from my system, but as far as I can tell my system's nofile settings are much lower than what I'm seeing in the container.
(Both machines run Manjaro Linux; the one that doesn't have this issue runs XFCE and the one that does runs KDE.)
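Not an answer from this thread, just a hedged pointer based on how dockerd usually behaves on systemd distros: containers inherit the daemon's own limits (docker.service commonly sets LimitNOFILE=infinity, which can resolve to a huge value) unless dockerd is given explicit defaults, so those are the two places to look. A sketch, with illustrative values:

# what limit the docker daemon itself runs with, and where the unit sets it
systemctl show -p LimitNOFILE docker
systemctl cat docker | grep -i limit

# pin a saner default for all containers in /etc/docker/daemon.json, then restart the daemon
{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Soft": 65536, "Hard": 65536 }
  }
}

sudo systemctl restart docker

That would also explain why one machine differs from the other: a different docker.service (or drop-in) shipping a different LimitNOFILE.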

Unable to start neo4j with systemctl: 'Failed to load from plugin jar'

I've been trying to restart Neo4j after adding new data on an EC2 instance. I stopped the Neo4j instance and then called systemctl start neo4j, but when I call cypher-shell it says Connection refused, and connecting to the browser port doesn't work anymore.
At first I assumed it was a heap space problem, since debug.log mentioned a memory issue. I adjusted the heap space and cache settings in neo4j.conf as recommended by neo4j-admin memrec, but Neo4j still won't start.
Then I assumed it was because my APOC package was outdated. My Neo4j version is 3.5.6, but APOC was 3.5.0.3. I downloaded the latest 3.5.0.4 version, but Neo4j still won't start.
Finally I tried chmod 777 on every file in the data/database and plugin directories, and on the directories themselves, but Neo4j still won't start.
What's strange is that when I try neo4j console after any of these attempts, both cypher-shell and the Neo4j browser port work just fine. However, I would obviously prefer to be able to launch Neo4j with systemctl.
Right now the only hint of error I can find in debug.log is the following:
2019-06-19 21:19:55.508+0000 INFO [o.n.i.d.DiagnosticsManager] Storage summary:
2019-06-19 21:19:55.508+0000 INFO [o.n.i.d.DiagnosticsManager] Total size of store: 3.07 GB
2019-06-19 21:19:55.509+0000 INFO [o.n.i.d.DiagnosticsManager] Total size of mapped files: 3.07 GB
2019-06-19 21:19:55.509+0000 INFO [o.n.i.d.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2019-06-19 21:19:55.509+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Fulfilling of requirement 'Database available' makes database available.
2019-06-19 21:19:55.509+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Database is ready.
2019-06-19 21:19:55.568+0000 INFO [o.n.k.i.DatabaseHealth] Database health set to OK
2019-06-19 21:19:56.198+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.s3.S3URLConnection` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: com/amazonaws/ClientConfiguration
2019-06-19 21:19:56.199+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.s3.S3Aws` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: com/amazonaws/auth/AWSCredentials
2019-06-19 21:19:56.200+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.s3.S3Aws$1` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: com/amazonaws/services/s3/model/S3ObjectInputStream
2019-06-19 21:19:56.207+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.hdfs.HDFSUtils$1` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: org/apache/hadoop/fs/FSDataInputStream
2019-06-19 21:19:56.208+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.hdfs.HDFSUtils` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: org/apache/hadoop/fs/FSDataOutputStream
...
...
...
2019-06-19 21:20:00.678+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Shutting down database.
2019-06-19 21:20:00.679+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Shutdown started
2019-06-19 21:20:00.679+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Database is unavailable.
2019-06-19 21:20:00.684+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Checkpoint triggered by "Database shutdown" @ txId: 1 checkpoint started...
2019-06-19 21:20:00.704+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Checkpoint triggered by "Database shutdown" @ txId: 1 checkpoint completed in 20ms
2019-06-19 21:20:00.705+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] No log version pruned, last checkpoint was made in version 0
2019-06-19 21:20:00.725+0000 INFO [o.n.i.d.DiagnosticsManager] --- STOPPING diagnostics START ---
2019-06-19 21:20:00.725+0000 INFO [o.n.i.d.DiagnosticsManager] --- STOPPING diagnostics END ---
2019-06-19 21:20:00.725+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Shutdown started
2019-06-19 21:20:05.875+0000 INFO [o.n.g.f.m.e.CommunityEditionModule] No locking implementation specified, defaulting to 'community'
2019-06-19 21:20:06.080+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Creating database.
2019-06-19 21:20:06.154+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Requirement `Database available` makes database unavailable.
2019-06-19 21:20:06.156+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Database is unavailable.
2019-06-19 21:20:06.183+0000 INFO [o.n.i.d.DiagnosticsManager] --- INITIALIZED diagnostics START ---
I think the warning isn't an issue, since it's just a warning and not an error or exception. Also it seems that the database just shuts down automatically, and then restarts, creating an infinite loop. This loop does not happen when I call neo4j console (all the warnings still exist in the logs). All my ports are default.
Any clue why this is happening? I've never encountered this error when I previously launched neo4j on this instance.
If it works with neo4j console but not with systemctl, you should check the permissions of the Neo4j folders.
I'm pretty sure you have a permissions problem there: systemctl does not run Neo4j as the same user you use in the console.
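A hedged sketch of how to check that, assuming the usual package layout (/var/lib/neo4j owned by a neo4j service user); adjust the paths to your install:

# which user the unit runs as, and what it logged when it died
systemctl cat neo4j | grep -i '^user'
journalctl -u neo4j --since "1 hour ago"

# compare with who owns the data and plugin directories
ls -l /var/lib/neo4j/data/databases /var/lib/neo4j/plugins

# if files were created as another user (e.g. by running neo4j console as yourself),
# hand them back to the service user
sudo chown -R neo4j:neo4j /var/lib/neo4j/data /var/lib/neo4j/plugins

The infinite shutdown/restart loop in debug.log fits this: systemd keeps restarting a service that dies early because it cannot write files your console user created.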

Cannot enable AlwaysOn SQL in DSE

I get this error when starting AlwaysOn SQL. I have tried many things, but the result is still the same. Any ideas why?
I'm using 1 cluster with 1 analytics + search datacenter and 2 Ubuntu 16.04 nodes.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,348 ALWAYSON-SQL AlwaysOnSqlRunner.scala:304 - Shutting down AlwaysOn SQL.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,617 ALWAYSON-SQL AlwaysOnSqlRunner.scala:328 - Set status to stopped
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,620 ALWAYSON-SQL AlwaysOnSqlRunner.scala:382 - Reserve port for AlwaysOn SQL
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,621 ALWAYSON-SQL AlwaysOnSqlRunner.scala:375 - Release reserved port
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,622 ALWAYSON-SQL AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
INFO [ForkJoinPool-1-worker-1] 2019-02-14 11:36:04,650 AlwaysOnSqlRunner.scala:740 - dsefs server heartbeat response: pong
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,757 AlwaysOnSqlRunner.scala:704 - Create DseFs directory /var/log/spark/alwayson_sql
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,758 AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
ERROR [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,788 AlwaysOnSqlRunner.scala:722 - Failed to check dsefs directory alwayson_sql
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,788 ALWAYSON-SQL AlwaysOnSqlRunner.scala:247 - ALWAYSON-SQL caused an exception in state RUNNING : com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
I have seen this problem too! It was a permissions problem in DSEFS. To fix it, log in with the root Cassandra user and change the ownership of your AlwaysOn SQL log directory to the AlwaysOn SQL user.
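To illustrate that fix (a sketch only: the path comes from the log above, the credentials are placeholders, and the exact dsefs shell syntax can differ between DSE versions):

# open the DSEFS shell as a superuser
dse -u cassandra -p <password> fs

# inside the dsefs shell: create the AlwaysOn SQL work directory and open up its permissions
ls /
mkdir /var/log/spark/alwayson_sql
chmod 777 /var/log/spark/alwayson_sql

The key point is that these are DSEFS paths, not local filesystem paths, so the change has to be made through the dsefs shell rather than with OS-level chmod.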

Graphaware Framework and UUID not starting on Neo4j GrapheneDB

I am trying to get the GraphAware Framework and UUID module running on a GrapheneDB instance. I have followed the instructions to zip the JAR and neo4j.properties files and uploaded the archive using the GrapheneDB web interface, but UUIDs are not added when I create a new node.
neo4j.properties file
dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
#UIDM becomes the module ID:
com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper
#optional, default is uuid:
com.graphaware.module.UIDM.uuidProperty=uuid
#optional, default is false:
com.graphaware.module.UIDM.stripHyphens=true
#optional, default is all nodes:
#com.graphaware.module.UIDM.node=hasLabel('Label1') || hasLabel('Label2')
#optional, default is no relationships:
#com.graphaware.module.UIDM.relationship=isType('Type1')
com.graphaware.module.UIDM.relationship=com.graphaware.runtime.policy.all.IncludeAllBusinessRelationships
#optional, default is uuidIndex
com.graphaware.module.UIDM.uuidIndex=uuidIndex
#optional, default is uuidRelIndex
com.graphaware.module.UIDM.uuidRelationshipIndex=uuidRelIndex
Log Output
2017-03-02 10:20:40.184+0000 INFO Neo4j Server shutdown initiated by request
2017-03-02 10:20:40.209+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] stopped
2017-03-02 10:20:40.209+0000 INFO Stopping...
2017-03-02 10:20:40.982+0000 INFO Stopped.
2017-03-02 10:20:43.402+0000 INFO Starting...
2017-03-02 10:20:43.820+0000 INFO Bolt enabled on 0.0.0.0:7475.
2017-03-02 10:20:45.153+0000 INFO [c.g.r.b.RuntimeKernelExtension] GraphAware Runtime disabled.
2017-03-02 10:20:48.130+0000 INFO Started.
2017-03-02 10:20:48.343+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] started
2017-03-02 10:20:48.350+0000 INFO Mounted unmanaged extension [com.graphaware.server] at [/graphaware]
2017-03-02 10:20:48.724+0000 INFO Mounting GraphAware Framework at /graphaware
2017-03-02 10:20:48.755+0000 INFO Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:20:52.633+0000 INFO Remote interface available at http://localhost:7474/
Messages.log Extract
2017-03-02 10:33:59.991+0000 INFO [o.n.k.i.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2017-03-02 10:34:01.846+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED START ---
2017-03-02 10:34:02.526+0000 INFO [c.g.s.f.b.GraphAwareBootstrappingFilter] Mounting GraphAware Framework at /graphaware
2017-03-02 10:34:02.547+0000 INFO [c.g.s.f.c.GraphAwareWebContextCreator] Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:34:06.100+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED END ---
It looks like the framework is not being started ("GraphAware Runtime disabled"), even though I have set com.graphaware.runtime.enabled=true in the properties file.
Environment Setup
Neo4j Community Edition 3.1.1
graphaware-server-3.1.0.44
graphaware-uuid-3.1.0.44.13
Thanks
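One thing I would double-check (my assumption, not something established in this thread): Neo4j 3.x no longer reads a separate neo4j.properties file; its settings were merged into neo4j.conf, so these options only take effect if GrapheneDB actually folds the uploaded file into the instance's effective neo4j.conf. The lines that would need to end up there are the same ones listed above:

dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper
com.graphaware.module.UIDM.uuidProperty=uuid
com.graphaware.module.UIDM.stripHyphens=true

If the runtime is still reported as disabled after a restart, the configuration is most likely not reaching the server at all.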

Neo4J server exception - fails to start

The server had been running fine and dandy but suddenly stopped. I tried restarting it, but that didn't help. This is what the log says:
2015-02-18 15:07:58.092+0000 INFO [o.n.k.i.DiagnosticsManager]: --- SHUTDOWN diagnostics END ---
2015-02-18 15:07:58.336+0000 ERROR [o.n.s.CommunityBootstrapper]: Failed to start Neo Server on port [7474]
org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: Error starting org.neo4j.kernel.EmbeddedGraphDatabase, /var/lib/neo4j-community-2.1.4/data/graph.db
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:226) ~[neo4j-server-2.1.4.jar:2.1.4]
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:108) [neo4j-server-2.1.4.jar:2.1.4]
at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:62) [neo4j-server-2.1.4.jar:2.1.4]
Caused by: java.lang.RuntimeException: Error starting org.neo4j.kernel.EmbeddedGraphDatabase, /var/lib/neo4j-community-2.1.4/data/graph.db
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:366) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:59) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.impl.recovery.StoreRecoverer.recover(StoreRecoverer.java:123) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.server.preflight.PerformRecoveryIfNecessary.run(PerformRecoveryIfNecessary.java:65) ~[neo4j-server-2.1.4.jar:2.1.4]
at org.neo4j.server.preflight.PreFlightTasks.run(PreFlightTasks.java:71) ~[neo4j-server-2.1.4.jar:2.1.4]
at org.neo4j.server.AbstractNeoServer.runPreflightTasks(AbstractNeoServer.java:362) ~[neo4j-server-2.1.4.jar:2.1.4]
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:159) ~[neo4j-server-2.1.4.jar:2.1.4]
... 2 common frames omitted
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.StoreLockerLifecycleAdapter@7bd760a1' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:513) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:343) ~[neo4j-kernel-2.1.4.jar:2.1.4]
... 8 common frames omitted
Caused by: org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: /var/lib/neo4j-community-2.1.4/data/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:82) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:44) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:507) ~[neo4j-kernel-2.1.4.jar:2.1.4]
... 10 common frames omitted
Caused by: java.io.IOException: Unable to lock org.neo4j.kernel.impl.nioneo.store.StoreFileChannel@c8925d7
at org.neo4j.kernel.impl.nioneo.store.FileLock.wrapFileChannelLock(FileLock.java:38) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.impl.nioneo.store.FileLock.getOsSpecificFileLock(FileLock.java:93) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.DefaultFileSystemAbstraction.tryLock(DefaultFileSystemAbstraction.java:93) ~[neo4j-kernel-2.1.4.jar:2.1.4]
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:74) ~[neo4j-kernel-2.1.4.jar:2.1.4]
... 12 common frames omitted
UPDATE
I acted on the advice given in the answer, but unfortunately the server still doesn't start. The neo4j start command has been waiting for the server to start for the last 10 minutes:
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
Starting Neo4j Server...WARNING: not changing user
process [29578]... waiting for server to be ready.........
The log file says:
2015-02-18 15:50:09.958+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: [/var/lib/neo4j-community-2.1.4/data/graph.db/nioneo_logical.log.1] entries found=91485 lastEntryPos=5328191
2015-02-18 15:50:09.958+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: Opened logical log [/var/lib/neo4j-community-2.1.4/data/graph.db/nioneo_logical.log.1] version=8, lastTxId=513504 (recovered)
2015-02-18 15:50:09.959+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: XaResourceManager[nioneo_logical.log] sorting 0 xids
2015-02-18 15:50:09.997+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: XaResourceManager[nioneo_logical.log] checkRecoveryComplete 0 xids
2015-02-18 15:50:10.407+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.relationshiptypestore.db.names rebuild id generator, highId=35 defragged count=0
2015-02-18 15:50:10.490+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.labeltokenstore.db.names rebuild id generator, highId=17 defragged count=0
2015-02-18 15:50:10.571+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.labeltokenstore.db rebuild id generator, highId=16 defragged count=0
2015-02-18 15:50:10.653+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.propertystore.db.index.keys rebuild id generator, highId=49 defragged count=0
2015-02-18 15:50:10.776+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.propertystore.db.index rebuild id generator, highId=44 defragged count=0
2015-02-18 15:50:10.819+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.propertystore.db.strings rebuild id generator, highId=41 defragged count=0
2015-02-18 15:50:11.101+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.propertystore.db.arrays rebuild id generator, highId=7954 defragged count=0
2015-02-18 15:50:15.753+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.propertystore.db rebuild id generator, highId=212690 defragged count=0
2015-02-18 15:50:19.113+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.relationshipstore.db rebuild id generator, highId=234591 defragged count=0
2015-02-18 15:50:19.155+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.nodestore.db.labels rebuild id generator, highId=1 defragged count=0
2015-02-18 15:50:20.131+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.nodestore.db rebuild id generator, highId=46820 defragged count=0
2015-02-18 15:50:20.194+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.schemastore.db rebuild id generator, highId=5 defragged count=0
2015-02-18 15:50:20.300+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore.relationshipgroupstore.db rebuild id generator, highId=14508 defragged count=0
2015-02-18 15:50:20.423+0000 INFO [o.n.k.i.n.s.StoreFactory]: /var/lib/neo4j-community-2.1.4/data/graph.db/neostore rebuild id generator, highId=9 defragged count=0
2015-02-18 15:50:20.424+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: XaResourceManager[nioneo_logical.log] recovery completed.
2015-02-18 15:50:20.424+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: Recovery on log [/var/lib/neo4j-community-2.1.4/data/graph.db/nioneo_logical.log.1] completed.
2015-02-18 15:50:20.628+0000 INFO [o.n.k.i.t.TxManager]: TM opening log: /var/lib/neo4j-community-2.1.4/data/graph.db/tm_tx_log.2
2015-02-18 15:50:20.992+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: Non clean shutdown detected on log [/var/lib/neo4j-community-2.1.4/data/graph.db/index/lucene.log.1]. Recovery started ...
2015-02-18 15:50:20.992+0000 INFO [o.n.k.i.t.x.XaLogicalLog]: [/var/lib/neo4j-community-2.1.4/data/graph.db/index/lucene.log.1] logVersion=2 with committed tx=199234
My server previously had around 50K nodes and 300K relationships. Is the server attempting to recover the data, and hence causing the delay in starting?
The error message
org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: /var/lib/neo4j-community-2.1.4/data/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)
is pretty much self-explanatory. Either you don't have the file-system permissions to access your graph.db folder, or (which I think is more probable) there is still another Neo4j process running against the same graph.db directory. You can check for running processes using the usual suspects like jps or ps aux | grep java.
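For example (a sketch; the path is the one from your stack trace, and lsof is an extra check of my own, not something mentioned above):

# any JVMs / Neo4j processes still around?
jps -l
ps aux | grep -i neo4j

# who (if anyone) holds the store lock, and is the directory writable for the user starting Neo4j?
sudo lsof /var/lib/neo4j-community-2.1.4/data/graph.db/store_lock
ls -ld /var/lib/neo4j-community-2.1.4/data/graph.db

If lsof shows a process holding store_lock, stop that process first; if nothing holds it, it is almost certainly a permissions or ownership issue on the directory.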
I had the same problem, with the message Unable to obtain a lock on store lock file (...).
What happened? When I was setting up graph.db acting as sudo, the owner was obviously root/root, so the Neo4j process was not allowed to access it.
The solution that worked for me was to change the owner of the graph.db folder with:
chown -R neo4j.neo4j /path/to/graphdb/graph.db/
