Unable to load webadmin for Neo4j High Availability - neo4j

I have installed 3 instances of Neo4j 1.9.4 on a Linux machine, in 3 different directories: neo4j01, neo4j02, neo4j03.
I have updated the configuration files neo4j.properties and neo4j-server.properties as described in the HA setup tutorial (http://docs.neo4j.org/chunked/milestone/ha-setup-tutorial.html).
When I start the instances one after the other, they come up successfully, but after some time 2 of the 3 Neo4j processes disappear. I noticed this via ps -aef | grep neo4j.
When I checked the console logs, I found the errors below:
2013-11-12 16:37:32.512+0000 INFO [Cluster] Checking store consistency with master
2013-11-12 16:37:33.174+0000 INFO [Cluster] Store is consistent
2013-11-12 16:37:33.176+0000 INFO [Cluster] Catching up with master
2013-11-12 16:37:33.276+0000 INFO [Cluster] Now consistent with master
2013-11-12 16:37:34.442+0000 INFO [Cluster] ServerId 2, successfully moved to slave for master ha://localhost.localdomain:6363?serverId=1
2013-11-12 16:37:34.689+0000 INFO [Cluster] Instance 1 is available as backup at backup://localhost.localdomain:6366
2013-11-12 16:37:34.798+0000 INFO [Cluster] Instance 2 (this server) is available as slave at ha://localhost.localdomain:6364?serverId=2
2013-11-12 16:37:35.036+0000 INFO [Cluster] Database available for write transactions
2013-11-12 16:37:35.360+0000 INFO [API] Successfully started database
2013-11-12 16:37:36.079+0000 INFO [API] Starting HTTP on port :7474 with 10 threads available
2013-11-12 16:37:40.596+0000 INFO [Cluster] Instance 3 has failed
2013-11-12 16:37:43.654+0000 INFO [API] Enabling HTTPS on port :7473
2013-11-12 16:38:01.081+0000 INFO [API] Mounted REST API at: /db/manage/
2013-11-12 16:38:01.158+0000 INFO [API] Mounted discovery module at [/]
2013-11-12 16:38:02.375+0000 INFO [API] Loaded server plugin "CypherPlugin"
2013-11-12 16:38:02.449+0000 INFO [API] Loaded server plugin "GremlinPlugin"
2013-11-12 16:38:02.462+0000 INFO [API] Mounted REST API at [/db/data/]
2013-11-12 16:38:02.534+0000 INFO [API] Mounted management API at [/db/manage/]
2013-11-12 16:38:03.568+0000 INFO [API] Mounted webadmin at [/webadmin]
2013-11-12 16:38:06.189+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html]
2013-11-12 16:38:30.844+0000 DEBUG [API] Failed to start Neo Server on port [7474], reason [org.mortbay.util.MultiException[java.net.BindException: Address already in use, java.net.BindException: Address already in use]]
2013-11-12 16:38:30.880+0000 DEBUG [API] org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: org.mortbay.util.MultiException[java.net.BindException: Address already in use, java.net.BindException: Address already in use]
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:211) ~[neo4j-server-1.9.4.jar:1.9.4]
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:86) [neo4j-server-1.9.4.jar:1.9.4]
at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:49) [neo4j-server-1.9.4.jar:1.9.4]
Caused by: java.lang.RuntimeException: org.mortbay.util.MultiException[java.net.BindException: Address already in use, java.net.BindException: Address already in use]
at org.neo4j.server.web.Jetty6WebServer.startJetty(Jetty6WebServer.java:334) ~[neo4j-server-1.9.4.jar:1.9.4]
at org.neo4j.server.web.Jetty6WebServer.start(Jetty6WebServer.java:154) ~[neo4j-server-1.9.4.jar:1.9.4]
at org.neo4j.server.AbstractNeoServer.startWebServer(AbstractNeoServer.java:344) ~[neo4j-server-1.9.4.jar:1.9.4]
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:187) ~[neo4j-server-1.9.4.jar:1.9.4]
... 2 common frames omitted
Caused by: org.mortbay.util.MultiException: Multiple exceptions
at org.mortbay.jetty.Server.doStart(Server.java:188) ~[jetty-6.1.25.jar:6.1.25]
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) ~[jetty-util-6.1.25.jar:6.1.25]
at org.neo4j.server.web.Jetty6WebServer.startJetty(Jetty6WebServer.java:330) ~[neo4j-server-1.9.4.jar:1.9.4]
... 5 common frames omitted
2013-11-12 16:38:30.894+0000 DEBUG [API] Failed to start Neo Server on port [7474]
Now only the neo4j01 process is running; the neo4j02 and neo4j03 processes have disappeared. And even though neo4j01 is up and running, I am unable to access the webadmin page at http://htname:7474/webadmin/#/info/org.neo4j/High%20Availability/.
Can someone please shed some light on this?

You might want to take a look at https://github.com/neo-technology/neo4j-enterprise-local-qa. It contains a Rakefile that automates a local setup of 3 instances. Clone the repo locally and use
rake setup_cluster start_cluster
to bring a locally running cluster online. Shutdown can be done via
rake stop_cluster
Find the configs in machine[ABC]/conf/.
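Note also that the BindException in your console log is the immediate cause of the crashes: every instance is trying to bind the same web ports. When several instances run on one host, each needs its own HTTP/HTTPS and HA ports. A minimal sketch for the second instance, using the property names from the 1.9 HA tutorial (the port numbers here are illustrative, not taken from your setup):
# neo4j02/conf/neo4j-server.properties -- unique web ports per instance
org.neo4j.server.webserver.port=7475
org.neo4j.server.webserver.https.port=7476
# neo4j02/conf/neo4j.properties -- unique cluster and HA ports per instance
ha.server_id=2
ha.cluster_server=localhost:5002
ha.server=localhost:6364
The generated machine[ABC] configs follow the same pattern: one distinct port set per instance.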

Related

Unable to start neo4j with systemctl: 'Failed to load from plugin jar'

I've been trying to restart neo4j after adding new data on an EC2 instance. I stopped the neo4j instance, then I called systemctl start neo4j, but when I call cypher-shell it says Connection refused, and connection to the browser port doesn't work anymore.
In the beginning I assumed it was a heap space problem, since looking at the debug.log it said there was a memory issue. I adjusted the heap space and cache settings in neo4j.conf as recommended by neo4j-admin memrec, but still neo4j won't start.
Then I assumed it was because my APOC package was outdated. My Neo4j version is 3.5.6, but APOC was 3.5.0.3. I downloaded the latest 3.5.0.4 version, but still Neo4j won't start.
At last I tried chmod 777 on every file in the data/database and plugin directories, and on the directories themselves, but still Neo4j won't start.
What's strange is that when I try neo4j console after each of these attempts, both cypher-shell and the Neo4j browser port work just fine. However, I would obviously prefer to be able to launch Neo4j with systemctl.
Right now the only hint of error I can find in debug.log is the following:
2019-06-19 21:19:55.508+0000 INFO [o.n.i.d.DiagnosticsManager] Storage summary:
2019-06-19 21:19:55.508+0000 INFO [o.n.i.d.DiagnosticsManager] Total size of store: 3.07 GB
2019-06-19 21:19:55.509+0000 INFO [o.n.i.d.DiagnosticsManager] Total size of mapped files: 3.07 GB
2019-06-19 21:19:55.509+0000 INFO [o.n.i.d.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2019-06-19 21:19:55.509+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Fulfilling of requirement 'Database available' makes database available.
2019-06-19 21:19:55.509+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Database is ready.
2019-06-19 21:19:55.568+0000 INFO [o.n.k.i.DatabaseHealth] Database health set to OK
2019-06-19 21:19:56.198+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.s3.S3URLConnection` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: com/amazonaws/ClientConfiguration
2019-06-19 21:19:56.199+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.s3.S3Aws` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: com/amazonaws/auth/AWSCredentials
2019-06-19 21:19:56.200+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.s3.S3Aws$1` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: com/amazonaws/services/s3/model/S3ObjectInputStream
2019-06-19 21:19:56.207+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.hdfs.HDFSUtils$1` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: org/apache/hadoop/fs/FSDataInputStream
2019-06-19 21:19:56.208+0000 WARN [o.n.k.i.p.Procedures] Failed to load `apoc.util.hdfs.HDFSUtils` from plugin jar `/var/lib/neo4j/plugins/apoc-3.5.0.4-all.jar`: org/apache/hadoop/fs/FSDataOutputStream
...
...
...
2019-06-19 21:20:00.678+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Shutting down database.
2019-06-19 21:20:00.679+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Shutdown started
2019-06-19 21:20:00.679+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Database is unavailable.
2019-06-19 21:20:00.684+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Checkpoint triggered by "Database shutdown" # txId: 1 checkpoint started...
2019-06-19 21:20:00.704+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Checkpoint triggered by "Database shutdown" # txId: 1 checkpoint completed in 20ms
2019-06-19 21:20:00.705+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] No log version pruned, last checkpoint was made in version 0
2019-06-19 21:20:00.725+0000 INFO [o.n.i.d.DiagnosticsManager] --- STOPPING diagnostics START ---
2019-06-19 21:20:00.725+0000 INFO [o.n.i.d.DiagnosticsManager] --- STOPPING diagnostics END ---
2019-06-19 21:20:00.725+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Shutdown started
2019-06-19 21:20:05.875+0000 INFO [o.n.g.f.m.e.CommunityEditionModule] No locking implementation specified, defaulting to 'community'
2019-06-19 21:20:06.080+0000 INFO [o.n.g.f.GraphDatabaseFacadeFactory] Creating database.
2019-06-19 21:20:06.154+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Requirement `Database available` makes database unavailable.
2019-06-19 21:20:06.156+0000 INFO [o.n.k.a.DatabaseAvailabilityGuard] Database is unavailable.
2019-06-19 21:20:06.183+0000 INFO [o.n.i.d.DiagnosticsManager] --- INITIALIZED diagnostics START ---
I think the warning isn't the issue, since it's just a warning and not an error or exception. It also looks like the database shuts down automatically and then restarts, creating an infinite loop. This loop does not happen when I run neo4j console (all the warnings still appear in the logs). All my ports are default.
Any clue why this is happening? I've never encountered this error when I previously launched neo4j on this instance.
If it works with neo4j console but not with systemctl, you should check the permissions on the Neo4j directories.
Most likely systemctl runs Neo4j as a different user than the one you use interactively, and that user doesn't have the same rights on the files.
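A quick way to verify this, assuming the stock systemd unit and the default /var/lib/neo4j layout (adapt the paths and the neo4j user/group to your install):
# which user does the unit run Neo4j as?
systemctl cat neo4j | grep -i '^User='
# who owns the store and plugins? chmod 777 on files doesn't help if a parent directory is inaccessible
ls -ld /var/lib/neo4j /var/lib/neo4j/data /var/lib/neo4j/data/databases /var/lib/neo4j/plugins
# hand ownership back to the service user
sudo chown -R neo4j:neo4j /var/lib/neo4j
If the unit's user and the file owner differ, that mismatch would explain why the console (running as your login user) works while the service does not.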

Cannot enable AlwaysOn SQL in DSE

I get this error when starting AlwaysOn SQL; I have tried many things, but the result is still the same. Any ideas why?
I'm using 1 cluster with 1 analytics + search datacenter and 2 Ubuntu 16.04 nodes.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,348 ALWAYSON-SQL AlwaysOnSqlRunner.scala:304 - Shutting down AlwaysOn SQL.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,617 ALWAYSON-SQL AlwaysOnSqlRunner.scala:328 - Set status to stopped
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,620 ALWAYSON-SQL AlwaysOnSqlRunner.scala:382 - Reserve port for AlwaysOn SQL
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,621 ALWAYSON-SQL AlwaysOnSqlRunner.scala:375 - Release reserved port
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,622 ALWAYSON-SQL AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
INFO [ForkJoinPool-1-worker-1] 2019-02-14 11:36:04,650 AlwaysOnSqlRunner.scala:740 - dsefs server heartbeat response: pong
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,757 AlwaysOnSqlRunner.scala:704 - Create DseFs directory /var/log/spark/alwayson_sql
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,758 AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
ERROR [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,788 AlwaysOnSqlRunner.scala:722 - Failed to check dsefs directory alwayson_sql
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,788 ALWAYSON-SQL AlwaysOnSqlRunner.scala:247 - ALWAYSON-SQL caused an exception in state RUNNING : com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
I have seen this problem too! It was a permissions problem in DSEFS. To fix it, log in as the root Cassandra user and change the ownership of your alwayson_sql log directory to the AlwaysOn SQL user.
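For example, something along these lines from the DSEFS shell (the cassandra credentials, directory path, and mode bits below are placeholders; check the DSEFS command reference for your DSE version):
# open the DSEFS shell as a superuser
dse -u cassandra -p cassandra fs
# inside the shell: inspect the directory AlwaysOn SQL needs, then open it up
ls -l /var/log/spark
chmod a+rwx /var/log/spark/alwayson_sql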

Graphaware Framework and UUID not starting on Neo4j GrapheneDB

I am trying to get the GraphAware Framework and UUID module running on a GrapheneDB instance. I have followed the instructions to zip the JAR and neo4j.properties files and uploaded the bundle using the GrapheneDB web interface, but UUIDs are not added when I create a new node.
neo4j.properties file
dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
#UIDM becomes the module ID:
com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper
#optional, default is uuid:
com.graphaware.module.UIDM.uuidProperty=uuid
#optional, default is false:
com.graphaware.module.UIDM.stripHyphens=true
#optional, default is all nodes:
#com.graphaware.module.UIDM.node=hasLabel('Label1') || hasLabel('Label2')
#optional, default is no relationships:
#com.graphaware.module.UIDM.relationship=isType('Type1')
com.graphaware.module.UIDM.relationship=com.graphaware.runtime.policy.all.IncludeAllBusinessRelationships
#optional, default is uuidIndex
com.graphaware.module.UIDM.uuidIndex=uuidIndex
#optional, default is uuidRelIndex
com.graphaware.module.UIDM.uuidRelationshipIndex=uuidRelIndex
Log Output
2017-03-02 10:20:40.184+0000 INFO Neo4j Server shutdown initiated by request
2017-03-02 10:20:40.209+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] stopped
2017-03-02 10:20:40.209+0000 INFO Stopping...
2017-03-02 10:20:40.982+0000 INFO Stopped.
2017-03-02 10:20:43.402+0000 INFO Starting...
2017-03-02 10:20:43.820+0000 INFO Bolt enabled on 0.0.0.0:7475.
2017-03-02 10:20:45.153+0000 INFO [c.g.r.b.RuntimeKernelExtension] GraphAware Runtime disabled.
2017-03-02 10:20:48.130+0000 INFO Started.
2017-03-02 10:20:48.343+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] started
2017-03-02 10:20:48.350+0000 INFO Mounted unmanaged extension [com.graphaware.server] at [/graphaware]
2017-03-02 10:20:48.724+0000 INFO Mounting GraphAware Framework at /graphaware
2017-03-02 10:20:48.755+0000 INFO Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:20:52.633+0000 INFO Remote interface available at http://localhost:7474/
Messages.log Extract
2017-03-02 10:33:59.991+0000 INFO [o.n.k.i.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2017-03-02 10:34:01.846+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED START ---
2017-03-02 10:34:02.526+0000 INFO [c.g.s.f.b.GraphAwareBootstrappingFilter] Mounting GraphAware Framework at /graphaware
2017-03-02 10:34:02.547+0000 INFO [c.g.s.f.c.GraphAwareWebContextCreator] Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:34:06.100+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED END ---
It looks like the framework is not starting ("GraphAware Runtime disabled" in the log above), even though I have set com.graphaware.runtime.enabled=true in the properties file.
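A quick sanity check for whether the module is active, assuming the default uuid property name configured above, is to create a node and then read the property back in a separate statement (the module assigns the UUID when the transaction commits):
CREATE (n:TestNode);
MATCH (n:TestNode) RETURN n.uuid;
If n.uuid comes back null, the runtime never picked the module up.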
Environment Setup
Neo4j Community Edition 3.1.1
graphaware-server-3.1.0.44
graphaware-uuid-3.1.0.44.13
Thanks

Neo4j Server doesn't connect to embedded database in Rails

I'm trying to get a Neo4j server connected to a database embedded in a Rails project. I'm following this guide: https://github.com/andreasronge/neo4j/wiki/Neo4j%3A%3Aha-cluster The only thing different, I think, is that I'm using the 1.9.7 Neo4j community server. I am able to start the rails server and the rails console, and both work simultaneously. I can start the standalone server just fine, but I can only see the root node in the UI.
I tried to point the standalone server to one of the two embedded databases, but the server then updates the database such that the Rails application doesn't start anymore.
Please let me know if you need more specifics.
Any hints are welcome. Thanks.
Here's the standalone server's neo4j.properties
ha.server_id=3
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
ha.server=localhost:6003
ha.cluster_server=localhost:5003
ha.pull_interval=1
allow_store_upgrade=false
keep_logical_logs=true
Here's the standalone server's neo4j-server.properties
org.neo4j.server.database.location=data/graph.db
org.neo4j.server.database.mode=HA
org.neo4j.server.webserver.port=7474
org.neo4j.server.webserver.https.enabled=true
org.neo4j.server.webserver.https.port=7473
org.neo4j.server.webserver.https.cert.location=conf/ssl/snakeoil.cert
org.neo4j.server.webserver.https.key.location=conf/ssl/snakeoil.key
org.neo4j.server.webserver.https.keystore.location=data/keystore
org.neo4j.server.webadmin.rrdb.location=data/rrd
org.neo4j.server.webadmin.data.uri=/db/data/
org.neo4j.server.webadmin.management.uri=/db/manage/
org.neo4j.server.db.tuning.properties=conf/neo4j.properties
org.neo4j.server.manage.console_engines=gremlin, shell
org.neo4j.server.http.log.enabled=false
org.neo4j.server.http.log.config=conf/neo4j-http-logging.xml
These are the Neo4j and Rails related gems included in my Gemfile:
gem 'rails', '3.2.13'
gem 'neo4j', '>= 2.2.3'
gem 'neo4j-community', '1.9'
gem 'neo4j-advanced', '1.9'
gem 'neo4j-enterprise', '1.9'
In config/application.rb I added:
require "neo4j/rails/ha_console/railtie" if Rails.env.development?
config.neo4j.storage_path = "#{config.root}/db/neo4j-#{Rails.env}" unless Rails.env.development?
When I start the rails server:
Config HA cluster, ha.server_id: 1, db: /Users/rene_user/Code/neo_pedigree/db/ha_neo_1
=> Booting WEBrick
=> Rails 3.2.13 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2014-05-31 00:25:34] INFO WEBrick 1.3.1
[2014-05-31 00:25:34] INFO ruby 1.9.3 (2013-12-06) [java]
[2014-05-31 00:25:34] INFO WEBrick::HTTPServer#start: pid=998 port=3000
starting Neo4j in HA mode, machine id: 1 at localhost:6001 db /Users/rene_user/Code/neo_pedigree/db/ha_neo_1
Starting rails console:
Config HA cluster, ha.server_id: 1, db: /Users/rene_user/Code/neo_pedigree/db/ha_neo_1
Re-Config HA cluster, ha.server_id: 2, db: /Users/rene_user/Code/neo_pedigree/db/ha_neo_2
Loading development environment (Rails 3.2.13)
irb(main):001:0> Dog.all.count
starting Neo4j in HA mode, machine id: 2 at localhost:6002 db /Users/rene_user/Code/neo_pedigree/db/ha_neo_2
=> 3
Starting the standalone neo4j server:
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
Starting Neo4j Server...WARNING: not changing user
HA instance started in process [39548]. Will be operational once connected to peers. See /Users/rene_user/Code/neo4j_server/data/log/console.log for current status.
Here's the standalone server's log from the point of starting the server:
2014-05-29 16:45:53.169+0000 INFO [API] Remote interface ready and available at [http://localhost:7474/]
20:52:19.925 [main] INFO org.neo4j.server.CommunityNeoServer - Setting startup timeout to: 120000ms based on -1
Detected incorrectly shut down database, performing recovery..
2014-05-31 18:52:23.361+0000 INFO [API] Successfully started database
2014-05-31 18:52:23.422+0000 INFO [API] Starting HTTP on port :7474 with 20 threads available
2014-05-31 18:52:23.991+0000 INFO [API] Enabling HTTPS on port :7473
2014-05-31 18:52:25.543+0000 INFO [API] Mounted discovery module at [/]
2014-05-31 18:52:25.646+0000 INFO [API] Loaded server plugin "CypherPlugin"
2014-05-31 18:52:25.649+0000 INFO [API] GraphDatabaseService.execute_query: execute a query
2014-05-31 18:52:25.650+0000 INFO [API] Loaded server plugin "GremlinPlugin"
2014-05-31 18:52:25.650+0000 INFO [API] GraphDatabaseService.execute_script: execute a Gremlin script with 'g' set to the Neo4jGraph and 'results' containing the results. Only results of one object type is supported.
2014-05-31 18:52:25.651+0000 INFO [API] Mounted REST API at [/db/data/]
2014-05-31 18:52:25.658+0000 INFO [API] Mounted management API at [/db/manage/]
2014-05-31 18:52:25.828+0000 INFO [API] Mounted webadmin at [/webadmin]
2014-05-31 20:52:26.004:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
2014-05-31 18:52:26.069+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html]
2014-05-31 20:52:26.191:INFO::jetty-6.1.25
2014-05-31 20:52:26.434:INFO::NO JSP Support for /webadmin, did not find org.apache.jasper.servlet.JspServlet
2014-05-31 20:52:29.374:INFO::Started SelectChannelConnector#localhost:7474
2014-05-31 20:52:30.566:INFO::Started SslSocketConnector#localhost:7473
2014-05-31 18:52:30.567+0000 INFO [API] Remote interface ready and available at [http://localhost:7474/]

Does Neo4j clustering require at least 3 nodes?

I'm playing with Neo4j high availability clustering. While the documentation indicates a cluster requires at least 3 nodes, or 2 plus an arbiter, I'm wondering what the implications of running with only 2 nodes are.
If I set up a 3-node cluster and remove a node, I have no issues adding data. Likewise, if I set up the cluster with only 2 nodes, I can still add data and don't seem to have any restricted functionality. What limitations should I expect? For example, the following is the trace of a slave started in a 2-node cluster. Data can be added to the master with no issues, and can be queried.
2013-11-06 10:34:50.403+0000 INFO [Cluster] Attempting to join cluster of [127.0.0.1:5001, 127.0.0.1:5002]
2013-11-06 10:34:54.473+0000 INFO [Cluster] Joined cluster:Name:neo4j.ha Nodes:{1=cluster://127.0.0.1:5001, 2=cluster://127.0.0.1:5002} Roles:{coordinator=1}
2013-11-06 10:34:54.477+0000 INFO [Cluster] Instance 2 (this server) joined the cluster
2013-11-06 10:34:54.512+0000 INFO [Cluster] Instance 1 was elected as coordinator
2013-11-06 10:34:54.530+0000 INFO [Cluster] Instance 1 is available as master at ha://localhost:6363?serverId=1
2013-11-06 10:34:54.531+0000 INFO [Cluster] Instance 1 is available as backup at backup://localhost:6366
2013-11-06 10:34:54.537+0000 INFO [Cluster] ServerId 2, moving to slave for master ha://localhost:6363?serverId=1
2013-11-06 10:34:54.564+0000 INFO [Cluster] Checking store consistency with master
2013-11-06 10:34:54.620+0000 INFO [Cluster] The store does not represent the same database as master. Will remove and fetch a new one from master
2013-11-06 10:34:54.646+0000 INFO [Cluster] ServerId 2, moving to slave for master ha://localhost:6363?serverId=1
2013-11-06 10:34:54.658+0000 INFO [Cluster] Copying store from master
2013-11-06 10:34:54.687+0000 INFO [Cluster] Copying index/lucene-store.db
2013-11-06 10:34:54.688+0000 INFO [Cluster] Copied index/lucene-store.db
2013-11-06 10:34:54.688+0000 INFO [Cluster] Copying neostore.nodestore.db
2013-11-06 10:34:54.689+0000 INFO [Cluster] Copied neostore.nodestore.db
2013-11-06 10:34:54.689+0000 INFO [Cluster] Copying neostore.propertystore.db
2013-11-06 10:34:54.689+0000 INFO [Cluster] Copied neostore.propertystore.db
2013-11-06 10:34:54.689+0000 INFO [Cluster] Copying neostore.propertystore.db.arrays
2013-11-06 10:34:54.690+0000 INFO [Cluster] Copied neostore.propertystore.db.arrays
2013-11-06 10:34:54.690+0000 INFO [Cluster] Copying neostore.propertystore.db.index
2013-11-06 10:34:54.690+0000 INFO [Cluster] Copied neostore.propertystore.db.index
2013-11-06 10:34:54.690+0000 INFO [Cluster] Copying neostore.propertystore.db.index.keys
2013-11-06 10:34:54.691+0000 INFO [Cluster] Copied neostore.propertystore.db.index.keys
2013-11-06 10:34:54.691+0000 INFO [Cluster] Copying neostore.propertystore.db.strings
2013-11-06 10:34:54.691+0000 INFO [Cluster] Copied neostore.propertystore.db.strings
2013-11-06 10:34:54.691+0000 INFO [Cluster] Copying neostore.relationshipstore.db
2013-11-06 10:34:54.692+0000 INFO [Cluster] Copied neostore.relationshipstore.db
2013-11-06 10:34:54.692+0000 INFO [Cluster] Copying neostore.relationshiptypestore.db
2013-11-06 10:34:54.692+0000 INFO [Cluster] Copied neostore.relationshiptypestore.db
2013-11-06 10:34:54.692+0000 INFO [Cluster] Copying neostore.relationshiptypestore.db.names
2013-11-06 10:34:54.693+0000 INFO [Cluster] Copied neostore.relationshiptypestore.db.names
2013-11-06 10:34:54.693+0000 INFO [Cluster] Copying nioneo_logical.log.v0
2013-11-06 10:34:54.693+0000 INFO [Cluster] Copied nioneo_logical.log.v0
2013-11-06 10:34:54.693+0000 INFO [Cluster] Copying neostore
2013-11-06 10:34:54.694+0000 INFO [Cluster] Copied neostore
2013-11-06 10:34:54.694+0000 INFO [Cluster] Done, copied 12 files
2013-11-06 10:34:55.101+0000 INFO [Cluster] Finished copying store from master
2013-11-06 10:34:55.117+0000 INFO [Cluster] Checking store consistency with master
2013-11-06 10:34:55.123+0000 INFO [Cluster] Store is consistent
2013-11-06 10:34:55.124+0000 INFO [Cluster] Catching up with master
2013-11-06 10:34:55.125+0000 INFO [Cluster] Now consistent with master
2013-11-06 10:34:55.172+0000 INFO [Cluster] ServerId 2, successfully moved to slave for master ha://localhost:6363?serverId=1
2013-11-06 10:34:55.207+0000 INFO [Cluster] Instance 2 (this server) is available as slave at ha://localhost:6364?serverId=2
2013-11-06 10:34:55.261+0000 INFO [API] Successfully started database
2013-11-06 10:34:55.265+0000 INFO [Cluster] Database available for write transactions
2013-11-06 10:34:55.318+0000 INFO [API] Starting HTTP on port :8574 with 40 threads available
2013-11-06 10:34:55.614+0000 INFO [API] Enabling HTTPS on port :8575
2013-11-06 10:34:56.256+0000 INFO [API] Mounted REST API at: /db/manage/
2013-11-06 10:34:56.261+0000 INFO [API] Mounted discovery module at [/]
2013-11-06 10:34:56.341+0000 INFO [API] Loaded server plugin "CypherPlugin"
2013-11-06 10:34:56.344+0000 INFO [API] Loaded server plugin "GremlinPlugin"
2013-11-06 10:34:56.347+0000 INFO [API] Mounted REST API at [/db/data/]
2013-11-06 10:34:56.355+0000 INFO [API] Mounted management API at [/db/manage/]
2013-11-06 10:34:56.435+0000 INFO [API] Mounted webadmin at [/webadmin]
2013-11-06 10:34:56.477+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html]
2013-11-06 10:34:57.923+0000 INFO [API] Remote interface ready and available at [http://localhost:8574/]
2013-11-06 10:35:52.829+0000 INFO [API] Available console sessions: SHELL: class org.neo4j.server.webadmin.console.ShellSessionCreator
CYPHER: class org.neo4j.server.webadmin.console.CypherSessionCreator
GREMLIN: class org.neo4j.server.webadmin.console.GremlinSessionCreator
Thanks
There are no implications in terms of the functionality of the Neo4j server.
But in terms of high availability, it is better to have more than 2 servers in the cluster.
If there is a network failure between the 2 nodes and both are running but can't see each other, each will promote itself to master.
This may result in problems reforming the cluster when the network recovers.
Adding a 3rd node ensures that only one of the 3 nodes can ever be master, because becoming master requires a quorum (2 of the 3).
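If you don't want to run a third full database instance, the enterprise distribution also ships an arbiter that only takes part in cluster membership and elections. A minimal sketch, assuming the arbiter script and config file of the 1.9-era enterprise tarball (file names and ports are illustrative; check the HA docs for your version):
# conf/arbiter.cfg on the third machine
ha.server_id=3
ha.cluster_server=localhost:5003
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
# then start it with
bin/neo4j-arbiter start
The arbiter stores no data; it exists purely so that one side of a 2-node partition can still form a majority.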
