Neo4j cluster: How to designate master? - neo4j

I've just read through Neo4j's tutorial on creating a cluster (link at bottom), but no information is given regarding which node is designated as the 'master', or how this is done.
Let's say I'm working with the first example, where there are a total of three nodes installed on three separate machines. How would I make one the master?
If any part of my question is mistaken, please let me know and I will be quick to edit.
Here's that link:
neo4j manual

As far as I can read in the documentation, you can only set a master in advance if all other nodes in the cluster have the ha.slave_only property set to true. However, I would advise against doing this, as your cluster needs at least one other non-slave_only node to elect as the new master in case the current master goes down.
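For example, in a three-instance cluster where instance 1 should always be the master, the two other instances would carry the flag (a minimal config sketch; only the HA-relevant lines are shown):
#instance 1: the only instance eligible for master election
ha.server_id=1
ha.slave_only=false
#instances 2 and 3: never elected master
ha.slave_only=true
Note again that with this layout, if instance 1 goes down, the cluster has no writable master until it returns.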
Identifying the master can be done via the HA REST endpoints. You can find all the info at http://neo4j.com/docs/stable/ha-rest-info.html
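For example, that page documents a /db/manage/server/ha/master endpoint that answers "true" (HTTP 200) on the master and "false" (HTTP 404) on a slave. A minimal Java sketch that checks each machine (the hostnames are placeholders):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FindMaster {
    public static void main(String[] args) throws Exception {
        // Hypothetical hostnames of the three cluster machines
        for (String host : new String[] { "neo1", "neo2", "neo3" }) {
            URL url = new URL("http://" + host + ":7474/db/manage/server/ha/master");
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            // The master answers 200/"true"; slaves answer 404/"false"
            if (con.getResponseCode() == 200) {
                BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
                System.out.println(host + " is the master: " + in.readLine());
                in.close();
            } else {
                System.out.println(host + " is not the master");
            }
        }
    }
}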

Related

How to run a Jenkins job in an available agent

I have a Jenkins master and two agents. However, the connectivity to one agent (agentA) is a bit shaky, and I want to use the other agent (agentB) when the connectivity to the first one is not available.
I am only using the Jenkins web interface and have not used scripts. I am trying to figure out how it can be done using the "Restrict where this project can be run" option in the job's configuration. I tried using agentA || agentB, but when agentA is not available the job hangs, saying "pending - agentA is offline".
Is it possible to have a configuration to achieve what I need?
I can't leave it blank because I have other agents (agentC, agentD) on which I do not want this job to run.
I am not an admin of the Jenkins server, hence adding new plugins is not my preferred option but it can be done.
As noted in the Least Load plugin description:
By default Jenkins tries to allocate a job to the last node it was executed on. This can result in nodes being left idle while other nodes are overloaded.
As you generalized the example, I'm not 100% sure whether your situation can be solved simply by labelling your nodes better, or whether you want to look at the Least Load plugin (it is designed for balancing the load across nodes). Your example appears to use node names (i.e. agentA/agentB). If the queue allocation logic is "only A or only B", Jenkins sticks to it. Load balancing may not address that: while a node (a computer) name is also a label, it may have additional logic tied to it.
If you label the pair of nodes in a pool with a common label, say "CapabilityA", and constrain your jobs to run where "CapabilityA" rather than on the node names, you may find jobs float across the pool (to B if A is not available). That's how we have our nodes labelled - by capability - and we see jobs floating across nodes, but only once the first node is full (4 executors each), so it is not truly balanced.
Nodes can have many labels and you can use label conditions to have complex constraints.
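For example, assuming the hypothetical "CapabilityA" label from above, a label expression like the following in "Restrict where this project can be run" keeps the job on the A/B pool while excluding the shaky node when needed:
CapabilityA && !agentA
Jenkins label expressions support &&, ||, ! and parentheses, so capability labels and node names can be combined as required.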

jenkins change label as requested

We use Jenkins to automate our test infrastructure. The requirement is to give users the ability to take a Jenkins node for their private tests or debugging via private Jenkins jobs, and then put it back into the pool of label-marked nodes, so that other jobs tied to particular labels can run without interference.
We can achieve this by letting users alter the labels, but that didn't work out: users (nearly 50) make up their own label names, it takes time for an admin to reassign the nodes (even with a process in place), and precious test time is affected.
We are looking for a solution such as a button to take a node offline (we can't use that option as-is, since Jenkins then cannot see the node anymore and users cannot run Jenkins jobs on it), perhaps combined with the ability to run scripts.
I have done some research on this but would have to compromise on some requirements, so I decided to seek help from the community. Suggestions?
Did you have a look at this question:
How to take Jenkins master node offline using CLI?
It lists some CLI commands for taking a node offline.
Maybe you can create a dedicated job on the master with one parameter (the node name). This job will call the Jenkins CLI to take your node offline.
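A minimal sketch of what that job could run, assuming the standard Jenkins CLI commands (the URL, node name and message are placeholders):
java -jar jenkins-cli.jar -s http://your-jenkins-host/ offline-node "$NODE_NAME" -m "Reserved for private testing"
And the counterpart to put the node back into the pool:
java -jar jenkins-cli.jar -s http://your-jenkins-host/ online-node "$NODE_NAME"
Because the node is only marked temporarily offline rather than removed, Jenkins still sees it, and a second parameterized job can bring it back online afterwards.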

Is it possible to run a Neo4j cluster with strong consistency?

The docs of Neo4j state that when running in HA mode, you get eventual consistency. This is a quote from that page:
All updates will however propagate from the master to other slaves eventually, so a write from one slave may not be immediately visible on all other slaves.
My question is: is there a configuration that will allow me to run the cluster with strong consistency, of course at the cost of reduced performance? I'm looking for some sort of active-passive failover cluster configuration.
There is such a config option. ha.tx_push_factor determines how many slaves a transaction should be pushed to synchronously. When you set ha.tx_push_factor=<clustersize>-1, you have immediate full consistency.
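For example, in the three-instance cluster from the first question, every instance would get (a sketch; set it on every instance, since the value actually used is the one on the current master):
ha.tx_push_factor=2
With a push factor of 2, the master pushes each committed transaction to both slaves before the commit returns, so a subsequent read on any instance sees the write.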

Neo4j HA Enterprise Master/Slave control

When starting up Neo4j Enterprise in HA mode, the first server starts as the master.
I have a requirement where I want to control who the master is in the cluster, is that actually possible in Neo4j?
What would happen if I set 'ha.slave_coordinator_update_mode=none' on all the slaves? Would this permit me to have a single master, so that if it goes down no other instance becomes the master, and so that when that instance recovers it becomes the master again?
Or, if I didn't use that setting and the master goes down and a slave takes over, will the original master just act as a slave when it comes back up, or will it become the master again?
Is there some configuration that permits control of that? The documentation doesn't cover this very clearly.
Orlok,
You can use ha.slave_only to ensure an instance doesn't ever become master. See http://docs.neo4j.org/chunked/stable/ha-configuration.html
That effectively allows you to add as many read slaves as you wish, but beware that you lose high availability if you only have one instance that can become master. That is, have a few master-ready instances, set up with ha.slave_only=false, as well as a bunch of read slaves.
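A sketch of such a layout for a five-instance cluster (only the HA-relevant line is shown per instance):
#instances 1 and 2: master-ready
ha.slave_only=false
#instances 3 to 5: read slaves, never elected master
ha.slave_only=true
Here the cluster stays highly available: if the current master dies, the other master-ready instance can take over.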
Regards,
Lasse

Neo4j HA replication issue on 1.9.M01

I'm using Neo4j 1.9.M01 in a Spring MVC application that exposes some domain-specific REST services (read, update). The web application is deployed three times into the same web container (Tomcat 6), and each "node" has its own embedded Neo4j HA instance that is part of the same cluster.
The three Neo4j configs:
#node 1
ha.server_id=1
ha.server=localhost:6361
ha.cluster_server=localhost:5001
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
#node 2
ha.server_id=2
ha.server=localhost:6362
ha.cluster_server=localhost:5002
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
#node 3
ha.server_id=3
ha.server=localhost:6363
ha.cluster_server=localhost:5003
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
Problem: when performing an update on one of the nodes the change is replicated to only ONE other node and the third node stays in the old state corrupting the consistency of the cluster.
I'm using the milestone because we are not allowed to run anything outside of the web container, so I cannot rely on the old ZooKeeper-based coordination of pre-1.9 versions.
Am I missing some configuration here, or could it be an issue with the new coordination mechanism introduced in 1.9?
This behaviour (replication only to ONE other instance) is the same default as in 1.8. This is controlled by:
ha.tx_push_factor=1
which is the default.
Slaves get updates from the master in a couple of ways:
By configuring a higher push factor, for example:
ha.tx_push_factor=2
(set this on every instance, because the value in use is the one on the current master).
By configuring pull interval for slaves to fetch updates from its master, for example:
ha.pull_interval=1s
By manually pulling updates using the Java API (see the sketch below)
By issuing a write transaction from the slave
See further at http://docs.neo4j.org/chunked/milestone/ha-configuration.html
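A minimal sketch of the manual-pull option, assuming the 1.x embedded HA API in which HighlyAvailableGraphDatabase exposes a pullUpdates() method (please verify against your exact milestone):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.kernel.HighlyAvailableGraphDatabase;

// Forces this slave to catch up with the master immediately instead of
// waiting for the next ha.pull_interval tick. graphDb is the embedded
// instance the web application already holds.
public static void pullNow(GraphDatabaseService graphDb) {
    ((HighlyAvailableGraphDatabase) graphDb).pullUpdates();
}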
A first guess would be to set
ha.discovery.enabled = false
see http://docs.neo4j.org/chunked/milestone/ha-configuration.html#_different_methods_for_participating_in_a_cluster for an explanation.
For a full analysis could you please provide data/graph.db/messages.log from all three cluster members.
Side note: it should be possible to use 1.8 for your requirements as well. You could also spawn ZooKeeper directly from Tomcat; just mimic what bin/neo4j-coordinator does: run the class org.apache.zookeeper.server.quorum.QuorumPeerMain in a separate thread upon startup of the web application.
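A minimal sketch of that side note (the config path is hypothetical; point it at the same coordinator config file that bin/neo4j-coordinator reads):

import org.apache.zookeeper.server.quorum.QuorumPeerMain;

// Run the ZooKeeper quorum peer in a daemon thread at webapp startup,
// e.g. from a ServletContextListener.
Thread coordinator = new Thread(new Runnable() {
    public void run() {
        QuorumPeerMain.main(new String[] { "conf/coord1.cfg" });
    }
});
coordinator.setDaemon(true);
coordinator.start();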
