How to add a node to a running neo4j cluster? - neo4j

Suppose I have a single running neo4j node configured for HA mode. The relevant config lines are, I believe:
"ha.cluster_server" : "hostname:5003",
"ha.initial_hosts" : "hostname:5003",
Is it possible to add another node that will, upon joining, form a 2-node cluster with the currently running one?
I should clarify that I tried doing it by the book, i.e. configuring the second member like this:
"ha.cluster_server" : "hostname:5004",
"ha.initial_hosts" : "hostname:5004,hostname:5003",
But the second member just hangs in an UNKNOWN state (transitioning to slave, I guess).

First, one server is not a cluster!
It should be possible. The configuration of the second server should look like this:
ha.server_id=2 #different number than you have on the first server
ha.initial_hosts=first_server:5003,second_server:5003
e.g.
first server
neo4j-server.properties
org.neo4j.server.database.mode=HA
neo4j.properties
ha.server_id=1
ha.initial_hosts=first_host:5001
ha.cluster_server=first_host:5001
ha.server=first_host:6001
second server
neo4j-server.properties
org.neo4j.server.database.mode=HA
neo4j.properties
ha.server_id=2 #different number than you have on the first server
ha.initial_hosts=first_host:5001,second_host:5001
ha.cluster_server=second_host:5001
ha.server=second_host:6001
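Once both instances are up, one way to check that they actually formed a cluster (rather than the second one hanging in UNKNOWN) is to poll Neo4j's HA status endpoints. A rough sketch in Python, assuming the default HTTP port 7474 and the HA status endpoints exposed by Neo4j Enterprise in HA mode (host names are placeholders):
import urllib.request

# The HA endpoints report each instance's role/availability; a 200 response
# means "yes" for that endpoint, a 404 means "no" (or not yet joined).
for host in ("first_host", "second_host"):
    for endpoint in ("available", "master", "slave"):
        url = "http://%s:7474/db/manage/server/ha/%s" % (host, endpoint)
        try:
            body = urllib.request.urlopen(url, timeout=5).read().decode()
            print(host, endpoint, body)
        except Exception as exc:
            print(host, endpoint, exc)
If the second instance never shows up as available, check that ha.server_id differs and that ha.initial_hosts lists the same cluster ports on both members.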

Related

Is there any way to start go_binary before java_test?

Our project has a few gRPC servers defined as go_binary targets. We develop client SDKs for Java and Python applications, and we would like to use java_test and py_test. Is there any way to start a specific go_binary target before java_test or py_test?
You can create a test harness that starts the gRPC server before running the tests. For example, you could add the binary to the data attribute of the test and then start it beforehand:
go_binary(
    name = "my_grpc_server",
    [...]
)

py_test(
    name = "my_test",
    [...]
    data = [":my_grpc_server"],
)
and then inside the test file:
import subprocess
import unittest

from bazel_tools.tools.python.runfiles import runfiles

class ClientTestCase(unittest.TestCase):
    def setUp(self):
        r = runfiles.Create()
        self.server = subprocess.Popen([r.Rlocation("path/to/my_grpc_server")])

    def tearDown(self):
        self.server.terminate()
        self.server.wait()
This example is very simple; you'll probably run into issues regarding the availability of the port the server listens on, or with waiting for the server to start up. You could add flags to your gRPC server to allow communication over a domain socket, or make it listen on an unused port and have the test parse the port number from the server's log output.
For details on finding the server with runfiles: https://github.com/bazelbuild/bazel/blob/a7a0d48fbeb059ee60e77580e5d05baeefdd5699/tools/python/runfiles/runfiles.py#L16-L58
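To deal with the startup race specifically, a minimal sketch (the helper below is illustrative, assuming the server listens on a TCP port the test already knows) is to poll the port before exercising the client:
import socket
import time

def wait_for_port(port, host="localhost", timeout=30.0):
    """Polls until something accepts TCP connections on host:port."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # Succeeds only once the server's listener is up.
            with socket.create_connection((host, port), timeout=1):
                return
        except OSError:
            time.sleep(0.1)
    raise RuntimeError("server did not start listening on port %d" % port)
You would call wait_for_port(...) at the end of setUp(), right after starting the subprocess; parsing the port from the server's log output, as mentioned above, avoids hard-coding it.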
If you find yourself copy-pasting this pattern a lot, or having to implement it in multiple languages, you could try using an sh_test() rule to wrap the underlying py_test or java_test: start the server, then start the test with an environment variable telling it how to reach the server (e.g. MY_GRPC_SERVER_ADDRESS=localhost:${test_port}).

ksqlDB server shuts down when config, offset and status topics are changed

I'm running a single ksqlDB Server in embedded mode on our Kubernetes cluster, and I want to add a connector.
Adding a connector produces a "Request timed out" on Kafka Connect, exactly like the one described in this blog post by Robin Moffatt.
He suggests changing the KAFKA_OFFSET_REPLICATION_FACTOR contained in his docker-compose example.
Unfortunately, in our test environment I don't have easy access to the existing Kafka cluster (we have admins there), so I think the fastest way forward is to instead change:
KSQL_CONNECT_CONFIG_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_OFFSET_STORAGE_TOPIC
KSQL_CONNECT_STATUS_STORAGE_TOPIC
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR to -1 (originally this value is 1)
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR to -1 (originally this value is 1)
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR to -1 (originally this value is 1)
But when I change the topic names, I can see that new topics are created (using ksqlDB's SHOW TOPICS command), yet the server keeps shutting down and restarting forever. Here are the logs:
[2021-07-22 01:27:19,889] INFO ProcessingLogConfig values:
ksql.logging.processing.rows.include = false
ksql.logging.processing.stream.auto.create = false
ksql.logging.processing.stream.name = KSQL_PROCESSING_LOG
ksql.logging.processing.topic.auto.create = false
ksql.logging.processing.topic.name =
ksql.logging.processing.topic.partitions = 1
ksql.logging.processing.topic.replication.factor = 1
(io.confluent.ksql.logging.processing.ProcessingLogConfig:372)
[2021-07-22 01:27:19,891] ERROR Aborting application start (io.confluent.ksql.rest.server.KsqlRestApplication:378)
io.confluent.ksql.rest.server.KsqlRestApplication$AbortApplicationStartException: Shutting down application during waitForPreconditions
at io.confluent.ksql.rest.server.KsqlRestApplication.waitForPreconditions(KsqlRestApplication.java:441)
at io.confluent.ksql.rest.server.KsqlRestApplication.startKsql(KsqlRestApplication.java:386)
at io.confluent.ksql.rest.server.KsqlRestApplication.startAsync(KsqlRestApplication.java:370)
at io.confluent.ksql.rest.server.MultiExecutable.doAction(MultiExecutable.java:68)
at io.confluent.ksql.rest.server.MultiExecutable.startAsync(MultiExecutable.java:42)
at io.confluent.ksql.rest.server.KsqlServerMain.tryStartApp(KsqlServerMain.java:89)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:64)
[2021-07-22 01:27:19,892] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain:90)
[2021-07-22 01:27:19,892] INFO Server shutting down (io.confluent.ksql.rest.server.KsqlServerMain:96)
[2021-07-22 01:27:19,892] INFO ksqlDB shutdown called (io.confluent.ksql.rest.server.KsqlRestApplication:498)
[2021-07-22 01:27:34,926] INFO API server stopped (io.confluent.ksql.api.server.Server:196)
[2021-07-22 01:27:34,927] INFO ksqlDB shutdown complete (io.confluent.ksql.rest.server.KsqlRestApplication:553)
I don't have any more details; that is all the log shows.
When I revert the config, offset and status topic names to what I had at first, the ksqlDB Server starts fine, but then I'm stuck again with the problem that I can't create connectors.
I'm also afraid that if I try to delete the topics manually, the ksqlDB Server won't be able to start properly, because it keeps finding the original config, offset and status topics I had at first.
I have solved the issue. Apparently, using -1 as the value for:
KSQL_CONNECT_CONFIG_REPLICATION_FACTOR
KSQL_CONNECT_OFFSET_REPLICATION_FACTOR
KSQL_CONNECT_STATUS_REPLICATION_FACTOR
doesn't work properly: the config topic gets created with 20 partitions, whereas the Confluent docs say it should only have 1 partition. I think that's why the ksqlDB Server just restarts endlessly, though I still need to gather the right evidence.
Setting those values to 3 (which is our Kafka brokers' default replication factor) solved the issue. It was hard to track down because no error messages are shown; for example, nothing tells you that the config topic must not have more than 1 partition.
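For reference, the working values ended up looking roughly like this (shown with the *_STORAGE_* variable names used in the question; 3 matches our brokers' default replication factor):
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=3
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3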

Change default freeradius auth and acct port in CoovaChilli

So I have two FreeRADIUS / RADIUSdesk installations on the server.
The first one is the old one and uses the default FreeRADIUS ports 1812/1813 for auth/acct.
The second one is the new one and uses ports 10001/10002 for auth/acct.
The issue is that on my router, CoovaChilli always connects to the first (old) one and communicates on ports 1812/1813. I want to change its ports, but it doesn't seem to be working. The OS is OpenWrt.
In my /etc/config/chilli I have added the following lines:
option radiusauthport 10001
option radiusacctport 10002
But it is not working. CoovaChilli still sends requests to the old 1812/1813 ports. I want to know how to change that so it communicates with my defined port numbers rather than the default ones.
Looking for the configurations to fix it.
Thanks
Looking at the OpenWrt guide at https://openwrt.org/docs/guide-user/services/captive-portal/wireless.hotspot.coova-chilli, it seems that you need to put the parameter value inside double quotes.
Specifically
option radiusauthport "10001"
option radiusacctport "10002"
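After changing these options, the chilli service typically needs to be restarted to pick up the new configuration, e.g. (assuming the standard OpenWrt init script name for the coova-chilli package):
/etc/init.d/chilli restart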

No such property: ToInputStream for class: Script4

I want to import my graph data into the database. I have JanusGraph (latest version) running with Cassandra (version 3) and Elasticsearch (version 6.6.0) using Docker. I have been advised to use the gryo format, so I tried this command:
graph.io(IoCore.gryo()).reader().create().readGraph(ToInputStream.from("my_graph.kryo"), graph);
but ended up with an error
No such property: ToInputStream for class: Script4
The documentation I am following is here. Please take a look and point me to the right procedure. Thanks in advance!
ToInputStream is not a function of Gremlin or JanusGraph. I believe it is only a function of IBM Compose, so unless you are running JanusGraph on that specific platform, this command will not work.
Versions of JanusGraph that utilize TinkerPop 3.4.x will support the io() step and this is the preferred manner in which to load gryo (as well as graphson and graphml) files.
Graph graph = ... // setup JanusGraph instance
GraphTraversalSource g = traversal().withGraph(graph); // might use withRemote() here instead depending on how you are connecting I suppose
g.io("graph.kryo").read().iterate()
Note that if you are connecting remotely - it seems you are sending scripts to the Docker instance, given your error - be sure that the "graph.kryo" file path is accessible to Docker. That's what's nice about ToInputStream from Compose: it allows you to access remote sources.

How to test apache flume load balancing - Sink groups

I am kind of a newbie to Apache Flume. I have manually configured a single-tier agent with a load-balancing sink group, and I would like to know how I can test the sink group load balancing. Any ideas, folks?
You can define two different sinks and list them in the sink group as below:
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = HDFS1 HDFS2
agent1.sinkgroups.g1.processor.type = load_balance
agent1.sinkgroups.g1.processor.backoff = true
agent1.sinkgroups.g1.processor.selector = round_robin
Here both of them are HDFS sinks.
You can specify the processor selector (round_robin [default], random, or a custom selector), which defines how the load should be balanced between the two sinks.
When you run the agent, you can see that two different sets of data are stored in the two respective HDFS paths (sinks).
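As a rough sketch (the channel name and HDFS paths below are placeholders, not taken from the question), the two sinks referenced by the group could be defined along these lines:
agent1.sinks = HDFS1 HDFS2
agent1.sinks.HDFS1.type = hdfs
agent1.sinks.HDFS1.channel = c1
agent1.sinks.HDFS1.hdfs.path = hdfs://namenode/flume/sink1
agent1.sinks.HDFS2.type = hdfs
agent1.sinks.HDFS2.channel = c1
agent1.sinks.HDFS2.hdfs.path = hdfs://namenode/flume/sink2
With round_robin selected, successive events alternate between the two sinks, so comparing what lands under each path after a test run is a quick way to confirm the load balancing is working.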
The other two optional parameters are backoff and selector.maxTimeOut.
You can refer to the Flume 1.6.0 User Guide for more info.
