I'm new to Cassandra and want to understand and implement NetworkTopologyStrategy.
I want to create a Cassandra cluster that uses NetworkTopologyStrategy with multiple data centers. How do I do it?
I tried creating a Docker bridge network and three Cassandra nodes: cas1, cas2, cas3. But when I use nodetool to check the status, only a cluster with a single datacenter gets created, and I want to create 2 datacenters.
There's a document which walks you through this: Initializing a multiple node cluster (multiple datacenters). It's written for Cassandra 3.x, but the procedure is pretty much the same for 4.x as well.
But if I had to guess, I'd say there are two things you're probably missing:
In cassandra.yaml, set the endpoint_snitch to GossipingPropertyFileSnitch:
endpoint_snitch: GossipingPropertyFileSnitch
That tells Cassandra to check the cassandra-rackdc.properties file for data center and rack information. Inside that file, you'll find the following settings (by default):
dc=dc1
rack=rack1
This is where you can set the name of each node's data center. Then you can use those data center names to specify replication on keyspaces using NetworkTopologyStrategy.
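For example, here's a minimal sketch of bringing up two datacenters with the official cassandra Docker image (the network name, node names, and keyspace are illustrative; the image only honors CASSANDRA_DC when the snitch is set to GossipingPropertyFileSnitch):

docker network create cas-net

# cas1 in datacenter DC1
docker run -d --name cas1 --network cas-net \
  -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
  -e CASSANDRA_DC=DC1 \
  cassandra

# cas2 and cas3 in datacenter DC2, seeded from cas1
docker run -d --name cas2 --network cas-net \
  -e CASSANDRA_SEEDS=cas1 \
  -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
  -e CASSANDRA_DC=DC2 \
  cassandra

docker run -d --name cas3 --network cas-net \
  -e CASSANDRA_SEEDS=cas1 \
  -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
  -e CASSANDRA_DC=DC2 \
  cassandra

# once all nodes have joined, reference the DC names in the replication settings
docker exec -it cas1 cqlsh -e "CREATE KEYSPACE demo WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 2};"

After the nodes join, nodetool status should list DC1 and DC2 as separate datacenters.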
I am running CatBoost on a Databricks cluster. The Databricks production cluster is very locked down, and as a user we cannot create new directories on the fly, but we can have pre-created directories. I am passing the parameter below to my CatBoostClassifier.
CatBoostClassifier(train_dir='dbfs/FileStore/files/')
It does not work and throws the error below.
CatBoostError: catboost/libs/train_lib/dir_helper.cpp:20: Can't create train working dir
You're missing the / character at the beginning - it should be '/dbfs/FileStore/files/' instead.
Also, writing to DBFS can be slow, and it may fail if CatBoost uses random writes (see the limitations). You may instead point to a local directory on the node, like /tmp/...., and then use dbutils.fs.cp("file:///tmp/....", "/FileStore/files/catboost/...", True) to copy the files from the local directory to DBFS.
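A minimal sketch of that workaround (the local path and DBFS target are assumptions, dbutils is only available inside a Databricks notebook or job, and X_train/y_train stand in for your existing training data):

from catboost import CatBoostClassifier

# train into a node-local directory, which supports the random writes CatBoost needs
local_dir = "/tmp/catboost_train"
model = CatBoostClassifier(train_dir=local_dir)
model.fit(X_train, y_train)

# copy the training artifacts to DBFS afterwards (True = recursive)
dbutils.fs.cp(f"file://{local_dir}", "/FileStore/files/catboost/", True)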
I'm using Prometheus and Grafana to monitor a Neo4j database cluster. I wanted to create a dynamic dashboard based on the DBNAME as a variable. Below is the query I'm using to populate a panel. Here graph_db is the DB name, and only this part changes across databases. Is there a way to change the metric name dynamically using a variable?
neo4j_graph_db_transaction_last_closed_tx_id_total{job='$job', instance=~"$neo4j_instance"} --> For graph_db
neo4j_system_transaction_last_closed_tx_id_total{job='$job', instance=~"$neo4j_instance"} --> For system
I have found the solution: in Grafana we can substitute variables inside the metric name like this. I'm using the latest stable version of Grafana and it worked for me.
neo4j_${var}_transaction_last_closed_tx_id_total{job='$job', instance=~"$neo4j_instance"}
I have 2 DRBD nodes (primary/secondary) and I am trying to resolve a split brain without losing any data.
Running: DRBD (8.9.10-2), Pacemaker, Corosync, PostgreSQL
My auto-resolve config:
net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    data-integrity-alg md5;
}
How can I find the most recently updated node? Is there a command or something like that?
Unfortunately, you can't with DRBD itself. You could check the logs on both servers and compare when each of them detected the split-brain situation and therefore disconnected.
Or you could mount the data on each server and compare from a client's point of view. Then decide which server has the better data and discard everything on the other node.
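If you go the manual route, the usual DRBD 8.x recovery sequence, once you've picked the survivor, looks roughly like this (the resource name r0 is an assumption; the discard step throws away the victim's changes, so be sure of your choice first):

# on the node whose data you are discarding (the split-brain "victim")
drbdadm disconnect r0
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# on the surviving node (only needed if it is in StandAlone state)
drbdadm connect r0

# watch the resync progress
cat /proc/drbd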
I have a container (Service C) which listens for certain user events and, based on the input, needs to spawn one or more instances of another container (Service X).
From your use case description, it looks like a Deployment is what you are looking for: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/. By using Deployments you can dynamically scale the number of instances of the pod.
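As a rough sketch, Service C could manage a Deployment for Service X and adjust its replica count in response to events (the deployment and image names here are illustrative; from inside the cluster you would more likely call the Kubernetes API via a client library than shell out to kubectl):

# create the Deployment for Service X once
kubectl create deployment service-x --image=myregistry/service-x:latest

# scale it up or down whenever a user event arrives
kubectl scale deployment service-x --replicas=3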
I have dockerbeat set up on a local cluster that is running the ELK stack and some other miscellaneous containers (all containers controlled via Kubernetes). I set up the dashboard from Ingensi (Ingensi dockerbeat Dashboard) for Kibana and ran into an issue with the containerNames field while setting up the graphs. For context, my Docker containers have names like these:
k8s_dockerbeats.79c42f90_dockerbeats-796n9_default_472faa11-1b3a-11e6-8bf4-28924a2bffbf_2832ea88
(as well as supporting containers for Kubernetes with similar container names; screenshot: http://i.stack.imgur.com/hvIUG.png)
k8s_POD.6d00e006_dockerbeats-796n9_default_472faa11-1b3a-11e6-8bf4-28924a2bffbf_3ddcfe44
When I set up the dashboard in Kibana, I get multiple containerNames from the same container. For example, instead of a single containerName output, I get the container name split up into smaller segments:
k8s_dockerbeats
79c42f90_dockerbeats
796n9
28924a2bffbf_3ddcfe44
and so on...
I assume that the format of the container name is confusing the dashboard (maybe in the way it parses the name), and I could probably get around it by renaming every container to something more sensible.
But before I do that, is there a way to configure the dashboard so that it reads in the entire container name string and does not break it up like in the first image? (I assume I'll have to dig into the .json files from the repository mentioned above.)
Thanks in advance if anyone answers this.
It sounds like the container name is being analyzed by Elasticsearch. You need to make sure that the container name field is marked as not_analyzed in the Elasticsearch index template. You can do this by installing the index template provided by Dockerbeat.
Marking the field as not_analyzed ensures that the data is not tokenized and it gets indexed as is. It will only be searchable by specifying the exact string.
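For illustration, the relevant fragment of such a template would look something like this in pre-5.x Elasticsearch mapping syntax (the exact structure of Dockerbeat's template may differ):

{
  "mappings": {
    "_default_": {
      "properties": {
        "containerNames": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}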
You will need to delete your current indexes after installing the new index template in order to change the mappings.
Install the provided index template:
curl -XPUT 'http://elasticsearch:9200/_template/dockerbeat' -d @dockerbeat.template.json
Delete the existing indexes:
curl -XDELETE 'http://elasticsearch:9200/dockerbeat-*'
You can view your current mappings by querying Elasticsearch:
curl http://elasticsearch:9200/dockerbeat-*/_mapping