Dynamically change the metric name in Prometheus in a Grafana dashboard - Neo4j

I'm using Prometheus and Grafana to monitor a Neo4j database cluster. I wanted to create a dynamic dashboard based on the database name (DBNAME) as a variable. Below are the queries I'm using to populate a panel. Here graph_db is the database name, and only this part changes between databases. Is there a way to change the metric name dynamically using a variable?
neo4j_graph_db_transaction_last_closed_tx_id_total{job='$job', instance=~"$neo4j_instance"} --> For graph_db
neo4j_system_transaction_last_closed_tx_id_total{job='$job', instance=~"$neo4j_instance"} --> For system

I have found the solution: in Grafana we can substitute variables inside the metric name like this. I'm using the latest stable version of Grafana and it worked for me.
neo4j_${var}_transaction_last_closed_tx_id_total{job='$job', instance=~"$neo4j_instance"}
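For completeness, the substitution relies on a dashboard variable named var whose values are the database names. A rough sketch of how that variable could look in the dashboard JSON (the variable name and value list here are illustrative; it can equally be created as a Custom variable in the UI):

"templating": {
  "list": [
    {
      "name": "var",
      "type": "custom",
      "query": "graph_db,system"
    }
  ]
}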

Related

Cassandra cluster using docker

I'm new to Cassandra and wanted to understand and implement the NetworkTopologyStrategy.
I want to create a Cassandra cluster using NetworkTopologyStrategy with multiple data centers. How do I do it?
I tried creating a Docker bridge network and three Cassandra nodes: cas1, cas2, cas3. When I used nodetool to check the status, only a cluster with a single datacenter was created. But I want to create 2 datacenters.
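For reference, a minimal sketch of starting such nodes with the official image's snitch and datacenter environment variables (container names as above; the network name, image tag, and DC names are illustrative):

docker network create cas-net
# first node, datacenter dc1 (acts as the seed)
docker run -d --name cas1 --network cas-net \
  -e CASSANDRA_CLUSTER_NAME=demo \
  -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
  -e CASSANDRA_DC=dc1 \
  cassandra:4.0
# a second node in dc1 (cas2) would look the same; third node goes in datacenter dc2
docker run -d --name cas3 --network cas-net \
  -e CASSANDRA_CLUSTER_NAME=demo \
  -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
  -e CASSANDRA_DC=dc2 \
  -e CASSANDRA_SEEDS=cas1 \
  cassandra:4.0

Note that the official image only applies CASSANDRA_DC when CASSANDRA_ENDPOINT_SNITCH is set to GossipingPropertyFileSnitch.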
There's a document which walks you through this: Initializing a multiple node cluster (multiple datacenters). It's for Cassandra 3.x, but the procedure is pretty much the same for 4.x as well.
But if I had to guess, I'd say there are two things you're probably missing:
In cassandra.yaml, set endpoint_snitch to GossipingPropertyFileSnitch:
endpoint_snitch: GossipingPropertyFileSnitch
That tells Cassandra to check the cassandra-rackdc.properties file for data center and rack information. Inside that file, you'll find the following settings (by default).
dc=dc1
rack=rack1
This is where you can set the name of the new DC. Then you can use those data center names to specify replication on keyspaces using NetworkTopologyStrategy.
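As a concrete sketch (DC names, rack names, and the keyspace name are illustrative), nodes in the second datacenter would carry a different dc value in cassandra-rackdc.properties:

dc=dc2
rack=rack1

A keyspace can then replicate across both datacenters:

CREATE KEYSPACE my_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};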

INSERT INTO ... in MariaDB in Ubuntu under Windows WSL2 results in corrupted data in some columns

I am migrating a MariaDB database into a Linux docker container.
I am using mariadb:latest in Ubuntu 20 LTS via Windows 10 WSL2 via VSCode Remote WSL.
I have copied the sql dump into the container and imported it into the InnoDB database which has DEFAULT CHARACTER SET utf8. It does not report any errors:
> source /test.sql
That file does this (actual data truncated for this post):
USE `mydb`;
DROP TABLE IF EXISTS `opsitemtest`;
CREATE TABLE `opsitemtest` (
`opId` int(11) NOT NULL AUTO_INCREMENT,
`opKey` varchar(50) DEFAULT NULL,
`opName` varchar(200) DEFAULT NULL,
`opDetails` longtext,
PRIMARY KEY (`opId`),
KEY `token` (`opKey`)
) ENGINE=InnoDB AUTO_INCREMENT=4784 DEFAULT CHARSET=latin1;
insert into `opsitemtest`(`opId`,`opKey`,`opName`,`opDetails`) values
(4773,'8vlte0755dj','VTools addin for MSAccess','<p>There is a super helpful ...'),
(4774,'8vttlcr2fTA','BAS OLD QB','<ol>\n<li><a href=\"https://www.anz.com/inetbank/bankmain.asp\" ...'),
(4783,'9c7id5rmxGK','STP - Single Touch Payrol','<h1>Gather data</h1>\n<ol style=\"list-style-type: decimal;\"> ...');
If I source a subset of 12 records of the table in question all the columns are correctly populated.
If I source the full set of data for the same table (4700 rows), where everything else is the same, many of the opDetails longtext fields show a length in SQLyog but no data is visible. If I run a SELECT on that column there are no errors, but some of the opDetails fields are "empty" (meaning: you can't see any data), and when I serialize that field, the opDetails column of some records (not all) has
"opDetails" : "\u0000\u0000\u0000\u0000\u0000\u0000\",
( and many more \u0000 ).
The opDetails field contains HTML fragments. I am guessing it is something to do with that content and possibly the CHARSET, although that doesn't explain why the error shows up only when there are a large number of rows imported. The same row imported via a set of 12 rows works correctly.
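For what it's worth, a quick way to confirm that the column contains NUL padding rather than nothing (row id taken from the dump above; this is only a diagnostic sketch):

-- LENGTH() counts bytes, so a non-zero length with "invisible" content
-- plus a hex prefix of 0000... points at NUL bytes rather than missing data
SELECT opId, LENGTH(opDetails), HEX(LEFT(opDetails, 16))
FROM opsitemtest
WHERE opId = 4774;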
The same test of the full set of data on a Windows box with MariaDB running on that host (ie no Ubuntu or WSL etc) all works perfectly.
I tried setting the table charset to utf8 to match the database default but that had no effect. I assume it is some kind of Windows WSL issue but I am running the source command on the container all within the Ubuntu host.
The MariaDB data folder is mapped using a volume, again all inside the Ubuntu container:
volumes:
- ../flowt-docker-volumes/mariadb-data:/var/lib/mysql
Can anyone offer any suggestions while I go through and try manually removing content until it works? I am really in the dark here.
EDIT: I just ran the same import process on a Mac to a MariaDB container on the OSX host to check whether it was actually related to Windows WSL etc and the OSX database has the same issue. So maybe it is a MariaDB docker issue?
EDIT 2: It looks like it has nothing to do with the actual content of opDetails. For a given row that is showing the symptoms, whether or not the data gets imported correctly seems to depend on how many rows I am importing! For a small number of rows, all is well. For a large number there is missing data, but always the same rows and opDetails field. I will try importing in small chunks but overall the table isn't THAT big!
EDIT 3: I tried a docker-compose without a volume and imported the data directly into the MariaDB container. Same problem. I was wondering whether it was a file system incompatibility or some kind of speed issue. Yes, grasping at straws!
Thanks,
Murray
OK. I got it working. :-)
One piece of info I neglected to mention, and it might not be relevant anyway, is that I was importing from an SQL dump from 10.1.48-MariaDB-0ubuntu0.18.04.1, because I was migrating a legacy app.
So, with my docker-compose setup, these were the results:

Version           Result
mysql:latest      data imported correctly
mariadb:latest    failed as per this issue
mariadb:10.7.4    failed as per this issue
mariadb:10.7      failed as per this issue
mariadb:10.6      data imported correctly
mariadb:10.5      data imported correctly
mariadb:10.2      data imported correctly
Important: remember to completely remove the external volume mount folder content between tests!
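For anyone following along, a minimal docker-compose sketch pinning the working version (the service name and credential are placeholders; the volume path matches the one above):

services:
  mariadb:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder credential
    volumes:
      - ../flowt-docker-volumes/mariadb-data:/var/lib/mysql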
So, now I am not sure whether the issue was some kind of sql incompatibility that I need to be aware of, or whether it is a bug that was introduced between v10.6 and 10.7. Therefore I have not logged a bug report. If others with more expertise think this is a bug, I am happy to make a report.
For now I am happy to use 10.6 so I can progress the migration; the deadline is looming!
So, this is sort of "solved".
Thanks for all your help. If I discover anything further I will post back here.
Murray

Installation & configuration of Gremlin-Neo4j on Windows

Hi, I am new to Gremlin and Neo4j. Can anyone please tell me how to install and configure this database?
I used this http://tinkerpop.apache.org/docs/3.1.0-incubating/ link for reference but I can't configure it.
That's a really old version of TinkerPop you are referencing in that link. The latest version is 3.3.3; please consider using that.
The most simple way to get started is to just create a Graph instance which will start Neo4j in an embedded mode:
// embedded Neo4j via TinkerPop's neo4j-gremlin module
Graph graph = Neo4jGraph.open("data/neo4j");
GraphTraversalSource g = graph.traversal();
List<Vertex> vertices = g.V().toList();
To have greater control over the Neo4j-specific configuration rather than relying on all the defaults, you will want to create a properties file or Configuration object and pass that to open() rather than a directory where your data is:
Configuration conf = new BaseConfiguration();
conf.setProperty("gremlin.neo4j.directory","/tmp/neo4j");
conf.setProperty("gremlin.neo4j.multiProperties",false);
conf.setProperty("gremlin.neo4j.conf.dbms.transaction.timeout","60000s");
Graph graph = Neo4jGraph.open(conf);
GraphTraversalSource g = graph.traversal();
List<Vertex> vertices = g.V().toList();
I'd suggest sticking to embedded mode initially, but connecting in high availability mode is also possible using the "configuration" approach above with specifics defined here in the documentation.
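For completeness, a rough sketch of what that configuration-based HA connection might look like (the ha.* values shown are standard Neo4j 3.x HA settings and are illustrative; check the linked documentation for the exact set your version needs):

Configuration conf = new BaseConfiguration();
conf.setProperty("gremlin.neo4j.directory", "/tmp/neo4j.ha");
// Neo4j HA settings are passed through with the gremlin.neo4j.conf.* prefix
conf.setProperty("gremlin.neo4j.conf.dbms.mode", "HA");
conf.setProperty("gremlin.neo4j.conf.ha.server_id", "1");
conf.setProperty("gremlin.neo4j.conf.ha.initial_hosts", "localhost:5001,localhost:5002,localhost:5003");
Graph graph = Neo4jGraph.open(conf);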

Neo4j APOC import error

I have a data model that starts with a single record; this has a custom "recordId" that's a UUID, and it relates out to other nodes, which then in turn relate to each other. That starting node is what defines the data that "belongs" together, as if we had separate databases inside Neo4j. I need to export this data into a backup data set that can be re-imported into either the same or a new database with ease.
After some help, I'm using APOC to do the export:
call apoc.export.cypher.query("MATCH (start:installations)
WHERE start.recordId = \"XXXXXXXX-XXX-XXX-XXXX-XXXXXXXXXXXXX\"
CALL apoc.path.subgraphAll(start, {}) YIELD nodes, relationships
RETURN nodes, relationships", "/var/lib/neo4j/data/test_export.cypher", {})
There are then 2 problems I'm having:
Problem 1 is that the exported data uses internal Neo4j identifiers to generate the relationships. This is bad if we need to import into a new database where the UNIQUE IMPORT ID values already exist. I need to have this data generated with my own custom recordIds as the point of reference.
Problem 2 is that the import doesn't even work.
call apoc.cypher.runFile("/var/lib/neo4j/data/test_export.cypher") yield row, result
returns:
Failed to invoke procedure apoc.cypher.runFile: Caused by: java.lang.RuntimeException: Error accessing file /var/lib/neo4j/data/test_export.cypher
I'm hoping someone can help me figure out what may be going on, but I'm not sure what additional info is helpful. No one in the Neo4j slack channel has been able to help find a solution.
Thanks.
Problem 1:
The exported file does not contain any internal Neo4j ids. It is not safe to use Neo4j ids outside the database, since they are not globally unique. So you should not use them to transfer data from one database to another.
If you want to use globally unique ids, you can use an external plugin like the GraphAware UUID plugin. (Disclaimer: I work for GraphAware.)
Problem 2:
If you cannot access the file, there are two likely reasons:
apoc.import.file.enabled=true is not set in neo4j.conf
OS-level file permissions on the export file are not set correctly
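As a sketch of fixing both (the file path is the one from the question; the neo4j user/group name is an assumption, and on newer APOC releases the setting lives in apoc.conf rather than neo4j.conf):

# neo4j.conf
apoc.import.file.enabled=true

# make the export file readable by the Neo4j service user
chown neo4j:neo4j /var/lib/neo4j/data/test_export.cypher
chmod 640 /var/lib/neo4j/data/test_export.cypher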

Problems with Dockerbeats dashboard containerName field

I have dockerbeats set up on a local cluster that is running the ELK stack and some other miscellaneous Docker containers (all containers controlled via Kubernetes). I set up the dashboard from Ingensi (Ingensi dockerbeat Dashboard) for Kibana and ran into an issue with the containerNames field while setting up the graphs. Now, for context, my Docker containers have names like these:
k8s_dockerbeats.79c42f90_dockerbeats-796n9_default_472faa11-1b3a-11e6-8bf4-28924a2bffbf_2832ea88
k8s_POD.6d00e006_dockerbeats-796n9_default_472faa11-1b3a-11e6-8bf4-28924a2bffbf_3ddcfe44
(as well as supporting containers for Kubernetes with similar container names)
When I set up the dashboard in Kibana, I get an issue where I get multiple containerNames from the same container. For example, instead of a single containerName output, I get the containerName split up into smaller segments:
k8s_dockerbeats
79c42f90_dockerbeats
796n9
28924a2bffbf_3ddcfe44
and so on...
I assume that the format of the container name is confusing the dashboard (maybe in the way that it parses the name information), and I could probably go around renaming every container to a more sensible name.
But before I do that, is there a way to configure the dashboard so that it reads in the entire container name string and does not break it up like it does above? (I'm assuming I'll have to dig into the .json files from the repository mentioned above.)
Thanks in advance if anyone answers this.
It sounds like the container name is being analyzed by Elasticsearch. You need to make sure that the container name field is marked as not_analyzed in the Elasticsearch index template. You can do this by installing the index template provided by Dockerbeat.
Marking the field as not_analyzed ensures that the data is not tokenized and it gets indexed as is. It will only be searchable by specifying the exact string.
You will need to delete your current indexes after installing the new index template in order to change the mappings.
Install the provided index template:
curl -XPUT 'http://elasticsearch:9200/_template/dockerbeat' -d#dockerbeat.template.json
Delete the existing indexes:
curl -XDELETE 'http://elasticsearch:9200/dockerbeat-*'
You can view your current mappings by querying Elasticsearch:
curl http://elasticsearch:9200/dockerbeat-*/_mapping
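For reference, the relevant part of such a template simply maps the field as a non-analyzed string. A minimal sketch (the template pattern and field name are assumptions based on the question, and the not_analyzed syntax applies to Elasticsearch 1.x/2.x):

{
  "template": "dockerbeat-*",
  "mappings": {
    "_default_": {
      "properties": {
        "containerNames": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}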

Resources