I have a Cassandra ring with 16 nodes and ~1.1 billion records stored in it. It is still a bit unclear to me how some Cassandra metrics are interpreted and what exactly they mean.
For example, using the jconsole of one node we have access to these metrics:
org.apache.cassandra.metrics.ClientRequest.Read.Latency.Attributes.count - 12.612.001
org.apache.cassandra.metrics.Table.ReadLatency.Attributes.Count - 12.570.466
org.apache.cassandra.metrics.Table...ReadLatency.Attributes.Count - 12.569.578
org.apache.cassandra.metrics.ColumnFamily.ReadLatency.Attributes.Count - 12.570.466
org.apache.cassandra.metrics.Keyspace..ReadLatency.Attributes.Count - 12.570.199
nodetool tablestats mdb.experiment -> Read Count - 12.570.199
As you can see, some of these metrics have very close values and some are identical. Do these values refer to the table level or the node level?
I would like to know the total writes/sec and reads/sec across the whole Cassandra ring. Should I sum metric 1) from all the nodes?
I am importing several TB of CSV data into Neo4j for a project I have been working on. I have enough fast storage for the estimated 6.6TiB; however, the machine has only 32GB of memory, and the import tool is suggesting 203GB to complete the import.
When I run the import, I see the following (I assume it exited because it ran out of memory). Is there any way I can import this large dataset with the limited amount of memory I have? Or, failing that, with the maximum ~128GB that this machine's motherboard can support?
Available resources:
Total machine memory: 30.73GiB
Free machine memory: 14.92GiB
Max heap memory : 6.828GiB
Processors: 16
Configured max memory: 21.51GiB
High-IO: true
WARNING: estimated number of nodes 37583174424 may exceed capacity 34359738367 of selected record format
WARNING: 14.62GiB memory may not be sufficient to complete this import. Suggested memory distribution is:
heap size: 5.026GiB
minimum free and available memory excluding heap size: 202.6GiB
Import starting 2022-10-08 19:01:43.942+0000
Estimated number of nodes: 15.14 G
Estimated number of node properties: 97.72 G
Estimated number of relationships: 37.58 G
Estimated number of relationship properties: 0.00
Estimated disk space usage: 6.598TiB
Estimated required memory usage: 202.6GiB
(1/4) Node import 2022-10-08 19:01:43.953+0000
Estimated number of nodes: 15.14 G
Estimated disk space usage: 5.436TiB
Estimated required memory usage: 202.6GiB
.......... .......... .......... .......... .......... 5% ∆1h 38m 2s 867ms
neo4j#79d2b0538617:~/import$
TL;DR: Using Periodic Commit, or Transaction Batching
If you're trying to follow the Operations Manual: Neo4j Admin Import, and your csv matches the movies.csv in that example, I would suggest instead doing a more manual USING PERIODIC COMMIT LOAD CSV...:
Stop the db.
Put your csv at neo4j/import/myfile.csv.
If you're using Desktop: Project > DB > click the ... on the right > Open Folder.
Add the APOC plugin.
Start the DB.
Next, open a browser instance, run the following (adjust for your data), and leave it until tomorrow:
USING PERIODIC COMMIT LOAD CSV FROM 'file:///myfile.csv' AS line
WITH line[3] AS nodeLabels, {
id: line[0],
title: line[1],
year: toInteger(line[2])
} AS nodeProps
CALL apoc.create.node(SPLIT(nodeLabels, ';'), nodeProps) YIELD node
RETURN count(node)
Note: There are many ways to solve this problem, depending on your source data and the model you wish to create. This solution is only meant to give you a handful of tools to help you get around the memory limit. If it is a simple CSV, and you don't care about what labels the nodes get initially, and you have headers, you can skip the complex APOC, and probably just do something like the following:
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///myfile.csv' AS line
CREATE (a :ImportedNode)
SET a = line
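A side note, in case you're on Neo4j 4.4 or newer where USING PERIODIC COMMIT is deprecated: the equivalent batching can be done with CALL { } IN TRANSACTIONS. A minimal sketch, reusing the hypothetical myfile.csv from above:
// prefix with :auto in Browser so the query runs in an implicit transaction
:auto LOAD CSV WITH HEADERS FROM 'file:///myfile.csv' AS line
CALL {
  WITH line
  CREATE (a :ImportedNode)
  SET a = line
} IN TRANSACTIONS OF 500 ROWS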
File for Each Label
The original asker mentioned having a separate CSV for each label. In such cases it may be helpful to have one big command that handles all of it, rather than manually stepping through the operation file by file.
Assuming two label-types, each with a unique 'id' property, and one with a 'parent_id' referencing the other label...
UNWIND [
{ file: 'country.csv', label: 'Country'},
{ file: 'city.csv', label: 'City'}
] AS importFile
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///' + importFile.file AS line
CALL apoc.merge.node([importFile.label], {id: line.id}) YIELD node
SET node = line
;
// then build the relationships
MATCH (city :City)
WHERE city.parent_id IS NOT NULL
MATCH (country :Country {id: city.parent_id})
MERGE (city)-[:IN]->(country)
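Assuming those id properties are meant to be unique, it may also help to create uniqueness constraints before running the load, so the MERGE lookups are backed by an index rather than label scans (a sketch, using the 3.x/4.x constraint syntax):
// one constraint per label used in the import above
CREATE CONSTRAINT ON (c:Country) ASSERT c.id IS UNIQUE;
CREATE CONSTRAINT ON (c:City) ASSERT c.id IS UNIQUE;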
I am using the consul exporter to ingest the health and status of my services into Prometheus. I'd like to fire alerts when the status of services and nodes in Consul is critical and then use tags extracted from Consul when routing those alerts.
I understand from this discussion that service tags are likely to be exported as a separate metric, but I'm not sure how to join one series with another so I can leverage the tags with the health status.
For example, the following query:
max(consul_health_service_status{status="critical"}) by (service_name, status,node) == 1
could return:
{node="app-server-02",service_name="app-server",status="critical"} 1
but I'd also like 'env' from this series:
consul_service_tags{node="app-server-02",service_name="app-server",env="prod"} 1
to get joined along node and service_name to pass the following to the Alertmanager as a single series:
{node="app-server-02",service_name="app-server",status="critical",env="prod"} 1
I could then match 'env' in my routing.
Is there any way to do this? It doesn't look to me like any operations or functions give me the ability to group or join like this. As far as I can see, the tags would already need to be labels on the consul_health_service_status metric.
You can use the argument list of group_left to include extra labels from the right operand (parentheses and indents for clarity):
(
max(consul_health_service_status{status="critical"})
by (service_name,status,node) == 1
)
+ on(service_name,node) group_left(env)
(
0 * consul_service_tags
)
The important part here is the operation + on(service_name,node) group_left(env):
the + is "abused" as a join operator (fine since 0 * consul_service_tags always has the value 0)
group_left(env) is the modifier that includes the extra label env from the right (consul_service_tags)
The accepted answer here is accurate. I also want to share a clearer explanation of joining two metrics that share the SAME labels (it might not directly answer the question). Both metrics carry the following label:
name (e.g. aaa, bbb, ccc)
I have a metric named metric_a, and if it returns no data for some of the label values, I want to fall back to metric_b. That is:
metric_a has values for {name="aaa"} and {name="bbb"}
metric_b has values for {name="ccc"}
I want the output to include all three name values. The solution is to use the or operator in PromQL:
sum by (name) (increase(metric_a[1w]))
or
sum by (name) (increase(metric_b[1w]))
The result of this will have values for {name="aaa"}, {name="bbb"} and {name="ccc"}.
It is a good practice in the Prometheus ecosystem to expose additional labels, which can be joined to multiple metrics, via a separate info-like metric, as explained in this article. For example, the consul_service_tags metric exposes a set of tags, which can be joined to other metrics via the (service_name, node) labels.
The join is usually performed via on() and group_left() modifiers applied to * operation. The * doesn't modify values for time series on the left side because info-like metrics usually have constant 1 values. The on() modifier is used for limiting the labels used for finding matching time series on the left and the right side of *. The group_left() modifier is used for adding additional labels from time series on the right side of *. See these docs for details.
For example, the following PromQL query adds env label from consul_service_tags metric to consul_health_service_status metric with the same set of (service_name, node) labels:
consul_health_service_status
* on(service_name, node) group_left(env)
consul_service_tags
Additional label filters can be added to consul_health_service_status if needed. For example, the following query returns only time series with status="critical" label:
consul_health_service_status{status="critical"}
* on(service_name, node) group_left(env)
consul_service_tags
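Putting the pieces together for the alerting use case in the question, a sketch of the complete expression (assuming, as above, that consul_service_tags has a single series with value 1 per (service_name, node) pair) could look like:
max by (service_name, status, node) (consul_health_service_status{status="critical"})
  * on (service_name, node) group_left (env)
    consul_service_tags
== 1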
select SUM(value)
from /measurment1|measurment2/
where time > now() - 60m and host = 'hostname' limit 2;
Name: measurment1
time sum
---- ---
1505749307008583382 4680247
name: measurment2
time sum
---- ---
1505749307008583382 3004489
But is it possible to get the value of SUM(measurment1+measurment2), so that I see only one output row?
This is not possible in the Influx query language; it does not support functions across measurements.
If this is something you require, you may be interested in layering another API on top of InfluxDB that can do this, like Graphite via Influxgraph.
For the above, something like this:
/etc/graphite-api.yaml:
finders:
- influxgraph.InfluxDBFinder
influxdb:
db: <your database>
templates:
# Produces metric paths like 'measurement1.hostname.value'
- measurement.host.field*
Start the graphite-api/influxgraph webapp.
A query /render?from=-60min&target=sum(*.hostname.value) then produces the sum of value on tag host='hostname' for all measurements.
{measurement1,measurement2}.hostname.value can be used instead to limit it to specific measurements.
NB - performance-wise (for InfluxDB), it is best to have multiple values in the same measurement rather than the same value field name spread across multiple measurements.
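To illustrate that last point with a sketch (value_a, value_b and combined_measurement are hypothetical names): if both values lived as fields of one measurement, a single InfluxQL query could return the combined sum directly:
select SUM("value_a") + SUM("value_b") as total
from "combined_measurement"
where time > now() - 60m and host = 'hostname'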
I have a Neo4J database with the following properties:
Array Store 8.00 KiB
Logical Log 16 B
Node Store 174.54 MiB
Property Store 477.08 MiB
Relationship Store 3.99 GiB
String Store Size 174.34 MiB
Total Store Size 5.41 GiB
There are 12M nodes and 125M relationships.
So you could say this is a pretty large database.
My OS is Windows 10 64-bit, running on an Intel i7-4500U CPU @ 1.80GHz with 8GB of RAM.
This isn't a complete powerhouse, but it's a decent machine and in theory the total store could even fit in RAM.
However, when I run a very simple query (using the Neo4j Browser):
MATCH (n {title:"A clockwork orange"}) RETURN n;
I get a result:
Returned 1 row in 17445 ms.
I also sent a POST request with the same query to http://localhost:7474/db/data/cypher; this took 19 seconds.
A direct node fetch, something like this:
http://localhost:7474/db/data/node/15000
is, however, executed in 23ms...
And I can confirm there is an index on title:
Indexes
ON :Page(title) ONLINE
So anyone have ideas on why this might be running so slow?
Thanks!
This has to scan all nodes in the db - if you re-run your query using n:Page instead of just n, it'll use the index on those nodes and you'll get better results.
To expand this a bit more - INDEX ON :Page(title) is only for nodes with a :Page label, and in order to take advantage of that index your MATCH() needs to specify that label in its search.
If a MATCH() is specified without a label, the query engine has no "clue" what you're looking for so it has to do a full db scan in order to find all the nodes with a title property and check its value.
That's why
MATCH (n {title:"A clockwork orange"}) RETURN n;
is taking so long - it has to scan the entire db.
If you tell the MATCH() you're looking for a node with a :Page label and a title property -
MATCH (n:Page {title:"A clockwork orange"}) RETURN n;
the query engine knows you're looking for nodes with that label, it also knows that there's an index on that label it can use - which means it can perform your search with the performance you're looking for.
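If you want to confirm that the index is actually being used, you can prefix the query with PROFILE and compare the plans; the labelled version should show a NodeIndexSeek instead of an AllNodesScan:
PROFILE MATCH (n:Page {title:"A clockwork orange"}) RETURN n;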
When I run a script that tries to batch merge all nodes of certain types, I get some weird performance results.
When merging 2 collections of nodes (~42k) and (~26k), the performance is nice and fast.
But when I merge (~42k) and (5), performance DRAMATICALLY degrades. I'm batching the ParentNodes (the ~42k set is split up into batches of 500). Why does performance drop when I'm, essentially, merging fewer nodes (when the batch size is the same, but the source set is large and the target set is small)?
Relation Query:
MATCH (s:ContactPlayer)
WHERE has(s.ContactPrefixTypeId)
WITH collect(s) AS allP
WITH allP[7000..7500] as rangedP
FOREACH (parent in rangedP |
MERGE (child:ContactPrefixType
{ContactPrefixTypeId:parent.ContactPrefixTypeId}
)
MERGE (child)-[r:CONTACTPLAYER]->(parent)
SET r.ContactPlayerId = parent.ContactPlayerId ,
r.ContactPrefixTypeId = child.ContactPrefixTypeId )
Performance Results:
Process Starting
Starting to insert Contact items
[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++]
Total time for 42149 Contact items: 19176.87ms
Average time per batch (500): 213.4ms
Longest batch time: 663ms
Starting to insert ContactPlayer items
[++++++++++++++++++++++++++++++++++++++++++++++++++++++++]
Total time for 27970 ContactPlayer items: 9419.2106ms
Average time per batch (500): 167.75ms
Longest batch time: 689ms
Starting to relate Contact to ContactPlayer
[++++++++++++++++++++++++++++++++++++++++++++++++++++++++]
Total time taken to relate Contact to ContactPlayer: 7907.4877ms
Average time per batch (500): 141.151517857143ms
Longest batch time: 883.0918ms for Batch number: 0
Starting to insert ContactPrefixType items
[+]
Total time for 5 ContactPrefixType items: 22.0737ms
Average time per batch (500): 22ms
Longest batch time: 22ms
Already inserted data for Contact.
Starting to relate ContactPrefixType to Contact
[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++]
Total time taken to relate ContactPrefixType to Contact: 376540.8309ms
Average time per batch (500): 4429.78643647059ms
Longest batch time: 14263.1843ms for Batch number: 63
So far, the best I could come up with is the following (and it's a hack, specific to my environment):
If / Else condition:
If childrenNodes.count() < 200 -> assume they are type identifiers for the parent... i.e. ContactPrefixType
Else assume it is a matrix for relating multiple item types together (i.e. ContactAddress)
If childNodes < 200
MATCH (parent:{parentLabel}),
(child:{childLabel} {{childLabelIdProperty}:parent.{parentRelationProperty}})
CREATE child-[r:{relationshipLabel}]->parent
This takes about 3-5 seconds to complete per relationship type
Else
MATCH (child:{childLabel}),
(parent:{parentLabel} {{parentPropertyField}: child.{childLabelIdProperty}})
WITH collect(parent) as parentCollection, child
WITH parentCollection[{batchStart}..{batchEnd}] as coll, child
FOREACH (parent in coll |
CREATE child-[r:{relationshipLabel}]-parent )
I'm not sure this is the most efficient way of doing this, but after trying MANY different options, this seems to be the fastest.
Stats:
insert 225,018 nodes with 2,070,977 properties
create 464,606 relationships
Total: 331 seconds.
Because this is a straight import and I'm not dealing with updates yet, I assume that all the relationships are correct and don't need to worry about invalid data. However, I will try to set properties on the relationship so that I can perform cleanup later (i.e. store the parent and child IDs in the relationship as properties for later reference).
If anyone can improve on this, I would love it.
Can you pass the ids in as parameters rather than fetch them from the graph? The query could look like
MATCH (s:ContactPlayer {ContactPrefixTypeId: {cptid}})
MERGE (c:ContactPrefixType {ContactPrefixTypeId: {cptid}})
MERGE c-[:CONTACT_PLAYER]->s
If you use the REST API Cypher resource, I think the entity should look something like
{
"query":...,
"params": {
"cptid":id1
}
}
If you use the transactional endpoint, it should look something like this. You control transaction size by the number of statements in each call, and also by the number of calls before you commit. More here.
{
  "statements": [
    {
      "statement":...,
      "parameters": {
        "cptid":id1
      }
    },
    {
      "statement":...,
      "parameters": {
        "cptid":id2
      }
    }
  ]
}
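As a variation on the same idea (a sketch only; {ids} is a hypothetical list parameter holding one batch of ContactPrefixTypeId values), you could also send one statement per batch and UNWIND the list, so each transaction covers, say, 500 ids at once:
// {ids} is supplied by the client as a list of ContactPrefixTypeId values
UNWIND {ids} AS cptid
MATCH (s:ContactPlayer {ContactPrefixTypeId: cptid})
MERGE (c:ContactPrefixType {ContactPrefixTypeId: cptid})
MERGE (c)-[:CONTACT_PLAYER]->(s)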