Prometheus empty query results

I'm trying to find metric data for net_conntrack_dialer_conn_attempted_total, which the exporter exposes as:
# HELP net_conntrack_dialer_conn_attempted_total Total number of connections attempted by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_attempted_total counter
but I receive an empty result. What could be the reason?
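A useful first check is whether Prometheus has the series at all. Below is a minimal sketch that queries the instant-query HTTP API for the metric; the Prometheus address and the use of the requests library are assumptions, not part of the original question.

import requests

PROM = "http://localhost:9090"  # assumed Prometheus address

# An instant query for the counter. An empty "result" list means no series
# with that name currently exists in the TSDB, which usually points at the
# target not being scraped, or the metric being dropped by relabelling.
resp = requests.get(
    PROM + "/api/v1/query",
    params={"query": "net_conntrack_dialer_conn_attempted_total"},
)
result = resp.json()["data"]["result"]
print(result or "no series found - check the target's /metrics page and the 'up' metric")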

Related

Downsampling: get a constant value (e.g. sensor name) from GROUP BY

I created a continuous query to downsample readings from temperature sensors in my InfluxDB, storing hourly means for a longer time. There are readings from multiple sensors in one table. Upon executing the query, the sensor's IP is missing.
Basic data looks like this:
> SELECT ip,tC FROM ht LIMIT 5
name: ht
time ip tC
---- -- --
1671057540000000000 192.168.0.83 21
1671057570000000000 192.168.0.83 21
1671057750000000000 192.168.0.17 21.38
The continuous query (simplified without CREATE ... END):
SELECT last(ip), mean("tC") AS "mean_temp" INTO "downsampled"."ht_downsampled" FROM "ht" GROUP BY time(1h),ip
The issue is that 'ip' is only a tag, not a field value, and consequently it is missing from the measurement the query inserts into:
name: ht
tags: ip=192.168.0.17
time ip mean_temp mean_hum
---- -- --------- --------
1671055200000000000 21.47 42.75
1671058800000000000 21.39428571428571 48.785714285714285
1671062400000000000 21.314999999999998 51.625
Why is last(ip) not producing any value?
Can I get the value from the 'tags' into the table?
Is there a different approach to group data with a constant value?
Could you try querying ip instead of last(ip), since you are already grouping by ip in the statement?
Sample code:
SELECT ip, mean("tC") AS "mean_temp" INTO "downsampled"."ht_downsampled" FROM "ht" GROUP BY time(1h), ip
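For reference, here is how the full continuous query might look, and how to read the result back. This is a sketch using the influxdb Python client; the database name sensors and the connection settings are assumptions. Note that because the CQ groups by ip, InfluxDB stores ip as a tag on every downsampled point, so the per-sensor value is preserved even when it does not show up as a column.

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="sensors")  # assumed

# Full CQ wrapping the SELECT above; the GROUP BY tag 'ip' is carried
# into "ht_downsampled" as a tag on each output series.
client.query(
    'CREATE CONTINUOUS QUERY "cq_ht_hourly" ON "sensors" '
    'BEGIN '
    'SELECT mean("tC") AS "mean_temp" '
    'INTO "downsampled"."ht_downsampled" '
    'FROM "ht" GROUP BY time(1h), ip '
    'END'
)

# Reading it back per sensor: the ip tag distinguishes the series.
for point in client.query(
    'SELECT "mean_temp" FROM "downsampled"."ht_downsampled" '
    'WHERE time > now() - 1d GROUP BY ip'
).get_points():
    print(point)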

InfluxDB: querying values from 2 measurements and using SUM() for the total value

select SUM(value)
from /measurment1|measurment2/
where time > now() - 60m and host = 'hostname' limit 2;
Name: measurment1
time sum
---- ---
1505749307008583382 4680247
name: measurment2
time sum
---- ---
1505749307008583382 3004489
But is it possible to get the value of SUM(measurment1+measurment2), so that I see only one output?
Not possible in the Influx query language; it does not support functions across measurements.
If this is something you require, you may be interested in layering another API on top of InfluxDB that can do this, like Graphite via Influxgraph.
For the above, something like this would work.
/etc/graphite-api.yaml:
finders:
  - influxgraph.InfluxDBFinder
influxdb:
  db: <your database>
templates:
  # Produces metric paths like 'measurement1.hostname.value'
  - measurement.host.field*
Start the graphite-api/influxgraph webapp.
A query /render?from=-60min&target=sum(*.hostname.value) then produces the sum of value on tag host='hostname' for all measurements.
{measurement1,measurement2}.hostname.value can be used instead to limit it to specific measurements.
NB: performance-wise (on the InfluxDB side), it is best to have multiple values in the same measurement rather than the same value field name in multiple measurements.
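To illustrate the NB, here is a sketch of the single-measurement layout using the influxdb Python client; the measurement name traffic and the field names value1/value2 are made up for illustration. With both values as fields of one measurement, one query can sum them, since InfluxQL allows basic math across aggregate calls within a single measurement.

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="metrics")  # assumed

# One measurement with two fields, instead of two measurements sharing
# the same field name.
client.write_points([{
    "measurement": "traffic",
    "tags": {"host": "hostname"},
    "fields": {"value1": 4680247, "value2": 3004489},
}])

# A single query returns the combined total.
result = client.query(
    "SELECT SUM(value1) + SUM(value2) FROM traffic "
    "WHERE time > now() - 60m AND host = 'hostname'"
)
print(list(result.get_points()))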

Not able to view measurement after creating continuous query

I have a measurement in InfluxDB with the following fields. The measurement name is data.
Measurement name: data
Time Device Interface Metric Value
2016-10-11T19:00:00Z device1_name int_name In_bits 10
2016-10-11T19:00:00Z device2_name int_name Out_bits 5
.
.
.
I have created a continuous query as follows:
CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT sum("value") as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits' GROUP BY time(15m), device END
After creating this query, how do I see the results saved in the new measurement data.copy? I am not able to find the new measurement that has been created. If I'm doing something wrong, I would like to get more input on this matter. Thanks!
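Two things are worth checking here, sketched below with the influxdb Python client (connection details are assumptions). First, the quoted INTO target creates a measurement literally named data.copy, which must also be quoted when queried. Second, a continuous query only processes data that arrives after it is created, so the target measurement stays empty until the next time(15m) interval has new points.

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="db_name")  # assumed

# The CQ target should appear here once the CQ has run at least once.
print(list(client.query("SHOW MEASUREMENTS").get_points()))

# "data.copy" is a single measurement name containing a dot, so keep the quotes.
print(list(client.query('SELECT * FROM "data.copy" LIMIT 5').get_points()))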

May I shuffle the order of the e-mails in a response for a FETCH command?

If the client does a FETCH with a range of sequence numbers, must the server response give each e-mail in ascending sequence number order?
RFC 3501 contains the following example of a FETCH command.
C: A654 FETCH 2:4 (FLAGS BODY[HEADER.FIELDS (DATE FROM)])
S: * 2 FETCH ....
S: * 3 FETCH ....
S: * 4 FETCH ....
S: A654 OK FETCH completed
Would the following example represent a compliant server?
C: A654 FETCH 2:4 (FLAGS BODY[HEADER.FIELDS (DATE FROM)])
S: * 3 FETCH ....
S: * 4 FETCH ....
S: * 2 FETCH ....
S: A654 OK FETCH completed
I could not find anything in the sections for the FETCH request and FETCH response regarding the order of the response.
You can reorder as much as you want. The paragraph Paurian quotes applies to UID assignment, not to reporting.
It's also safe in practice: Symantec's IMAP proxy (I forget its name, but its job is to scan for naughty attachments and present a sanitised view of the world to IMAP clients) sends FETCH responses in an unpredictable order, and the main developer knows of no problems resulting from that.
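Given that, a client should key FETCH responses by the sequence number carried in each untagged reply rather than by arrival order. A minimal sketch with Python's imaplib; the host and credentials are placeholders.

import imaplib
import re

M = imaplib.IMAP4_SSL("imap.example.com")  # placeholder host
M.login("user", "password")                # placeholder credentials
M.select("INBOX")

typ, data = M.fetch("2:4", "(FLAGS BODY[HEADER.FIELDS (DATE FROM)])")

# Each untagged response begins with its message sequence number, so the
# results can be reassembled regardless of the order the server sent them in.
by_seq = {}
for part in data:
    if isinstance(part, tuple):  # (b'2 (FLAGS ... {n}', b'header literal')
        seq = int(re.match(rb"(\d+)", part[0]).group(1))
        by_seq[seq] = part[1]
for seq in sorted(by_seq):
    print(seq, by_seq[seq])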
From what I understand, no. The sequence must be in order. [See comments, below - the spec covers storage order, not retrieval order.]
2.3.1.1. Unique Identifier (UID) Message Attribute
A 32-bit value assigned to each message, which when used with the
unique identifier validity value (see below) forms a 64-bit value
that MUST NOT refer to any other message in the mailbox or any
subsequent mailbox with the same name forever. Unique identifiers
are assigned in a strictly ascending fashion in the mailbox; as each
message is added to the mailbox it is assigned a higher UID than the
message(s) which were added previously. Unlike message sequence
numbers, unique identifiers are not necessarily contiguous.
Since these are sequence numbers, the result must be contiguous.
Section 6.4.8 implies that FETCH without the UID prefix indicates a sequence-number search rather than unique identifiers within your range expression:
... the UID command (variant) takes a SEARCH command with
SEARCH command arguments. The interpretation of the arguments is
the same as with SEARCH; however, the numbers returned in a SEARCH
response for a UID SEARCH command are unique identifiers instead
Source: https://www.rfc-editor.org/rfc/rfc3501

neo4j REST 'Server got itself in trouble'

I am running a very basic test to check my understanding and evaluate the neo4j REST server (neo4j-community-1.8.M07). I am using the Neo4j Python REST Client.
Each test iteration starts with random strings for the source node name and the destination node name. The names contain only letters a..z and numbers 0..9 (oddly enough, I never got it to fail when using A..Z and 0..9). A name may be from one to 36 chars long and contains no repeating chars. I create 36 nodes, where the first node name is one char long and the 36th node name has 36 chars. Then I create relations between all nodes. The name of each relation is the concatenation of the source node name and the destination node name. The final graph has 37 nodes (1 reference node and 36 nodes with names from one char to 36 non-repeating chars) and 1260 relations. Before each test iteration I clear the graph, so that it has only one (the reference) node.
The problem is that after several successful iterations the neo4j REST server crashes:
Error [500]: Internal Server Error. Server got itself in trouble.
Invalid data sent
The query that crashes the system can be different - here is an example of a query_string that caused a problem:
START n_from=node:index_faqts(node_name="h"),
      n_to=node:index_faqts(node_name="hg2b8wpj04ms")
CREATE UNIQUE n_from-[r:`hhg2b8wpj04ms`]->n_to
RETURN r
self.cypher_extension.execute_query( query_string )
I spent a lot of time trying to find a trend, but in vain. If I had done something wrong with the queries, none of the tests would ever work. I have observed crashes after anywhere between 5 and 25 successful test cycles.
What might be causing neo4j REST server to crash?
P.S. Some details...
The nodes are created like this:
...
self.index_faqts["node_name"][p_str_node_name] = \
    self.gdb.nodes.create(**p_dict_node_attributes)
...
Just in case - before issuing the query to create a new relation, I check the graph to make sure that the source and the destination nodes exist. That check never failed.
You are using too many relationship types; currently the limit is 32k. It might be patched in Neo4j if you have a valid use-case.
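A common workaround is to keep a single relationship type and move the variable pair name into a property, so the number of distinct types stays constant no matter how many node pairs are created. Here is a sketch of the revised query string, reusing the asker's execute_query call; the type name CONNECTS is made up for illustration.

# One fixed relationship type; the concatenated pair name becomes a
# property instead of a new type per node pair.
query_string = (
    'START n_from=node:index_faqts(node_name="h"), '
    'n_to=node:index_faqts(node_name="hg2b8wpj04ms") '
    'CREATE UNIQUE n_from-[r:CONNECTS {name: "hhg2b8wpj04ms"}]->n_to '
    'RETURN r'
)
self.cypher_extension.execute_query(query_string)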
