Get difference using Flux - influxdb

I have a simple counter which is stored in InfluxDB. Now I would like to get the difference in the counter value between two points in time, so the result should be a single value.
I tried the following query:
from(bucket: "influxdb")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["host"] == "telegraf-1-18-1")
|> filter(fn: (r) => r["topic"] == "shellies/shellyplug-s-DDE23E/relay/0/energy")
|> difference()
But this does not give the difference between the two counter values (actually I have no idea what the result is supposed to be).
Can anyone give me a hint on how to use difference correctly?
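difference() computes the delta between each pair of consecutive points, so it returns a series of per-point changes rather than one number. To reduce the whole range to a single value, one option is spread() (max minus min, which for a monotonically increasing counter is the same as last minus first). A sketch, untested:
from(bucket: "influxdb")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["host"] == "telegraf-1-18-1")
|> filter(fn: (r) => r["topic"] == "shellies/shellyplug-s-DDE23E/relay/0/energy")
|> spread(column: "_value")
Alternatively, keeping only the first and last point (e.g. a union of first() and last(), sorted by _time) before calling difference() yields the same single value while preserving difference() semantics.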

Related

Flux Query : iterate filter from a list

I'm looking for a way to filter using a loop/iteration over a list. Is that possible?
The table tophdd contains 2 entries, but I can't filter on these two entries with a regex.
tophdd = from(bucket: v.bucket)
|>range(start: v.timeRangeStart, stop: v.timeRangeStop)
|>filter(fn: (r) => r._measurement == "HDDID")
|>filter(fn: (r) => r.serial == "${Serial}")
|>filter(fn: (r) => r._field == "HDDID_IOPS")
|>highestMax(n:2,groupColumns: ["HDDID"])
|>keep(columns: ["HDDID" ])
|>from(bucket: v.bucket)
|>range(start: v.timeRangeStart, stop: v.timeRangeStop)
|>filter(fn: (r) => r._measurement == "HDDID")
|>filter(fn: (r) => r.serial == "${Serial}")
|>filter(fn: (r) => r._field == "HDDID_IOPS")
|>filter(fn: (r) => r.HDDID = =~ /"${tophdd}"/)
|>aggregateWindow(column: "_value", every: v.windowPeriod, fn: mean)
I'm trying to filter like this:
filter(fn: (r) => r.HDDID = =~ /"${tophdd}"/)
Is it possible to filter from a list?
Many thanks,
Looks like you just had a duplicate equal sign ( = = ) there. Try to update the query as follows:
filter(fn: (r) => r.HDDID =~ /"${tophdd}"/)
You can extract the column values into an array using findColumn() and then use the contains() function in the filter. E.g.:
tophdd = from(bucket: v.bucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> ...
|> keep(columns: ["HDDID" ])
|> findColumn(fn: (key) => true, column: "HDDID")
from(bucket: v.bucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "HDDID")
|> ...
|> filter(fn: (r) => contains(set: tophdd, value: r.HDDID))
|> aggregateWindow(column: "_value", every: v.windowPeriod, fn: mean)
Please note that the performance may be suboptimal as contains() is not a pushdown op.
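Putting the two pieces together with the filters from the original query (highestMax() selects the top 2 HDDIDs, findColumn() turns that column into a plain array, and contains() filters on it), the combined query could look roughly like this (untested):
// top 2 HDDIDs by maximum IOPS, extracted as an array of strings
tophdd = from(bucket: v.bucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "HDDID")
|> filter(fn: (r) => r.serial == "${Serial}")
|> filter(fn: (r) => r._field == "HDDID_IOPS")
|> highestMax(n: 2, groupColumns: ["HDDID"])
|> keep(columns: ["HDDID"])
|> findColumn(fn: (key) => true, column: "HDDID")

// only keep series whose HDDID is in the tophdd array
from(bucket: v.bucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "HDDID")
|> filter(fn: (r) => r.serial == "${Serial}")
|> filter(fn: (r) => r._field == "HDDID_IOPS")
|> filter(fn: (r) => contains(set: tophdd, value: r.HDDID))
|> aggregateWindow(column: "_value", every: v.windowPeriod, fn: mean)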

Sum values in InfluxDB

I'm trying to sum values in InfluxDB but I'm struggling a bit.
So, I have a _measurement "plug" with a field "value".
I have different records within the same bucket with a different ID tag.
I can get the evolution of 1 plug with this query:
from(bucket: "test-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "plug")
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["id"] == "tag1")
|> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
|> yield(name: "mean")
What I would like is the exact same graph with the sum of all r["id"].
So, if there is 34 for tag ID "tag1", 11.2 for "tag2" and 0 for "tag3", I would like a graph with 45.2 for that given time.
I've tried the group() function, but I get a strange value, more like an average than a sum.
I've also tried sum(), but then Influx seems to sum all the values across the whole timeline. That's not what I want.
I would just like a graph with the sum of the "value" field of all tags at a given time.
Thanks a lot for your help.
Right now you have one table per tag value. You can use the pivot() function to merge them into a single table, where the former _value columns are named after their corresponding tag values:
from(bucket: "test-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "plug")
|> filter(fn: (r) => r._field == "value")
|> pivot(rowKey: ["_time"], columnKey: ["id"], valueColumn: "_value")
If you know the tag values in advance, the next step is easy:
|> map(fn: (r) => ({ _time: r._time, _value: r["tag1"] + r["tag2"] + r["tag3"] }))
If you don't, it gets a bit more complicated. What I would try next in this case is to write a function that combines experimental.unpivot() (note: available since InfluxDB 2.4) with sum(). The trick here is to call this method within map(), so it will operate on a single row (ie, single timestamp) at a time:
import "experimental"

sumColumns = (r) => r
|> experimental.unpivot()
|> group()
|> sum()
|> findRecord(fn: (key) => true, idx: 0)
from(bucket: "test-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "plug")
|> filter(fn: (r) => r._field == "value")
|> pivot(rowKey: ["_time"], columnKey: ["id"], valueColumn: "_value")
|> map(fn: sumColumns)
Note that I have not tested this. It is just to give you an idea.
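Another way to get the same per-timestamp sum, without pivot(), is to align all series on common window boundaries first, drop the id tag from the group key, and then sum per window. A sketch under the same schema assumptions (measurement "plug", field "value", id tag), untested:
from(bucket: "test-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "plug")
|> filter(fn: (r) => r._field == "value")
// align every id series on the same window timestamps
|> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
// drop the id tag from the group key so all series land in one table
|> group(columns: ["_measurement", "_field"], mode: "by")
// sum the aligned points that now share a timestamp
|> aggregateWindow(every: v.windowPeriod, fn: sum, createEmpty: false)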

Query last value in Flux

I'm trying to get the last value from some IoT sensors, and I achieved an intermediate result with the following Flux query:
from(bucket:"mqtt-bucket")
|> range(start:-10m )
|> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
|> filter(fn: (r) => r["thingy"] == "things/green-1/shadow/update"
or r["thingy"] == "things/green-3/shadow/update"
or r["thingy"] == "things/green-2/shadow/update")
|> filter(fn: (r) => r["_field"] == "data")
|> filter(fn: (r) => r["appId"] == "TEMP" or r["appId"] == "HUMID")
|> toFloat()
|> last()
The problem: I would like to get the last measured value independently of any time range.
I saw in the docs that there is no way to make the range function unbounded. Maybe there is a workaround?
I just found this:
from(bucket: "stockdata")
|> range(start: 0)
|> filter(fn: (r) => r["_measurement"] == "nasdaq")
|> filter(fn: (r) => r["symbol"] == "OPEC/ORB")
|> last()
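Applied to the sensor query from the question, that unbounded range would look like this (sketch):
from(bucket: "mqtt-bucket")
|> range(start: 0) // start: 0 means "from the beginning of time", so last() is no longer limited to the last 10 minutes
|> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
|> filter(fn: (r) => r["thingy"] == "things/green-1/shadow/update"
or r["thingy"] == "things/green-2/shadow/update"
or r["thingy"] == "things/green-3/shadow/update")
|> filter(fn: (r) => r["_field"] == "data")
|> filter(fn: (r) => r["appId"] == "TEMP" or r["appId"] == "HUMID")
|> toFloat()
|> last()
Be aware that an unbounded range scans back to the start of the bucket's retention period, so it can be slow on large buckets.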

InfluxDB2.0 How to calculate daily value over bigger timeframe?

I'm migrating from InfluxDB 1.8 to InfluxDB 2.0.
I'm using an InfluxDB 2.0 database and Grafana to display the results.
The data I insert are the readings of my P1 meter. Although these are cumulative totals, I would like to calculate and display the daily usage.
What is being inserted is the current (gas usage) meter reading. By calculating the difference between the beginning and the end of the day, I get my daily usage.
I found a way to do this for one day with the spread function, but I can't get it working for a timeframe longer than one day, i.e. displaying the daily usage over a longer period (for example a week of daily results).
Does anyone have an idea?
Query for 1 day:
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "Gas-usage")
|> filter(fn: (r) => r["_field"] == "value")
|> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
|> spread(column: "_value")
I did some checks on the 1.8 one and what works there is:
SELECT spread("value")
FROM "Gas-usage"
WHERE $timeFilter
GROUP BY time(1d) fill(null) tz('Europe/Berlin')
What is the equivalent of this query in InfluxDB 2.0?
Try changing your aggregate window, like this:
|> aggregateWindow(every: 1d, fn: mean)
Use the spread function inside your aggregateWindow call. It should look like this:
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "Gas-usage")
|> filter(fn: (r) => r["_field"] == "value")
|> aggregateWindow(every: 1d, fn: spread, createEmpty: false)
from(bucket: "${bucket}")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "system")
|> filter(fn: (r) => r.host == "${host}")
|> filter(fn: (r) => r["_field"] == "uptime")
|> aggregateWindow(every: 1d, fn: spread, createEmpty: false)
(Screenshot: the result in my Grafana dashboard.)
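One thing the Flux version above does not cover is the tz('Europe/Berlin') part of the 1.8 query: by default the 1d windows are aligned to UTC midnight. Recent Flux versions ship a timezone package that lets you set the location option so the daily windows follow local time; a sketch, assuming your Flux version includes that package:
import "timezone"

// align the 1d window boundaries to Europe/Berlin local time instead of UTC
option location = timezone.location(name: "Europe/Berlin")
The rest of the query (the filters plus aggregateWindow(every: 1d, fn: spread, createEmpty: false)) stays the same.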

How to calculate uptime in seconds using Flux?

Using the new Flux language, how best to calculate uptime? My current Flux query looks a bit like this:
from(bucket: "my-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "process_start_time_seconds")
|> filter(fn: (r) => r["_field"] == "gauge")
|> map(fn: (r) => ({
r with
_value: (int(v: now()) / 1000000000) - int(v: r._value)
})
)
|> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
This works but seems incredibly complex for such a small thing; in Prometheus it's basically one line:
(time() - process_start_time_seconds{job="my-job"})
Is there a way I can improve the Flux query?
I don't think you can simplify it a lot, but here are some ideas:
Store the converted current time in a variable
Don't use aggregateWindow() when you only want to fetch a single value over time
Move the map() as far out as you can for better performance
Use prettier syntax
It could then look like this (just a sketch, not tested for syntax):
currentSeconds = (int(v: now()) / 1000000000)
from(bucket: "my-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "process_start_time_seconds")
|> filter(fn: (r) => r._field == "gauge")
|> last()
|> map(fn: (r) => ({
r with _value: currentSeconds - int(v: r._value)
})
)
