This is my query:
from(bucket: "power_monitor")
|> range(start: today())
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
|> keep(columns: ["_measurement", "_value", "_time"])
|> increase()
|> last()
It tracks the amount of energy used since the start of the day, except I think the timezone is wrong... it was working perfectly until ~8pm when all values were reset to zero.
Can I fix this in the query by adding a time offset? or set my timezone?
I think I'd like the values to reset around 2am.
I got this from influx docs and added it to the query, but it doesn't seem to help (still all zeros):
import "timezone"
timezone.fixed(offset: -4h)
You need to set the location option, i.e.:
import "timezone"
option location = timezone.fixed(offset: -4h)
from(bucket: "power_monitor")
...
Or use the location parameter of the aggregateWindow() function, i.e.:
from(bucket: "power_monitor")
...
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false, location: timezone.fixed(offset: -4h))
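Applied to the query from the question, the whole script would look something like this (a sketch assuming a fixed UTC-4 offset is what you want; a named zone such as timezone.location(name: "America/New_York") would additionally account for DST):

```flux
import "timezone"

// Make today() and the aggregateWindow boundaries respect UTC-4 instead of UTC
option location = timezone.fixed(offset: -4h)

from(bucket: "power_monitor")
    |> range(start: today())
    |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
    |> keep(columns: ["_measurement", "_value", "_time"])
    |> increase()
    |> last()
```

If you want the counter to reset around 2am rather than midnight, the offset can be shifted accordingly, which is effectively what the -6h workaround below does.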
I think I found a workaround, but the accepted answer will be one that uses timezones rather than an experimental function.
Workaround:
import "experimental"
starttime = () => experimental.addDuration(d: -6h, to: today())
from(bucket: "power_monitor")
|> range(start: starttime())
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
|> keep(columns: ["_measurement", "_value", "_time"])
|> increase()
|> last()
Related
I am quite new to Flux and want to solve an issue:
I got a bucket containing measurements, which are generated by a worker-service.
Each measurement belongs to a site and has an identifier (uuid). Each measurement contains three measurement points containing a value.
What I want to achieve now is the following: create a graph/list/table of measurements for a specific site and aggregate the median value of each of the three measurement points per measurement.
TLDR;
Get all measurement points that belong to the specific site-uuid
As each measurement has a uuid and contains three measurement points, group by measurement and take the median for each measurement
Return a result that only contains the median value for each measurement
This does not work:
from(bucket: "test")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "lighthouse")
|> filter(fn: (r) => r["_field"] == "speedindex")
|> filter(fn: (r) => r["site"] == "1d1a13a3-bb07-3447-a3b7-d8ffcae74045")
|> group(columns: ["measurement"])
|> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
|> yield(name: "mean")
This does not throw an error, but of course it does not take the median of the specific groups.
This is the result (simple table):
If I understand your question correctly you want a single number to be returned.
In that case you'll want to use the |> mean() function:
from(bucket: "test")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "lighthouse")
|> filter(fn: (r) => r["_field"] == "speedindex")
|> filter(fn: (r) => r["site"] == "1d1a13a3-bb07-3447-a3b7-d8ffcae74045")
|> group(columns: ["measurement"])
|> mean()
|> yield(name: "mean")
The aggregateWindow function aggregates your values over (multiple) windows of time. The script you posted computes the mean over each v.windowPeriod (in this case 20 minutes).
I am not entirely sure what v.windowPeriod represents, but I usually use time literals for all times (including start and stop), I find it easier to understand how the query relates to the result that way.
On a side note: the yield function only renames your result and allows you to have multiple returning queries, it does not compute anything.
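For example, a single script can return several named results; the names only label the outputs (this sketch reuses the bucket and measurement from the question):

```flux
data = from(bucket: "test")
    |> range(start: -1h)
    |> filter(fn: (r) => r["_measurement"] == "lighthouse")

// Two independent results from one script, distinguished only by their yield names
data |> mean() |> yield(name: "mean")
data |> max() |> yield(name: "max")
```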
How to write similar query using Flux:
SELECT field_a,field_b from 'measurement' where field_a = 10 and group by field_b
I'm afraid the InfluxQL above won't work, as InfluxDB currently supports only tags and time intervals in the GROUP BY clause, not fields. This can be inferred from the syntax of the GROUP BY clause (for more information, refer to the InfluxDB documentation).
Nevertheless, if you are grouping by some tag as follows:
SELECT field_a,tag_b from 'measurement' where field_a = 10 and group by tag_b
This is the equivalent Flux query:
from(bucket: "thisIsYourBucketInInfluxDBV2")
// specify start:0 to query from all time. Equivalent to SELECT * from db1. Use just as cautiously.
|> range(start: 0)
|> filter(fn: (r) => r._measurement == "measurement" and r._field == "field_a" and r._value == 10)
Here is a guide for you to migrate your InfluxQL to Flux.
You can query several fields using a regex. And you can group by fields if you pivot your result table using the schema.fieldsAsCols() function; that way, the result of the query has columns that have the names of the queried fields. See this query:
import "influxdata/influxdb/schema"
from(bucket: "yourBucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "measurement")
|> filter(fn: (r) => r["_field"] =~ /^(field_a|field_b)$/)
|> aggregateWindow(every: v.windowPeriod, fn: first, createEmpty: false)
//|> group()
//|> sort(columns: ["_time"])
|> schema.fieldsAsCols()
|> filter(fn: (r) => r.field_a == 10)
|> group(columns: ["field_b"])
//|> max(column: "field_b")
|> yield()
Two remarks :
To make sure that you have only one table before you group by field_b, uncomment the lines |> group() and |> sort(columns: ["_time"]). The first ungroups the result, which is otherwise split into separate tables per value of your tags (if you have any). The latter sorts the ungrouped result by timestamp.
Since there is no aggregation in your initial query, the Flux query outputs several result tables, one per distinct value of field_b. If you are, for example, interested in the max of field_a for every group, uncomment the line before |> yield().
So I'm trying to find any documentation on more complex Flux queries but after days of searching I'm still lost. I want to be able to calculate average values for each hour of the week and then when new data comes in I want to check if it deviates by x standard deviations for that hour.
Basically I want to have a 24x7 array of fields, each representing the mean/median value for one hour of the week over the last year. Then I want to compare the last day's values for each hour against these averages and report an error. I do not understand how to calculate these averages. Is there some hidden extensive documentation on Flux?
I don't really need a full solution, just some direction would be nice. Like, are there some utility functions for this in the standard lib or whatever
EDIT: After some reading, it really looks like all I need to do is use the window and aggregateWindow functions but I haven't yet found how exactly
Ok, so, this is what worked for me. It needs some cleaning up, but it successfully gets the values grouped per hour+weekday, along with the mean of all the values:
import "date"
tab1 = from(bucket: "qweqwe")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "asdasd")
|> filter(fn: (r) => r["_field"] == "reach")
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
mapped = tab1
|> map(fn: (r) => ({ r with wd: string(v: date.weekDay(t: r._time)), h: string(v: date.hour(t: r._time)) }))
|> map(fn: (r) => ({ r with mapped_time: r.wd + " " + r.h }))
grouped = mapped
|> group(columns: ["mapped_time"], mode: "by")
|> mean()
|> group()
|> toInt()
|> yield()
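For the second half of the question (flagging deviations), a sketch under the same grouping could compute the standard deviation per hour-of-week alongside the mean; the threshold x and the comparison against the last day's data are left out here:

```flux
import "date"

from(bucket: "qweqwe")
    |> range(start: -1y)
    |> filter(fn: (r) => r["_measurement"] == "asdasd" and r["_field"] == "reach")
    |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
    // same hour-of-week key as above, e.g. "3 14" for Wednesday 14:00
    |> map(fn: (r) => ({r with mapped_time: string(v: date.weekDay(t: r._time)) + " " + string(v: date.hour(t: r._time))}))
    |> group(columns: ["mapped_time"], mode: "by")
    |> stddev()
    |> group()
    |> yield(name: "stddev")
```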
Here is my Flux script. When I run it, there is no error, but bucket "output-test-3" ends up with no data, while bucket "output-test-4" does get data :(
I have been troubled by this problem for a long time. Can anyone help me solve it?
option task = {name: "join-test-1", every: 5m, offset: 5s}
max_connections = from(bucket: "Node-exporter")
|> range(start: -task.every)
|> filter(fn: (r) =>
(r["_measurement"] == "go_info"))
|> last()
|> to(bucket: "output-test-4")
used_connections = from(bucket: "Node-exporter")
|> range(start: -task.every)
|> filter(fn: (r) =>
(r["_measurement"] == "go_goroutines"))
|> last()
|> to(bucket: "output-test-4")
a = join(tables: {max_connections: max_connections, used_connections: used_connections}, on:
["_time", "_start", "_measurement", "_stop", "_field"])
|> to(bucket: "output-test-3")
When you use the join() function to join two streams a and b, columns that are not join keys are suffixed automatically: _field, _measurement, and _value become _field_a, _field_b, _value_a, _value_b, and so on. When InfluxDB writes to a bucket, the _field, _measurement, and _value columns must be present, but because of this renaming they have disappeared from the joined result, so to() silently writes nothing. The easiest way to solve the problem is to use the map() function to recreate these three columns. Their contents can be whatever you specify; just remember to ignore these synthetic columns when you later use the data.
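A minimal sketch of that map() fix applied to the task above. The join here is done on _time only so that rows from the two different measurements can actually match, and the recreated _measurement and _field values are placeholders:

```flux
a = join(
        tables: {max_connections: max_connections, used_connections: used_connections},
        on: ["_time"],
    )
    // Recreate the columns that to() requires; join() suffixed the originals away
    |> map(fn: (r) => ({r with
        _measurement: "join-test-1",  // placeholder name
        _field: "connections",        // placeholder name
        _value: r._value_max_connections - r._value_used_connections,
    }))
    |> to(bucket: "output-test-3")
```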
I want to raise an alarm when the count of a particular kind of event is less than 5 for the 3 hours leading up to the moment the check is evaluated, but I need to do this check every 15 minutes.
Since I need to check more frequently than the span of time I'm measuring, I can't do this based on my raw data (according to the docs, "[the schedule] interval matches the aggregate function interval for the check query"). But I figured I could use a "task" to transform my data into a form that would work.
I was able to aggregate the data in the way that I hoped via a flux query, and I even saved the resultant rolling count to a dashboard.
from(bucket: "myBucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
(r._measurement == "measurementA"))
|> filter(fn: (r) =>
(r._field == "booleanAttributeX"))
|> window(
every: 15m,
period: 3h,
timeColumn: "_time",
startColumn: "_start",
stopColumn: "_stop",
createEmpty: true,
)
|> count()
|> yield(name: "count")
|> to(bucket: "myBucket", org: "myOrg")
Results in the following scatterplot.
My hope was that I could just copy-paste this as a new task and get my nice new aggregated dataset. After resolving a couple of legible syntax errors, I settled on the following task definition:
option v = {timeRangeStart: -12h, timeRangeStop: now()}
option task = {name: "blech", every: 15m}
from(bucket: "myBucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
(r._measurement == "measurementA"))
|> filter(fn: (r) =>
(r._field == "booleanAttributeX"))
|> window(
every: 15m,
period: 3h,
timeColumn: "_time",
startColumn: "_start",
stopColumn: "_stop",
createEmpty: true,
)
|> count()
|> yield(name: "count")
|> to(bucket: "myBucket", org: "myOrg")
Unfortunately, I'm stuck on an error that I can't find any mention of anywhere: could not execute task run; Err: no time column detected: no time column detected.
If you could help me debug this task run error, or sidestep it by accomplishing this task in some other manner, I'll be very grateful.
I know I'm late here, but: the to() function needs a _time column, while the count() aggregate you are adding returns _start and _stop columns to indicate the time frame of the count, not _time.
You can solve this by either adding |> duplicate(column: "_stop", as: "_time") just before your to() function, or by leveraging the aggregateWindow() function, which handles this for you:
|> aggregateWindow(every: 15m, fn: count)
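With aggregateWindow(), the whole task body could be reduced to something like this sketch (the period parameter keeps the 3-hour lookback per 15-minute window):

```flux
option task = {name: "blech", every: 15m}

from(bucket: "myBucket")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "measurementA")
    |> filter(fn: (r) => r._field == "booleanAttributeX")
    // 3h rolling count evaluated every 15m; aggregateWindow adds _time itself
    |> aggregateWindow(every: 15m, period: 3h, fn: count, createEmpty: true)
    |> to(bucket: "myBucket", org: "myOrg")
```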
References:
https://v2.docs.influxdata.com/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/count
https://v2.docs.influxdata.com/v2.0/reference/flux/stdlib/built-in/transformations/duplicate/
https://v2.docs.influxdata.com/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/