Find the correct grouping - InfluxDB

I'll first try to describe the non-technical problem:
There are many services, each service can have multiple instances, and each of those instances is scraped for data that is stored in InfluxDB. Now, the point in time at which that data is scraped from each instance of a service is (obviously) not exactly the same.
What I would like to query (or display) is the maximum value for each service, over all instances. I did not find a way to "quantize" the time points, for example to always move a value to the next full minute or similar, so that the time scales become comparable.
Now the technical problem: the reported values are all running totals, so to get a sense of change in those values I need a difference or derivative. But in my case these functions are often applied to one value from instance 1 and one value from instance 2, which reflects the difference between the instances rather than the difference between two points in time.
Here is what I tried so far, but it gives me almost flat lines, since the instances report pretty much the same value but at alternating points in time:
from(bucket: "dpl4")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "prometheus")
|> filter(fn: (r) => r._field == "http_server_requests_seconds_sum")
|> group(columns: ["service"], mode: "by")
|> aggregateWindow(every: 1m, fn: max)
|> derivative(unit: 1m, nonNegative: true)
I hope I was able to describe the problem.
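For what it's worth, one possible restructuring (a sketch only, reusing the bucket and field names from the query above, not a verified answer): take the derivative per series first, so it only ever compares two points from the same instance, then align every series onto full minutes with aggregateWindow, and only after that group by service and take the maximum across instances.
from(bucket: "dpl4")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "prometheus")
|> filter(fn: (r) => r._field == "http_server_requests_seconds_sum")
|> derivative(unit: 1m, nonNegative: true) // rate per individual series, i.e. per instance
|> aggregateWindow(every: 1m, fn: max, createEmpty: false) // snap each series onto full-minute boundaries
|> group(columns: ["service"], mode: "by") // only now merge the instances of each service
|> aggregateWindow(every: 1m, fn: max, createEmpty: false) // maximum across instances per minute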

Related

Flux query very slow compared to InfluxQL (10x slower)

I'm upgrading from InfluxDB 1.x to InfluxDB 2.x (updating queries from InfluxQL to Flux syntax).
For very simple queries, performance drops dramatically when I try to query more than 500,000 points, and I'm not sure if there's anything I can do to improve my queries to get better performance.
InfluxQL:
select last("y") AS "y" from "mydata".autogen."profile"
WHERE time >= '2019-01-01T00:00:00Z' and time <= '2019-01-07T23:59:59Z'
GROUP BY time(1s) FILL(none)
Flux:
data=from(bucket: "mydata")
|> range(start: 2019-01-01T00:00:00Z, stop: 2019-01-07T23:59:59Z)
|> filter(fn: (r) => r._measurement == "profile")
|> filter(fn: (r) => r._field=="y")
|> aggregateWindow(every: 1s, fn: last, createEmpty: false)
|> yield()
Any advice?
You could try rebuilding the time series index with the command below:
influxd inspect build-tsi
See more details here.
The reason is that while you are upgrading, the meta and data are migrated but not the indices, so "InfluxDB must build a new time series index (TSI). Depending on the volume of data present, this may take some time," according to the guide.

InfluxDB - Get the latest data even if it is not in the time range provided

from(bucket: "metrics")
|> range(start: -5m)
|> filter(fn: (r) => r["_measurement"] == "cpu")
|> filter(fn: (r) => r["_field"] == "usage")
|> last()
Running this query will return the data only if it was saved in the last 5 minutes.
What I am looking for is: if there is no data for the time range provided, then get the latest data (which could be 10m old or 5d old). I know that Prometheus does return the last data point, and we are trying to move from Prometheus to InfluxDB and are stuck with this problem.
Also, just increasing the range to, say, -10d would not work, because the volume of data is very high (hundreds of records per second are being written).
We are experimenting with down sampling as well to see if that will work for us, but wanted to know if there was a way to get it from the source bucket itself.
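One workaround worth testing (a sketch only; the -30d lookback is an arbitrary assumption): keep last() but widen the range to the maximum staleness you are willing to accept. In InfluxDB 2.x a plain filter() followed by last() is one of the patterns that can be pushed down to the storage engine, so extending the range may be cheaper than the raw write volume suggests, but that should be verified against your data.
from(bucket: "metrics")
|> range(start: -30d) // assumed upper bound on how stale a reading may be
|> filter(fn: (r) => r["_measurement"] == "cpu")
|> filter(fn: (r) => r["_field"] == "usage")
|> last() // still returns only the newest point per series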

Using InfluxDB with interpolate.linear does not output missing values

I have some monthly counter measurements stored inside an InfluxDB instance, e.g. data like this (in line protocol):
readings,location=xyz,medium=Electricity,meter=mainMeter energy=13660 1625322660000000000
readings,location=xyz,medium=Electricity,meter=mainMeter energy=13810 1627839610000000000
These are monthly readings, not aligned exactly to the beginning of a month (one is on the 3rd of July, the other on the 1st of August).
My goal is to interpolate these readings on a daily basis, so I stumbled upon the not-so-well-documented interpolate.linear function from Flux (https://docs.influxdata.com/influxdb/v2.0/reference/flux/stdlib/interpolate/linear/).
But the only output I can generate with my query is the two given data values from my input.
import "interpolate"
from(bucket: "ManualInput")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "readings")
|> filter(fn: (r) => r["_field"] == "energy")
|> interpolate.linear(every: 1d)
Am I missing something here? I expected to get a linearly interpolated value for each day... or is this not possible with Flux? (I'm using v2.0.7)
I propose adding a yield() function.
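Concretely, that would be the same query as above with yield() appended at the end:
import "interpolate"
from(bucket: "ManualInput")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "readings")
|> filter(fn: (r) => r["_field"] == "energy")
|> interpolate.linear(every: 1d)
|> yield()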

How to create Influxdb alert for deviating from average hourly values?

So I'm trying to find any documentation on more complex Flux queries but after days of searching I'm still lost. I want to be able to calculate average values for each hour of the week and then when new data comes in I want to check if it deviates by x standard deviations for that hour.
Basically I want to have a 24x7 array of fields, each representing the mean/median value for each hour of the week over the last year. Then I want to compare the last day's values for each hour against these averages and report an error. I do not understand how to calculate these averages. Is there some hidden, extensive documentation on Flux?
I don't really need a full solution, just some direction would be nice. Like, are there some utility functions for this in the standard library?
EDIT: After some reading, it really looks like all I need to do is use the window and aggregateWindow functions, but I haven't yet found out exactly how.
OK, so this is what worked for me. It needs some cleaning up, but it successfully groups the values per hour+weekday and computes the mean of all the values:
import "date"
tab1 = from(bucket: "qweqwe")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "asdasd")
|> filter(fn: (r) => r["_field"] == "reach")
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
mapped = tab1
|> map(fn: (r) => ({ r with wd: string(v: date.weekDay(t: r._time)), h: string(v: date.hour(t: r._time)) }))
|> map(fn: (r) => ({ r with mapped_time: r.wd + " " + r.h }))
grouped = mapped
|> group(columns: ["mapped_time"], mode: "by")
|> mean()
|> group()
|> toInt()
|> yield()
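The comparison step from the question (flagging hours that deviate from their weekday+hour baseline) is not covered by the query above. The following is only a rough sketch of how it might be bolted on, reusing the same bucket, measurement and field names, and using a hypothetical 25% threshold in place of a proper standard-deviation check:
import "date"
import "math"
// Per-slot ("weekday hour") means over the last year.
baseline = from(bucket: "qweqwe")
|> range(start: -1y)
|> filter(fn: (r) => r["_measurement"] == "asdasd")
|> filter(fn: (r) => r["_field"] == "reach")
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
|> map(fn: (r) => ({ r with slot: string(v: date.weekDay(t: r._time)) + " " + string(v: date.hour(t: r._time)) }))
|> group(columns: ["slot"], mode: "by")
|> mean()
// The last day's hourly means, tagged with the same slot key.
latest = from(bucket: "qweqwe")
|> range(start: -1d)
|> filter(fn: (r) => r["_measurement"] == "asdasd")
|> filter(fn: (r) => r["_field"] == "reach")
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
|> map(fn: (r) => ({ r with slot: string(v: date.weekDay(t: r._time)) + " " + string(v: date.hour(t: r._time)) }))
|> group(columns: ["slot"], mode: "by")
// Join on the slot and keep hours that deviate from the baseline by more than 25%.
join(tables: {cur: latest, base: baseline}, on: ["slot"])
|> filter(fn: (r) => r._value_base != 0.0 and math.abs(x: r._value_cur / r._value_base - 1.0) > 0.25)
|> yield(name: "deviations")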

InfluxDB2.0: How to sum up multiple time series with irregular time interval?

TL;DR
I'm using InfluxDB v2.0 with the Flux query syntax (as in the GUI). I have multiple series (same _field, different tags) of a digital 0/1 state, and I want to sum them up. The problem is that the state is stored in the database at irregular time intervals, meaning that at any given time the real value for each tag should be taken from the last stored point. I have tried aggregateWindow with 'last' as the function, but last just drops the tables for windows with no stored point. Is there any way I can sum them up? I accept any method (including exporting the data and using a script in another language instead, lmao). Thank you in advance.
The Scenario
My team implemented a check-in/check-out system for a real-world event, with a phone number representing each person, and decided to use InfluxDB v2.0 as the database (we chose it so we could monitor easily through Grafana). I have a bucket storing check-in/check-out points, all with the same schema. The schema is as follows:
measurement: 'user'
tags: [phone, type] // type is either ['normal', 'staff']
value: 0 or 1 // 0 for checking out event, 1 for checking in event
Whenever someone checks in to the event, a point with value 1 is inserted; conversely, a point with value 0 is inserted whenever someone checks out. Keep in mind that points can be duplicated if a user triggers the API again, e.g. checking in again after having already checked in (although we view this as the same state of 1). So the data is like a digital 0/1 state, but with irregular time intervals between points, one line per phone number. The same phone number with a different type is treated as a different person.
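For concreteness, a check-in followed later by a check-out for one person would look roughly like this in line protocol (the phone number, the timestamps, and the field name "value" are hypothetical, inferred from the schema above):
# hypothetical example data
user,phone=0812345678,type=normal value=1 1630000000000000000
user,phone=0812345678,type=normal value=0 1630007200000000000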
The project has already been deployed, and we are tasked with post-processing the data. The problem is to visualize a graph of the in-event population over the whole time range. From a mathematical point of view, this should be easily solved by summing the states of all persons (the 0/1 lines) over time. I first tried something like this in Flux:
from(bucket: "event_name")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "user")
|> group(columns: ["type"])
|> aggregateWindow(every: v.windowPeriod, fn: sum, createEmpty: true)
|> yield()
The result looks very promising: a population graph with two colours for the types normal and staff. But when I looked carefully, the sum function actually sums the _value of the points present in each window, meaning that for windows where a series has no point, the sum does not actually include everyone in the database. The goal is to sum the real _value even for windows with no point (the _value of such a window should be the same as the _value of the last point; e.g. if I checked in at 7.00 pm, the _value should be 1 at all times after 7.00 pm, even if some windows have no point). I then tried something like this:
from(bucket: "event_name")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "user")
|> aggregateWindow(every: 1m, fn: last, createEmpty: true)
|> fill(usePrevious: true)
|> group(columns: ["type"])
|> aggregateWindow(every: 1m, fn: sum)
|> yield()
I take the last point in each window, fill the windows with an empty _value from the previous available point, and then sum up the _value of each window again. But then I found out that the last function actually drops empty tables, meaning that windows with no point are dropped (createEmpty is therefore useless). The problem then narrows down to finding a function like last that does not drop empty tables. I tried reduce to build my own last-like logic, but sadly it did not go the way I wanted (it might be that I coded it wrong).
If you have any idea, please help. Thank you very much.
Never mind, I have found the solution. Here it is for anyone in the same situation; it is not very elegant performance-wise, but it is the only query I have found that works.
from(bucket: "event_name")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "user")
|> aggregateWindow(every: 1m, fn: last, createEmpty: false)
|> aggregateWindow(every: 1m, fn: mean, createEmpty: true)
|> fill(usePrevious: true)
|> fill(value: 0.0)
|> group(columns: ["type"])
|> aggregateWindow(every: 1m, fn: sum, createEmpty: false)
|> yield(name: "population")
I use last first to get the latest state in each window (though last actually drops empty tables, so setting createEmpty: true there is useless anyway).
Then, for windows that do not have any point, I use mean with createEmpty: true in order to create points with a null _value for the empty windows. For windows that do have real points, mean should not change the value, as there should be only one point per window because we used last earlier. The point of using mean here is just to create null points for empty windows; the step is really about finding a do-nothing function that does not drop the empty tables created by createEmpty. FYI, I tried many functions, including building my own with reduce and map, but they drop empty tables too (and assigning null is not even allowed); I even created a pass-through function like fn: (tables=<-, x) => tables for aggregateWindow, but it drops empty tables anyway. So mean is my best bet here, though the side effect is that my values change from int to float.
I use fill here to replace null points with the value from the previous window. This is why I was trying to assign null to the points in empty windows in the previous step, and only mean can do this. The second fill is for the early empty windows, which should represent the 0 state.
Then grouping by type and summing everything up gives the result I was looking for.
I hope this helps anyone who finds themselves in the same situation in the future.

Resources