Showing hourly averages (histogram) in Grafana

Given a time series of (electricity) market data with data points every hour, I want to show a bar graph with all-time / time-frame averages for every hour of the day, so that an analyst can easily compare actual prices to all-time averages (which hour of the day is most/least expensive).
We have CrateDB as backend, which is used in Grafana just like a Postgres source.
SELECT
extract(HOUR from start_timestamp) as "time",
avg(marketprice) as value
FROM doc.el_marketprices
GROUP BY 1
ORDER BY 1
So my data basically looks like this
time value
23.00 23.19
22.00 25.38
21.00 29.93
20.00 31.45
19.00 34.19
18.00 41.59
17.00 39.38
16.00 35.07
15.00 30.61
14.00 26.14
13.00 25.20
12.00 24.91
11.00 26.98
10.00 28.02
9.00 28.73
8.00 29.57
7.00 31.46
6.00 30.50
5.00 27.75
4.00 20.88
3.00 19.07
2.00 18.07
1.00 19.43
0 21.91
After hours of fiddling around with Bar Graphs, Histogram Mode, the Heatmap Panel and much more, I am just not able to draw a simple hours-of-the-day histogram from this in Grafana. I would very much appreciate any advice on how to use any panel to accomplish this.

Your query doesn't return valid time series data for Grafana - the time field is not a valid timestamp. So:
don't extract only the hour, but provide the full start_timestamp (I hope it is a timestamp data type and the value is in UTC)
add a WHERE time condition - use Grafana's macro $__timeFilter
use Grafana's macro $__timeGroupAlias for hourly grouping
SELECT
$__timeGroupAlias(start_timestamp,1h,0),
avg(marketprice) as value
FROM doc.el_marketprices
WHERE $__timeFilter(start_timestamp)
GROUP BY 1
ORDER BY 1
This will give you data for a historic graph with hourly average values.
The required histogram may be tricky, but you can try to create a metric which will have the extracted hour, e.g.
SELECT
$__timeGroupAlias(start_timestamp,1h,0),
extract(HOUR from start_timestamp) as "metric",
avg(marketprice) as value
FROM doc.el_marketprices
WHERE $__timeFilter(start_timestamp)
GROUP BY 1
ORDER BY 1
And then visualize it as a histogram. Remember that Grafana is designed for time series data, so you need a proper timestamp (not only extracted hours; you can fake one, though), otherwise you will have a hard time visualizing non-time-series data in Grafana. This second query may not work properly, but it at least gives you the idea.
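To make the hour-of-day averages acceptable to a time-series panel, one way to fake the timestamp is to pin every hour bucket onto a single arbitrary day. A minimal Python sketch of the idea (the anchor date is arbitrary; the averages are a few values from the question's data):

```python
from datetime import datetime, timedelta

# All-time average price per hour of day (a few values from the question).
hourly_avg = {0: 21.91, 1: 19.43, 2: 18.07, 3: 19.07, 23: 23.19}

# "Fake" a time series: anchor each hour bucket on one arbitrary day so a
# time-series-oriented tool like Grafana accepts the points as valid data.
ANCHOR = datetime(2020, 1, 1)
fake_series = [
    (ANCHOR + timedelta(hours=hour), avg)
    for hour, avg in sorted(hourly_avg.items())
]
```

The same trick can be done in SQL by adding the extracted hour as an interval to a fixed date instead of returning the bare hour number.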

Related

How to obtain time interval value reports from InfluxDB

Using InfluxDB: is there any way to build a time-bucketed report of a field value representing a state that persists over time? Ideally in the InfluxQL query language.
More specifically, as an example: say a measurement contains points that report changes in a light bulb's state (On/Off). They could be 0s and 1s as in the example below, or any other values. For example:
time light
---- -----
2022-03-18T00:00:00Z 1
2022-03-18T01:05:00Z 0
2022-03-18T01:55:00Z 0
2022-03-18T02:30:00Z 1
2022-03-18T04:06:00Z 0
The result should be a listing of intervals indicating if this light was on or off during each time interval (e.g. hours), or what percentage of that time it was on. For the given example, the result if grouping hourly should be:
Hour              Value
2022-03-18 00:00  1.00
2022-03-18 01:00  0.17
2022-03-18 02:00  0.50
2022-03-18 03:00  1.00
2022-03-18 04:00  0.10
Note that:
for the 1am bucket, even if the light starts and ends in the On state, it was On for only 10 out of 60 minutes, so the value is low (10/60)
and more importantly, the bucket from 3am to 4am has the value "1" because the light has been On since the last period, even though there was no change within this time period. This rules out a simple aggregation (e.g. MEAN) over a GROUP BY TIME(), as there would be no way to know whether an empty/missing bucket corresponds to an On or Off state, since that depends only on the last reported value before that time bucket.
Is there a way to implement it in pure InfluxQL, without retrieving potentially big data sets (points) and iterating through them in a client?
I consider that raw data could be obtained by query:
SELECT "light" FROM "test3" WHERE $timeFilter
Where "test3" is your measurement name and $timeFilter is the from ... to ... time period.
In this case we need to use a subquery which will fill in our data; let's take the grouping (resolution) time as 1s:
SELECT last("light") as "filled_light" FROM "test3" WHERE $timeFilter GROUP BY time(1s) fill(previous)
This query gives us a 1/0 value every 1s. We will use it as a subquery.
NOTE: Be aware that this approach does not consider whether the light was on or off at the beginning of the $timeFilter period. It will not provide any data before the first hour that has a reported value within $timeFilter.
In the next step you should use the integral() function on the data you got from the subquery, like this:
SELECT integral("filled_light",1h) from (SELECT last("light") as "filled_light" FROM "test3" WHERE $timeFilter GROUP BY time(1s) fill(previous)) group by time(1h)
This is not a perfect way of getting it to work but I hope it resolves your problem.
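To illustrate what the fill(previous) subquery plus integral() combination computes, here is a small Python sketch of the same logic (sampling at 1-minute instead of 1-second resolution for brevity; event data from the question; helper names are my own):

```python
from datetime import datetime, timedelta

# State-change events from the question: (time, light state 1=on / 0=off).
events = [
    (datetime(2022, 3, 18, 0, 0), 1),
    (datetime(2022, 3, 18, 1, 5), 0),
    (datetime(2022, 3, 18, 1, 55), 0),
    (datetime(2022, 3, 18, 2, 30), 1),
    (datetime(2022, 3, 18, 4, 6), 0),
]

def state_at(t):
    """Last reported state at or before t (the fill(previous) step)."""
    state = 0
    for event_time, event_state in events:
        if event_time <= t:
            state = event_state
    return state

def on_fraction(bucket_start, resolution_s=60):
    """Fraction of the hour starting at bucket_start during which the light
    was on, sampled every resolution_s seconds (the integral()/mean step)."""
    samples = 3600 // resolution_s
    on = sum(
        state_at(bucket_start + timedelta(seconds=i * resolution_s))
        for i in range(samples)
    )
    return on / samples
```

For example, the 2am bucket evaluates to 0.50 (on from 2:30 onwards) and the 4am bucket to 0.10 (on until 4:06), matching the expected result table in the question.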

InfluxDB - Keep timestamp of record on downsampling

InfluxDB version used: 1.8.0
Given a time series db that is used for storing e.g. temperatures from iot sensors (on different locations).
The sensors are queried e.g. every other minute.
Now the maximum temperature per sensor for the last hour can be queried using
select max(*) from temperatures where time >= now() - 1h group by location
name: temperatures
tags: location=collector
time max_temperature
---- ---------------
2020-06-24T17:41:34Z 34.8
name: temperatures
tags: location=outside
time max_temperature
---- ---------------
2020-06-24T17:43:34Z 23.4
I would now like to keep the maximum temperatures for every hour and for every day for a certain period of time.
So naturally I would use a retention policy and continuous queries.
Let's say I want to store the maximum temperature by the hour for a month:
show RETENTION POLICIES on iotsensors
name duration shardGroupDuration replicaN default
---- -------- ------------------ -------- -------
lastmonth 744h0m0s 24h0m0s 1 false
The continuous query looks like this:
CREATE CONTINUOUS QUERY max_temperatures_per_hour ON iotsensors
BEGIN
SELECT max(temperature) INTO iotsensors.lastmonth.max_temperatures_per_hour FROM iotsensors.autogen.temperatures GROUP BY time(1h), location TZ('Europe/Berlin')
END
By the nature of the GROUP BY time(1h) clause, the exact time of the temperature is lost.
Especially when the data is condensed for a whole day in the second step (FROM iotsensors.lastmonth.max_temperatures_per_hour GROUP BY time(1d)), the resolution gets even coarser, setting the timestamp to midnight of each day (00:00:00).
select max from iotmeasurements.last2years.max_temperatures_per_day where time >= now() - 4d group by location tz('Europe/Berlin')
name: max_temperatures_per_day
tags: location=collector
time max
---- ---
2020-06-21T00:00:00+02:00 80.9
2020-06-22T00:00:00+02:00 78.5
2020-06-23T00:00:00+02:00 101.2
name: min_max_temperatures_per_day
tags: location=outside
time max
---- ---
2020-06-21T00:00:00+02:00 21.8
2020-06-22T00:00:00+02:00 22.5
2020-06-23T00:00:00+02:00 22.8
I do know that this is the expected and documented behaviour:
https://docs.influxdata.com/influxdb/v1.8/query_language/explore-data/#group-by-time-intervals
However, the information of when exactly the maximum value was recorded is a valuable information which I'd like to keep.
Is there any way to store the exact timestamp of the record when downsampling?
I'd prefer to keep the timestamp inside the time field like
tags: location=collector
time max
---- ---
2020-06-20T04:30:40Z 80.9
2020-06-21T04:22:00Z 78.5
2020-06-22T04:53:10Z 101.2
Alternatively but a second best solution would be to add a timestamp field for each downsampled record
time max timestamp
---- --- ---------
2020-06-20T00:00:00+02:00 80.9 2020-06-20T04:30:40Z
2020-06-21T00:00:00+02:00 78.5 2020-06-21T04:22:00Z
2020-06-22T00:00:00+02:00 101.2 2020-06-22T04:53:10Z
For this I would need to be able to query the time into a separate field, wouldn't I?
But my attempts haven't been successful so far. Something I tried was this:
SELECT max(temperature),time as timestamp FROM temperatures GROUP BY time(60m),"location"
I'd consider moving to InfluxDB 2.0 if that were a prerequisite for a solution to my problem.
So far I haven't found a solution with using solely InfluxDB.
The original question was based on the misconception that there is always one single maximum value over the time frame used for downsampling.
Given a series of data points like this.
name: max_temperatures_per_day
tags: location=collector
time max
---- ---
2020-06-20T04:30:40Z 80.9
2020-06-21T04:22:00Z 78.5
2020-06-22T04:53:10Z 101.2
2020-06-22T05:33:10Z 73.3
2020-06-22T05:41:10Z 65.0
2020-06-22T05:53:10Z 48.2
2020-06-22T05:56:10Z 73.3
2020-06-22T10:30:10Z 54.3
2020-06-22T12:30:10Z 63.7
2020-06-22T18:03:10Z 101.2
2020-06-22T18:20:10Z 90.2
it would be possible to identify exactly one point in time having the maximum value within the 4th hour of the day (2020-06-22T04:53:10Z, 101.2), but for the fifth hour it's not possible, since the maximum value occurred at 5:33 as well as at 5:56.
Downsampling the data to a resolution of one day (24h) makes it even worse, as the maximum value (101.2) occurred at 4:53 AM as well as at 6:03 PM that day. Which of these possibly multiple points in time should be kept?
However, using Kapacitor to carry out the continuous queries, the originally desired result can be achieved.
Starting from this article https://docs.influxdata.com/kapacitor/v1.5/guides/continuous_queries/, it's possible to set up a query like this:
batch
|query('SELECT * FROM "iotmeasurements"."autogen".temperatures')
.period(1h)
.every(1h)
.groupBy('location')
.align()
|max('temperature')
.as('max_temp')
.usePointTimes()
|influxDBOut()
.database('iotmeasurements')
.retentionPolicy('lastmonth')
.measurement('max_temperatures')
.precision('s')
This will keep the point time where the maximum value occurred first. In the example above, the data point at 5:33 AM would be kept and the same value at 5:56 AM would be skipped.
I'm not entirely sure if usePointTimes() (https://docs.influxdata.com/kapacitor/v1.5/nodes/influx_q_l_node/#usepointtimes) is needed.
If losing the record of later occurrences of the maximum value within the downsampling time frame is acceptable, this might be a solution. Even so, it requires running a second service, adding an additional possible point of failure.
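The effect of max() with usePointTimes() can be mimicked outside Kapacitor to see which timestamp survives: per hour bucket, keep the maximum value together with the timestamp of its first occurrence. A Python sketch using the sample points from above (the helper name is my own):

```python
from datetime import datetime

# Sample points from the 5am hour plus the 4am maximum (data from above).
points = [
    (datetime(2020, 6, 22, 4, 53, 10), 101.2),
    (datetime(2020, 6, 22, 5, 33, 10), 73.3),
    (datetime(2020, 6, 22, 5, 41, 10), 65.0),
    (datetime(2020, 6, 22, 5, 53, 10), 48.2),
    (datetime(2020, 6, 22, 5, 56, 10), 73.3),
]

def hourly_max_keep_time(points):
    """Per hour bucket, keep the max value together with the timestamp of
    its FIRST occurrence (mirroring Kapacitor's usePointTimes())."""
    buckets = {}
    for t, v in points:
        key = t.replace(minute=0, second=0, microsecond=0)
        # Strictly greater-than: on a tie, the first occurrence wins.
        if key not in buckets or v > buckets[key][1]:
            buckets[key] = (t, v)
    return sorted(buckets.values())
```

Run on these points, the 4am bucket keeps (04:53:10, 101.2) and the 5am bucket keeps (05:33:10, 73.3), dropping the equal value at 05:56:10.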
Another disadvantage of using Kapacitor is that it does not seem to be possible to perform the downsampling retroactively for the past.
One may carry out a GROUP BY time query like SELECT max(temperature) INTO ... FROM temperatures WHERE time >= now() - 1w GROUP BY time(1h),"location" outside a continuous query to downsample measurement points from the past inside InfluxDB itself.
There seems to be no way to do so with Kapacitor 'ticks'.

How to select data with minimum time interval between results

I am not sure how to best ask this question. I am looking to select data, but with a minimum time interval between the results. For example:
This measurement:
time field
2015-08-18T00:00:00Z 12
2015-08-18T00:00:00Z 1
2015-08-18T00:06:00Z 11
2015-08-18T00:06:00Z 3
2015-08-18T05:54:00Z 2
2015-08-18T06:00:00Z 1
2015-08-18T06:06:00Z 8
2015-08-18T06:12:00Z 7
This query: select sum(*) from measurement where field > 0 would return the sum of all of the rows. I would like to be able to specify a minimum interval between results and only match the first row in a set of closely timed rows. E.g. an 8 minute minimum interval would only match these rows (and result in a sum of 22):
time field
2015-08-18T00:00:00Z 12
2015-08-18T05:54:00Z 2
2015-08-18T06:06:00Z 8
Is there a way to get my expected output from influxdb?
The only alternative I can think of is to just return all of the rows without the sum() aggregate function then loop through the results and do lots of time comparisons or date math in my application.
Probably not with InfluxQL.
InfluxQL has an elapsed function which returns the time elapsed between consecutive data points: https://docs.influxdata.com/influxdb/v1.7/query_language/functions/#elapsed
That's possibly the only function that deals with time, but I can't think of a way to apply it to what you need.
You may have better luck with the window function of Flux: https://v2.docs.influxdata.com/v2.0/query-data/guides/window-aggregate/
I'm not familiar enough with it to say how, if it's possible at all.
Doing it in your application may be the way to go.
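If you do end up doing it in the application, the loop is short: keep a row only when it is at least the minimum interval after the previously kept row, then sum the kept values. A Python sketch with the question's data (the 8-minute interval yields the expected sum of 22):

```python
from datetime import datetime, timedelta

# Rows from the question's measurement: (time, field value).
rows = [
    (datetime(2015, 8, 18, 0, 0), 12),
    (datetime(2015, 8, 18, 0, 0), 1),
    (datetime(2015, 8, 18, 0, 6), 11),
    (datetime(2015, 8, 18, 0, 6), 3),
    (datetime(2015, 8, 18, 5, 54), 2),
    (datetime(2015, 8, 18, 6, 0), 1),
    (datetime(2015, 8, 18, 6, 6), 8),
    (datetime(2015, 8, 18, 6, 12), 7),
]

def sum_with_min_interval(rows, min_interval):
    """Keep a row only if it is at least min_interval after the last kept
    row (rows assumed sorted by time), then sum the kept values."""
    total, last_kept = 0, None
    for t, v in rows:
        if last_kept is None or t - last_kept >= min_interval:
            total += v
            last_kept = t
    return total
```

Note the result depends on which row is treated as the start of each cluster, so sort the rows by time before filtering.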

How do I upsample time series in tsdb

I want to upsample a time series in OpenTSDB. For example, suppose I have temperatures recorded at 8 hour intervals, e.g. at 1am, 9am and 5pm every day. I want to retrieve, via a TSDB query, an upsampling of these data so that I get temperatures at 1am, 2am, 3am, ..., 5pm, 6pm, ..., midnight. I want the "missing" data to be filled in by linear interpolation, e.g.
otemp(2am) = itemp(1am) + 1/8 * ( itemp(9am) - itemp(1am) )
where otemp is the output up-sampled result and itemp is the input time series.
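The formula above is ordinary linear interpolation between the two surrounding samples; in general form (Python sketch, with times expressed in hours):

```python
def lerp(t, t0, v0, t1, v1):
    """Linearly interpolate the value at time t between the surrounding
    samples (t0, v0) and (t1, v1)."""
    return v0 + (t - t0) / (t1 - t0) * (v1 - v0)

# The question's example: otemp(2am) between samples at 1am and 9am is
# itemp(1am) + 1/8 * (itemp(9am) - itemp(1am)).
```

With samples 25.0 at 1am and 30.0 at 9am, lerp(2, 1, 25.0, 9, 30.0) gives 25.625, which matches the 02:00 row in the interpolated result table further below.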
The problem is that OpenTSDB only seems to be willing to linearly interpolate data in the context of a multi-time-series operation like "sum". Now, I can kluge the solution that I want by creating another time series "ctemp" (the "c" is for "clock") that records a temperature of 0 every hour, and then asking TSDB to give me the sum of this time series with the itemp's.
Am I misunderstanding OpenTSDB, or is there a way to do this without having to create the bogus "ctemp" series? Something reasonable like:
...?start=some_time&end=some_time&interval=1h&m=lerp:itemp
?
-- Mark
For comparison, in Axibase TSD, which runs on HBase, the interpolation can be performed using the WITH INTERPOLATE clause.
SELECT date_format(time, 'MMM-dd HH:mm') AS sample_time,
value
FROM temperature
WHERE entity = 'sensor'
AND datetime BETWEEN '2017-05-14T00:00:00Z' AND '2017-05-17T00:00:00Z'
WITH INTERPOLATE(1 HOUR)
Sample commands:
series e:sensor d:2017-05-14T01:00:00Z m:temperature=25
series e:sensor d:2017-05-14T09:00:00Z m:temperature=30
series e:sensor d:2017-05-14T17:00:00Z m:temperature=29
series e:sensor d:2017-05-15T01:00:00Z m:temperature=28
series e:sensor d:2017-05-15T09:00:00Z m:temperature=35
series e:sensor d:2017-05-15T17:00:00Z m:temperature=31
series e:sensor d:2017-05-16T01:00:00Z m:temperature=22
series e:sensor d:2017-05-16T09:00:00Z m:temperature=40
series e:sensor d:2017-05-16T17:00:00Z m:temperature=33
The result:
sample_time value
May-14 01:00 25.0000
May-14 02:00 25.6250
May-14 03:00 26.2500
May-14 04:00 26.8750
May-14 05:00 27.5000
...
Disclaimer: I work for Axibase.

InfluxDB average of distinct count over time

Using InfluxDB v0.9, say I have this simple query:
select count(distinct("id")) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(1m)
Which gives results like:
08:00 5
08:01 10
08:02 5
08:03 10
08:04 5
Now I want a query that produces points with an average of those values over 5 minutes. So the points are now 5 minutes apart instead of 1 minute, but each is an average of the 1-minute values. So the above 5 points would become 1 point whose value is (5+10+5+10+5)/5.
For clarity, this does not produce the results I am after, since it is still just a count rather than the average:
select count(distinct("id")) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(5m)
This doesn't work (gives errors):
select mean(distinct("id")) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(5m)
Also doesn't work (gives error):
select mean(count(distinct("id"))) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(5m)
In my actual usage, "id" is a string (a field, not a tag, because count distinct is not supported for tags in my version of InfluxDB).
To clarify a few points for readers: in InfluxQL, functions like COUNT() and DISTINCT() can only accept fields, not tags. While COUNT() supports nesting the DISTINCT() function, most nested or sub-functions are not yet supported, and neither are nested queries, subqueries, or stored procedures.
However, there is a way to address your need using continuous queries (CQs), which automate the processing of data and write the results back to the database.
First take your original query and make it a continuous query (CQ).
CREATE CONTINUOUS QUERY count_foo ON my_database_name BEGIN
SELECT COUNT(DISTINCT("id")) AS "1m_count" INTO main_1m_count FROM "main" GROUP BY time(1m)
END
There are other options for the CQ, but that basic one will wake up every minute, calculate the COUNT(DISTINCT("id")) for the prior minute, and then store that result in a new measurement, main_1m_count.
Now, you can easily calculate your 5 minute mean COUNT from the pre-calculated 1 minute COUNT results in main_1m_count:
SELECT MEAN("1m_count") FROM main_1m_count WHERE time > now() - 30m GROUP BY time(5m)
(Note that by default, InfluxDB uses epoch 0 and now() as the lower and upper time range boundaries, so it is redundant to include and time < now() in the WHERE clause.)
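To make the two-step arithmetic concrete with the numbers from the question: the CQ stores one count per minute, and the outer query then averages those counts per 5-minute bucket:

```python
# COUNT(DISTINCT "id") per 1-minute bucket, as stored by the CQ
# (values taken from the question).
minute_counts = [5, 10, 5, 10, 5]

# MEAN("1m_count") over one 5-minute bucket: (5+10+5+10+5)/5.
five_min_mean = sum(minute_counts) / len(minute_counts)
```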
