Delete points between hours in InfluxDB

I have a measurement that stores prices every 10 seconds (the seconds part of the timestamps is always 0, 10, 20, 30, 40, or 50).
I would like to delete old points (older than 1 year) so that only one price per hour is kept.
How do I find the points to delete?

You can achieve this with Retention Policy + Continuous Query:
CREATE RETENTION POLICY "one_year" ON "database_name" DURATION 52w REPLICATION 1 DEFAULT
This makes raw points expire after one year. The autogen RP has an infinite retention duration, so downsample hourly averages into it with a continuous query:
CREATE CONTINUOUS QUERY "aggregate_prices" ON "database_name"
BEGIN
  SELECT mean("value")
  INTO "autogen"."prices"
  FROM "prices"
  GROUP BY time(1h)
END
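Once the CQ is running, the hourly values can be read back from the autogen RP. Note that InfluxQL names the downsampled field after the function ("mean") unless you add an AS alias in the CQ; a quick check could look like this (the time range here is just an example):
SELECT "mean" FROM "database_name"."autogen"."prices" WHERE time > now() - 30d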

Related

How to obtain time interval value reports from InfluxDB

Using InfluxDB: is there any way to build a time-bucketed report of a field value representing a state that persists over time, ideally in the InfluxQL query language?
More specifically, as an example: say a measurement contains points that report changes in a light bulb's state (On/Off). They could be 0s and 1s as in the example below, or any other values:
time light
---- -----
2022-03-18T00:00:00Z 1
2022-03-18T01:05:00Z 0
2022-03-18T01:55:00Z 0
2022-03-18T02:30:00Z 1
2022-03-18T04:06:00Z 0
The result should be a listing of intervals indicating if this light was on or off during each time interval (e.g. hours), or what percentage of that time it was on. For the given example, the result if grouping hourly should be:
Hour                Value
2022-03-18 00:00    1.00
2022-03-18 01:00    0.17
2022-03-18 02:00    0.50
2022-03-18 03:00    1.00
2022-03-18 04:00    0.10
Note that:
- for the 1am bucket, even if the light starts and ends in the On state, it was On for only 10 of the 60 minutes, so the value is low (10/60);
- more importantly, the bucket from 3am to 4am has the value 1 because the light had been On since the last period, even though there was no change during that hour. This rules out a simple aggregation (e.g. MEAN) over a GROUP BY time(), as there would be no way to know whether an empty/missing bucket corresponds to an On or Off state; that depends only on the last reported value before the bucket (see the sketch below).
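For concreteness, the kind of naive aggregation ruled out here would be something like the query below (the measurement name "light_state" is only a placeholder); it averages the reported values inside each bucket and returns a null value for the empty 03:00-04:00 bucket:
SELECT mean("light") FROM "light_state" WHERE time >= '2022-03-18T00:00:00Z' AND time < '2022-03-18T05:00:00Z' GROUP BY time(1h)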
Is there a way to implement it in pure InfluxQL, without retrieving potentially big data sets (points) and iterating through them in a client?
I assume the raw data can be obtained with a query like this:
SELECT "light" FROM "test3" WHERE $timeFilter
Where "test3" is your measurement name and $timeFilter is from... to... time period.
In this case we need a subquery that fills in the data; let's take the grouping (resolution) time to be 1s:
SELECT last("light") as "filled_light" FROM "test3" WHERE $timeFilter GROUP BY time(1s) fill(previous)
This query gives us a 1/0 value every second; we will use it as a subquery.
NOTE: this approach does not know whether the light was On or Off at the beginning of the $timeFilter period, so it cannot produce data for any hour before the first reported value within $timeFilter.
In the next step, apply the integral() function to the data from the subquery. Since the filled signal is 0 or 1 every second, integral(..., 1h) over an hourly bucket gives the fraction of that hour the light was On (e.g. On for 10 of 60 minutes gives 10/60 ≈ 0.17):
SELECT integral("filled_light",1h) FROM (SELECT last("light") as "filled_light" FROM "test3" WHERE $timeFilter GROUP BY time(1s) fill(previous)) GROUP BY time(1h)
This is not a perfect way of getting it to work but I hope it resolves your problem.

ksqlDB HOPPING window retention doesn't work

I am using ksqlDB version 0.14.0-rc732.
I declared a query:
CREATE TABLE LIVE_TRAFFIC AS
  SELECT
    devicemac,
    SUM(traffic -> bytesIn) AS bytes_in,
    SUM(traffic -> bytesOut) AS bytes_out
  FROM FLAT_TRAFFIC
  WINDOW HOPPING (SIZE 1 MINUTES, ADVANCE BY 1 MINUTES, RETENTION 15 MINUTES, GRACE PERIOD 1 MINUTES)
  GROUP BY devicemac;
But the 15-minute retention I defined doesn't work: rows keep being added to the table.

Showing hourly average (histogram) in Grafana

Given a time series of (electricity) market data with data points every hour, I want to show a bar graph with all-time / time-frame averages for every hour of the day, so that an analyst can easily compare actual prices to the all-time averages (which hour of the day is most/least expensive).
We have CrateDB as the backend, which is used in Grafana just like a Postgres source.
SELECT
extract(HOUR from start_timestamp) as "time",
avg(marketprice) as value
FROM doc.el_marketprices
GROUP BY 1
ORDER BY 1
So my data basically looks like this
time value
23.00 23.19
22.00 25.38
21.00 29.93
20.00 31.45
19.00 34.19
18.00 41.59
17.00 39.38
16.00 35.07
15.00 30.61
14.00 26.14
13.00 25.20
12.00 24.91
11.00 26.98
10.00 28.02
9.00 28.73
8.00 29.57
7.00 31.46
6.00 30.50
5.00 27.75
4.00 20.88
3.00 19.07
2.00 18.07
1.00 19.43
0 21.91
After hours of fiddling around with Bar Graphs, Histogram mode, the Heatmap panel and much more, I am just not able to draw a simple hours-of-the-day histogram with this in Grafana. I would very much appreciate any advice on how to use any panel to get this accomplished.
Your query doesn't return valid time series data for Grafana: the time field is not a valid timestamp. So:
- don't extract only the hour; provide the full start_timestamp (I hope it is a timestamp data type and the value is in UTC)
- add a WHERE time condition, using Grafana's $__timeFilter macro
- use Grafana's $__timeGroupAlias macro for hourly grouping:
SELECT
$__timeGroupAlias(start_timestamp,1h,0),
avg(marketprice) as value
FROM doc.el_marketprices
WHERE $__timeFilter(start_timestamp)
GROUP BY 1
ORDER BY 1
This will give you data for a historical graph with hourly average values.
The required histogram may be tricky, but you can try to create a metric that contains the extracted hour, e.g.:
SELECT
$__timeGroupAlias(start_timestamp,1h,0),
extract(HOUR from start_timestamp) as "metric",
avg(marketprice) as value
FROM doc.el_marketprices
WHERE $__timeFilter(start_timestamp)
GROUP BY 1
ORDER BY 1
Then visualize it as a histogram. Remember that Grafana is designed for time series data, so you need a proper timestamp (not just the extracted hour, though you could fake one if necessary); otherwise you will have a hard time visualizing non-time-series data in Grafana. This second query may not work exactly as written, but it should at least give you the idea.

Query InfluxDB for specific hours every day

What is the best way to query InfluxDB for specific hours every day? For example, I have a series with check-in/check-out activities, and I need to see them between 2PM and 3PM every day for the last month. I am aware that there's no direct way to do this in the query language (current version 1.2). Is there a workaround?
I have been searching for the same thing and found your question. As you say, the syntax does not seem to allow it.
My closest attempt was trying to use a regular expression in the time WHERE clause, which is not currently supported by InfluxDB.
So that should probably be the answer, and I would not post an answer just to say that.
However, while working on a different problem, I found a way that may or may not help in your specific case. It is not a pretty workaround, but it works whenever you can formulate an aggregation/selection of what you want to see in the given hour, so that you end up with one value per hour: for example, the mean/max/count of check-ins/check-outs in that hour for a given person, which may be what you are looking for, or which you may use to identify the days you would then query individually to see what happened there.
For example, say I want to obtain the electricity consumption measured daily from 00:00 to 06:00 a.m. I write a subquery that groups the measurements into 6-hour buckets starting at 00:00 of a given date. Then, in the outer query, I group by 24 hours and select the first value. Like this:
SELECT first("mean") FROM (SELECT mean("value") FROM "Energy" WHERE "devicename" = 'Electricity' AND "deviceid" = '0_5' AND time > '2017-01-01' GROUP BY time(6h) ) WHERE time > '2017-01-01' GROUP BY time(24h)
If you want 2-4 pm, i.e. 14:00-16:00, first group by 2 hours in the subquery, then offset the outer 24-hour grouping by 14h so that it starts at 14:00.
SELECT first("mean") FROM ( SELECT mean("value") FROM "Energy" WHERE "devicename" = 'Electricity' AND "deviceid" = '0_5' AND time > '2017-01-01T14:00:00Z' GROUP BY time(2h) ) WHERE time > '2017-01-01T14:00:00Z' GROUP BY time(24h,14h)
Just to check it: in my InfluxDB 1.2 this is the final result:
Energy
time first
2017-01-01T14:00:00Z 86.41747572815534
2017-01-02T14:00:00Z 43.49722222222222
2017-01-03T14:00:00Z 81.05416666666666
The subquery returns:
Energy
time mean
2017-01-01T14:00:00Z 86.41747572815534
2017-01-01T16:00:00Z 91.46879334257974
2017-01-01T18:00:00Z 89.14027777777778
2017-01-01T20:00:00Z 94.47434119278779
2017-01-01T22:00:00Z 89.94305555555556
2017-01-02T00:00:00Z 86.29542302357837
2017-01-02T02:00:00Z 92.2625
2017-01-02T04:00:00Z 89.93619972260748
2017-01-02T06:00:00Z 87.78888888888889
2017-01-02T08:00:00Z 50.790277777777774
2017-01-02T10:00:00Z 0.6597222222222222
2017-01-02T12:00:00Z 0.10957004160887657
2017-01-02T14:00:00Z 43.49722222222222
2017-01-02T16:00:00Z 86.0610263522885
2017-01-02T18:00:00Z 86.59778085991678
2017-01-02T20:00:00Z 91.56527777777778
2017-01-02T22:00:00Z 90.52565880721221
2017-01-03T00:00:00Z 86.79166666666667
2017-01-03T02:00:00Z 87.15533980582525
2017-01-03T04:00:00Z 89.47988904299584
2017-01-03T06:00:00Z 91.58888888888889
2017-01-03T08:00:00Z 41.67732962447844
2017-01-03T10:00:00Z 16.216366158113733
2017-01-03T12:00:00Z 25.27739251040222
2017-01-03T14:00:00Z 81.05416666666666
If you needed 13:00-15:00 instead, you would also have to offset the grouping in the subquery of the previous example by 1h.
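A sketch of that 13:00-15:00 variant (untested, same measurement and tags as above): the inner 2-hour buckets get a 1h offset so they start on odd hours, and the outer daily buckets start at 13:00.
SELECT first("mean") FROM ( SELECT mean("value") FROM "Energy" WHERE "devicename" = 'Electricity' AND "deviceid" = '0_5' AND time > '2017-01-01T13:00:00Z' GROUP BY time(2h,1h) ) WHERE time > '2017-01-01T13:00:00Z' GROUP BY time(24h,13h)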
For 14:00-15:00:
SELECT first("mean") FROM ( SELECT mean("value") FROM "Energy" WHERE "devicename" = 'Electricity' AND "deviceid" = '0_5' AND time > '2017-01-01T14:00:00Z' GROUP BY time(1h) ) WHERE time > '2017-01-01T14:00:00Z' GROUP BY time(24h,14h)
Hope this helps :)

InfluxDB average of distinct count over time

Using InfluxDB v0.9, say I have this simple query:
select count(distinct("id")) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(1m)
Which gives results like:
08:00 5
08:01 10
08:02 5
08:03 10
08:04 5
Now I want a query that produces points with an average of those values over 5 minutes, so the points are 5 minutes apart instead of 1 minute, but each is an average of the 1-minute values. The above 5 points would become 1 point with the value (5+10+5+10+5)/5 = 7.
For clarity, the following does not produce the result I am after, since it is just a count over 5-minute buckets rather than the average of the 1-minute counts:
select count(distinct("id")) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(5m)
This doesn't work (gives errors):
select mean(distinct("id")) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(5m)
Also doesn't work (gives error):
select mean(count(distinct("id"))) FROM "main" WHERE time > now() - 30m and time < now() GROUP BY time(5m)
In my actual usage "id" is a string field (field content, not a tag, because counting distinct values is not supported for tags in my version of InfluxDB).
To clarify a few points for readers: in InfluxQL, functions like COUNT() and DISTINCT() can only accept fields, not tags. While COUNT() supports nesting the DISTINCT() function, most nested or sub-functions are not supported, and neither are nested queries, subqueries, or stored procedures (as of the InfluxDB version in question).
However, there is a way to address your need using continuous queries, which are a way to automate the processing of data and writing those results back to the database.
First take your original query and make it a continuous query (CQ).
CREATE CONTINUOUS QUERY count_foo ON my_database_name BEGIN
SELECT COUNT(DISTINCT("id")) AS "1m_count" INTO main_1m_count FROM "main" GROUP BY time(1m)
END
There are other options for the CQ, but that basic one will wake up every minute, calculate the COUNT(DISTINCT("id")) for the prior minute, and then store that result in a new measurement, main_1m_count.
Now, you can easily calculate your 5 minute mean COUNT from the pre-calculated 1 minute COUNT results in main_1m_count:
SELECT MEAN("1m_count") FROM main_1m_count WHERE time > now() - 30m GROUP BY time(5m)
(Note that by default, InfluxDB uses epoch 0 and now() as the lower and upper time range boundaries, so it is redundant to include and time < now() in the WHERE clause.)
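As a side note, InfluxDB 1.2 and later do support subqueries, so a rough equivalent without a continuous query would be the following sketch (assuming the same measurement and field names):
SELECT mean("1m_count") FROM (SELECT count(distinct("id")) AS "1m_count" FROM "main" WHERE time > now() - 30m GROUP BY time(1m)) WHERE time > now() - 30m GROUP BY time(5m)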
