How can I implement a windowed rolling average with KSQL KTable correctly? - ksqldb

I am trying to implement a volume rolling average into KSQL.
Kafka currently ingests data from a producer into the topic "KLINES". This data is across multiple markets with a consistent format. I then create a stream from that data like so:
CREATE STREAM KLINESTREAM (market VARCHAR, open DOUBLE, high DOUBLE, low DOUBLE, close DOUBLE, volume DOUBLE, start_time BIGINT, close_time BIGINT, event_time BIGINT) \
WITH (VALUE_FORMAT='JSON', KAFKA_TOPIC='KLINES', TIMESTAMP='event_time', KEY='market');
I then create a table which calculates the average volume over the last 20 minutes for each market like so:
CREATE TABLE AVERAGE_VOLUME_TABLE_BY_MARKET AS \
SELECT CEIL(SUM(volume) / COUNT(*)) AS volume_avg, market FROM KLINESTREAM \
WINDOW HOPPING (SIZE 20 MINUTES, ADVANCE BY 5 SECONDS) \
GROUP BY market;
SELECT * FROM AVERAGE_VOLUME_TABLE_BY_MARKET LIMIT 1;
For clarity, this produces:
1560647412620 | EXAMPLEMARKET : Window{start=1560647410000 end=-} | 44.0 | EXAMPLEMARKET
What I want is a KSQL table that represents the current "kline" state of each market while also including the rolling average volume calculated in the "AVERAGE_VOLUME_TABLE_BY_MARKET" KTable, so I can compare the current volume against the rolling average volume.
I have tried to join like so:
SELECT K.market, K.open, K.high, K.low, K.close, K.volume, V.volume_avg \
FROM KLINESTREAM K \
LEFT JOIN AVERAGE_VOLUME_TABLE_BY_MARKET V \
ON K.market = V.market;
But obviously this results in an error as the "AVERAGE_VOLUME_TABLE_BY_MARKET" key includes the TimeWindow and also the market.
A serializer (key:
org.apache.kafka.streams.kstream.TimeWindowedSerializer) is not compatible to
the actual key type (key type: java.lang.String). Change the default Serdes in
StreamConfig or provide correct Serdes via method parameters.
Am I approaching this problem correctly?
What I want to achieve is:
Windowed Aggregate KTable + Kline Stream ->
KTable representing current market state
including average volume from the KTable
Is this possible in KSQL, or must I use Kafka Streams or another library to accomplish it?
A great aggregation example is here: https://www.confluent.io/stream-processing-cookbook/ksql-recipes/aggregating-data
Applied to this example, how would I use the aggregate to compare against fresh data as it arrives in the KSQL table?
Cheers, James

I believe what you're looking for may be LATEST_BY_OFFSET:
CREATE TABLE AVERAGE_VOLUME_TABLE_BY_MARKET AS
  SELECT
    market,
    LATEST_BY_OFFSET(volume) AS volume,
    CEIL(SUM(volume) / COUNT(*)) AS volume_avg
  FROM KLINESTREAM
  WINDOW HOPPING (SIZE 20 MINUTES, ADVANCE BY 5 SECONDS)
  GROUP BY market;
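With LATEST_BY_OFFSET the latest volume and the rolling average sit in the same row, so a follow-up push query can compare them directly. A minimal sketch, assuming a ksqlDB version that supports LATEST_BY_OFFSET and push queries (the column names come from the table above; volume_vs_avg is just an illustrative alias):
SELECT market,
       volume,
       volume_avg,
       volume / volume_avg AS volume_vs_avg
FROM AVERAGE_VOLUME_TABLE_BY_MARKET
EMIT CHANGES;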

Related

How to merge labels of two Prometheus queries? [duplicate]

I am using the consul exporter to ingest the health and status of my services into Prometheus. I'd like to fire alerts when the status of services and nodes in Consul is critical and then use tags extracted from Consul when routing those alerts.
I understand from this discussion that service tags are likely to be exported as a separate metric, but I'm not sure how to join one series with another so I can leverage the tags with the health status.
For example, the following query:
max(consul_health_service_status{status="critical"}) by (service_name, status,node) == 1
could return:
{node="app-server-02",service_name="app-server",status="critical"} 1
but I'd also like 'env' from this series:
consul_service_tags{node="app-server-02",service_name="app-server",env="prod"} 1
to get joined along node and service_name to pass the following to the Alertmanager as a single series:
{node="app-server-02",service_name="app-server",status="critical",env="prod"} 1
I could then match 'env' in my routing.
Is there any way to do this? It doesn't look to me like any operations or functions give me the ability to group or join like this. As far as I can see, the tags would already need to be labels on the consul_health_service_status metric.
You can use the argument list of group_left to include extra labels from the right operand (parentheses and indents for clarity):
(
  max(consul_health_service_status{status="critical"})
    by (service_name, status, node) == 1
)
+ on(service_name, node) group_left(env)
(
  0 * consul_service_tags
)
The important part here is the operation + on(service_name,node) group_left(env):
the + is "abused" as a join operator (fine since 0 * consul_service_tags always has the value 0)
group_left(env) is the modifier that includes the extra label env from the right (consul_service_tags)
The accepted answer is accurate. I also want to share a clearer explanation of joining two metrics while preserving the same labels (this might not directly answer the question). Both metrics carry the following label:
name (e.g. aaa, bbb, ccc)
I have a metric named metric_a, and if it returns no data for some of those label values, I want to fetch the data from metric_b instead. That is:
metric_a has values for {name="aaa"} and {name="bbb"}
metric_b has values for {name="ccc"}
I want the output to include all three name values. The solution is to use the or operator in Prometheus:
sum by (name) (increase(metric_a[1w]))
or
sum by (name) (increase(metric_b[1w]))
The result of this will have values for {name="aaa"}, {name="bbb"} and {name="ccc"}.
It is good practice in the Prometheus ecosystem to expose additional labels, which can be joined to multiple metrics, via a separate info-like metric, as explained in this article. For example, the consul_service_tags metric exposes a set of tags that can be joined to metrics via the (service_name, node) labels.
The join is usually performed via the on() and group_left() modifiers applied to the * operation. The * doesn't modify the values of the time series on the left side, because info-like metrics usually have a constant value of 1. The on() modifier limits the labels used for finding matching time series on the left and right sides of *. The group_left() modifier adds additional labels from the time series on the right side of *. See these docs for details.
For example, the following PromQL query adds env label from consul_service_tags metric to consul_health_service_status metric with the same set of (service_name, node) labels:
consul_health_service_status
* on(service_name, node) group_left(env)
consul_service_tags
Additional label filters can be added to consul_health_service_status if needed. For example, the following query returns only time series with status="critical" label:
consul_health_service_status{status="critical"}
* on(service_name, node) group_left(env)
consul_service_tags

InfluxDB: How to create a continuous query to calculate delta values?

I'd like to calculate the delta values for a series of measurements stored in InfluxDB. The values are readings from an electricity meter taken every 5 minutes, and they increase over time. Here is a subset of the data to give you an idea (the commands shown below are executed in the InfluxDB CLI):
> SELECT "Haushaltstromzaehler - cnt" FROM "myhome_measurements" WHERE time >= '2018-02-02T10:00:00Z' AND time < '2018-02-02T11:00:00Z'
name: myhome_measurements
time Haushaltstromzaehler - cnt
---- --------------------------
2018-02-02T10:00:12.610811904Z 11725.638
2018-02-02T10:05:11.242021888Z 11725.673
2018-02-02T10:10:10.689827072Z 11725.707
2018-02-02T10:15:12.143326976Z 11725.736
2018-02-02T10:20:10.753357056Z 11725.768
2018-02-02T10:25:11.18448512Z 11725.803
2018-02-02T10:30:12.922032896Z 11725.837
2018-02-02T10:35:10.618788096Z 11725.867
2018-02-02T10:40:11.820355072Z 11725.9
2018-02-02T10:45:11.634203904Z 11725.928
2018-02-02T10:50:11.10436096Z 11725.95
2018-02-02T10:55:10.753853952Z 11725.973
Calculating the differences in the InfluxDB CLI is pretty straightforward with the difference() function. This gives me the electricity consumed within each 5-minute interval:
> SELECT difference("Haushaltstromzaehler - cnt") FROM "myhome_measurements" WHERE time >= '2018-02-02T10:00:00Z' AND time < '2018-02-02T11:00:00Z'
name: myhome_measurements
time difference
---- ----------
2018-02-02T10:05:11.242021888Z 0.03499999999985448
2018-02-02T10:10:10.689827072Z 0.033999999999650754
2018-02-02T10:15:12.143326976Z 0.02900000000045111
2018-02-02T10:20:10.753357056Z 0.0319999999992433
2018-02-02T10:25:11.18448512Z 0.03499999999985448
2018-02-02T10:30:12.922032896Z 0.033999999999650754
2018-02-02T10:35:10.618788096Z 0.030000000000654836
2018-02-02T10:40:11.820355072Z 0.03299999999944703
2018-02-02T10:45:11.634203904Z 0.028000000000247383
2018-02-02T10:50:11.10436096Z 0.02200000000084401
2018-02-02T10:55:10.753853952Z 0.02299999999922875
Where I struggle is getting this to work in a continuous query. Here is the command I used to setup the continuous query:
CREATE CONTINUOUS QUERY cq_Haushaltstromzaehler_cnt ON myhomedb
BEGIN
SELECT difference(sum("Haushaltstromzaehler - cnt")) AS "delta" INTO "Haushaltstromzaehler_delta" FROM "myhome_measurements" GROUP BY time(1h)
END
Looking in the InfluxDB log file I see that no data is written in the new 'delta' measurement from the continuous query execution:
...finished continuous query cq_Haushaltstromzaehler_cnt, 0 points(s) written...
After much troubleshooting and experimenting, I now understand why no data is generated. Setting up a continuous query requires using a GROUP BY time() statement. This in turn requires using an aggregate function within the difference() function. The problem is that the aggregate function returns only one value for the time period specified by GROUP BY time(), and the difference() function obviously cannot calculate a difference from just one value. Essentially, the continuous query executes a command like this:
> SELECT difference(sum("Haushaltstromzaehler - cnt")) FROM "myhome_measurements" WHERE time >= '2018-02-02T10:00:00Z' AND time < '2018-02-02T11:00:00Z' GROUP BY time(1h)
>
I'm now somewhat clueless as to how to make this work and appreciate any advice you might have.
Does it help to use the last() aggregate function? I haven't tested this as a CQ yet.
SELECT difference(last(T1_Consumed)) AS T1_Delta, difference(last(T2_Consumed)) AS T2_Delta
FROM P1Data
WHERE time >= 1551648871000000000 GROUP BY time(1h)
DIFFERENCE() calculates the delta from the "aggregated" value of the previous group, not within the current group.
So feel free to use a selector function there; since your counters appear to be cumulative, LAST() should work well.
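Applied to the original continuous query, that would look something like the sketch below (untested; the database, measurement, and field names are taken from the question). Wrapping the counter in last() gives each one-hour group a single value, and difference() then works across consecutive groups:
CREATE CONTINUOUS QUERY cq_Haushaltstromzaehler_cnt ON myhomedb
BEGIN
  SELECT difference(last("Haushaltstromzaehler - cnt")) AS "delta"
  INTO "Haushaltstromzaehler_delta"
  FROM "myhome_measurements"
  GROUP BY time(1h)
END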

Influxdb querying values from 2 measurements and using SUM() for the total value

select SUM(value)
from /measurment1|measurment2/
where time > now() - 60m and host = 'hostname' limit 2;
Name: measurment1
time sum
---- ---
1505749307008583382 4680247
name: measurment2
time sum
---- ---
1505749307008583382 3004489
But is it possible to get the value of SUM(measurment1 + measurment2), so that I only see a single output?
This is not possible in the Influx query language; it does not support functions across measurements.
If this is something you require, you may be interested in layering another API on top of Influx that can do this, such as Graphite via Influxgraph.
For the above, something like this.
/etc/graphite-api.yaml:
finders:
  - influxgraph.InfluxDBFinder
influxdb:
  db: <your database>
  templates:
    # Produces metric paths like 'measurement1.hostname.value'
    - measurement.host.field*
Start the graphite-api/influxgraph webapp.
A query /render?from=-60min&target=sum(*.hostname.value) then produces the sum of value on tag host='hostname' for all measurements.
{measurement1,measurement2}.hostname.value can be used instead to limit it to specific measurements.
NB: performance-wise (for Influx), it is best to have multiple values in the same measurement rather than the same value field name in multiple measurements.
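As an illustration of that note (a hypothetical schema, not taken from the question): if both values were written as separate fields of a single measurement, plain InfluxQL should be able to sum them directly, along the lines of:
INSERT combined,host=hostname value1=4680247,value2=3004489
SELECT SUM("value1") + SUM("value2") FROM "combined" WHERE time > now() - 60m AND host = 'hostname'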

SUM(LAST()) on GROUP BY

I have a series, disk, that contains a path (/mnt/disk1, /mnt/disk2, etc.) and the total space of the disk. It also includes free and used values. These values are updated at a specified interval. What I would like to do is query the sum of the last() total of each path. I would also like to do the same for free and for used, to get an aggregate of the total size, free space, and used space of all of the disks on my server.
I have a query here that will get me the last(total) of all the disks, grouped by its path (for distinction):
select last(total) as total from disk where path =~ /(mnt\/disk).*/ group by path
Currently, this returns 5 series, each containing 1 row (the latest) and the value of its total. I then want to take the sum of those series, but I cannot just wrap the last(total) into a sum() function call. Is there a way to do this that I am missing?
Carrying on from my comment above about nested functions.
Building a toy example:
CREATE DATABASE FOO
USE FOO
Assuming your data is updated at intervals greater than[1] every minute:
CREATE CONTINUOUS QUERY disk_sum_total ON FOO
BEGIN
SELECT sum("total") AS "total_1m" INTO disk_1m_total FROM "disk"
GROUP BY time(1m)
END
Then push some values in:
INSERT disk,path=/mnt/disk1 total=30
INSERT disk,path=/mnt/disk2 total=32
INSERT disk,path=/mnt/disk3 total=33
And wait more than a minute. Then:
INSERT disk,path=/mnt/disk1 total=41
INSERT disk,path=/mnt/disk2 total=42
INSERT disk,path=/mnt/disk3 total=43
And wait a minute+ again. Then:
SELECT * FROM disk_1m_total
name: disk_1m_total
-------------------
time total_1m
1476015300000000000 95
1476015420000000000 126
The two values are 30+32+33=95 and 41+42+43=126.
From there, it's trivial to query:
SELECT last(total_1m) FROM disk_1m_total
name: disk_1m_total
-------------------
time last
1476015420000000000 126
Hope that helps.
[1] Picking an interval smaller than the update frequency prevents minor timing jitter from causing data to be accidentally summed twice in a given group. There might be some "zero update" intervals, but no "double counting" intervals. I typically run the query twice as fast as the updates. If the CQ sees no data for a window, no CQ result is written for that window, so last() will still give the correct answer. For example, I left the CQ running overnight and pushed no new data in: last(total_1m) gives the same answer, not zero for "no new data".
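As a sketch of what "twice as fast as the updates" means for the toy example (assuming the disks report roughly once a minute; this is just the CQ from above with a tighter window and a hypothetical target measurement, not something from the original question):
CREATE CONTINUOUS QUERY disk_sum_total ON FOO
BEGIN
  SELECT sum("total") AS "total_30s" INTO disk_30s_total FROM "disk"
  GROUP BY time(30s)
END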

Data Warehouse Design of Fact Tables

I'm pretty new to data warehouse design and am struggling with how to design the fact table given very similar, but somewhat different, metrics. Let's say you were evaluating the metrics below; how would you break up the fact tables (in this case company is a subset of client)? Could you use one table for all of this, would each metric warrant its own fact table, or would each part of the metric being measured be its own column in one fact table?
Total company daily/monthly/yearly # of files processed
Total company daily/monthly/yearly file sizes processed
Total company daily/monthly/yearly # files errored
Total company daily/monthly/yearly # files failed
Total client daily/monthly/yearly # of files processed
Total client daily/monthly/yearly file sizes processed
Total client daily/monthly/yearly # files errored
Total client daily/monthly/yearly # files failed
By the looks of the measure names, I think you'll be well served by a single fact table with a record for each file and a link back to a date_dim:
create table date_dim (
  date_sk int,
  calendar_date date,
  month_ordinal int,
  month_name nvarchar,
  year int
  -- ...etc., you've got your own one of these...
)
create table fact_file_measures (
  date_sk int,
  file_sk int,     -- ref the file_dim with additional file info
  company_sk int,  -- ref dim_company with the client details
  processed int,   -- should always be one; optional, depending on how your reporting team likes to work
  size_Kb decimal, -- specify a size measurement, ambiguity is bad
  error_count int, -- 1 if the file had an error, 0 if fine
  failed_count int -- 1 if the file failed, 0 if fine
)
So now you should be able to construct queries to get everything you asked for.
For example, for your monthly stats:
select
  c.company_name,
  c.client_name,
  sum(f.processed) as total_files,
  sum(f.size_Kb) as total_files_size_Kb,
  sum(f.error_count) as total_errors,
  sum(f.failed_count) as total_failures
from
  fact_file_measures f
  inner join dim_company c on f.company_sk = c.company_sk
  inner join date_dim d on f.date_sk = d.date_sk
where
  d.month_name = 'January' and d.year = 1984
If you need the side-by-side 'day/month/year' stuff, you can construct year and month fact tables to do the roll-ups and join back via date_dim's month/year fields; a monthly roll-up might look like the sketch below. (You could include month and year fields in the daily fact table, but those values may end up being misused by less experienced report builders.) It all comes back to what your users actually want: design your fact tables to their requirements and don't be afraid to have separate fact tables. Data warehousing is not about normalization; it's about presenting the data in a way that it can be used.
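A minimal sketch of such a monthly roll-up (exact CREATE TABLE ... AS syntax varies by database; the roll-up table name is hypothetical, and the other table and column names are assumed from the example above):
create table fact_file_measures_monthly as
select
  d.year,
  d.month_ordinal,
  f.company_sk,
  sum(f.processed) as total_files,
  sum(f.size_Kb) as total_size_Kb,
  sum(f.error_count) as total_errors,
  sum(f.failed_count) as total_failures
from
  fact_file_measures f
  inner join date_dim d on f.date_sk = d.date_sk
group by
  d.year, d.month_ordinal, f.company_sk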
Good luck
