I'm using Grafana to monitor a network device. As you can see in screen1, I have many interfaces to monitor: 28 physical interfaces plus many virtual (VLAN) ones.
The graph shows all interfaces, but I want the option to choose an interface from a drop-down list. I found that I can solve this problem with "variables".
I created one variable and I can choose the interface I want, but it has no effect on the graph when I select a specific interface.
screen1
My variable:
Variable config
And my db query:
SELECT derivative(mean("ifHCInOctets"), 1s) *8 AS "Input", derivative(mean("ifHCOutOctets"), 1s) *8 AS "Output" FROM "autogen"."interface" WHERE $timeFilter GROUP BY time($__interval), "ifDescr" fill(null)
WHERE "interface" =~ /^$ifDescr$/
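Combining that filter with the original query would look roughly like this (a sketch; I'm assuming the tag to match is "ifDescr", the same one used in the GROUP BY, so that the variable name and the tag key line up — adjust to your actual schema):

```sql
SELECT derivative(mean("ifHCInOctets"), 1s) * 8 AS "Input",
       derivative(mean("ifHCOutOctets"), 1s) * 8 AS "Output"
FROM "autogen"."interface"
WHERE "ifDescr" =~ /^$ifDescr$/ AND $timeFilter
GROUP BY time($__interval), "ifDescr" fill(null)
```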
Lose the brackets around the query in the Grafana query editor when you build the dashboard. That should work. That's how I filter host names anyway; my full query is
SELECT mean("usage_idle") * -1 + 100 FROM "cpu" WHERE "host" =~ /^$Server$/ AND "cpu" = 'cpu-total' AND $timeFilter GROUP BY time($Interval) fill(null)
That should help you piece together the query you need. You could also just use Grafana's query builder and change the WHERE clause to use the regex value for the variable.
Query Builder in Grafana
The brackets are right if you are writing TICKscript or querying the database directly from the CLI. Grafana uses slightly different query syntax.
I am very new to InfluxDB.
I have a dataset like this; (Every row/point is a connection)
time dest_ip source_ip
---- ------- ---------
2018-08-10T11:42:38.848793088Z 211.158.223.252 10.10.10.227
2018-08-10T11:42:38.87115392Z 211.158.223.252 10.10.10.59
2018-08-10T11:42:38.875289088Z 244.181.55.139 10.10.10.59
2018-08-10T11:42:38.880222208Z 138.63.15.221 10.10.10.59
2018-08-10T11:42:38.886027008Z 229.108.28.201 10.10.10.227
2018-08-10T11:42:38.892329728Z 229.108.28.201 10.10.10.181
2018-08-10T11:42:38.896943104Z 229.108.28.201 10.10.10.59
2018-08-10T11:42:38.904005376Z 22.202.67.174 10.10.10.227
2018-08-10T11:42:38.908818688Z 138.63.15.221 10.10.10.181
2018-08-10T11:42:38.913192192Z 138.63.15.221 10.10.10.181
dest_ip and source_ip are fields, not tags.
Is it possible to group by dest_ip all connection records somehow and get top 10 records with counts?
Is it possible to group by dest_ip and source_ip together and get top 10 records with counts too?
Or any other solution to get top 10 source_ip to dest_ip relations according to connection counts?
Currently InfluxDB only supports tags and time intervals in the GROUP BY clause, as you can see from its syntax (for more information refer to the InfluxDB documentation):
SELECT <function>(<field_key>) FROM_clause WHERE <time_range> GROUP BY time(<time_interval>),[tag_key]
But if you store dest_ip and source_ip as tags instead of fields, you can achieve everything you mentioned with the InfluxQL query language.
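For example, assuming the points are rewritten with dest_ip and source_ip as tags plus a field to count (the measurement name "connections" and field name "value" here are made up for illustration), the counts would look something like:

```sql
-- Connection count per destination:
SELECT count("value") FROM "connections" GROUP BY "dest_ip"

-- Connection count per (source_ip, dest_ip) pair:
SELECT count("value") FROM "connections" GROUP BY "dest_ip", "source_ip"
```

Note that InfluxQL's ORDER BY only supports time, so the "top 10 by count" cut has to be made client-side (or with top() over a subquery on the count result).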
I'm using influxdb to store some service metrics. These are simple metrics, such as read bytes or active connections. Then, with grafana, I'm composing some visualizations on top of this.
Displaying something like 'read bytes' is quite simple; it's basically summing up values, grouped by a time interval.
SELECT sum("value") FROM "bytesReceived" WHERE $timeFilter GROUP BY time($__interval) fill(0)
It's the 'active connections' part that I'm having trouble figuring out. These are TCP sockets connected to a service, where the measurement is the number of connected sockets; this is updated whenever a socket connects or disconnects.
If I had only one instance of the service, this would be easy, I would just do something like
SELECT last("value") FROM "activeConnections" WHERE $timeFilter GROUP BY time($__interval) fill(0)
The thing is that there are multiple instances of the service, which are created dynamically. The measurement is written with the additional tag 'host', that is populated with an id for the runtime service.
So, let's get into the data points.
select * from activeConnections where time > '2018-05-16T16:00:00Z' and time < '2018-05-16T16:10:00Z'
This spits out something like
time host value
---- ---- -----
1526486436041433600 58e5bd04a313 5
1526486438158741000 58e5bd04a313 4
1526486438712713000 58e5bd04a313 3
1526486811218129000 29b39780fd7b 4
So as you can see, we end up with 3 connections on one host and 4 on another. The problem at hand is displaying that data merged as a whole, where the combined value should be 7, for example.
I tried grouping data by host
select last(value) from activeConnections where time > '2018-05-16T16:00:00Z' and time < '2018-05-16T16:10:00Z' group by host
which gives me the last value for each group
name: activeConnections
tags: host=29b39780fd7b
time last
---- ----
1526486811218129000 4
name: activeConnections
tags: host=58e5bd04a313
time last
---- ----
1526486706993942700 3
Also tried using a subquery
select * from ( select last(value) from activeConnections where time > '2018-05-16T16:00:00Z' and time < '2018-05-16T16:10:00Z' group by host )
But I get the same problem, where I don't know how to group things nicely for grafana with a time interval.
Does anyone care to comment and help? It would be much appreciated.
Ok,
I seem to have found a solution. It's a shame that Grafana doesn't support sub-queries, so the query needs to be entered manually in the raw editor view. There's an issue open here.
So, what I needed was a way to group all the hosts value into a single plot line. That can be achieved with the following query:
SELECT sum("value") FROM (SELECT last("value") as "value" FROM "activeConnections" WHERE $timeFilter GROUP BY time($__interval), "host") GROUP BY time($__interval) fill(previous)
I was close before, but failed to notice that in the inner select, if you don't alias the result, the field comes out named "last" by default. So I was trying to sum up "value", but that field didn't exist outside the sub-query.
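To illustrate that default naming: without the alias, the inner select's result field is called "last" (after the function), so an equivalent form of the query has to reference that name in the outer select:

```sql
SELECT sum("last") FROM (SELECT last("value") FROM "activeConnections" WHERE $timeFilter GROUP BY time($__interval), "host") GROUP BY time($__interval) fill(previous)
```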
Hope this helps someone. Thank you Yuri, for your comment. It pointed me into the right direction.
We are currently running a number of hand-crafted and optimized OData queries on Exact Online using Python. This runs across several thousand divisions. However, I want to migrate them to Invantive SQL for ease of maintenance.
But some of the optimizations, such as an explicit $orderby in the OData query, are not forwarded to Exact Online by Invantive SQL; it just retrieves all data, or the top x, and then does the ordering itself.
Especially for maximum-value determination that can be a lot slower.
Simple sample on small table:
https://start.exactonline.nl/api/v1/<<division>>/financial/Journals?$select=BankAccountIBAN,BankAccountDescription&$orderby=BankAccountIBAN desc&$top=5
Is there an alternative to optimize the actual OData queries executed by Invantive SQL?
You can either use the Data Replicator or send the hand-crafted OData query through a native platform request, such as:
insert into NativePlatformScalarRequests
( url
, orig_system_group
)
select replace('https://start.exactonline.nl/api/v1/{division}/financial/Journals?$select=BankAccountIBAN,BankAccountDescription&$orderby=BankAccountIBAN desc&$top=5', '{division}', code)
, 'MYSTUFF-' || code
from systempartitions#datadictionary
limit 100 /* First 100 divisions. */
create or replace table exact_online_download_journal_top5#inmemorystorage
as
select jte.*
from ( select npt.result
from NativePlatformScalarRequests npt
where npt.orig_system_group like 'MYSTUFF-%'
and npt.result is not null
) npt
join jsontable
( null
passing npt.result
columns BankAccountDescription varchar2 path 'd[0].BankAccountDescription'
, BankAccountIBAN varchar2 path 'd[0].BankAccountIBAN'
) jte
From here on you can use the in-memory table, such as:
select * from exact_online_download_journal_top5#inmemorystorage
But of course you can also 'insert into sqlserver'.
I have a measurement called reading where all the rows are of the form:
time channel host value
2018-03-05T05:38:41.952057914Z "1" "4176433" 3.46
2018-03-05T05:39:26.113880408Z "0" "5222355" 120.23
2018-03-05T05:39:30.013558256Z "1" "5222355" 5.66
2018-03-05T05:40:13.827140492Z "0" "4176433" 3.45
2018-03-05T05:40:17.868363704Z "1" "4176433" 3.42
where channel and host are tags.
Is there a way I can automatically generate a continuous query such that:
The CQ measurement's name is of the form host_channel
Until now I have been creating them one by one, for example:
CREATE CONTINUOUS QUERY 4176433_1 ON database_name
BEGIN
SELECT mean(value) INTO 4176433_1
FROM reading
WHERE host = '4176433' AND channel = '1'
GROUP BY time(1m)
END
but is there a way I can automatically get 1m sampling per host & channel any time a new host is added to the database? Thanks!
There is no way of doing this in InfluxDB, for a number of reasons. Encoding tag values in measurement names contradicts InfluxDB's official best practices and is discouraged.
I suggest just going with:
CREATE CONTINUOUS QUERY reading_aggregator ON database_name
BEGIN
SELECT mean(value), host + '_' + channel AS host_channel
INTO mean_reading
FROM reading
GROUP BY time(1m), host, channel
END
I am using InfluxDB as the backend for Grafana. When I configure my metrics with the following query, everything works:
SELECT "puller.request"
FROM "api_puller"
WHERE "hostname" = 'xxxxxxx'
AND $timeFilter
This generates the following query to InfluxDB:
SELECT "puller.request"
FROM "api_puller"
WHERE "hostname" = 'xxxxxxx'
AND time > now() - 5m
The problem arises when I want to get the mean of my metrics
SELECT mean("puller.request")
FROM "api_puller"
WHERE "hostname" = 'xxxxxxx'
AND $timeFilter
The query sent to InfluxDB is:
SELECT mean("puller.request")
FROM "api_puller"
WHERE "hostname" = 'xxxxxxx'
AND time > now() - 5m
No data points show on my dashboard.
I copied that query and executed it directly in InfluxDB, and it did return values. But Grafana still cannot show any data points.
Is there something wrong with my query, or how should I make the mean function work?
My Grafana version is v4.4.3 (commit: 54c79c5).
Thanks in advance.