I want to display CPU and memory average data for all servers in the format below.
Host     CPU_Avg  Mem_Avg
server1  12.0     23
server2  13       12
etc.
I have used the query below, but the display format is not coming out as expected.
First of all, select FORMAT AS "Table" instead of "Time series"
I have created a hypertable water_meter to store the sensor data
It contains the following data, ordered by timestamp in ascending order:
select * from water_meter order by time_stamp;
As can be seen, I have data starting from 01 May 2020.
If I use the time_bucket() function to get aggregates per 1 day:
SELECT
    time_bucket('1 days', time_stamp) AS bucket,
    thing_key,
    avg(pulsel) AS avg_pulse_l,
    avg(pulseh) AS avg_pulse_h
FROM water_meter
GROUP BY thing_key, bucket;
It works fine and I get below data:
Now if I use it to get 15-day aggregates, I get unexpected results: the starting time bucket is shown as 17 April 2020, for which there was no data in the table.
SELECT
    time_bucket('15 days', time_stamp) AS bucket,
    thing_key,
    avg(pulsel) AS avg_pulse_l,
    avg(pulseh) AS avg_pulse_h
FROM water_meter
GROUP BY thing_key, bucket;
The time_bucket function puts rows into buckets that cover an implied range. For example, a 15-minute bucket might appear as '2021-01-01 01:15:00.000+00', but it contains timestamps in the range ['2021-01-01 01:15:00', '2021-01-01 01:30:00'), inclusive on the left and exclusive on the right. The same thing happens for days: your bucket happens to start on 17 April, but it includes all data in the range ["2020-04-17 00:00:00+00", "2020-05-02 00:00:00+00"). You can use an experimental function in the TimescaleDB Toolkit extension to inspect these ranges:
SELECT toolkit_experimental.time_bucket_range('15 days'::interval, '2020-05-01');
You can also use the offset or origin parameters of the time_bucket function to shift where buckets start:
SELECT time_bucket('15 days'::interval, '2020-05-01', origin => '2020-05-01');
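To see why the bucket starts on 17 April, note that TimescaleDB aligns day-sized buckets to a fixed default origin rather than to the first row of your table. A minimal sketch in Python (assuming the documented default origin of 2000-01-03 for day-width buckets) reproduces the arithmetic:

```python
from datetime import datetime, timedelta

# TimescaleDB aligns day-sized buckets to a default origin of
# 2000-01-03 (a Monday), not to the first timestamp in your table.
ORIGIN = datetime(2000, 1, 3)

def time_bucket(width_days: int, ts: datetime) -> datetime:
    """Floor ts to the start of its width_days-wide bucket."""
    elapsed = (ts - ORIGIN).days
    return ORIGIN + timedelta(days=(elapsed // width_days) * width_days)

# 2020-05-01 lands in the bucket starting 2020-04-17, because
# 2020-04-17 is exactly 7410 = 494 * 15 days after the origin.
print(time_bucket(15, datetime(2020, 5, 1)))  # 2020-04-17 00:00:00
```

Since 2020-04-17 is an exact multiple of 15 days after the origin, it is a valid bucket boundary even though the data only starts on 01 May.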
According to the official InfluxDB documentation https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts , tags are indexed.
However, I do not need to filter data by tags when querying. Using the example given in the official doc:
time                  location  scientist   butterflies  honeybees
2015-08-18T00:00:00Z  1         langstroth  12           23
2015-08-18T00:00:00Z  1         perpetua    1            30
2015-08-18T00:06:00Z  1         langstroth  11           28
2015-08-18T00:06:00Z  1         perpetua    3            28
2015-08-18T05:54:00Z  2         langstroth  2            11
2015-08-18T06:00:00Z  2         langstroth  1            10
2015-08-18T06:06:00Z  2         perpetua    8            23
2015-08-18T06:12:00Z  2         perpetua    7            22
where "location" and "scientist" are tags.
If I change "location" and "scientist" to fields, will InfluxDB consume more space to store them than when they are tags?
InfluxDB uses the line protocol, whose format is:
measurement,tag1=value1,tag2=value2 field1=value1,field2=value2 timestamp
So if you convert the tags into fields, the number of series will decrease. This reduces not only the storage space but also the RAM consumption. So it is better to store the data as fields if you really have no use for tags (note that you will not be able to include fields in a GROUP BY clause).
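To make the comparison concrete, here is a sketch in Python that builds the same row of the docs' census example both ways (the measurement name, the `i` integer suffix, and the string quoting follow InfluxDB's line protocol; the row values are taken from the table above):

```python
row = {"location": "1", "scientist": "langstroth",
       "butterflies": 12, "honeybees": 23}
ts = 1439856000000000000  # 2015-08-18T00:00:00Z, nanoseconds since epoch

# Tags are unquoted key=value pairs before the first space; they are
# indexed and define the series key.
with_tags = (
    f"census,location={row['location']},scientist={row['scientist']} "
    f"butterflies={row['butterflies']}i,honeybees={row['honeybees']}i {ts}"
)

# As fields, the former tag values become quoted string field values;
# they are no longer indexed and no longer split the data into series.
as_fields = (
    f'census location="{row["location"]}",scientist="{row["scientist"]}",'
    f"butterflies={row['butterflies']}i,honeybees={row['honeybees']}i {ts}"
)
print(with_tags)
print(as_fields)
```

Note that with tags, every distinct (location, scientist) pair is a separate series in the index, which is exactly where the RAM cost comes from.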
After measuring for about one month, I found that "tag" does consume less storage space than "field" in InfluxDB.
I have 90 nodes, measured every 2 ms, and there are 3 tags. The first tag is the name of the node, the second tag is the name of another node, and the third tag is a constant. Therefore, there are 8010 (= 89 * 90) combinations of tags.
Using the three values as tags consumes about 500 GB of storage in one month, while using them as fields consumes about 1.2 TB.
But I do not know whether my case generalizes to all situations.
Given a time series of (electricity) market data with data points every hour, I want to show a bar graph of all-time (or per-time-frame) averages for every hour of the day, so that an analyst can easily compare actual prices to the all-time averages (which hour of the day is most or least expensive).
We have CrateDB as the backend, which is used in Grafana just like a Postgres source.
SELECT
    extract(HOUR FROM start_timestamp) AS "time",
    avg(marketprice) AS value
FROM doc.el_marketprices
GROUP BY 1
ORDER BY 1
So my data basically looks like this
time value
23.00 23.19
22.00 25.38
21.00 29.93
20.00 31.45
19.00 34.19
18.00 41.59
17.00 39.38
16.00 35.07
15.00 30.61
14.00 26.14
13.00 25.20
12.00 24.91
11.00 26.98
10.00 28.02
9.00 28.73
8.00 29.57
7.00 31.46
6.00 30.50
5.00 27.75
4.00 20.88
3.00 19.07
2.00 18.07
1.00 19.43
0 21.91
After hours of fiddling around with bar graphs, histogram mode, the heatmap panel, and much more, I am just not able to draw a simple hours-of-the-day histogram from this in Grafana. I would very much appreciate any advice on how to use any panel to accomplish this.
Your query doesn't return valid time series data for Grafana: the time field is not a valid timestamp. So:
don't extract only the hour; provide the full start_timestamp (I hope it is a timestamp data type and the value is in UTC)
add a WHERE time condition using Grafana's $__timeFilter macro
use Grafana's $__timeGroupAlias macro for hourly grouping
SELECT
    $__timeGroupAlias(start_timestamp,1h,0),
    avg(marketprice) AS value
FROM doc.el_marketprices
WHERE $__timeFilter(start_timestamp)
GROUP BY 1
ORDER BY 1
This will give you data for a historic graph with hourly average values.
The required histogram may be tricky, but you can try to create a metric which has the extracted hour, e.g.:
SELECT
    $__timeGroupAlias(start_timestamp,1h,0),
    extract(HOUR FROM start_timestamp) AS "metric",
    avg(marketprice) AS value
FROM doc.el_marketprices
WHERE $__timeFilter(start_timestamp)
GROUP BY 1
ORDER BY 1
And then visualize it as a histogram. Remember that Grafana is designed for time series data, so you need a proper timestamp (not only extracted hours; eventually you can fake it), otherwise you will have a hard time visualizing non-time-series data in Grafana. This second query may not work properly, but it at least gives you the idea.
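The hour-of-day aggregation itself is easy to sanity-check outside Grafana. A small Python sketch (with invented prices) of what each bar of the histogram should contain:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical hourly market prices over two days.
start = datetime(2021, 1, 1)
prices = [(start + timedelta(hours=h), 20.0 + (h % 24)) for h in range(48)]

# Average price per hour of day, i.e. one bar per hour of the histogram.
sums = defaultdict(lambda: [0.0, 0])
for ts, price in prices:
    s = sums[ts.hour]
    s[0] += price
    s[1] += 1
hourly_avg = {hour: total / n for hour, (total, n) in sorted(sums.items())}
```

The result is 24 averages keyed by hour of day, which is exactly the table shown in the question, just computed by the database instead.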
Problem: I want to aggregate on multiple conditions over a discontinuous column.
Link : Google Sheet link
Context: I'm a mushroom grower and I want to schedule my production program for 2020.
We call a batch an amount of inoculated mushroom substrate put into production at the same time. Usually it weighs a ton, and for the year 2020 we'll have about 40 batches put into production.
A batch has several characteristics:
An ID. 20-B1 is the first batch for 2020, 20-B2 the second, and so on.
A scheduled week of production start. Usually, production for one batch lasts 14 weeks. The weeks of production are numbered from 1 to 14.
A calendar week, where the array of 14 production weeks is placed.
An origin. We work with two companies (Eurosubstrat and CNC) and also produce our own substrate. This means 3 possible values: "EURO", "CNC" and "LCU".
A species cultivated. We have two species: Shiitake and Pleurotus ostreatus.
A weight; about 1 ton, but a few batches can be much lighter.
A production capability versus time: for example, in the third week of exploitation, a batch will produce 3.4% of its mass in mushrooms.
I want to extract and put the following results in a column:
Target 1: in order to establish an ordering schedule, the number of batches matching given criteria for one given calendar week; for example "Week production = 1" AND "Origin = EURO" AND "Species = shii".
Target 2: in order to estimate total production capability for one given species, the sum of the individual production capabilities for one given calendar week; for example "Species = shii" AND the related production capability.
Hope I'm clear.
If you can imagine a better way to organize my data, feel free to suggest it. I don't know if the current organization is the best.
Sorry, English is not my native tongue.
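Both targets are conditional aggregations (in Sheets terms, a COUNTIFS and a SUMIFS respectively). A minimal sketch in Python, with invented batch records, of the two computations:

```python
# Invented sample batches; the field names and numbers are illustrative only.
batches = [
    {"id": "20-B1", "origin": "EURO", "species": "shii", "prod_week": 1, "capability": 3.4},
    {"id": "20-B2", "origin": "CNC",  "species": "shii", "prod_week": 1, "capability": 2.1},
    {"id": "20-B3", "origin": "EURO", "species": "pleu", "prod_week": 1, "capability": 1.8},
    {"id": "20-B4", "origin": "EURO", "species": "shii", "prod_week": 2, "capability": 3.0},
]

# Target 1: count batches matching all criteria (COUNTIFS-style).
target1 = sum(1 for b in batches
              if b["prod_week"] == 1 and b["origin"] == "EURO"
              and b["species"] == "shii")

# Target 2: sum of production capabilities for one species in one week
# (SUMIFS-style).
target2 = sum(b["capability"] for b in batches
              if b["species"] == "shii" and b["prod_week"] == 1)
```

One row per batch with one column per characteristic (rather than a 2-D calendar grid) is what makes these filter-and-aggregate formulas straightforward.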
I have recently set up Grafana with InfluxDB. I'd like to show a panel that indicates how long it has been since an event took place.
Examples:
Server last reported in: 33 minutes ago
Last user sign up: 17 minutes ago
I can get a single metric pretty easily with the following code:
SELECT time, last("duration") as last_duration FROM custom_events ORDER BY time DESC
But I can't seem to get Grafana to do what I want with the time field.
Any suggestions?
Since Grafana 4.6.0 this is possible with singlestat panels:
Go to the Options tab
Select Value -> Stat -> Time of last point
Select Value -> Stat -> Unit -> Date & time -> From Now
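What the "From Now" unit displays can be sketched as follows (a simplified Python version; Grafana's actual formatting also handles seconds, hours, days, and so on):

```python
from datetime import datetime, timezone

def time_since(last_event: datetime, now: datetime) -> str:
    """Render elapsed time as an '... ago' string (minutes only, for brevity)."""
    minutes = int((now - last_event).total_seconds() // 60)
    return f"{minutes} minutes ago"

now = datetime(2023, 1, 1, 12, 0, tzinfo=timezone.utc)
last = datetime(2023, 1, 1, 11, 27, tzinfo=timezone.utc)
print(time_since(last, now))  # 33 minutes ago
```

The panel only needs the timestamp of the last point; the "X minutes ago" rendering is done by the unit at display time.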
It's currently (4.0.3) not possible to show the last timestamp in singlestat panels, but there is an open issue for supporting this. Hopefully we will find time to implement it in the future.
In 8.4.0 there is a unit selection that allows you to do this: make sure your timestamp is in milliseconds since the epoch, and select "From Now" as the unit in the dropdown menu.
(The issue mentioned above: "singlestat panel expressing time since".)