grafana/influx says "no data points" but only sometimes

I have some Grafana dashboards with graphs that sometimes show "No data points". I know the data is there, because at other times the graphs render, and other graphs on the same page display results from the same measurements. I can also query the data directly in InfluxDB.
Anecdotally, longer time ranges are more likely to fail than shorter ones (e.g., 30 days sometimes fails, 1 day rarely fails). The data arrives every few seconds; it is system stats.
I suspect (with inadequate evidence) that InfluxDB is sometimes taking too long to respond and Grafana times out, or else that InfluxDB outright fails the query because there is too much data relative to the available resources. On the other hand, querying InfluxDB directly works fine (see below), though I'm throwing only one query at a time at it. If I query while the dashboard updates, the query takes much longer, as if I'm waiting for a worker thread to handle my query.
But before I just start growing the hardware, I'd like to have more than a hunch, especially since I don't have that much data. Yet the InfluxDB and Grafana logs aren't showing me anything terribly interesting (such as OOMs, timeouts, or query failures).
Any suggestions?
BTW, a sample query in grafana is this:
SELECT percentile("usage_system", 95) FROM "cpu"
WHERE "host" =~ /^$host$/ AND $timeFilter
GROUP BY time($__interval), "host"
If I query directly against InfluxDB, the results come back almost immediately, whereas in Grafana I wait a good long while with a spinner displaying.
SELECT percentile("usage_system", 95) FROM "cpu"
WHERE "host" = 'seine3'
AND time >= 1519216559000000000 AND time <= 1521808559000000000
GROUP BY time(1h), "host"
or
SELECT percentile("usage_system", 95) FROM "cpu"
WHERE "host" = 'seine3'
AND time >= '2018-02-23T00:00:00Z' AND time <= '2018-03-23T00:30:00Z'
GROUP BY time(1h), "host"
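For what it's worth, one way to test the timeout hypothesis is to copy the exact query Grafana issues (visible in the panel's query inspector) and run it against InfluxDB with the interval pinned. A sketch, with the host and interval assumed:
SELECT percentile("usage_system", 95) FROM "cpu"
WHERE "host" =~ /^seine3$/ AND time >= now() - 30d
GROUP BY time(1h), "host"
If this form returns quickly but the same range with a much smaller interval stalls (time(10s) over 30 days is roughly 260,000 buckets per host), the problem is bucket count rather than raw data volume. Running SHOW QUERIES while the dashboard is loading will also list any statements still executing server-side.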

Related

Using GROUP BY tag across entire database

How does running a query against each value of a tag perform, compared to running a query for the same data across the entire database with GROUP BY "tag"? The first works for me. The second runs for a while but does not return anything (which could be the fault of either InfluxDB or the software package that is playing middle-man, I assume).
My InfluxDB database has data from 220 test events; each test event lasts about two hours, and each test has tens of thousands of parameters. The test number is a tag.
I want to compute the COUNT(), MIN(), MEAN(), and MAX() of each of 10 different parameters, for each test. I know that I can write a Python script that submits a separate InfluxQL query for each test number (WHERE "test" = xxx) and compiles the results from each query, yielding a relatively small amount of data. This takes maybe 12 minutes.
Alternatively, I've tried running one single query (same SELECT and FROM clauses), but instead of WHERE "test" = xxx I simply GROUP BY "test". This seems to run for at least 14 minutes but then disappears (per SHOW QUERIES), without ever responding to my influxdb.DataFrameClient.
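For concreteness, the two query shapes look roughly like this (the measurement and parameter names are assumed for illustration):
SELECT COUNT("param1"), MIN("param1"), MEAN("param1"), MAX("param1")
FROM "results" WHERE "test" = '42'

SELECT COUNT("param1"), MIN("param1"), MEAN("param1"), MAX("param1")
FROM "results" GROUP BY "test"
In InfluxQL the tag value in the WHERE clause is compared as a string ('42'), and the GROUP BY form has to hold partial results for all 220 tag values at once, which is one plausible reason the two behave so differently.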
Is there something particularly problematic about the second approach? It seems to be the more intuitive approach for the analyst, but I can't get it to work.
Thanks!

How to send non-aggregated metrics to InfluxDB from a Spring Boot application?

I have a Spring Boot application that is under moderate load. I want to collect metric data for a few of the operations of my app. I am mainly interested in counters and timers.
I want to count the number of times a method was invoked (number of invocations over a window, for example, over the last day, week, or month)
If the method produces an unexpected result, increase a failure count and publish a few tags with that metric
I want to time a couple of expensive methods, i.e., see how much time the method took, and also publish a few tags with the metric to get more context
I have tried StatsD-SignalFx and Micrometer-InfluxDB, but both of these solutions have issues I could not solve:
StatsD aggregates the data over the flush window, and due to that aggregation the metric tags get messed up. For example, if I send 10 events with different tag values within one flush window, the StatsD agent aggregates those events and publishes a single event with counter = 10, and I am not sure which tag values it sends with the aggregated data.
The Micrometer-InfluxDB setup has its own problems, one of them being that Micrometer sends 0 values for counters when no new metric is produced, and that fake (zero-value) counter carries the same tag values as the last valid (non-zero) counter.
I am not sure how, but Micrometer also seems to do some sort of aggregation on the client side, in the MeterRegistry I believe, because I was getting a few counters with a value of 0.5 in InfluxDB.
Next, I am planning to explore Micrometer/StatsD + Telegraf + Influx + Grafana to see if it suits my use case.
Questions:
How can I avoid metric aggregation until the data reaches the data store (InfluxDB)? I can do the required aggregation in Grafana (see the sketch after this list).
Is there any standard solution to the problem that I am trying to solve?
Any other suggestion or direction for my use case?
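If raw per-event points do reach InfluxDB (for example, shipped through Telegraf without client-side aggregation), the windowed counts from the first question reduce to ordinary Grafana queries. A minimal sketch, with the measurement, field, and tag names assumed:
SELECT count("duration") FROM "method_timings"
WHERE "method" = 'expensiveCall' AND $timeFilter
GROUP BY time($__interval), "outcome"
Each invocation is one point, so count() over a 1-day, 1-week, or 1-month dashboard range yields the invocation counts directly, and the "outcome" tag survives because nothing was collapsed before the write.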

Grafana does not display any InfluxDB data ("failed to fetch") after 60s for large datasets

Grafana does not display any data ("failed to fetch") after 60s for large datasets, but when the time interval is smaller the dashboard loads fine. Can anyone help here?
Tweaking the timeouts in grafana.ini does not seem to help; it looks like Grafana has a hard limit on those parameters.
Grafana version > 7.0.3
Data source: InfluxDB
The dashboard loads fine for smaller intervals.
Any help would be appreciated.
Use time grouping, GROUP BY time($__interval), in your InfluxDB query (https://grafana.com/docs/grafana/latest/datasources/influxdb/#query-editor). Grafana already has the $__interval macro, which selects an "optimal" time aggregation based on the current dashboard time range.
It doesn't make sense to load a huge dataset at its original granularity. You may solve it at the Grafana level somehow, but then you may have a problem in the browser: it may not have that much memory, or rendering will take ages.
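A minimal sketch of such a query, with the measurement and field names assumed:
SELECT mean("value") FROM "my_measurement"
WHERE $timeFilter
GROUP BY time($__interval) fill(null)
With this shape the number of returned points tracks the dashboard's resolution rather than the raw point count, so widening the time range no longer multiplies the payload.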

InfluxDB Continuous Query running on entire time series data

If my interpretation is correct, then according to the documentation provided here: InfluxDB Downsampling, when we downsample data using a Continuous Query running every 30 minutes, it should run only over the previous 30 minutes of data.
Relevant part of the document:
Use the CREATE CONTINUOUS QUERY statement to generate a CQ:
CREATE CONTINUOUS QUERY "cq_30m" ON "food_data" BEGIN
SELECT mean("website") AS "mean_website",mean("phone") AS "mean_phone"
INTO "a_year"."downsampled_orders"
FROM "orders"
GROUP BY time(30m)
END
That query creates a CQ called cq_30m in the database food_data.
cq_30m tells InfluxDB to calculate the 30-minute average of the two
fields website and phone in the measurement orders and in the DEFAULT
RP two_hours. It also tells InfluxDB to write those results to the
measurement downsampled_orders in the retention policy a_year with the
field keys mean_website and mean_phone. InfluxDB will run this query
every 30 minutes for the previous 30 minutes.
When I create a Continuous Query it actually runs on the entire dataset, and not on the previous 30 minutes. My question is, does this happen only the first time after which it runs on the previous 30 minutes of data instead of the entire dataset?
I understand that the query itself uses GROUP BY time(30m), which means it will return all the data grouped together, but does this also hold true for the Continuous Query? If so, should I include a filter so the Continuous Query only processes the last 30 minutes of data?
What you have described is expected functionality.
Schedule and coverage
Continuous queries operate on real-time data. They use the local server’s timestamp, the GROUP BY time() interval, and InfluxDB database’s preset time boundaries to determine when to execute and what time range to cover in the query.
CQs execute at the same interval as the cq_query’s GROUP BY time() interval, and they run at the start of the InfluxDB database’s preset time boundaries. If the GROUP BY time() interval is one hour, the CQ executes at the start of every hour.
When the CQ executes, it runs a single query for the time range between now() and now() minus the GROUP BY time() interval. If the GROUP BY time() interval is one hour and the current time is 17:00, the query’s time range is between 16:00 and 16:59.999999999.
So it should only process the last 30 minutes.
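For what it's worth, InfluxDB 1.x also offers an advanced syntax that decouples the run interval from the coverage window. A sketch reusing the CQ from the question, set to run every 30 minutes while recomputing the last hour:
CREATE CONTINUOUS QUERY "cq_30m" ON "food_data"
RESAMPLE EVERY 30m FOR 1h
BEGIN
SELECT mean("website") AS "mean_website", mean("phone") AS "mean_phone"
INTO "a_year"."downsampled_orders"
FROM "orders"
GROUP BY time(30m)
END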
It's a good point about the first run.
I did manage to find a snippet from an old document:
Backfilling Data
In the event that the source time series already has data in it when you create a new downsampled continuous query, InfluxDB will go back in time and calculate the values for all intervals up to the present. The continuous query will then continue running in the background for all current and future intervals.
https://influxdbcom.readthedocs.io/en/latest/content/docs/v0.8/api/continuous_queries/#backfilling-data
That would explain the behaviour you have found.
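If the backfill ever has to be triggered by hand (on versions where a new CQ does not do it automatically), a one-off INTO query can downsample the historical range. A sketch reusing the measurement from the question, with the start time assumed:
SELECT mean("website") AS "mean_website", mean("phone") AS "mean_phone"
INTO "a_year"."downsampled_orders"
FROM "orders"
WHERE time >= '2018-01-01T00:00:00Z' AND time < now()
GROUP BY time(30m)
Unlike a CQ, this runs once over the explicit WHERE time range, so it can also be re-run in smaller chunks if the full range is too heavy for a single query.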

InfluxDB performance

For my use case, I need to capture 15 performance metrics for devices and save them to InfluxDB. Each device has a unique device id.
Metrics are written into InfluxDB in the following way (here I show only one as an example):
new Serie.Builder("perfmetric1")
    .columns("time", "value", "id", "type")
    .values(getTime(), getPerf1(), getId(), getType())
    .build()
Writing data is fast and easy. But I saw bad performance when I ran queries. I'm trying to get all 15 metric values for the last hour.
select value from perfmetric1, perfmetric2, ..., perfmetric15
where id='testdeviceid' and time > now() - 1h
For an hour, each metric has 120 data points, so in total that's 1,800 data points. The query takes about 5 seconds on a c4.4xlarge EC2 instance when the instance is otherwise idle.
I believe InfluxDB can do better. Is this a problem with my schema design, or is it something else? Would splitting the query into 15 parallel calls be faster?
As valentin's answer says, you need to build an index on the id column for InfluxDB to perform these queries efficiently.
In 0.8 stable you can do this "indexing" using continuous fanout queries. For example, the following continuous query will expand your perfmetric1 series into multiple series of the form perfmetric1.id:
select * from perfmetric1 into perfmetric1.[id];
Later you would do:
select value from perfmetric1.testdeviceid, perfmetric2.testdeviceid, ..., perfmetric15.testdeviceid where time > now() - 1h
This query will take much less time to complete since InfluxDB won't have to perform a full scan of the time series to get the points for each testdeviceid.
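If I recall the 0.8 syntax correctly, the fanned-out series can also be matched with a regular expression in the FROM clause, which avoids listing all 15 names by hand. A sketch, assuming the fanout queries above have populated the per-device series:
select value from /^perfmetric[0-9]+\.testdeviceid$/ where time > now() - 1h
Each matching series still comes back as its own result set, so the client still stitches the 15 metrics together, but only one round trip is needed.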
Build an index on the id column. It seems that the engine uses a full scan of the table to retrieve data. If you split your query into 15 threads, the engine will perform 15 full scans and performance will be much worse.
