Prometheus exporter with historical data - monitoring

Is it possible for a Prometheus exporter to save historical data and not only deliver the current value while being scraped?
My goal is for my exporter to read a value (say, from a sensor) every 1 ms and save it. Every 15 seconds Prometheus then pulls the data and gets the list of values collected since the last scrape.
Is this possible/intended to be done with an exporter?
Because if I understand it correctly, an exporter is not intended to save values, only to read a value when Prometheus scrapes it.
If it is not possible to solve this with an exporter, the only solution I see is to add a time series database between the node and the exporter, so that the exporter only pulls the data from the TSDB:
|Node| --[produces value each ms]--> |InfluxDB| --> |Exporter| --> |Prometheus|
Am I missing something here?

There are the following options:
To push the data directly to Prometheus-compatible remote storage such as VictoriaMetrics, so the data can be queried later with PromQL from Grafana.
To scrape data from the exporter with vmagent at a short scrape interval, so it can push the scraped data to remote storage as soon as it is available.
To collect the data on the exporter side in Histograms, which are later scraped by Prometheus, vmagent or VictoriaMetrics (see the sketch below). This approach may lead to the lowest amount of storage space required for the metrics and the highest query speed.
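For illustration, here is a minimal sketch of the Histogram option using the Prometheus Java simpleclient. The metric name, bucket boundaries, port and readSensor() are assumptions for a hypothetical sensor:

    import io.prometheus.client.Histogram;
    import io.prometheus.client.exporter.HTTPServer;

    public class SensorExporter {
        // Bucket boundaries are an assumption; tune them to the sensor's value range.
        static final Histogram sensorReading = Histogram.build()
                .name("sensor_reading")
                .help("Sensor values sampled every 1 ms, aggregated client-side.")
                .buckets(0.1, 0.25, 0.5, 1, 2.5, 5, 10)
                .register();

        public static void main(String[] args) throws Exception {
            // Expose /metrics so Prometheus, vmagent or VictoriaMetrics can scrape it.
            HTTPServer server = new HTTPServer(9400);
            while (true) {
                sensorReading.observe(readSensor()); // each sample lands in a bucket counter
                Thread.sleep(1); // ~1 ms sampling interval
            }
        }

        // Hypothetical sensor read; replace with real hardware I/O.
        static double readSensor() {
            return Math.random() * 10;
        }
    }

Each 1 ms sample only increments a bucket counter, so the scrape payload stays the same size regardless of the sampling rate, which is why this option needs the least storage.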

Related

How to send non-aggregated metrics to Influx from a Spring Boot application?

I have a Spring Boot application that is under moderate load. I want to collect metric data for a few of the operations of my app. I am mainly interested in Counters and Timers.
I want to count the number of times a method was invoked (number of invocations over a window, for example over the last 1 day, 1 week, or 1 month)
If the method produces any unexpected result, I want to increase a failure count and publish a few tags with that metric
I want to time a couple of expensive methods, i.e. I want to see how much time a method took, and I also want to publish a few tags with the metric to get more context
I have tried StatsD-SignalFx and Micrometer-InfluxDB, but both of these solutions have issues I could not solve:
StatsD aggregates the data over the flush window, and due to that aggregation the metric tags get mixed up. For example, if I send 10 events with different tag values within one flush window, the StatsD agent aggregates those events and publishes only one event with counter = 10, and I am not sure which tag values it sends with the aggregated data.
The Micrometer-InfluxDB setup has its own problems, one of them being that Micrometer sends 0 values for counters if no new metric is produced, and that fake (0-value) counter uses the same tag values as the last valid (non-zero) counter.
I am not sure how, but Micrometer also seems to do some sort of aggregation on the client side, in the MeterRegistry I believe, because I was getting a few counters with a value of 0.5 in InfluxDB.
Next, I am planning to explore Micrometer/StatsD + Telegraf + Influx + Grafana to see if it suits my use case.
Questions:
How can I avoid metric aggregation until it reaches the data store (InfluxDB)? I can do the required aggregation in Grafana.
Is there any standard solution to the problem that I am trying to solve?
Any other suggestion or direction for my use case?
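For reference, a minimal sketch of the counter/timer pattern described above, using the plain Micrometer API. The metric names and tags are made up for illustration, and the SimpleMeterRegistry stands in for the registry a Spring Boot app would inject:

    import io.micrometer.core.instrument.Counter;
    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Timer;
    import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

    public class MetricsSketch {
        public static void main(String[] args) {
            // In a Spring Boot app the MeterRegistry would be injected instead.
            MeterRegistry registry = new SimpleMeterRegistry();

            // Count invocations; windowing (1 day/week/month) is done at query time.
            Counter invocations = Counter.builder("orders.processed")
                    .tag("method", "processOrder")
                    .register(registry);
            invocations.increment();

            // Count failures with extra context tags.
            Counter.builder("orders.failed")
                    .tag("method", "processOrder")
                    .tag("reason", "timeout")
                    .register(registry)
                    .increment();

            // Time an expensive method.
            Timer timer = Timer.builder("orders.latency")
                    .tag("method", "processOrder")
                    .register(registry);
            timer.record(() -> expensiveWork());
        }

        static void expensiveWork() { /* placeholder for the real method */ }
    }

Note that whether these arrive in InfluxDB non-aggregated depends on the registry/reporter used, which is exactly the issue described above; the sketch only shows the instrumentation side.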

Export telemetry data of a device in ThingsBoard

I'm using ThingsBoard Community Edition.
I want to know if there is a way to export all time series data of a device into CSV or any other file format. I need all the data to analyse it.
ThingsBoard Professional Edition has this feature, but how about the Community Edition?
The default CSV/XLS export is only available in the Professional Edition.
But you can use the REST API to acquire the historical data.
My reference below states:
You can also fetch a list of historical values for a particular entity type and entity id using a GET request to the following URL:
http(s)://host:port/api/plugins/telemetry/{entityType}/{entityId}/values/timeseries?keys=key1,key2,key3&startTs=1479735870785&endTs=1479735871858&interval=60000&limit=100&agg=AVG
The supported parameters are described below:
keys - comma separated list of telemetry keys to fetch.
startTs - unix timestamp that identifies start of the interval in milliseconds.
endTs - unix timestamp that identifies end of the interval in milliseconds.
interval - the aggregation interval, in milliseconds.
agg - the aggregation function. One of MIN, MAX, AVG, SUM, COUNT, NONE.
limit - the maximum number of data points to return or intervals to process.
ThingsBoard will use startTs, endTs and interval to identify aggregation partitions or sub-queries and execute asynchronous queries to the DB that leverage built-in aggregation functions.
Reference: ThingsBoard docs, time series data values API
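For example, a rough sketch of fetching and saving that JSON with Java's built-in HTTP client. The host, device id, key and time range are placeholders, and ThingsBoard expects the JWT (obtained via POST /api/auth/login) in the X-Authorization header:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class ExportTelemetry {
        public static void main(String[] args) throws Exception {
            // Placeholder values: adjust host, device id, keys and time range.
            String deviceId = "784f394c-42b6-435a-983c-b7beff2784f9"; // placeholder id
            String url = "http://localhost:8080/api/plugins/telemetry/DEVICE/"
                    + deviceId
                    + "/values/timeseries?keys=temperature"
                    + "&startTs=1479735870785&endTs=1479735871858&limit=100&agg=NONE";

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    // JWT obtained from POST /api/auth/login; placeholder here.
                    .header("X-Authorization", "Bearer <JWT_TOKEN>")
                    .GET()
                    .build();

            // Save the raw JSON; convert it to CSV with any JSON tool afterwards.
            HttpClient.newHttpClient().send(request,
                    HttpResponse.BodyHandlers.ofFile(Path.of("telemetry.json")));
        }
    }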

Prometheus remote read from InfluxDB

I'm new to Prometheus but familiar with Influx (currently running 1.6).
My understanding is that it's possible to configure Prometheus to remotely read data from Influx with the following configuration in prometheus.yml:
remote_read:
  - url: "http://localhost:8086/api/v1/prom/read?db=bulkstats"
"bulkstats" is the database I'm trying to read data from in Prometheus. An example query that would work in influx would be:
SELECT "sess-curaaaactive" FROM "PDSNSYSTEM1" WHERE ("Nodename" = 'ALPRGAGQPNC') AND time >= now() - 6h
However, I cannot find a single example of how to query that data with PromQL. Please help!
Here is the link that maps the Prometheus format to InfluxDB's.
In Prometheus jargon, in your example sess-curaaaactive is the metric name (a measurement in Influx) and ("Nodename" = 'ALPRGAGQPNC') is just a label that Prometheus attaches to the measurement to create a time series.
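Assuming the data is exposed under that name, the equivalent PromQL is a plain label selector. Since sess-curaaaactive contains a hyphen, which is not allowed in a bare metric name, the __name__ matcher form is needed:

    {__name__="sess-curaaaactive", Nodename="ALPRGAGQPNC"}

The 6h window from the InfluxQL query is not part of the selector itself; it comes from the evaluation time range (for example Grafana's time picker), or you can append [6h] to turn it into a range vector.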

Grafana with Prometheus data source showing wrong values

I am using Grafana to display metrics from a Prometheus data source. When using a singlestat panel with the delta configuration I am getting wrong values. The values are stored correctly in Prometheus; something seems to go wrong in Grafana when querying with a date filter, as it shows a lot of nonsensical results. Has something similar happened to anyone?

Is it possible to keep only value changes in InfluxDB?

Is it possible to downsample older data in InfluxDB in a way that keeps only changes of value?
My example is the following:
I have a binary sensor sending data every 10 min, so naturally the consecutive values look something like this: 0,0,0,0,0,1,1,0,0,0,0...
My goal is to keep this kind of raw data over a certain period of time using retention policies, and to downsample the data for longer storage. I want to delete all successive values with the same number, so that I keep only the data points, with their timestamps, where the value actually changed. The downsampled data should look like this: 0,1,0,1,0,1,0..., but with the correct timestamp of when each change actually occurred.
Currently this isn't possible with InfluxDB, though the plan is to eventually support this kind of use case.
I would encourage you to open a feature request on the InfluxDB repo asking for this.
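In the meantime, the change-only downsampling can be done on the client side before writing the reduced series back. A minimal sketch of the keep-only-changes logic; the Point record is hypothetical and the input is assumed to be sorted by timestamp:

    import java.util.ArrayList;
    import java.util.List;

    public class ChangesOnly {
        // Hypothetical data point: timestamp in ms plus the binary sensor value.
        record Point(long timestampMs, int value) {}

        // Keep a point only when its value differs from the previously kept one,
        // preserving the timestamp at which the change actually occurred.
        static List<Point> changesOnly(List<Point> sorted) {
            List<Point> out = new ArrayList<>();
            Integer last = null;
            for (Point p : sorted) {
                if (last == null || p.value() != last) {
                    out.add(p);
                    last = p.value();
                }
            }
            return out;
        }
    }

Applied to the example series 0,0,0,0,0,1,1,0,... this keeps only the first 0, the first 1 and the next 0, each with its original timestamp.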
