Is it possible to aggregate measurements or create custom queries beyond the standard dateFrom dateTo queries?
As an example, I have measurements with a time delta of 1 minute (2015-01-01T05:05:00, 2015-01-01T05:06:00, 2015-01-01T05:07:00, ...) and I would like to query the measurements at 15-minute intervals (2015-01-01T05:15:00, 2015-01-01T05:30:00, 2015-01-01T05:45:00, ...).
So far I have only come up with these solutions:
1. Using the standard API request, as in
https://tenant.cumulocity.com/measurement/measurements?dateFrom=2015-10-01&dateTo=2015-11-05
and then throwing away most of the data, which wastes a massive amount of time loading data I do not need.
2. Using CEP (Cumulocity Event Language) to generate a new measurement every 15 minutes from the nearest 1-minute measurement, which seems like overkill and not very elegant.
3. Batch-requesting the exact minute, as in
https://tenant.cumulocity.com/measurement/measurements?dateFrom=2015-11-05T05:15:00%2B01:00&dateTo=2015-11-05T05:16:00%2B01:00
which results in a massive number of API requests and also does not seem very efficient.
4. Using the /measurements/series endpoint, which gives me all series, even those I do not want, and offers only hourly and daily aggregation options (as far as I can tell).
Is there a better way of doing this?
You have captured nearly all of the mechanisms that are currently available. There is one more possibility -- not sure if this is an option for you:
Mark the fifteenth measurement when sending it from the device, using e.g. a different type.
I would normally use option 2. It's actually quite efficient; it's similar to a materialized view in traditional SQL, plus you can use the data everywhere and in all widgets.
Good luck :-)
Cheers,
André
I would prefer the CEP solution. The rule wouldn't be that complicated. You would of course then store these measurements twice, which is not that nice, but having your desired measurement with a specific type or fragment will give you the fastest way to query it.
Instead of copying the measurement you could just add a special fragment to the measurement every 15 min in the CEP rule. You cannot update measurements, so you would have to delete the measurement arriving every 15 min and then create a new measurement with exactly the same values plus an extra fragment (e.g. "aggregatedMeasurement": {}); a rough sketch of such a rule follows below.
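For illustration, here is a very rough sketch of such a rule in Cumulocity's Esper-based event language. The stream names (MeasurementCreated, CreateMeasurement), the getNumber() helper, and the fragments map follow the patterns in Cumulocity's CEP examples, but treat every name here, including c8y_TemperatureMeasurement, as an assumption to adapt to your own measurement types:

// every 15 minutes, copy the most recent matching measurement and
// mark it with an extra "aggregatedMeasurement" fragment for querying
insert into CreateMeasurement
select
    m.measurement.source as source,
    m.measurement.time as time,
    "c8y_TemperatureMeasurement" as type,
    {
        "c8y_TemperatureMeasurement.T.value",
            getNumber(m, "c8y_TemperatureMeasurement.T.value"),
        "c8y_TemperatureMeasurement.T.unit", "C",
        "aggregatedMeasurement.marker", true
    } as fragments
from MeasurementCreated m
where getNumber(m, "c8y_TemperatureMeasurement.T.value") is not null
output last every 15 minutes

The output last every 15 minutes clause emits only the latest matching row per 15-minute window, and the extra fragment is what the fragmentType query below filters on.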
Your query then looks like this:
https://tenant.cumulocity.com/measurement/measurements?dateFrom=2015-10-01&dateTo=2015-11-05&fragmentType=aggregatedMeasurement
One more idea for point 3:
You could use SmartREST to create a template with the query string and leave the dateFrom and dateTo as placeholders.
From the client side you would then have to make only one request, using the bulking feature in SmartREST.
On the server side this would still be transformed into the individual requests, so you wouldn't gain anything in speed.
We are measuring throughput using Grafana and InfluxDB. We would like to measure throughput in terms of how many requests happen per second (rps), approximately.
The typical request is:
SELECT sum("count") / 10 FROM "http_requests" GROUP BY time(10s)
But then we lose the ability to use the dynamic $__interval variable, which is very useful when the graph covers a large time range, such as a day or a week. Whenever we change the interval, we also have to change the divisor in the SELECT expression:
SELECT sum("count") / $__interval FROM "http_requests" GROUP BY time($__interval)
But this approach does not work; it just returns an empty result.
How to create request using dynamic $__interval for throughput measuring?
The reason you get no results is that $__interval is not a number but a string such as 10s, 1m, etc., which InfluxDB understands as a time range. So it is not possible to use it as a divisor the way you are trying.
However, what you want to calculate is the mean, which is available as a function in InfluxQL. The way to get the behavior that you want is with something like this:
SELECT mean("count") FROM "http_requests" GROUP BY time($__interval)
EDIT: On second thought, that is not quite what you want.
You'd probably need to use derivative. I'll come back to you on that one later.
Edit 2: Do you think this answers your question: Calculating request per second using InfluxDB on Grafana?
Edit 3: Third edit's a charm.
We take your starting query and wrap it in another one, averaging the fixed-resolution rates over each dynamic interval:
SELECT mean("rps") FROM (SELECT sum("count") / 10 AS rps FROM "http_requests" GROUP BY time(10s)) GROUP BY time($__interval)
The inner query computes the requests per second within each fixed 10-second bucket; the outer mean() averages those rates over each $__interval bucket. (Summing the inner rates instead would scale with the interval length rather than staying in requests per second.)
Is it possible to downsample older data using InfluxDB in a way that keeps only changes of values?
My example is the following:
I have a binary sensor sending data every 10 min, so naturally the consecutive values look something like this: 0,0,0,0,0,1,1,0,0,0,0...
My goal is to keep this kind of raw data over a certain period of time using retention policies and to downsample the data for longer storage. I want to delete all successive values with the same number so that I keep only the datapoints, with their timestamps, where the value actually changed. The downsampled data should look like this: 0,1,0,1,0,1,0..., but with the correct timestamp of when each change actually occurred.
Currently this isn't possible with InfluxDB, though the plan is to eventually support this kind of use case.
I would encourage you to open a feature request on the InfluxDB repo asking for this.
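In the meantime, depending on your InfluxDB version, you can at least surface the change points at query time with a subquery over difference() (subqueries exist since InfluxDB 1.2; the measurement and field names here are assumptions):

SELECT * FROM (SELECT difference("value") AS d FROM "binary_sensor") WHERE "d" != 0

For a 0/1 sensor, d = 1 marks a change to 1 and d = -1 a change to 0, each carrying the timestamp of the transition. As far as I know this still cannot be expressed as a continuous query that writes only the change points into a longer retention policy, which is exactly the missing feature.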
I have two write points for InfluxDB, one is the start and the other is the end. I just need to determine the duration between those two events, and make queries around it. InfluxDB has a difference() aggregate function, but it doesn't work on the time meta field.
Is supplying a custom timestamp value the only way to accomplish this?
As per "Can I perform mathematical operations against timestamps?"
No:
"Currently, it is not possible to execute mathematical operators against timestamp values in InfluxDB. Most time calculations must be carried out by the client receiving the query results."
and yes, maybe:
The function ELAPSED() returns the difference between subsequent timestamps in a single field.
So it depends on the shape of your data.
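For example (a hedged sketch; the measurement and field names are placeholders): if the start and end events are consecutive points in the same series, a query like this returns the gap between them in seconds:

SELECT ELAPSED("value", 1s) FROM "events" WHERE time >= '2015-11-05T00:00:00Z' AND time <= '2015-11-06T00:00:00Z'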
If you write only the two entries mentioned, then you can follow the steps below:
1. Limit the result to two (e.g. select * from timeseries limit 2)
2. Extract the timestamps from the result set
3. Take the difference between the timestamps
I was reading about Time-Based Versioned Graphs and came across the following example:
CREATE (s1:Shop{shop_id:1})
-[:STATE{from:1388534400000,to:9223372036854775807}]->
(ss1:ShopState{name:'General Store'})
My question: how do I calculate this date? from:1388534400000,to:9223372036854775807
Those two values are timestamps, which in Java are the number of milliseconds since the Epoch (1/1/1970) began. Here from is 1388534400000, i.e. 2014-01-01T00:00:00 UTC, and to is the maximum Long value, the end of Java time, a long way away.
There are ways in all languages to generate these values for specific dates (beware that some will be based on seconds rather than milliseconds); there is quite a handy list on this site.
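For example, in Java 8+ both values can be computed like this (a minimal sketch):

import java.time.Instant;

public class EpochMillis {
    public static void main(String[] args) {
        // milliseconds since the Epoch for 2014-01-01T00:00:00Z
        long from = Instant.parse("2014-01-01T00:00:00Z").toEpochMilli();
        long to = Long.MAX_VALUE; // 9223372036854775807, the end of Java time
        System.out.println(from); // prints 1388534400000
        System.out.println(to);
    }
}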
If you are not working in any particular programming language and just want to enter queries then you can use an online date converter like this one.
You can also calculate timestamps in Cypher, if you are working with dates that relate to now somehow, using the timestamp() function:
CREATE (s1:Shop{shop_id:1})
-[:STATE{from:timestamp(),to:9223372036854775807}]->
(ss1:ShopState{name:'General Store'})
IIUC to is just Long.MAX_VALUE, and from can be the result of either calling the timestamp() function via Cypher or setting the property to the value of System.currentTimeMillis() via the Java API.
Take a look at the example: http://console.neo4j.org/?id=43uoyt (Note that you can skip setting rel.to and use coalesce when querying instead).
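A time-scoped query using coalesce might look like this (a hedged sketch; {asOf} is a parameter holding the epoch-millisecond instant you are asking about, and the coalesce() default covers relationships where to was never set):

MATCH (s:Shop {shop_id: 1})-[st:STATE]->(state:ShopState)
WHERE st.from <= {asOf} AND {asOf} < coalesce(st.to, 9223372036854775807)
RETURN state.name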
I have a lot of logfile data that I want to display dynamic graphs from, for basically arbitrary time periods, optionally filtered or aggregated by different columns (that I could pregenerate). I'm wondering about the best way to store the data in a database and access it for displaying charts, when:
the time resolution should be variable from one second to a year
there are entries that span several 'time buckets', e.g. a connection might have been open for a few days, and I want to count and display the user for every hour she was connected, not just in the hour 'slot' in which the connection was created or finished
Are there best practices, or tools/plugins for rails that help handle this kind and amount of data? Are there maybe database engines specifically tailored towards this, or having helpful functions (e.g. CouchDB indexes)?
EDIT: I'm looking for a scalable way to handle this data and access pattern. Things we considered:
- Run a query for each bucket and merge in the app - probably way too slow.
- GROUP BY timestamp/granularity - does not count connections correctly.
- Preprocess the data into rows by smallest granularity and downsample on query - probably the best way.
I think you can use MySQL timestamps for this.
The way I solved it in the end was to pre-process the data into per-minute buckets, so there's one row for every event and minute. That makes it easy and fast enough to select, and it yields correct results. To get a different granularity, you can do integer arithmetic on the timestamp column: select floor(timestamp/factor)*factor and group by floor(timestamp/factor)*factor, as sketched below.
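For example, a hedged sketch in MySQL, assuming a pre-processed table events_per_minute with an integer epoch-seconds column ts and a user_id column (all names hypothetical), bucketing into 15-minute intervals:

-- 900 seconds = 15 minutes; FLOOR truncates each timestamp to its bucket start
SELECT FLOOR(ts / 900) * 900 AS bucket_start,
       COUNT(DISTINCT user_id) AS active_users
FROM events_per_minute
GROUP BY bucket_start
ORDER BY bucket_start;

MySQL accepts the select alias in GROUP BY; on engines that don't, repeat the FLOOR expression there.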