InfluxDB design issue

I am using InfluxDB and the line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500
/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: the name is NAME2)
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: instead of allocations, the leaf is free)
/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: instead of chip, graphics is in the path)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but it is the same up to subunit)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value = 100
(Note: the same path but the last element is different)
Below is the line protocol I am using to insert data.
interface, Key= /path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free, valueData= 500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is causing issues when querying the data.
How can I design the database so that I can run queries like: get all records for the subunit where name = NAME1, or get all size data for every hard disk?
Thanks in advance.

The schema I'd recommend is the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Where filename is a tag and value is the field.
Given that the cardinality of filename is in the thousands, this schema should work well.
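With that schema, the path-based questions from the original post can be answered with InfluxQL regular expressions against the filename tag. The queries below are only a sketch based on the sample paths above; the exact regex depends on how your keys are escaped when written:
SELECT * FROM interface WHERE filename =~ /subunit\[name='NAME1'\]/
SELECT * FROM interface WHERE filename =~ /harddisk\/data\/size$/
The first query returns every record whose key belongs to the subunit named NAME1; the second returns the size records for every hard disk.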

Related

deleting columns from influx DB using flux command line

Is there any way to delete columns of an Influx time series, as we have accidentally injected data using the wrong data type (int instead of float)?
Or can we change the data type instead?
Unfortunately, there is no way to delete a "column" (i.e. a tag or a field) from an Influx measurement so far. Here's the feature request for that but there is no ETA yet.
Three workarounds:
1. Use SELECT INTO to copy the desirable data into a different measurement, excluding the undesirable "columns", e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2 INTO new_measurement FROM measurement
2. Use CAST operations to "change the data type" from float to int, e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2, undesirableTag3::integer, undesirableField3::integer INTO new_measurement FROM measurement
"Update" the data with insert statement, which will overwrite the data with the same timestamp, same tags, same field keys. Keep all other things equal, except that the "columns" that you would like to update. To make the data in integer data type, remember to put a trailing i on the number. Example: 42i. e.g.:
insert measurement,desirableTag1=v1 desirableField1=fv1,desirableField2=fv2,undesirableField1=someValueA-i 1505799797664800000
insert measurement,desirableTag1=v21 desirableField1=fv21,desirableField2=fv22,undesirableField1=someValueB-i 1505799797664800000

Overwriting existing table with Jenkins, InfluxDB plugin

I can specify tables and fill them with data, but I don't know how to overwrite existing fields.
I'm using the InfluxDB plugin with the following code to send data to a specific table in the Influx database, as per the documentation:
def myFields1 = [:]
def myFields2 = [:]
def myCustomMeasurementFields = [:]
myFields1['field_a'] = 11
myFields1['field_b'] = 12
myFields2['field_c'] = 21
myFields2['field_d'] = 22
myCustomMeasurementFields['series_1'] = myFields1
myCustomMeasurementFields['series_2'] = myFields2
myTags = ['series_1':['tag_a':'a','tag_b':'b'],'series_2':['tag_c':'c','tag_d':'d']]
influxDbPublisher(selectedTarget: 'my-target', customDataMap: myCustomMeasurementFields, customDataMapTags: myTags)
So I can define fields, assign values, and assign them to tables (series_1, series_2). But how can I overwrite existing fields in a table that already exists? Thank you.
There is only one way to overwrite values in InfluxDB.
The field value you write into InfluxDB may differ from the original data only in its value; the tag keys, the measurement name, and the timestamp must be identical. If you write a new value that way, it will automatically overwrite the value you had there before.
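For example, in raw line protocol with the series_1 measurement and tags from the question (the timestamp is just an illustrative value), a second point with the same measurement, tag set, field key, and timestamp replaces the earlier value:
series_1,tag_a=a,tag_b=b field_a=11 1556813561098000000
series_1,tag_a=a,tag_b=b field_a=99 1556813561098000000
The second write silently overwrites field_a for that point; no error is reported.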
In other cases you need to delete the old data and then write new values, or simply accept the existing old data and use InfluxDB's filtering capabilities.
By the way, thinking of InfluxDB as table data creates problems like yours. My advice is to think of it as an indexed time series database (which is what it truly is).

One measurement - three datatypes

I have a Line Protocol like this:
Measurement1,Valuetype=Act_value,metric=Max,dt=Int value=200i 1553537228984000000
Measurement1,Valuetype=Act_value,metric=Lower_bound,dt=Int value=25i 1553537228987000000
Measurement1,Valuetype=Act_value,metric=Min,dt=Int value=10i 1553537228994000000
Measurement1,Valuetype=Act_value,metric=Upper_limit,dt=Int value=222i 1553537228997000000
Measurement1,Valuetype=Act_value,metric=Lower_limit,dt=Int value=0i 1553537229004000000
Measurement1,Valuetype=Act_value,metric=Simulation,dt=bool value=False 1553537229007000000
Measurement1,Valuetype=Act_value,metric=Value,dt=Int value=69i 1553537229014000000
Measurement1,Valuetype=Act_value,metric=Percentage,dt=Int value=31i 1553537229017000000
Measurement1,Valuetype=Set_value,metric=Upper_limit,dt=Int value=222i 1553537229024000000
Measurement1,Valuetype=Set_value,metric=Lower_limit,dt=Int value=0i 1553537229028000000
Measurement1,Valuetype=Set_value,metric=Unit,dt=string value="Kelvin" 1553537229035000000
Measurement1,Valuetype=Set_value,metric=Value,dt=Int value=222i 1553537229038000000
Measurement1,Valuetype=Set_value,metric=Percentage,dt=Int value=0i 1553537229045000000
I need to insert multiple lines at once. The issue is likely that I insert integers, booleans and strings into the same table. It worked when I created separate measurements, e.g. Measurement1_Int, Measurement1_bool, Measurement1_string. With the above configuration I get an error.
I have the following questions:
Is there any way to save values of different data types in one table/measurement?
If yes, how do I need to adjust my line protocol?
Would it work if I write the three data types separately but still into the same table?
If you can afford to assign the same timestamp to all metrics within a measurement data point, the best option is to use the metric name as the field name in the InfluxDB record:
Measurement1,Valuetype=Act_value Max=200i,Lower_bound=25i,Min=10i,Upper_limit=222i,Lower_limit=0i,Simulation=False,Value=69i,Percentage=31i 1553537228984000000
Otherwise you can still use the metric name as the field name, but missing fields for each timestamp will have null values:
Measurement1,Valuetype=Set_value Upper_limit=222i 1553537229024000000
Measurement1,Valuetype=Set_value Lower_limit=0i 1553537229028000000
Measurement1,Valuetype=Set_value Unit="Kelvin" 1553537229035000000
Measurement1,Valuetype=Set_value Value=222i 1553537229038000000
Measurement1,Valuetype=Set_value Percentage=0i 1553537229045000000
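Written this way, each metric becomes its own field, so the integer, boolean and string values no longer collide on a single value field. A query can then filter on the Valuetype tag (remember to quote the tag value, since tag values are strings), for example:
SELECT * FROM Measurement1 WHERE Valuetype = 'Set_value'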

How to generate dynamic values in fitnesse

I have to insert incremental values into a column of a table using Fitnesse. The incremental value I get from a stored procedure that returns the last inserted value, so I have to increment the value and store it.
For example: I'll get a value from the stored procedure output. And I have to increment the value by 1 and insert into the table.
Any ideas?
Output from the stored procedure is like: ACRDE0001 (PK)
Values to store in the table: ACRDE0002, ACRDE0003, .....
Expected output
!|insert|table1|
|col1|col2|col3|
|ACRDE0001|abc|def|
|ACRDE0002|abc|def|
|ACRDE0003|abc|def|
...
As far as I'm aware, the only way to change (e.g. increment) a value you get during your test is by writing some code in a fixture. There is a pull request to allow more dynamic Slim expressions directly in the wiki, but that has not been merged (let alone released) yet.
Your question suggests that the value is something you get from a database and that you then want to send the generated/incremented value back with the new records you insert. In that case I wonder whether the increment is actually that useful to have in your wiki (your test case is not about the generated values, is it?).
Maybe your fixture could just retrieve the initial value (or have it supplied as a constructor value), generate a new value for each row, and send them to the database.

Query Influxdb based on tags?

I have started playing around with Influxdb v0.13 and I have some dummy values in my test db where id is a tag and value is a field:
> SELECT * FROM dummy
name: dummy
--------------
time id value
1468276508161069051 1234 12345
1468276539152853428 613 352
1468276543470535110 613 4899
1468276553853436191 1234 12
I get no results returned when I run this query:
> SELECT * FROM dummy WHERE id=1234
but I do get the following when querying with the field instead:
> SELECT * FROM dummy WHERE value=12
name: dummy
--------------
time id value
1468276553853436191 1234 12
Am I doing something wrong here? I thought the point of tags was to be queried (since they are indexed and fields are not), but they seem to break my queries.
It appears that Influx treats every tag key and tag value we insert as a string, and this is shown in the official documentation.
See: https://docs.influxdata.com/influxdb/v0.13/guides/writing_data/
When writing points, you must specify an existing database in the db query parameter. See the HTTP section on the Write Syntax page for a complete list of the available query parameters. The body of the POST - we call this the Line Protocol - contains the time-series data that you wish to store. They consist of a measurement, tags, fields, and a timestamp. InfluxDB requires a measurement name. Strictly speaking, tags are optional but most series include tags to differentiate data sources and to make querying both easy and efficient. Both tag keys and tag values are strings.
Note the last sentence: both tag keys and tag values are strings.
Hence, to filter by a tag value, the value in the query must be quoted.
Example:
SELECT * FROM dummy WHERE id='1234'
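With the quotes in place, the query matches the indexed tag value and, for the sample data above, should return something like:
name: dummy
--------------
time id value
1468276508161069051 1234 12345
1468276553853436191 1234 12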
