One measurement - three datatypes - InfluxDB

I have line protocol data like this:
Measurement1,Valuetype=Act_value,metric=Max,dt=Int value=200i 1553537228984000000
Measurement1,Valuetype=Act_value,metric=Lower_bound,dt=Int value=25i 1553537228987000000
Measurement1,Valuetype=Act_value,metric=Min,dt=Int value=10i 1553537228994000000
Measurement1,Valuetype=Act_value,metric=Upper_limit,dt=Int value=222i 1553537228997000000
Measurement1,Valuetype=Act_value,metric=Lower_limit,dt=Int value=0i 1553537229004000000
Measurement1,Valuetype=Act_value,metric=Simulation,dt=bool value=False 1553537229007000000
Measurement1,Valuetype=Act_value,metric=Value,dt=Int value=69i 1553537229014000000
Measurement1,Valuetype=Act_value,metric=Percentage,dt=Int value=31i 1553537229017000000
Measurement1,Valuetype=Set_value,metric=Upper_limit,dt=Int value=222i 1553537229024000000
Measurement1,Valuetype=Set_value,metric=Lower_limit,dt=Int value=0i 1553537229028000000
Measurement1,Valuetype=Set_value,metric=Unit,dt=string value="Kelvin" 1553537229035000000
Measurement1,Valuetype=Set_value,metric=Value,dt=Int value=222i 1553537229038000000
Measurement1,Valuetype=Set_value,metric=Percentage,dt=Int value=0i 1553537229045000000
I need to insert multiple lines at once. The issue is likely that I am inserting integers, booleans, and strings into the same measurement. It worked when I created separate measurements per type, e.g. Measurement1_Int, Measurement1_bool, Measurement1_string. With the configuration above I get an error.
I have the following questions:
Is there any way to save values of different datatypes to one table/measurement?
If yes, how do I need to adjust my line protocol?
Would it work if I wrote the three datatypes separately but still into the same table?

If you can afford to assign the same timestamp to all metrics within a data point, the best variant is to use the metric name as the field name in the InfluxDB record:
Measurement1,Valuetype=Act_value Max=200i,Lower_bound=25i,Min=10i,Upper_limit=222i,Lower_limit=0i,Simulation=False,Value=69i,Percentage=31i 1553537228984000000
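For reference, a minimal sketch of such a write using the Python influxdb client (this assumes InfluxDB 1.x; the connection details are hypothetical):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="mydb")  # hypothetical connection

# One point, one timestamp, all metrics as fields; the client appends
# the "i" suffix to Python ints when it builds the line protocol.
client.write_points([{
    "measurement": "Measurement1",
    "tags": {"Valuetype": "Act_value"},
    "time": 1553537228984000000,
    "fields": {
        "Max": 200, "Lower_bound": 25, "Min": 10,
        "Upper_limit": 222, "Lower_limit": 0,
        "Simulation": False, "Value": 69, "Percentage": 31,
    },
}], time_precision="n")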
Otherwise you can still use the metric name as the field name, but the fields missing at each timestamp will have null values:
Measurement1,Valuetype=Set_value Upper_limit=222i 1553537229024000000
Measurement1,Valuetype=Set_value Lower_limit=0i 1553537229028000000
Measurement1,Valuetype=Set_value Unit="Kelvin" 1553537229035000000
Measurement1,Valuetype=Set_value Value=222i 1553537229038000000
Measurement1,Valuetype=Set_value Percentage=0i 1553537229045000000

Related

deleting columns from influx DB using flux command line

Is there any way to delete columns of an Influx time series? We accidentally injected data using the wrong data type (int instead of float).
Or is there a way to change the data type instead?
Unfortunately, there is no way to delete a "column" (i.e. a tag or a field) from an Influx measurement so far. Here's the feature request for that but there is no ETA yet.
Three workarounds:
use SELECT INTO to copy the desirable data into a different measurement, excluding the undesirable "columns". e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2 INTO new_measurement FROM measurement
use CAST operations to "change the data type" from float to int (see the Python sketch after this list), e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2, undesirableTag3::integer, undesirableField3::integer INTO new_measurement FROM measurement
"Update" the data with insert statement, which will overwrite the data with the same timestamp, same tags, same field keys. Keep all other things equal, except that the "columns" that you would like to update. To make the data in integer data type, remember to put a trailing i on the number. Example: 42i. e.g.:
insert measurement,desirableTag1=v1 desirableField1=fv1,desirableField2=fv2,undesirableField1=someValueA-i 1505799797664800000
insert measurement,desirableTag1=v21 desirableField1=fv21,desirableField2=fv22,undesirableField1=someValueB-i 1505799797664800000
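Driving the copy-with-cast workaround from Python might look like this (a sketch, assuming the InfluxDB 1.x Python client; the placeholder names are taken from the examples above):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="mydb")  # hypothetical connection

# Copy the wanted fields into a fresh measurement, casting the mistyped
# field to integer. GROUP BY * keeps the tags as tags in the target;
# without it, SELECT INTO writes them out as fields.
client.query(
    "SELECT desirableField1, desirableField2, undesirableField3::integer "
    "INTO new_measurement FROM measurement GROUP BY *"
)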

InfluxDB design issue

I am using InfluxDB and the line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500
/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: NAME2 instead of NAME1)
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: instead of allocations, the leaf is free)
/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: graphics instead of chip in the path)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but identical up to subunit)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value = 100
(Note: the same path, but the last element is different)
Below is the line protocol I am using to insert the data:
interface,Key=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free valueData=500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is causing issues when querying the data.
How can I design the database so that I can run queries like "get all records for the subunit named NAME1" or "get all size data for every hard disk"?
Thanks in advance.
The Schema I'd recommend would be the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Where filename is a tag and value is the field.
Given that the cardinality of filename is in the thousands, this schema should work well.
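Because filename is a tag, InfluxQL can filter it with a regular expression, which covers both queries from the question. A sketch with the Python influxdb client (InfluxDB 1.x assumed, connection details hypothetical):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="mydb")  # hypothetical connection

# All records for the subunit named NAME1:
client.query("SELECT * FROM interface WHERE filename =~ /NAME1/")

# All size data for every hard disk:
client.query("SELECT * FROM interface WHERE filename =~ /harddisk/ AND filename =~ /size/")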

Get Rapidminer to transpose/pivot a single attribute/column in a table

I have a table that looks like the following:
ID City Code
"1005AE" "Oakland" "Value1"
"1006BR" "St.Louis" "Value2"
"102AC" "Miami" "Value1"
"103AE" "Denver" "Value3"
And I want to transpose/pivot the Code examples/values into column attributes like this:
ID City Value1 Value2 Value3
"1005" "Oakland" 1 0 0
"1006" "St.Louis" 0 1 0
"1012" "Miami" 1 0 0
"1030" "Denver" 0 0 1
Note that the ID field holds numeric values encoded as strings, because RapidMiner had trouble importing bigint datatypes. That is a separate issue I need to fix, but my focus here is the pivoting or transposing of the data.
I read through a few different Stack Overflow posts, listed below, which suggested the Pivot or Transpose operations. I tried both of these, but for some reason I get either a huge table that turns City into dummy variables as well, or just a subset of the attribute columns.
How can I set the rows to be the attributes and columns the samples in rapidminer?
Rapidminer data transpose equivalent to melt in R
Any suggestions would be appreciated.
In pivoting, the group attribute parameter dictates how many rows there will be, and the index attribute parameter dictates the last part of the name of each new attribute. The first part of each new attribute's name is driven by any other regular attribute that is neither group nor index, and the value within the cell is the value found in the original example set.
This means you have to create a new attribute with a constant value of 1; use Generate Attributes for this. Set the role of the ID attribute to ID so that it is no longer a regular attribute; use Set Role for this. In the Pivot operator, set the group attribute to City and the index attribute to Code. The end result is close to what you want. The final steps are, first, to set missing values to 0 (use Replace Missing Values) and, second, to rename the attributes to match what you want (use Rename).
You will have to join the result back to the original, since the pivot operation loses the ID.
You can find a worked example here http://rapidminernotes.blogspot.co.uk/2011/05/worked-example-using-pivot-operator.html
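For comparison, the same reshaping in Python/pandas (not RapidMiner, just to make the target shape concrete):

import pandas as pd

df = pd.DataFrame({
    "ID": ["1005AE", "1006BR", "102AC", "103AE"],
    "City": ["Oakland", "St.Louis", "Miami", "Denver"],
    "Code": ["Value1", "Value2", "Value1", "Value3"],
})

# One indicator column per Code value, with 1/0 instead of missing values
indicators = pd.get_dummies(df["Code"]).astype(int)
result = pd.concat([df[["ID", "City"]], indicators], axis=1)
print(result)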

How to match ets:match against a record in Erlang?

I have heard that specifying records through tuples in the code is a bad practice: I should always use record fields (#record_name{record_field = something}) instead of plain tuples {record_name, value1, value2, something}.
But how do I match the record against an ETS table? If I have a table with records, I can only match with the following:
ets:match(Table, {'$1','$2','$3',something})
It is obvious that once I add some new fields to the record definition this pattern match will stop working.
Instead, I would like to use something like this:
ets:match(Table, #record_name{record_field=something})
Unfortunately, it returns an empty list.
The cause of your problem is what the unspecified fields are set to when you write #record_name{record_field=something}. This is the syntax for creating a record: here you are creating a record/tuple which ETS will interpret as a pattern. When you create a record, all the unspecified fields get their default values, either the ones defined in the record definition or the default default value undefined.
So if you want to give fields specific values, you must do this explicitly in the record, for example #record_name{f1='$1',f2='$2',record_field=something}. Often when using records with ETS you want to set all the unspecified fields to '_', the "don't care" variable for ETS matching. There is a special syntax for this using the special, and otherwise illegal, field name _. For example #record_name{record_field=something,_='_'}.
Note that in your example you have set the record name element in the tuple to '$1'. The tuple representing a record always has the record name as the first element. This means that when you create the ETS table you should set the key position with {keypos,Pos} to something other than the default 1; otherwise there won't be any indexing and, worse, if you have a table of type set or ordered_set you will only get one element in the table. To get the index of a record field you can use the syntax #Record.Field, in your example #record_name.record_field.
Try using
ets:match(Table, #record_name{record_field=something, _='_'})
See the links below for an explanation.
The format you are looking for is #record_name{record_field=something, _ = '_'}
http://www.erlang.org/doc/man/ets.html#match-2
http://www.erlang.org/doc/programming_examples/records.html (see 1.3 Creating a record)

SQL Query: Using IF statement in defining new field

I have a table with many fields and additionally several boolean fields (ex: BField1, BField2, BField3 etc.).
I need to make a SELECT query which will select all fields except the boolean ones, plus a new virtual field (ex: FirstTrueBool) whose value will equal the name of the first TRUE boolean field.
For example, say I have BField1 = False, BField2 = True, BField3 = True, BField4 = False; in that case the SQL query should set [FirstTrueBool] to "BField2". Is that possible?
Thank you in advance.
P.S. I use Microsoft Access (MDB) Database and Jet Engine.
If you want to keep the current architecture (a mix of x status fields and y non-status fields), then as far as I can see your only option is to use IIF:
Select MyNonStatusField1, /* other non-status fields here */
IIF([BField1], "BField1",
IIF([BField2], "BField2",
...
IIF([BFieldLast], "BFieldLast", "#No Flag#")
))))) -- put as many parentheses as needed to close the nested IIFs
From
MyTable
Of course you can add any Where clause you like.
EDIT:
Alternatively you can use the following trick:
Set the fields to null when the flag is false and put in the order number (in other words, "1" for BField1, "2" for BField2, etc.) when the flag is true. Be sure that the status fields are strings (i.e. Varchar(2) or, better, Char(2) in SQL terminology).
Then you can use the COALESCE function in order to return the first non-null value from the status fields, which will be the index number as a string. Then you can prepend any text you like to this string (for example "BField"). You will end up with something like:
Select "BField" || Coalesce(BField1, BField2, BField3, BField4) /*etc. (add as many fields you like) */
From MyTable
Much clearer IMHO.
HTH
You would be better off using a single int column as a bitset (provided you have up to 32 boolean columns) to represent the flags.
e.g. see SQL Server: Updating Integer Status Columns (it's SQL Server, but the same technique applies equally well to MS Access).
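To make the bitset idea concrete, here is a small Python sketch (illustrative only; the bit layout is an assumption):

# Encode the boolean flags into one integer: bit 0 = BField1, bit 1 = BField2, ...
flags = [False, True, True, False]  # BField1..BField4
status = sum(1 << i for i, f in enumerate(flags) if f)  # 0b0110 == 6

# Decode: the first set bit is the first TRUE boolean field
first_true = next((f"BField{i + 1}" for i in range(4) if status >> i & 1), "#No Flag#")
print(status, first_true)  # 6 BField2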
